code stringlengths 3–1.05M | repo_name stringlengths 5–104 | path stringlengths 4–251 | language stringclasses 1 value | license stringclasses 15 values | size int64 3–1.05M |
---|---|---|---|---|---|
#!/usr/bin/env python
"""Stomp Protocol Connectivity
This provides basic connectivity to a message broker supporting the 'stomp' protocol.
At the moment ACK, SEND, SUBSCRIBE, UNSUBSCRIBE, BEGIN, ABORT, COMMIT, CONNECT and DISCONNECT operations
are supported.
This changes the previous version, which required a listener per subscription -- now a listener object
is registered once via the connection's 'add_listener' method and will receive all messages sent in response to all/any subscriptions.
(The reason for the change is that the handling of an 'ack' becomes problematic unless the listener mechanism
is decoupled from subscriptions).
Note that you must 'start' an instance of Connection to begin receiving messages. For example:
conn = stomp.Connection([('localhost', 62003)], 'myuser', 'mypass')
conn.start()
Meta-Data
---------
Author: Jason R Briggs
License: http://www.apache.org/licenses/LICENSE-2.0
Start Date: 2005/12/01
Last Revision Date: $Date: 2008/09/11 00:16 $
Notes/Attribution
-----------------
* uuid method courtesy of Carl Free Jr:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/213761
* patch from Andreas Schobel
* patches from Julian Scheid of Rising Sun Pictures (http://open.rsp.com.au)
* patch from Fernando
* patches from Eugene Strulyov
Updates
-------
* 2007/03/31 : (Andreas Schobel) patch to fix newlines problem in ActiveMQ 4.1
* 2007/09 : (JRB) updated to get stomp.py working in Jython as well as Python
* 2007/09/05 : (Julian Scheid) patch to allow sending custom headers
* 2007/09/18 : (JRB) changed code to use logging instead of just print. added logger for jython to work
* 2007/09/18 : (Julian Scheid) various updates, including:
- change incoming message handling so that callbacks are invoked on the listener not only for MESSAGE, but also for
CONNECTED, RECEIPT and ERROR frames.
- callbacks now get not only the payload but any headers specified by the server
- all outgoing messages now sent via a single method
- only one connection used
- change to use thread instead of threading
- sends performed on the calling thread
- receiver loop now deals with multiple messages in one received chunk of data
- added reconnection attempts and connection fail-over
- changed defaults for "user" and "passcode" to None instead of empty string (fixed transmission of those values)
- added readline support
* 2008/03/26 : (Fernando) added cStringIO for faster performance on large messages
* 2008/09/10 : (Eugene) remove lower() on headers to support case-sensitive header names
* 2008/09/11 : (JRB) fix incompatibilities with RabbitMQ, add wait for socket-connect
* 2008/10/28 : (Eugene) add jms map (from stomp1.1 ideas)
* 2008/11/25 : (Eugene) remove superfluous (incorrect) locking code
"""
import math
import random
import re
import socket
import sys
import thread
import threading
import time
import types
import xml.dom.minidom
from cStringIO import StringIO
try:
from hashlib import md5 as _md5
except ImportError:
import md5
_md5 = md5.new
#
# stomp.py version number
#
_version = 1.8
def _uuid( *args ):
"""
uuid courtesy of Carl Free Jr:
(http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/213761)
"""
t = long( time.time() * 1000 )
r = long( random.random() * 100000000000000000L )
try:
a = socket.gethostbyname( socket.gethostname() )
except:
# if we can't get a network address, just imagine one
a = random.random() * 100000000000000000L
data = str(t) + ' ' + str(r) + ' ' + str(a) + ' ' + str(args)
data = _md5(data).hexdigest()
return data
class DevNullLogger(object):
"""
dummy logging class for environments without the logging module
"""
def log(self, msg):
print msg
def devnull(self, msg):
pass
debug = devnull
info = devnull
warning = log
error = log
critical = log
exception = log
def isEnabledFor(self, lvl):
return False
#
# add logging if available
#
try:
import logging
log = logging.getLogger('stomp.py')
except ImportError:
log = DevNullLogger()
class ConnectionClosedException(Exception):
"""
Raised in the receiver thread when the connection has been closed
by the server.
"""
pass
class NotConnectedException(Exception):
"""
Raised by Connection.__send_frame when there is currently no server
connection.
"""
pass
class ConnectionListener(object):
"""
This class should be used as a base class for objects registered
using Connection.add_listener().
"""
def on_connecting(self, host_and_port):
"""
Called by the STOMP connection once a TCP/IP connection to the
STOMP server has been established or re-established. Note that
at this point, no connection has been established on the STOMP
protocol level. For this, you need to invoke the "connect"
method on the connection.
\param host_and_port a tuple containing the host name and port
number to which the connection has been established.
"""
pass
def on_connected(self, headers, body):
"""
Called by the STOMP connection when a CONNECTED frame is
received, that is after a connection has been established or
re-established.
\param headers a dictionary containing all headers sent by the
server as key/value pairs.
\param body the frame's payload. This is usually empty for
CONNECTED frames.
"""
pass
def on_disconnected(self):
"""
Called by the STOMP connection when a TCP/IP connection to the
STOMP server has been lost. No messages should be sent via
the connection until it has been reestablished.
"""
pass
def on_message(self, headers, body):
"""
Called by the STOMP connection when a MESSAGE frame is
received.
\param headers a dictionary containing all headers sent by the
server as key/value pairs.
\param body the frame's payload - the message body.
"""
pass
def on_receipt(self, headers, body):
"""
Called by the STOMP connection when a RECEIPT frame is
received, sent by the server if requested by the client using
the 'receipt' header.
\param headers a dictionary containing all headers sent by the
server as key/value pairs.
\param body the frame's payload. This is usually empty for
RECEIPT frames.
"""
pass
def on_error(self, headers, body):
"""
Called by the STOMP connection when an ERROR frame is
received.
\param headers a dictionary containing all headers sent by the
server as key/value pairs.
\param body the frame's payload - usually a detailed error
description.
"""
pass
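#
# Minimal example listener (editor's illustrative sketch, not part of the
# original module): it simply logs each MESSAGE frame. Register an instance
# with Connection.add_listener() before calling Connection.start().
#
class ExampleLoggingListener(ConnectionListener):
    def on_message(self, headers, body):
        log.info('received message %s: %r' % (headers.get('message-id'), body))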
class Connection(object):
"""
Represents a STOMP client connection.
"""
def __init__(self,
host_and_ports = [ ('localhost', 61613) ],
user = None,
passcode = None,
prefer_localhost = True,
try_loopback_connect = True,
reconnect_sleep_initial = 0.1,
reconnect_sleep_increase = 0.5,
reconnect_sleep_jitter = 0.1,
reconnect_sleep_max = 60.0):
"""
Initialize and start this connection.
\param host_and_ports
a list of (host, port) tuples.
\param prefer_localhost
if True and the local host is mentioned in the (host,
port) tuples, try to connect to this first
\param try_loopback_connect
if True and the local host is found in the host
tuples, try connecting to it using loopback interface
(127.0.0.1)
\param reconnect_sleep_initial
initial delay in seconds to wait before reattempting
to establish a connection if connection to any of the
hosts fails.
\param reconnect_sleep_increase
factor by which the sleep delay is increased after
            each connection attempt. For example, 0.5 means
            to wait 50% longer than the previous attempt,
1.0 means wait twice as long, and 0.0 means keep
the delay constant.
\param reconnect_sleep_max
maximum delay between connection attempts, regardless
of the reconnect_sleep_increase.
\param reconnect_sleep_jitter
random additional time to wait (as a percentage of
the time determined using the previous parameters)
between connection attempts in order to avoid
stampeding. For example, a value of 0.1 means to wait
an extra 0%-10% (randomly determined) of the delay
calculated using the previous three parameters.
"""
sorted_host_and_ports = []
sorted_host_and_ports.extend(host_and_ports)
        def is_local_host(host):
            return host in Connection.__localhost_names
        # If localhost is preferred, make sure all (host, port) tuples
        # that refer to the local host come first in the list.
        # (is_local_host is defined unconditionally because the loopback
        # handling below also uses it.)
        if prefer_localhost:
            sorted_host_and_ports.sort(lambda x, y: (int(is_local_host(y[0]))
                                                     - int(is_local_host(x[0]))))
# If the user wishes to attempt connecting to local ports
# using the loopback interface, for each (host, port) tuple
# referring to a local host, add an entry with the host name
# replaced by 127.0.0.1 if it doesn't exist already
loopback_host_and_ports = []
if try_loopback_connect:
for host_and_port in sorted_host_and_ports:
if is_local_host(host_and_port[0]):
port = host_and_port[1]
if (not ("127.0.0.1", port) in sorted_host_and_ports
and not ("localhost", port) in sorted_host_and_ports):
loopback_host_and_ports.append(("127.0.0.1", port))
# Assemble the final, possibly sorted list of (host, port) tuples
self.__host_and_ports = []
self.__host_and_ports.extend(loopback_host_and_ports)
self.__host_and_ports.extend(sorted_host_and_ports)
self.__recvbuf = ''
self.__listeners = [ ]
self.__reconnect_sleep_initial = reconnect_sleep_initial
self.__reconnect_sleep_increase = reconnect_sleep_increase
self.__reconnect_sleep_jitter = reconnect_sleep_jitter
self.__reconnect_sleep_max = reconnect_sleep_max
self.__connect_headers = {}
if user is not None and passcode is not None:
self.__connect_headers['login'] = user
self.__connect_headers['passcode'] = passcode
self.__socket = None
self.__current_host_and_port = None
self.__receiver_thread_exit_condition = threading.Condition()
self.__receiver_thread_exited = False
#
# Manage the connection
#
def start(self):
"""
Start the connection. This should be called after all
listeners have been registered. If this method is not called,
no frames will be received by the connection.
"""
self.__running = True
self.__attempt_connection()
thread.start_new_thread(self.__receiver_loop, ())
def stop(self):
"""
Stop the connection. This is equivalent to calling
disconnect() but will do a clean shutdown by waiting for the
receiver thread to exit.
"""
self.disconnect()
self.__receiver_thread_exit_condition.acquire()
if not self.__receiver_thread_exited:
self.__receiver_thread_exit_condition.wait()
self.__receiver_thread_exit_condition.release()
def get_host_and_port(self):
"""
Return a (host, port) tuple indicating which STOMP host and
port is currently connected, or None if there is currently no
connection.
"""
return self.__current_host_and_port
def is_connected(self):
try:
return self.__socket is not None and self.__socket.getsockname()[1] != 0
except socket.error:
return False
#
# Manage objects listening to incoming frames
#
def add_listener(self, listener):
self.__listeners.append(listener)
def remove_listener(self, listener):
self.__listeners.remove(listener)
#
# STOMP transmissions
#
def subscribe(self, headers={}, **keyword_headers):
self.__send_frame_helper('SUBSCRIBE', '', self.__merge_headers([headers, keyword_headers]), [ 'destination' ])
def unsubscribe(self, headers={}, **keyword_headers):
self.__send_frame_helper('UNSUBSCRIBE', '', self.__merge_headers([headers, keyword_headers]), [ ('destination', 'id') ])
def send(self, message='', headers={}, **keyword_headers):
if '\x00' in message:
content_length_headers = {'content-length': len(message)}
else:
content_length_headers = {}
self.__send_frame_helper('SEND', message, self.__merge_headers([headers,
keyword_headers,
content_length_headers]), [ 'destination' ])
def ack(self, headers={}, **keyword_headers):
self.__send_frame_helper('ACK', '', self.__merge_headers([headers, keyword_headers]), [ 'message-id' ])
def begin(self, headers={}, **keyword_headers):
use_headers = self.__merge_headers([headers, keyword_headers])
if not 'transaction' in use_headers.keys():
use_headers['transaction'] = _uuid()
self.__send_frame_helper('BEGIN', '', use_headers, [ 'transaction' ])
return use_headers['transaction']
def abort(self, headers={}, **keyword_headers):
self.__send_frame_helper('ABORT', '', self.__merge_headers([headers, keyword_headers]), [ 'transaction' ])
def commit(self, headers={}, **keyword_headers):
self.__send_frame_helper('COMMIT', '', self.__merge_headers([headers, keyword_headers]), [ 'transaction' ])
def connect(self, headers={}, **keyword_headers):
if keyword_headers.has_key('wait') and keyword_headers['wait']:
while not self.is_connected(): time.sleep(0.1)
del keyword_headers['wait']
self.__send_frame_helper('CONNECT', '', self.__merge_headers([self.__connect_headers, headers, keyword_headers]), [ ])
def disconnect(self, headers={}, **keyword_headers):
self.__send_frame_helper('DISCONNECT', '', self.__merge_headers([self.__connect_headers, headers, keyword_headers]), [ ])
self.__running = False
if hasattr(socket, 'SHUT_RDWR'):
self.__socket.shutdown(socket.SHUT_RDWR)
if self.__socket:
self.__socket.close()
self.__current_host_and_port = None
# ========= PRIVATE MEMBERS =========
# List of all host names (unqualified, fully-qualified, and IP
# addresses) that refer to the local host (both loopback interface
# and external interfaces). This is used for determining
# preferred targets.
__localhost_names = [ "localhost",
"127.0.0.1",
socket.gethostbyname(socket.gethostname()),
socket.gethostname(),
socket.getfqdn(socket.gethostname()) ]
#
# Used to parse STOMP header lines in the format "key:value",
#
__header_line_re = re.compile('(?P<key>[^:]+)[:](?P<value>.*)')
#
# Used to parse the STOMP "content-length" header lines,
#
__content_length_re = re.compile('^content-length[:]\\s*(?P<value>[0-9]+)', re.MULTILINE)
def __merge_headers(self, header_map_list):
"""
Helper function for combining multiple header maps into one.
Any underscores ('_') in header names (keys) will be replaced by dashes ('-').
"""
headers = {}
for header_map in header_map_list:
for header_key in header_map.keys():
headers[header_key.replace('_', '-')] = header_map[header_key]
return headers
def __convert_dict(self, payload):
"""
Encode python dictionary as <map>...</map> structure.
"""
xmlStr = "<map>\n"
for key in payload:
xmlStr += "<entry>\n"
xmlStr += "<string>%s</string>" % key
xmlStr += "<string>%s</string>" % payload[key]
xmlStr += "</entry>\n"
xmlStr += "</map>"
return xmlStr
def __send_frame_helper(self, command, payload, headers, required_header_keys):
"""
Helper function for sending a frame after verifying that a
given set of headers are present.
\param command the command to send
\param payload the frame's payload
\param headers a dictionary containing the frame's headers
\param required_header_keys a sequence enumerating all
required header keys. If an element in this sequence is itself
a tuple, that tuple is taken as a list of alternatives, one of
which must be present.
\throws ArgumentError if one of the required header keys is
not present in the header map.
"""
for required_header_key in required_header_keys:
if type(required_header_key) == tuple:
found_alternative = False
for alternative in required_header_key:
if alternative in headers.keys():
found_alternative = True
if not found_alternative:
raise KeyError("Command %s requires one of the following headers: %s" % (command, str(required_header_key)))
elif not required_header_key in headers.keys():
raise KeyError("Command %s requires header %r" % (command, required_header_key))
self.__send_frame(command, headers, payload)
def __send_frame(self, command, headers={}, payload=''):
"""
Send a STOMP frame.
"""
if type(payload) == dict:
headers["transformation"] = "jms-map-xml"
payload = self.__convert_dict(payload)
if self.__socket is not None:
frame = '%s\n%s\n%s\x00' % (command,
reduce(lambda accu, key: accu + ('%s:%s\n' % (key, headers[key])), headers.keys(), ''),
payload)
self.__socket.sendall(frame)
log.debug("Sent frame: type=%s, headers=%r, body=%r" % (command, headers, payload))
else:
raise NotConnectedException()
def __receiver_loop(self):
"""
Main loop listening for incoming data.
"""
try:
try:
threading.currentThread().setName("StompReceiver")
while self.__running:
log.debug('starting receiver loop')
if self.__socket is None:
break
try:
try:
for listener in self.__listeners:
if hasattr(listener, 'on_connecting'):
listener.on_connecting(self.__current_host_and_port)
while self.__running:
frames = self.__read()
for frame in frames:
(frame_type, headers, body) = self.__parse_frame(frame)
log.debug("Received frame: result=%r, headers=%r, body=%r" % (frame_type, headers, body))
frame_type = frame_type.lower()
if frame_type in [ 'connected',
'message',
'receipt',
'error' ]:
for listener in self.__listeners:
if hasattr(listener, 'on_%s' % frame_type):
eval('listener.on_%s(headers, body)' % frame_type)
else:
log.debug('listener %s has no such method on_%s' % (listener, frame_type))
else:
log.warning('Unknown response frame type: "%s" (frame length was %d)' % (frame_type, len(frame)))
finally:
try:
self.__socket.close()
except:
pass # ignore errors when attempting to close socket
self.__socket = None
self.__current_host_and_port = None
except ConnectionClosedException:
if self.__running:
log.error("Lost connection")
# Notify listeners
for listener in self.__listeners:
if hasattr(listener, 'on_disconnected'):
listener.on_disconnected()
# Clear out any half-received messages after losing connection
self.__recvbuf = ''
continue
else:
break
except:
log.exception("An unhandled exception was encountered in the stomp receiver loop")
finally:
self.__receiver_thread_exit_condition.acquire()
self.__receiver_thread_exited = True
self.__receiver_thread_exit_condition.notifyAll()
self.__receiver_thread_exit_condition.release()
def __read(self):
"""
Read the next frame(s) from the socket.
"""
fastbuf = StringIO()
while self.__running:
try:
c = self.__socket.recv(1024)
except:
c = ''
if len(c) == 0:
raise ConnectionClosedException
fastbuf.write(c)
if '\x00' in c:
break
self.__recvbuf += fastbuf.getvalue()
fastbuf.close()
result = []
if len(self.__recvbuf) > 0 and self.__running:
while True:
pos = self.__recvbuf.find('\x00')
if pos >= 0:
frame = self.__recvbuf[0:pos]
preamble_end = frame.find('\n\n')
if preamble_end >= 0:
content_length_match = Connection.__content_length_re.search(frame[0:preamble_end])
if content_length_match:
content_length = int(content_length_match.group('value'))
content_offset = preamble_end + 2
frame_size = content_offset + content_length
if frame_size > len(frame):
# Frame contains NUL bytes, need to
# read more
if frame_size < len(self.__recvbuf):
pos = frame_size
frame = self.__recvbuf[0:pos]
else:
# Haven't read enough data yet,
# exit loop and wait for more to
# arrive
break
result.append(frame)
self.__recvbuf = self.__recvbuf[pos+1:]
else:
break
return result
def __transform(self, body, transType):
"""
Perform body transformation. Currently, the only supported transformation is
'jms-map-xml', which converts a map into python dictionary. This can be extended
to support other transformation types.
The body has the following format:
<map>
<entry>
<string>name</string>
<string>Dejan</string>
</entry>
<entry>
<string>city</string>
<string>Belgrade</string>
</entry>
</map>
(see http://docs.codehaus.org/display/STOMP/Stomp+v1.1+Ideas)
"""
if transType != 'jms-map-xml':
return body
try:
entries = {}
doc = xml.dom.minidom.parseString(body)
rootElem = doc.documentElement
for entryElem in rootElem.getElementsByTagName("entry"):
pair = []
for node in entryElem.childNodes:
if not isinstance(node, xml.dom.minidom.Element): continue
pair.append(node.firstChild.nodeValue)
assert len(pair) == 2
entries[pair[0]] = pair[1]
return entries
except Exception, ex:
# unable to parse message. return original
return body
def __parse_frame(self, frame):
"""
Parse a STOMP frame into a (frame_type, headers, body) tuple,
where frame_type is the frame type as a string (e.g. MESSAGE),
headers is a map containing all header key/value pairs, and
body is a string containing the frame's payload.
"""
preamble_end = frame.find('\n\n')
preamble = frame[0:preamble_end]
preamble_lines = preamble.split('\n')
body = frame[preamble_end+2:]
# Skip any leading newlines
first_line = 0
while first_line < len(preamble_lines) and len(preamble_lines[first_line]) == 0:
first_line += 1
# Extract frame type
frame_type = preamble_lines[first_line]
# Put headers into a key/value map
headers = {}
for header_line in preamble_lines[first_line+1:]:
header_match = Connection.__header_line_re.match(header_line)
if header_match:
headers[header_match.group('key')] = header_match.group('value')
if 'transformation' in headers:
body = self.__transform(body, headers['transformation'])
return (frame_type, headers, body)
def __attempt_connection(self):
"""
Try connecting to the (host, port) tuples specified at construction time.
"""
sleep_exp = 1
while self.__running and self.__socket is None:
for host_and_port in self.__host_and_ports:
try:
log.debug("Attempting connection to host %s, port %s" % host_and_port)
self.__socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.__socket.connect(host_and_port)
self.__current_host_and_port = host_and_port
log.info("Established connection to host %s, port %s" % host_and_port)
break
except socket.error:
self.__socket = None
if type(sys.exc_info()[1]) == types.TupleType:
exc = sys.exc_info()[1][1]
else:
exc = sys.exc_info()[1]
log.warning("Could not connect to host %s, port %s: %s" % (host_and_port[0], host_and_port[1], exc))
if self.__socket is None:
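                # Exponential back-off with jitter. With the default settings
                # (initial=0.1, increase=0.5, max=60.0, jitter=0.1) the base
                # delay is 0.1s, 0.15s, 0.225s, ... (growing by 50% per
                # attempt), capped at 60s, with up to 10% random extra added.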
sleep_duration = (min(self.__reconnect_sleep_max,
((self.__reconnect_sleep_initial / (1.0 + self.__reconnect_sleep_increase))
* math.pow(1.0 + self.__reconnect_sleep_increase, sleep_exp)))
* (1.0 + random.random() * self.__reconnect_sleep_jitter))
sleep_end = time.time() + sleep_duration
log.debug("Sleeping for %.1f seconds before attempting reconnect" % sleep_duration)
while self.__running and time.time() < sleep_end:
time.sleep(0.2)
sleep_exp += 1
#
# command line testing
#
if __name__ == '__main__':
# If the readline module is available, make command input easier
try:
import readline
def stomp_completer(text, state):
commands = [ 'subscribe', 'unsubscribe',
'send', 'ack',
'begin', 'abort', 'commit',
'connect', 'disconnect'
]
for command in commands[state:]:
if command.startswith(text):
return "%s " % command
return None
readline.parse_and_bind("tab: complete")
readline.set_completer(stomp_completer)
readline.set_completer_delims("")
except ImportError:
pass # ignore unavailable readline module
class StompTester(object):
def __init__(self, host='localhost', port=61613, user='', passcode=''):
self.c = Connection([(host, port)], user, passcode)
self.c.add_listener(self)
self.c.start()
def __print_async(self, frame_type, headers, body):
print "\r \r",
print frame_type
for header_key in headers.keys():
print '%s: %s' % (header_key, headers[header_key])
print
print body
print '> ',
sys.stdout.flush()
def on_connecting(self, host_and_port):
self.c.connect(wait=True)
def on_disconnected(self):
print "lost connection"
def on_message(self, headers, body):
self.__print_async("MESSAGE", headers, body)
def on_error(self, headers, body):
self.__print_async("ERROR", headers, body)
def on_receipt(self, headers, body):
self.__print_async("RECEIPT", headers, body)
def on_connected(self, headers, body):
self.__print_async("CONNECTED", headers, body)
def ack(self, args):
if len(args) < 3:
self.c.ack(message_id=args[1])
else:
self.c.ack(message_id=args[1], transaction=args[2])
def abort(self, args):
self.c.abort(transaction=args[1])
def begin(self, args):
print 'transaction id: %s' % self.c.begin()
def commit(self, args):
if len(args) < 2:
print 'expecting: commit <transid>'
else:
print 'committing %s' % args[1]
self.c.commit(transaction=args[1])
def disconnect(self, args):
try:
self.c.disconnect()
except NotConnectedException:
pass # ignore if no longer connected
def send(self, args):
if len(args) < 3:
print 'expecting: send <destination> <message>'
else:
self.c.send(destination=args[1], message=' '.join(args[2:]))
def sendtrans(self, args):
if len(args) < 3:
print 'expecting: sendtrans <destination> <transid> <message>'
else:
self.c.send(destination=args[1], message="%s\n" % ' '.join(args[3:]), transaction=args[2])
def subscribe(self, args):
if len(args) < 2:
print 'expecting: subscribe <destination> [ack]'
elif len(args) > 2:
print 'subscribing to "%s" with acknowledge set to "%s"' % (args[1], args[2])
self.c.subscribe(destination=args[1], ack=args[2])
else:
print 'subscribing to "%s" with auto acknowledge' % args[1]
self.c.subscribe(destination=args[1], ack='auto')
def unsubscribe(self, args):
if len(args) < 2:
print 'expecting: unsubscribe <destination>'
else:
print 'unsubscribing from "%s"' % args[1]
self.c.unsubscribe(destination=args[1])
if len(sys.argv) > 5:
print 'USAGE: stomp.py [host] [port] [user] [passcode]'
sys.exit(1)
if len(sys.argv) >= 2:
host = sys.argv[1]
else:
host = "localhost"
if len(sys.argv) >= 3:
port = int(sys.argv[2])
else:
port = 61613
if len(sys.argv) >= 5:
user = sys.argv[3]
passcode = sys.argv[4]
else:
user = None
passcode = None
st = StompTester(host, port, user, passcode)
try:
while True:
line = raw_input("\r> ")
if not line or line.lstrip().rstrip() == '':
continue
elif 'quit' in line or 'disconnect' in line:
break
split = line.split()
command = split[0]
if not command.startswith("on_") and hasattr(st, command):
getattr(st, command)(split)
else:
print 'unrecognized command'
finally:
st.disconnect(None)
| jjgod/confbot | stomp.py | Python | gpl-2.0 | 34,366 |
import sys
class gene:
# Initialize the gene with the correct types from a list of fields
def __init__(self, fields):
# Read in fields
self.fields = fields # Store for printing original gene later
self.start = int(fields[0])
self.end = int(fields[1])
self.score = float(fields[2])
        # Flipped genes have a start position greater than their end position; swap so start <= end
if self.start > self.end:
tmp = self.end
self.end = self.start
self.start = tmp
# String representation of a gene
def __str__(self):
return " ".join(self.fields)
def read_gene_list(fname):
"""
Return a list of genes from the file sorted by end position
"""
with open(fname) as f:
# None is because we want the gene list to start at index 1
return tuple([None]+sorted([gene(line.rstrip().split()) for line in f],
key=lambda g: g.end))
def prev_disjoint_gene(j, genes):
"""
Given the index j of a gene in genes, find the index i < j
of a gene that does not overlap with gene j
"""
for i in range(j, -1, -1):
if i == 0:
return 0
if genes[i].end < genes[j].start:
return i
def chosen_genes(best_gene_score, genes):
chosen = []
j = len(best_gene_score)-1
while j > 0:
if best_gene_score[j] != best_gene_score[j-1]:
chosen.append(genes[j])
j = prev_disjoint_gene(j, genes)
else:
j = j-1
return chosen
def main():
"""
Find an optimal set of non-overlapping genes from a text file of genes.
    Each line representing gene i in the text file should be: si ei ri ci
    where si is the start position of the gene, ei is the end position of
    the gene, ri is the score of the gene, and ci is the chromosome
    identifier written to the output file.
"""
    if len(sys.argv) != 3:
        print "Usage: python choosegenes.py genes.txt outfile.txt"
        sys.exit(1)
genes_file = sys.argv[1]
outfile = sys.argv[2]
genes = read_gene_list(genes_file)
n = len(genes)-1
# Computing score iteratively, initialize array
best_gene_score = [0]*(n+1)
# Cached version of prev_disjoint_gene
p = {j: prev_disjoint_gene(j, genes) for j in range(1, n+1)}
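    # Weighted interval scheduling: genes are sorted by end position, p[j] is
    # the rightmost gene that does not overlap gene j, and
    #   best_gene_score[j] = max(genes[j].score + best_gene_score[p[j]],
    #                            best_gene_score[j-1])
    # e.g. for genes (1,3, score 2), (2,5, score 4), (4,7, score 3) the best
    # total is 5 (first and third genes), not 4 (second gene alone).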
for j in range(1, n+1):
# Compute the score for each case
score_if_chosen = genes[j].score + best_gene_score[p[j]]
score_if_ignored = best_gene_score[j-1]
# Choose the case with the best score
best_gene_score[j] = max(score_if_chosen, score_if_ignored)
with open(outfile, "w") as f:
sp = 0
for gene in chosen_genes(best_gene_score, genes):
print "Include:", gene
f.write("{0}\t{1}\t{2}\n".format("chr"+gene.fields[3], gene.start, gene.end))
sp += gene.score
print "Sum Probability: {0}".format(best_gene_score[n])
print "SP check: {0}".format(sp)
if __name__ == "__main__":
main()
| COMBINE-lab/matryoshka_work | coredomains-import/python-src/choosegenes.py | Python | gpl-3.0 | 2,956 |
"""
Test Utility Helper
Function to help unit tests
"""
import os
import yaml
from rest_framework import status
from ozpcenter import model_access as generic_model_access
TEST_BASE_PATH = os.path.realpath(os.path.join(os.path.dirname(__file__), '..', '..', 'ozpcenter', 'scripts'))
TEST_DATA_PATH = os.path.join(TEST_BASE_PATH, 'test_data')
def patch_environ(new_environ=None, clear_orig=False):
"""
https://stackoverflow.com/questions/2059482/python-temporarily-modify-the-current-processs-environment/34333710#34333710
"""
if not new_environ:
new_environ = dict()
def actual_decorator(func):
from functools import wraps
@wraps(func)
def wrapper(*args, **kwargs):
original_env = dict(os.environ)
if clear_orig:
os.environ.clear()
os.environ.update(new_environ)
try:
result = func(*args, **kwargs)
except:
raise
finally: # restore even if Exception was raised
os.environ = original_env
return result
return wrapper
return actual_decorator
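# Example usage (illustrative; the variable name is made up):
#
#   @patch_environ({'EXAMPLE_FLAG': 'true'})
#   def test_reads_flag(self):
#       self.assertEqual(os.environ['EXAMPLE_FLAG'], 'true')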
class ExceptionUnitTestHelper(object):
"""
This class returns dictionaries of exceptions to compare with response data
"""
# HTTP_400
@staticmethod
def validation_error(detailmsg=None):
detail = detailmsg or 'Invalid input.'
return {'detail': detail,
'error': True,
'error_code': 'validation_error'}
# HTTP_400
@staticmethod
def parse_error(detailmsg=None):
detail = detailmsg or 'Malformed request.'
return {'detail': detail,
'error': True,
'error_code': 'parse_error'}
# HTTP_400
@staticmethod
def request_error(detailmsg=None):
detail = detailmsg or 'Invalid input.'
return {'detail': detail,
'error': True,
'error_code': 'request'}
# HTTP_401
@staticmethod
def authorization_failure(detailmsg=None):
detail = detailmsg or 'Not authorized to view.'
# 'Incorrect authentication credentials'
return {'detail': detail,
'error': True,
'error_code': 'authorization_failed'}
# HTTP_401
@staticmethod
def not_authenticated(detailmsg=None):
detail = detailmsg or 'Authentication credentials were not provided.'
return {'detail': detail,
'error': True,
'error_code': 'not_authenticated'}
# 'error_code': 'authorization_failure'}
# HTTP_403
@staticmethod
def permission_denied(detailmsg=None):
detail = detailmsg or 'You do not have permission to perform this action.'
return {'detail': detail,
'error': True,
'error_code': 'permission_denied'}
# HTTP_404
@staticmethod
def not_found(detailmsg=None):
detail = detailmsg or 'Not found.'
return {'detail': detail,
'error': True,
'error_code': 'not_found'}
# HTTP_405
@staticmethod
def method_not_allowed(detailmsg=None):
detail = detailmsg or 'Method < > not allowed.'
return {'detail': detail,
'error': True,
'error_code': 'method_not_allowed'}
# HTTP_406
@staticmethod
def not_acceptable(detailmsg=None):
detail = detailmsg or 'Could not satisfy the request Accept header.'
return {'detail': detail,
'error': True,
'error_code': 'not_acceptable'}
# HTTP_416
@staticmethod
def unsupported_media_type(detailmsg=None):
detail = detailmsg or 'Unsupported media type < > in request.'
return {'detail': detail,
'error': True,
'error_code': 'unsupported_media_type'}
# HTTP_429
@staticmethod
def too_many_requests(detailmsg=None):
detail = detailmsg or 'Request was throttled.'
return {'detail': detail,
'error': True,
'error_code': 'throttled'}
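    # Typical use (illustrative): compare a response body against the
    # expected error dictionary, e.g.
    #   self.assertEqual(response.data, ExceptionUnitTestHelper.not_found())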
class APITestHelper(object):
@staticmethod
def _delete_bookmark_folder(test_case_instance, username, folder_id, status_code=201):
url = '/api/self/library/{0!s}/delete_folder/'.format(folder_id)
user = generic_model_access.get_profile(username).user
test_case_instance.client.force_authenticate(user=user)
response = test_case_instance.client.delete(url, format='json')
if response:
if status_code == 204:
test_case_instance.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
elif status_code == 400:
test_case_instance.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
else:
raise Exception('status code is not supported')
return response
@staticmethod
def _import_bookmarks(test_case_instance, username, bookmark_notification_id, status_code=201):
url = '/api/self/library/import_bookmarks/'
data = {'bookmark_notification_id': bookmark_notification_id}
user = generic_model_access.get_profile(username).user
test_case_instance.client.force_authenticate(user=user)
response = test_case_instance.client.post(url, data, format='json')
if response:
if status_code == 201:
test_case_instance.assertEqual(response.status_code, status.HTTP_201_CREATED)
elif status_code == 400:
test_case_instance.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
else:
raise Exception('status code is not supported')
return response
@staticmethod
def create_bookmark(test_case_instance, username, listing_id, folder_name=None, status_code=200):
"""
Create Bookmark Helper Function
Args:
test_case_instance
username
listing_id
folder_name(optional)
status_code
Returns:
response
"""
url = '/api/self/library/'
data = {'listing': {'id': listing_id}, 'folder': folder_name}
user = generic_model_access.get_profile(username).user
test_case_instance.client.force_authenticate(user=user)
response = test_case_instance.client.post(url, data, format='json')
if response:
if status_code == 201:
test_case_instance.assertEqual(response.status_code, status.HTTP_201_CREATED)
elif status_code == 400:
test_case_instance.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
else:
raise Exception('status code is not supported')
return response
@staticmethod
def edit_listing(test_case_instance, id, input_data, default_user='bigbrother'):
"""
Helper Method to modify a listing
Args:
test_case_instance
id
input_data
default_user(optional)
Return:
response
"""
assert id is not None, "Id can not be None"
url = '/api/listing/{0!s}/'.format(id)
user = generic_model_access.get_profile(default_user).user
test_case_instance.client.force_authenticate(user=user)
data = test_case_instance.client.get(url, format='json').data
for current_key in input_data:
if current_key in data:
data[current_key] = input_data[current_key]
# PUT the Modification
response = test_case_instance.client.put(url, data, format='json')
test_case_instance.assertEqual(response.status_code, status.HTTP_200_OK)
return response
@staticmethod
def request(test_case_instance, url, method, data=None, username='bigbrother', status_code=200, validator=None, format_str=None):
user = generic_model_access.get_profile(username).user
test_case_instance.client.force_authenticate(user=user)
format_str = format_str or 'json'
response = None
if method.upper() == 'GET':
response = test_case_instance.client.get(url, format=format_str)
elif method.upper() == 'POST':
response = test_case_instance.client.post(url, data, format=format_str)
elif method.upper() == 'PUT':
response = test_case_instance.client.put(url, data, format=format_str)
elif method.upper() == 'DELETE':
response = test_case_instance.client.delete(url, data, format=format_str)
elif method.upper() == 'PATCH':
response = test_case_instance.client.patch(url, format=format_str)
else:
raise Exception('method is not supported')
if response:
if status_code == 200:
test_case_instance.assertEqual(response.status_code, status.HTTP_200_OK)
elif status_code == 201:
test_case_instance.assertEqual(response.status_code, status.HTTP_201_CREATED)
elif status_code == 204:
test_case_instance.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)
elif status_code == 400:
test_case_instance.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
elif status_code == 403:
test_case_instance.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
elif status_code == 404:
test_case_instance.assertEqual(response.status_code, status.HTTP_404_NOT_FOUND)
elif status_code == 405:
test_case_instance.assertEqual(response.status_code, status.HTTP_405_METHOD_NOT_ALLOWED)
elif status_code == 501:
test_case_instance.assertEqual(response.status_code, status.HTTP_501_NOT_IMPLEMENTED)
else:
raise Exception('status code is not supported')
try:
if validator:
validator(response.data, test_case_instance=test_case_instance)
except Exception as err:
# print(response.data)
raise err
return response
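    # Example (illustrative): exercise an endpoint as a given user and check
    # the payload keys with one of the validators defined below, e.g.
    #   APITestHelper.request(self, '/api/listing/', 'GET',
    #                         username='bigbrother', status_code=200,
    #                         validator=validate_listing_map_keys_list)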
def validate_listing_map_keys_list(response_data, test_case_instance=None):
for listing_map in response_data:
test_case_instance.assertEqual(validate_listing_map_keys(listing_map), [])
def validate_listing_search_keys_list(response_data, test_case_instance=None):
for listing_map in response_data['results']:
test_case_instance.assertEqual(validate_listing_map_keys(listing_map), [])
def validate_listing_map_keys(listing_map, test_case_instance=None):
"""
Used to validate the keys of a listing
"""
if not isinstance(listing_map, dict):
raise Exception('listing_map is not type dict, it is {0!s}'.format(type(listing_map)))
listing_map_default_keys = ['id', 'is_bookmarked', 'screenshots',
'doc_urls', 'owners', 'categories', 'tags', 'contacts', 'intents',
'small_icon', 'large_icon', 'banner_icon', 'large_banner_icon',
'agency', 'last_activity', 'current_rejection', 'listing_type',
'title', 'approved_date', 'edited_date', 'featured_date', 'description', 'launch_url',
'version_name', 'unique_name', 'what_is_new', 'description_short',
'usage_requirements', 'system_requirements', 'approval_status', 'is_enabled', 'is_featured',
'is_deleted', 'avg_rate', 'total_votes', 'total_rate5', 'total_rate4',
'total_rate3', 'total_rate2', 'total_rate1', 'total_reviews',
'iframe_compatible', 'security_marking', 'is_private',
'required_listings']
listing_keys = [k for k, v in listing_map.items()]
invalid_key_list = []
for current_key in listing_map_default_keys:
if current_key not in listing_keys:
invalid_key_list.append(current_key)
return invalid_key_list
| aml-development/ozp-backend | tests/ozpcenter/helper.py | Python | apache-2.0 | 12,254 |
from django.conf.urls.defaults import patterns, url
from feeds import LatestPostFeed, TagFeed, CatFeed
from feeds import LatestPostFeedAtom, TagFeedAtom, CatFeedAtom
from sitemaps import PostSitemap
from commons.sitemaps import StaticSitemap
from commons.urls import live_edit_url
post_r = '(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/(?P<slug>[-\w]+)'
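# e.g. post_r matches the path fragment "2012/06/03/my-post-slug"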
static_urlpatterns = patterns(
'blog.views',
url(r'^$', 'home',
name='blog'),
)
urlpatterns = patterns(
'blog.views',
url(r'^(?P<year>\d{4})/$',
'post_list_by_archives',
name='archives-year'),
url(r'^(?P<year>\d{4})/(?P<month>\d{2})/$',
'post_list_by_archives',
name='archives-month'),
url(r'^(?P<year>\d{4})/(?P<month>\d{2})/(?P<day>\d{2})/$',
'post_list_by_archives',
name='archives-day'),
url(r'^%s/$' % post_r,
'display_post',
name='post'),
url(r'^category/(?P<slug>[-\w]+)/$',
'post_list_by_categories',
name='category'),
url(r'^tag/(?P<slug>[-\w]+)/$',
'post_list_by_tags',
name='tag'),
)
urlpatterns += patterns(
'blog.views.ajax',
url(r"^get/%s/comment/list/$" % post_r,
'comment_list',
name='post-comment-list'),
url(r"^get/%s/comment/form/$" % post_r,
'comment_form',
name='post-comment-form'),
url(r"^get/%s/comment/count/$" % post_r,
'comment_count',
name='post-comment-count'),
)
urlpatterns += live_edit_url('blog', 'post', 'title')
urlpatterns += live_edit_url('blog', 'post', 'is_public')
urlpatterns += live_edit_url('blog', 'post', 'parsed_content')
urlpatterns += live_edit_url('blog', 'post', 'category')
urlpatterns += live_edit_url('blog', 'comment', 'comment')
feeds_urlpatterns = patterns(
'',
url(r'^feed/latest/rss/$',
LatestPostFeed(),
name="rss-blog-latest"),
url(r'^tag/(?P<slug>[-\w]+)/rss/$',
TagFeed(),
name="rss-blog-tag-latest"),
url(r'^category/(?P<slug>[-\w]+)/rss/$',
CatFeed(),
name="rss-blog-category-latest"),
url(r'^feed/latest/atom/$',
LatestPostFeedAtom(),
name="atom-blog-latest"),
url(r'^tag/(?P<slug>[-\w]+)/atom/$',
TagFeedAtom(),
name="atom-blog-tag-latest"),
url(r'^category/(?P<slug>[-\w]+)/atom/$',
CatFeedAtom(),
name="atom-blog-category-latest"),
)
sitemaps = {
'blog_static': StaticSitemap(static_urlpatterns, changefreq='daily'),
'blog_post': PostSitemap,
}
urlpatterns += static_urlpatterns + feeds_urlpatterns
| Nivl/www.melvin.re | nivls_website/blog/urls.py | Python | gpl-3.0 | 2,578 |
"""API of mode Grueneisen parameter calculation."""
# Copyright (C) 2015 Atsushi Togo
# All rights reserved.
#
# This file is part of phonopy.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# * Neither the name of the phonopy project nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
from phonopy.gruneisen.band_structure import GruneisenBandStructure
from phonopy.gruneisen.mesh import GruneisenMesh
class PhonopyGruneisen:
"""Class to calculate mode Grueneisen parameters."""
def __init__(self, phonon, phonon_plus, phonon_minus, delta_strain=None):
"""Init method.
Parameters
----------
phonon, phonon_plus, phonon_minus : Phonopy
            Phonopy instances of the same crystal with different volumes,
V_0, V_0 + dV, V_0 - dV.
delta_strain : float, optional
Default is None, which gives dV / V_0.
"""
self._phonon = phonon
self._phonon_plus = phonon_plus
self._phonon_minus = phonon_minus
self._delta_strain = delta_strain
self._mesh = None
self._band_structure = None
def get_phonon(self):
"""Return Phonopy class instance at dV=0."""
return self._phonon
def set_mesh(
self,
mesh,
shift=None,
is_time_reversal=True,
is_gamma_center=False,
is_mesh_symmetry=True,
):
"""Set sampling mesh."""
for phonon in (self._phonon, self._phonon_plus, self._phonon_minus):
if phonon.dynamical_matrix is None:
print("Warning: Dynamical matrix has not yet built.")
return False
symmetry = phonon.primitive_symmetry
rotations = symmetry.pointgroup_operations
self._mesh = GruneisenMesh(
self._phonon.dynamical_matrix,
self._phonon_plus.dynamical_matrix,
self._phonon_minus.dynamical_matrix,
mesh,
delta_strain=self._delta_strain,
shift=shift,
is_time_reversal=is_time_reversal,
is_gamma_center=is_gamma_center,
is_mesh_symmetry=is_mesh_symmetry,
rotations=rotations,
factor=self._phonon.unit_conversion_factor,
)
return True
def get_mesh(self):
"""Return mode Grueneisen parameters calculated on sampling mesh."""
if self._mesh is None:
return None
else:
return (
self._mesh.get_qpoints(),
self._mesh.get_weights(),
self._mesh.get_frequencies(),
self._mesh.get_eigenvectors(),
self._mesh.get_gruneisen(),
)
def write_yaml_mesh(self):
"""Write mesh sampling calculation results to file in yaml."""
self._mesh.write_yaml()
def write_hdf5_mesh(self):
"""Write mesh sampling calculation results to file in hdf5."""
self._mesh.write_hdf5()
def plot_mesh(
self, cutoff_frequency=None, color_scheme=None, marker="o", markersize=None
):
"""Return pyplot of mesh sampling calculation results."""
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.xaxis.set_ticks_position("both")
ax.yaxis.set_ticks_position("both")
ax.xaxis.set_tick_params(which="both", direction="in")
ax.yaxis.set_tick_params(which="both", direction="in")
self._mesh.plot(
plt,
cutoff_frequency=cutoff_frequency,
color_scheme=color_scheme,
marker=marker,
markersize=markersize,
)
return plt
def set_band_structure(self, bands):
"""Set band structure paths."""
self._band_structure = GruneisenBandStructure(
bands,
self._phonon.dynamical_matrix,
self._phonon_plus.dynamical_matrix,
self._phonon_minus.dynamical_matrix,
delta_strain=self._delta_strain,
factor=self._phonon.unit_conversion_factor,
)
def get_band_structure(self):
"""Return band structure calculation results."""
band = self._band_structure
return (
band.get_qpoints(),
band.get_distances(),
band.get_frequencies(),
band.get_eigenvectors(),
band.get_gruneisen(),
)
def write_yaml_band_structure(self):
"""Write band structure calculation results to file in yaml."""
self._band_structure.write_yaml()
def plot_band_structure(self, epsilon=1e-4, color_scheme=None):
"""Return pyplot of band structure calculation results."""
import matplotlib.pyplot as plt
fig, axarr = plt.subplots(2, 1)
for ax in axarr:
ax.xaxis.set_ticks_position("both")
ax.yaxis.set_ticks_position("both")
ax.xaxis.set_tick_params(which="both", direction="in")
ax.yaxis.set_tick_params(which="both", direction="in")
self._band_structure.plot(axarr, epsilon=epsilon, color_scheme=color_scheme)
return plt
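# Illustrative usage sketch (editor's addition, not part of phonopy); the
# three Phonopy instances below are placeholders for calculations of the same
# crystal at V_0, V_0 + dV and V_0 - dV:
#
#   gruneisen = PhonopyGruneisen(phonon_0, phonon_plus, phonon_minus)
#   if gruneisen.set_mesh([20, 20, 20]):
#       gruneisen.write_yaml_mesh()
#       gruneisen.plot_mesh().show()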
| atztogo/phonopy | phonopy/api_gruneisen.py | Python | bsd-3-clause | 6,461 |
# uncomment the import statements for debugging in PyCharm, VS Code or other IDEs.
# import demistomock as demisto
# from CommonServerPython import * # noqa # pylint: disable=unused-wildcard-import
# from CommonServerUserPython import * # noqa
TRANSLATE_OUTPUT_PREFIX = 'Phrase'
# Disable insecure warnings
requests.packages.urllib3.disable_warnings() # pylint: disable=no-member
class Client(BaseClient):
def __init__(self, api_key: str, base_url: str, proxy: bool, verify: bool):
super().__init__(base_url=base_url, proxy=proxy, verify=verify)
self.api_key = api_key
if self.api_key:
self._headers = {'X-Funtranslations-Api-Secret': self.api_key}
def translate(self, text: str):
return self._http_request(method='POST', url_suffix='yoda', data={'text': text}, resp_type='json',
ok_codes=(200,))
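    # Based on the fields read elsewhere in this integration
    # (success.total and contents.translated), a successful response is
    # expected to look roughly like (illustrative values):
    #   {'success': {'total': 1},
    #    'contents': {'translated': 'The high ground, I have!'}}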
def test_module(client: Client) -> str:
"""
Tests API connectivity and authentication'
Returning 'ok' indicates that connection to the service is successful.
Raises exceptions if something goes wrong.
"""
try:
response = client.translate('I have the high ground!')
success = demisto.get(response, 'success.total') # Safe access to response['success']['total']
if success != 1:
return f'Unexpected result from the service: success={success} (expected success=1)'
return 'ok'
except Exception as e:
exception_text = str(e).lower()
if 'forbidden' in exception_text or 'authorization' in exception_text:
return 'Authorization Error: make sure API Key is correctly set'
else:
raise e
def translate_command(client: Client, text: str) -> CommandResults:
if not text:
raise DemistoException('the text argument cannot be empty.')
response = client.translate(text)
translated = demisto.get(response, 'contents.translated')
if translated is None:
raise DemistoException('Translation failed: the response from server did not include `translated`.',
res=response)
output = {'Original': text, 'Translation': translated}
return CommandResults(outputs_prefix='YodaSpeak',
outputs_key_field=f'{TRANSLATE_OUTPUT_PREFIX}.Original',
outputs={TRANSLATE_OUTPUT_PREFIX: output},
raw_response=response,
readable_output=tableToMarkdown(name='Yoda Says...', t=output))
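# e.g. running the yoda-speak-translate command with text="Hello, world" calls
# translate_command(client, text='Hello, world') and stores the result under
# the YodaSpeak.Phrase context path.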
def main() -> None:
params = demisto.params()
args = demisto.args()
command = demisto.command()
api_key = params.get('apikey', {}).get('password')
base_url = params.get('url', '')
verify = not params.get('insecure', False)
proxy = params.get('proxy', False)
demisto.debug(f'Command being called is {command}')
try:
client = Client(api_key=api_key, base_url=base_url, verify=verify, proxy=proxy)
if command == 'test-module':
# This is the call made when clicking the integration Test button.
return_results(test_module(client))
elif command == 'yoda-speak-translate':
return_results(translate_command(client, **args))
else:
raise NotImplementedError(f"command {command} is not implemented.")
# Log exceptions and return errors
except Exception as e:
demisto.error(traceback.format_exc()) # print the traceback
return_error("\n".join(("Failed to execute {command} command.",
"Error:",
str(e))))
if __name__ in ('__main__', '__builtin__', 'builtins'):
main()
| demisto/content | docs/tutorial-integration/YodaSpeak/Integrations/YodaSpeak/YodaSpeak.py | Python | mit | 3,719 |
# -*- coding: utf-8 -*-
"""QGIS Unit tests for QgsServer GetFeatureInfo WMS.
From build dir, run: ctest -R PyQgsServerWMSGetFeatureInfo -V
.. note:: This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
"""
__author__ = 'Alessandro Pasotti'
__date__ = '11/03/2018'
__copyright__ = 'Copyright 2018, The QGIS Project'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
import os
# Needed on Qt 5 so that the serialization of XML is consistent among all executions
os.environ['QT_HASH_SEED'] = '1'
import re
import urllib.request
import urllib.parse
import urllib.error
from qgis.testing import unittest
from qgis.PyQt.QtCore import QSize
import osgeo.gdal # NOQA
from test_qgsserver_wms import TestQgsServerWMSTestBase
from qgis.core import QgsProject
class TestQgsServerWMSGetFeatureInfo(TestQgsServerWMSTestBase):
"""QGIS Server WMS Tests for GetFeatureInfo request"""
def testGetFeatureInfo(self):
# Test getfeatureinfo response xml
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'info_format=text%2Fxml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320',
'wms_getfeatureinfo-text-xml')
self.wms_request_compare('GetFeatureInfo',
'&layers=&styles=&' +
'info_format=text%2Fxml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320',
'wms_getfeatureinfo-text-xml')
# Test getfeatureinfo response html
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'info_format=text%2Fhtml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320',
'wms_getfeatureinfo-text-html')
# Test getfeatureinfo response html with geometry
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'info_format=text%2Fhtml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320&' +
'with_geometry=true',
'wms_getfeatureinfo-text-html-geometry')
# Test getfeatureinfo response html with maptip
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'info_format=text%2Fhtml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320&' +
'with_maptip=true',
'wms_getfeatureinfo-text-html-maptip')
# Test getfeatureinfo response text
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320&' +
'info_format=text/plain',
'wms_getfeatureinfo-text-plain')
# Test getfeatureinfo default info_format
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320',
'wms_getfeatureinfo-text-plain')
# Test getfeatureinfo invalid info_format
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&styles=&' +
'transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer%20%C3%A8%C3%A9&X=190&Y=320&' +
'info_format=InvalidFormat',
'wms_getfeatureinfo-invalid-format')
# Test feature info request with filter geometry
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A4326&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER_GEOM=POLYGON((8.2035381 44.901459,8.2035562 44.901459,8.2035562 44.901418,8.2035381 44.901418,8.2035381 44.901459))',
'wms_getfeatureinfo_geometry_filter')
# Test feature info request with filter geometry in non-layer CRS
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER_GEOM=POLYGON ((913213.6839952 5606021.5399693, 913215.6988780 5606021.5399693, 913215.6988780 5606015.09643322, 913213.6839952 5606015.0964332, 913213.6839952 5606021.5399693))',
'wms_getfeatureinfo_geometry_filter_3857')
# Test feature info request with invalid query_layer
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=InvalidLayer&' +
'FEATURE_COUNT=10&FILTER_GEOM=POLYGON((8.2035381 44.901459,8.2035562 44.901459,8.2035562 44.901418,8.2035381 44.901418,8.2035381 44.901459))',
'wms_getfeatureinfo_invalid_query_layers')
# Test feature info request with '+' instead of ' ' in layers and
# query_layers parameters
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer+%C3%A8%C3%A9&styles=&' +
'info_format=text%2Fxml&transparent=true&' +
'width=600&height=400&srs=EPSG%3A3857&bbox=913190.6389747962%2C' +
'5606005.488876367%2C913235.426296057%2C5606035.347090538&' +
'query_layers=testlayer+%C3%A8%C3%A9&X=190&Y=320',
'wms_getfeatureinfo-text-xml')
# layer1 is a clone of layer0 but with a scale visibility. Thus,
# GetFeatureInfo response contains only a feature for layer0 and layer1
# is ignored for the required bbox. Without the scale visibility option,
# the feature for layer1 would have been in the response too.
mypath = self.testdata_path + "test_project_scalevisibility.qgs"
self.wms_request_compare('GetFeatureInfo',
'&layers=layer0,layer1&styles=&' +
'VERSION=1.1.0&' +
'info_format=text%2Fxml&' +
'width=500&height=500&srs=EPSG%3A4326' +
'&bbox=8.1976,44.8998,8.2100,44.9027&' +
'query_layers=layer0,layer1&X=235&Y=243',
'wms_getfeatureinfo_notvisible',
'test_project_scalevisibility.qgs')
# Test GetFeatureInfo resolves "value map" widget values but also Server usage of qgs and gpkg file
mypath = self.testdata_path + "test_project_values.qgz"
self.wms_request_compare('GetFeatureInfo',
'&layers=layer0&styles=&' +
'VERSION=1.3.0&' +
'info_format=text%2Fxml&' +
'width=926&height=787&srs=EPSG%3A4326' +
'&bbox=912217,5605059,914099,5606652' +
'&CRS=EPSG:3857' +
'&FEATURE_COUNT=10' +
'&QUERY_LAYERS=layer0&I=487&J=308',
'wms_getfeatureinfo-values0-text-xml',
'test_project_values.qgz')
def testGetFeatureInfoValueRelation(self):
"""Test GetFeatureInfo resolves "value relation" widget values. regression 18518"""
mypath = self.testdata_path + "test_project_values.qgz"
self.wms_request_compare('GetFeatureInfo',
'&layers=layer1&styles=&' +
'VERSION=1.3.0&' +
'info_format=text%2Fxml&' +
'width=926&height=787&srs=EPSG%3A4326' +
'&bbox=912217,5605059,914099,5606652' +
'&CRS=EPSG:3857' +
'&FEATURE_COUNT=10' +
'&WITH_GEOMETRY=True' +
'&QUERY_LAYERS=layer1&I=487&J=308',
'wms_getfeatureinfo-values1-text-xml',
'test_project_values.qgz')
# TODO make GetFeatureInfo show the dictionary values and enable test
@unittest.expectedFailure
def testGetFeatureInfoValueRelationArray(self):
"""Test GetFeatureInfo on "value relation" widget with array field (multiple selections)"""
mypath = self.testdata_path + "test_project_values.qgz"
self.wms_request_compare('GetFeatureInfo',
'&layers=layer3&styles=&' +
'VERSION=1.3.0&' +
'info_format=text%2Fxml&' +
'width=926&height=787&srs=EPSG%3A4326' +
'&bbox=912217,5605059,914099,5606652' +
'&CRS=EPSG:3857' +
'&FEATURE_COUNT=10' +
'&WITH_GEOMETRY=True' +
'&QUERY_LAYERS=layer3&I=487&J=308',
'wms_getfeatureinfo-values3-text-xml',
'test_project_values.qgz')
# TODO make GetFeatureInfo show what's in the display expression and enable test
@unittest.expectedFailure
def testGetFeatureInfoRelationReference(self):
"""Test GetFeatureInfo solves "relation reference" widget "display expression" values"""
mypath = self.testdata_path + "test_project_values.qgz"
self.wms_request_compare('GetFeatureInfo',
'&layers=layer2&styles=&' +
'VERSION=1.3.0&' +
'info_format=text%2Fxml&' +
'width=926&height=787&srs=EPSG%3A4326' +
'&bbox=912217,5605059,914099,5606652' +
'&CRS=EPSG:3857' +
'&FEATURE_COUNT=10' +
'&WITH_GEOMETRY=True' +
'&QUERY_LAYERS=layer2&I=487&J=308',
'wms_getfeatureinfo-values2-text-xml',
'test_project_values.qgz')
# TODO make filter work with gpkg and move test inside testGetFeatureInfoFilter function
@unittest.expectedFailure
def testGetFeatureInfoFilterGPKG(self):
# 'test_project.qgz' ='test_project.qgs' but with a gpkg source + different fid
# Regression for #8656 Test getfeatureinfo response xml with gpkg datasource
# Mind the gap! (the space in the FILTER expression)
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\''),
'wms_getfeatureinfo_filter_gpkg',
'test_project.qgz')
def testGetFeatureInfoFilter(self):
# Test getfeatureinfo response xml
# Regression for #8656
# Mind the gap! (the space in the FILTER expression)
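        # For readability: the URL-encoded FILTER below decodes (assuming
        # standard urllib unquoting) to roughly
        #   FILTER=testlayer èé:"NAME" = 'two'
        # i.e. <layer name>:<QGIS expression>.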
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\''),
'wms_getfeatureinfo_filter')
# Test a filter with NO condition results
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\' AND "utf8nameè" = \'no-results\''),
'wms_getfeatureinfo_filter_no_results')
# Test a filter with OR condition results
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\' OR "NAME" = \'three\''),
'wms_getfeatureinfo_filter_or')
# Test a filter with OR condition and UTF results
        # Note that the layer name that contains utf-8 chars cannot be
        # converted to upper case.
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'width=600&height=400&srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\' OR "utf8nameè" = \'three èé↓\''),
'wms_getfeatureinfo_filter_or_utf8')
# Regression #18292 Server GetFeatureInfo FILTER search fails when WIDTH, HEIGHT are not specified
self.wms_request_compare('GetFeatureInfo',
'&layers=testlayer%20%C3%A8%C3%A9&' +
'INFO_FORMAT=text%2Fxml&' +
'srs=EPSG%3A3857&' +
'query_layers=testlayer%20%C3%A8%C3%A9&' +
'FEATURE_COUNT=10&FILTER=testlayer%20%C3%A8%C3%A9' + urllib.parse.quote(':"NAME" = \'two\''),
'wms_getfeatureinfo_filter_no_width')
if __name__ == '__main__':
unittest.main()
| dgoedkoop/QGIS | tests/src/python/test_qgsserver_wms_getfeatureinfo.py | Python | gpl-2.0 | 17,936 |
# (c) 2005 Ian Bicking and contributors; written for Paste (http://pythonpaste.org)
# Licensed under the MIT license: http://www.opensource.org/licenses/mit-license.php
"""
This is a module to check the filesystem for the presence and
permissions of certain files. It can also be used to correct the
permissions (but not existence) of those files.
Currently only supports Posix systems (with Posixy permissions).
Permission stuff can probably be stubbed out later.
"""
import os
import stat
import pwd
import grp
def read_perm_spec(spec):
"""
Reads a spec like 'rw-r--r--' into a octal number suitable for
chmod. That is characters in groups of three -- first group is
user, second for group, third for other (all other people). The
characters are r (read), w (write), and x (executable), though the
executable can also be s (sticky). Files in sticky directories
get the directories permission setting.
Examples::
>>> print oct(read_perm_spec('rw-r--r--'))
0644
>>> print oct(read_perm_spec('rw-rwsr--'))
02664
>>> print oct(read_perm_spec('r-xr--r--'))
0544
>>> print oct(read_perm_spec('r--------'))
0400
"""
total_mask = 0
# suid/sgid modes give this mask in user, group, other mode:
set_bits = (04000, 02000, 0)
pieces = (spec[0:3], spec[3:6], spec[6:9])
for i, (mode, set_bit) in enumerate(zip(pieces, set_bits)):
mask = 0
read, write, exe = list(mode)
if read == 'r':
mask = mask | 4
elif read != '-':
raise ValueError, (
"Character %r unexpected (should be '-' or 'r')"
% read)
if write == 'w':
mask = mask | 2
elif write != '-':
raise ValueError, (
"Character %r unexpected (should be '-' or 'w')"
% write)
if exe == 'x':
mask = mask | 1
elif exe not in ('s', '-'):
raise ValueError, (
"Character %r unexpected (should be '-', 'x', or 's')"
% exe)
if exe == 's' and i == 2:
raise ValueError, (
"The 'other' executable setting cannot be suid/sgid ('s')")
mask = mask << ((2-i)*3)
if exe == 's':
mask = mask | set_bit
total_mask = total_mask | mask
return total_mask
modes = [
(04000, 'setuid bit',
'setuid bit: make contents owned by directory owner'),
(02000, 'setgid bit',
'setgid bit: make contents inherit permissions from directory'),
(01000, 'sticky bit',
'sticky bit: append-only directory'),
(00400, 'read by owner', 'read by owner'),
(00200, 'write by owner', 'write by owner'),
(00100, 'execute by owner', 'owner can search directory'),
(00040, 'allow read by group members',
'allow read by group members',),
(00020, 'allow write by group members',
'allow write by group members'),
(00010, 'execute by group members',
'group members can search directory'),
(00004, 'read by others', 'read by others'),
(00002, 'write by others', 'write by others'),
(00001, 'execution by others', 'others can search directory'),
]
exe_bits = [0100, 0010, 0001]
exe_mask = 0111
full_mask = 07777
def mode_diff(filename, mode, **kw):
"""
Returns the differences calculated using ``calc_mode_diff``
"""
cur_mode = os.stat(filename).st_mode
return calc_mode_diff(cur_mode, mode, **kw)
def calc_mode_diff(cur_mode, mode, keep_exe=True,
not_set='not set: ',
set='set: '):
"""
Gives the difference between the actual mode of the file and the
given mode. If ``keep_exe`` is true, then if the mode doesn't
include any executable information the executable information will
simply be ignored. High bits are also always ignored (except
suid/sgid and sticky bit).
Returns a list of differences (empty list if no differences)
"""
for exe_bit in exe_bits:
if mode & exe_bit:
keep_exe = False
diffs = []
    # cur_mode comes from os.stat(); use its file-type bits instead of re-statting
    isdir = stat.S_ISDIR(cur_mode)
for bit, file_desc, dir_desc in modes:
if keep_exe and bit in exe_bits:
continue
if isdir:
desc = dir_desc
else:
desc = file_desc
if (mode & bit) and not (cur_mode & bit):
diffs.append(not_set + desc)
if not (mode & bit) and (cur_mode & bit):
diffs.append(set + desc)
return diffs
def calc_set_mode(cur_mode, mode, keep_exe=True):
"""
Calculates the new mode given the current node ``cur_mode`` and
the mode spec ``mode`` and if ``keep_exe`` is true then also keep
the executable bits in ``cur_mode`` if ``mode`` has no executable
bits in it. Return the new mode.
Examples::
>>> print oct(calc_set_mode(0775, 0644))
0755
>>> print oct(calc_set_mode(0775, 0744))
0744
>>> print oct(calc_set_mode(010600, 0644))
010644
>>> print oct(calc_set_mode(0775, 0644, False))
0644
"""
for exe_bit in exe_bits:
if mode & exe_bit:
keep_exe = False
# This zeros-out full_mask parts of the current mode:
keep_parts = (cur_mode | full_mask) ^ full_mask
if keep_exe:
keep_parts = keep_parts | (cur_mode & exe_mask)
new_mode = keep_parts | mode
return new_mode
def set_mode(filename, mode, **kw):
"""
Sets the mode on ``filename`` using ``calc_set_mode``
"""
cur_mode = os.stat(filename).st_mode
new_mode = calc_set_mode(cur_mode, mode, **kw)
os.chmod(filename, new_mode)
def calc_ownership_spec(spec):
"""
Calculates what a string spec means, returning (uid, username,
gid, groupname), where there can be None values meaning no
preference.
The spec is a string like ``owner:group``. It may use numbers
instead of user/group names. It may leave out ``:group``. It may
use '-' to mean any-user/any-group.
"""
user = group = None
uid = gid = None
if ':' in spec:
user_spec, group_spec = spec.split(':', 1)
else:
user_spec, group_spec = spec, '-'
if user_spec == '-':
user_spec = '0'
if group_spec == '-':
group_spec = '0'
try:
uid = int(user_spec)
except ValueError:
        uid = pwd.getpwnam(user_spec).pw_uid
user = user_spec
else:
if not uid:
uid = user = None
else:
user = pwd.getpwuid(uid).pw_name
try:
gid = int(group_spec)
except ValueError:
        gid = grp.getgrnam(group_spec).gr_gid
group = group_spec
else:
if not gid:
gid = group = None
else:
group = grp.getgrgid(gid).gr_name
return (uid, user, gid, group)
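# Rough sketch of how specs map to results (names/ids are illustrative and
# depend on the local passwd/group databases):
#   calc_ownership_spec('root:root') -> (0, 'root', 0, 'root')
#   calc_ownership_spec('-')         -> (None, None, None, None)
#   calc_ownership_spec('1000')      -> (1000, <name of uid 1000>, None, None)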
def ownership_diff(filename, spec):
"""
Return a list of differences between the ownership of ``filename``
and the spec given.
"""
diffs = []
uid, user, gid, group = calc_ownership_spec(spec)
st = os.stat(filename)
if uid and uid != st.st_uid:
diffs.append('owned by %s (should be %s)' %
(pwd.getpwuid(st.st_uid).pw_name, user))
if gid and gid != st.st_gid:
diffs.append('group %s (should be %s)' %
(grp.getgrgid(st.st_gid).gr_name, group))
return diffs
def set_ownership(filename, spec):
"""
Set the ownership of ``filename`` given the spec.
"""
uid, user, gid, group = calc_ownership_spec(spec)
st = os.stat(filename)
if not uid:
uid = st.st_uid
if not gid:
gid = st.st_gid
    os.chown(filename, uid, gid)
class PermissionSpec(object):
"""
Represents a set of specifications for permissions.
Typically reads from a file that looks like this::
rwxrwxrwx user:group filename
    If the filename ends in /, then it is expected to be a directory, and
the directory is made executable automatically, and the contents
of the directory are given the same permission (recursively). By
default the executable bit on files is left as-is, unless the
permissions specifically say it should be on in some way.
You can use 'nomodify filename' for permissions to say that any
permission is okay, and permissions should not be changed.
Use 'noexist filename' to say that a specific file should not
exist.
Use 'symlink filename symlinked_to' to assert a symlink destination
The entire file is read, and most specific rules are used for each
file (i.e., a rule for a subdirectory overrides the rule for a
superdirectory). Order does not matter.
"""
def __init__(self):
self.paths = {}
def parsefile(self, filename):
f = open(filename)
lines = f.readlines()
f.close()
self.parselines(lines, filename=filename)
commands = {}
def parselines(self, lines, filename=None):
for lineindex, line in enumerate(lines):
line = line.strip()
if not line or line.startswith('#'):
continue
parts = line.split()
command = parts[0]
if command in self.commands:
cmd = self.commands[command](*parts[1:])
else:
cmd = self.commands['*'](*parts)
self.paths[cmd.path] = cmd
def check(self):
action = _Check(self)
self.traverse(action)
def fix(self):
action = _Fixer(self)
self.traverse(action)
def traverse(self, action):
paths = self.paths_sorted()
checked = {}
for path, checker in list(paths)[::-1]:
            self.traverse_tree(action, path, paths, checked)
for path, checker in paths:
if path not in checked:
action.noexists(path, checker)
def traverse_tree(self, action, path, paths, checked):
if path in checked:
return
self.traverse_path(action, path, paths, checked)
if os.path.isdir(path):
for fn in os.listdir(path):
fn = os.path.join(path, fn)
self.traverse_tree(action, fn, paths, checked)
def traverse_path(self, action, path, paths, checked):
checked[path] = None
for check_path, checker in paths:
if path.startswith(check_path):
action.check(check_path, checker)
if not checker.inherit:
break
    def paths_sorted(self):
        paths = self.paths.items()
        paths.sort(lambda a, b: -cmp(len(a[0]), len(b[0])))
        return paths
class _Rule(object):
    class __metaclass__(type):
        def __new__(meta, class_name, bases, d):
            cls = type.__new__(meta, class_name, bases, d)
            # register subclasses under their declared command name (if any)
            if 'name' in d:
                PermissionSpec.commands[d['name']] = cls
            return cls
inherit = False
    def noexists(self, path):
        return ['Path %s does not exist' % path]
class _NoModify(_Rule):
    name = 'nomodify'
    def __init__(self, path):
        self.path = path
    def check(self, path):
        # any permissions are acceptable for 'nomodify' paths
        return []
    def fix(self, path):
        pass
class _NoExist(_Rule):
name = 'noexist'
def __init__(self, path):
self.path = path
def check(self, path):
return ['Path %s should not exist' % path]
def noexists(self, path):
return []
def fix(self, path):
# @@: Should delete?
pass
class _SymLink(_Rule):
name = 'symlink'
inherit = True
def __init__(self, path, dest):
self.path = path
self.dest = dest
def check(self, path):
assert path == self.path, (
"_Symlink should only be passed specific path %s (not %s)"
% (self.path, path))
try:
            link = os.readlink(path)
        except OSError, e:
if e.errno != 22:
raise
return ['Path %s is not a symlink (should point to %s)'
% (path, self.dest)]
if link != self.dest:
return ['Path %s should symlink to %s, not %s'
% (path, self.dest, link)]
return []
def fix(self, path):
assert path == self.path, (
"_Symlink should only be passed specific path %s (not %s)"
% (self.path, path))
if not os.path.exists(path):
            os.symlink(self.dest, path)
else:
# @@: This should correct the symlink or something:
print 'Not symlinking %s' % path
class _Permission(_Rule):
name = '*'
    def __init__(self, perm, owner, path):
        self.perm_spec = read_perm_spec(perm)
        self.owner = owner
        self.path = path
def check(self, path):
return mode_diff(path, self.perm_spec)
def fix(self, path):
set_mode(path, self.perm_spec)
class _Strategy(object):
def __init__(self, spec):
self.spec = spec
class _Check(_Strategy):
def noexists(self, path, checker):
checker.noexists(path)
def check(self, path, checker):
checker.check(path)
class _Fixer(_Strategy):
def noexists(self, path, checker):
pass
def check(self, path, checker):
checker.fix(path)
if __name__ == '__main__':
import doctest
doctest.testmod()
| santisiri/popego | envs/ALPHA-POPEGO/lib/python2.5/site-packages/PasteScript-1.3.6-py2.5.egg/paste/script/checkperms.py | Python | bsd-3-clause | 13,295 |
# -*- coding: utf-8 -*-
"""
***************************************************************************
i_landsat_toar.py
-----------------
Date : March 2016
Copyright : (C) 2016 by Médéric Ribreux
Email : medspx at medspx dot fr
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
from __future__ import absolute_import
__author__ = 'Médéric Ribreux'
__date__ = 'March 2016'
__copyright__ = '(C) 2016, Médéric Ribreux'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
from .i import multipleOutputDir, verifyRasterNum, orderedInput
from processing.core.parameters import getParameterFromString
def checkParameterValuesBeforeExecuting(alg):
return verifyRasterNum(alg, 'rasters', 5, 12)
def processInputs(alg):
orderedInput(alg, 'rasters',
"ParameterString|input|Base name of input raster bands|None|False|False",
[1, 2, 3, 4, 5, 61, 62, 7, 8])
def processCommand(alg):
# Remove rasters parameter
rasters = alg.getParameterFromName('rasters')
alg.parameters.remove(rasters)
# Remove output
output = alg.getOutputFromName('output')
alg.removeOutputFromName('output')
# Create output parameter
param = getParameterFromString("ParameterString|output|output basename|None|False|False")
param.value = '{}_'.format(alg.getTempFilename())
alg.addParameter(param)
alg.processCommand()
# re-add output
alg.addOutput(output)
alg.addParameter(rasters)
def processOutputs(alg):
param = alg.getParameterFromName('output')
multipleOutputDir(alg, 'output', param.value)
# Delete output parameter
alg.parameters.remove(param)
| gioman/QGIS | python/plugins/processing/algs/grass7/ext/i_landsat_toar.py | Python | gpl-2.0 | 2,317 |
# -*- coding: utf-8 -*-
# -----------------------------------------------------------------------------
# Copyright (c) 2014, Nicolas P. Rougier
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
# -----------------------------------------------------------------------------
"""
Azimuthal Equal Area projection
"""
from glumpy import library
from . transform import Transform
class AzimuthalEqualAreaProjection(Transform):
""" Azimuthal Equal Area projection """
aliases = { }
def __init__(self, *args, **kwargs):
"""
Initialize the transform.
Note that parameters must be passed by name (param=value).
Kwargs parameters
-----------------
"""
code = library.get("transforms/azimuthal-equal-area.glsl")
Transform.__init__(self, code, *args, **kwargs)
def on_attach(self, program):
""" Initialization event """
pass
| duyuan11/glumpy | glumpy/transforms/azimuthal_equal_area.py | Python | bsd-3-clause | 944 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras Input Tensor used to track functional API Topology."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_spec
from tensorflow.python.framework import type_spec as type_spec_module
from tensorflow.python.keras.utils import object_identity
from tensorflow.python.ops import array_ops
from tensorflow.python.ops.ragged import ragged_operators # pylint: disable=unused-import
from tensorflow.python.ops.ragged import ragged_tensor
from tensorflow.python.util import nest
# pylint: disable=g-classes-have-attributes
# Tensorflow tensors have a maximum rank of 254
# (See `MaxDimensions()` in //tensorflow/core/framework/tensor_shape.h )
# So we do not try to infer values for int32 tensors larger than this,
# As they cannot represent shapes.
_MAX_TENSOR_RANK = 254
class KerasTensor(object):
"""A representation of a Keras in/output during Functional API construction.
`KerasTensor`s are tensor-like objects that represent the symbolic inputs
and outputs of Keras layers during Functional model construction. They are
comprised of the `tf.TypeSpec` of the (Composite)Tensor that will be
consumed/produced in the corresponding location of the Functional model.
KerasTensors are intended as a private API, so users should never need to
directly instantiate `KerasTensor`s.
**Building Functional Models with KerasTensors**
`tf.keras.Input` produces `KerasTensor`s that represent the symbolic inputs
to your model.
Passing a `KerasTensor` to a `tf.keras.Layer` `__call__` lets the layer know
that you are building a Functional model. The layer __call__ will
infer the output signature and return `KerasTensor`s with `tf.TypeSpec`s
corresponding to the symbolic outputs of that layer call. These output
`KerasTensor`s will have all of the internal KerasHistory metadata attached
to them that Keras needs to construct a Functional Model.
Currently, layers infer the output signature by:
* creating a scratch `FuncGraph`
* making placeholders in the scratch graph that match the input typespecs
* Calling `layer.call` on these placeholders
* extracting the signatures of the outputs before clearing the scratch graph
(Note: names assigned to KerasTensors by this process are not guaranteed to
be unique, and are subject to implementation details).
  `tf.nest` methods are used to ensure all of the inputs/output data
structures get maintained, with elements swapped between KerasTensors and
placeholders.
In rare cases (such as when directly manipulating shapes using Keras layers),
the layer may be able to partially infer the value of the output in addition
to just inferring the signature.
When this happens, the returned KerasTensor will also contain the inferred
  value information. Follow-on layers can use this information
  during their own output signature inference.
E.g. if one layer produces a symbolic `KerasTensor` that the next layer uses
as the shape of its outputs, partially knowing the value helps infer the
output shape.
**Automatically converting TF APIs to layers**:
  If you pass a `KerasTensor` to a TF API that supports dispatching,
Keras will automatically turn that API call into a lambda
layer in the Functional model, and return KerasTensors representing the
symbolic outputs.
Most TF APIs that take only tensors as input and produce output tensors
will support dispatching.
Calling a `tf.function` does not support dispatching, so you cannot pass
`KerasTensor`s as inputs to a `tf.function`.
Higher-order APIs that take methods which produce tensors (e.g. `tf.while`,
`tf.map_fn`, `tf.cond`) also do not currently support dispatching. So, you
cannot directly pass KerasTensors as inputs to these APIs either. If you
want to use these APIs inside of a Functional model, you must put them inside
of a custom layer.
Args:
type_spec: The `tf.TypeSpec` for the symbolic input created by
`tf.keras.Input`, or symbolically inferred for the output
during a symbolic layer `__call__`.
inferred_value: (Optional) a non-symbolic static value, possibly partially
specified, that could be symbolically inferred for the outputs during
a symbolic layer `__call__`. This will generally only happen when
grabbing and manipulating `tf.int32` shapes directly as tensors.
Statically inferring values in this way and storing them in the
KerasTensor allows follow-on layers to infer output signatures
more effectively. (e.g. when using a symbolic shape tensor to later
construct a tensor with that shape).
name: (optional) string name for this KerasTensor. Names automatically
generated by symbolic layer `__call__`s are not guaranteed to be unique,
and are subject to implementation details.
"""
def __init__(self, type_spec, inferred_value=None, name=None):
"""Constructs a KerasTensor."""
if not isinstance(type_spec, type_spec_module.TypeSpec):
raise ValueError('KerasTensors must be constructed with a `tf.TypeSpec`.')
self._type_spec = type_spec
self._inferred_value = inferred_value
self._name = name
@property
def type_spec(self):
"""Returns the `tf.TypeSpec` symbolically inferred for this Keras output."""
return self._type_spec
@property
def shape(self):
"""Returns the `TensorShape` symbolically inferred for this Keras output."""
# TODO(kaftan): This is only valid for normal/sparse/ragged tensors.
# may need to raise an error when it's not valid for a type_spec,
# but some keras code (e.g. build-related stuff) will likely fail when
# it can't access shape or dtype
return self._type_spec._shape # pylint: disable=protected-access
@classmethod
def from_tensor(cls, tensor):
"""Convert a traced (composite)tensor to a representative KerasTensor."""
if isinstance(tensor, ops.Tensor):
name = getattr(tensor, 'name', None)
type_spec = type_spec_module.type_spec_from_value(tensor)
inferred_value = None
if (type_spec.dtype == dtypes.int32 and type_spec.shape.rank is not None
and type_spec.shape.rank < 2):
# If this tensor might be representing shape information,
# (dtype=int32, rank of 0 or 1, not too large to represent a shape)
# we attempt to capture any value information tensorflow's
# shape handling can extract from the current scratch graph.
#
# Even though keras layers each trace in their own scratch
# graph, this shape value info extraction allows us to capture
# a sizable and useful subset of the C++ shape value inference TF can do
# if all tf ops appear in the same graph when using shape ops.
#
# Examples of things this cannot infer concrete dimensions for
# that the full single-graph C++ shape inference sometimes can are:
# * cases where the shape tensor is cast out of int32 before being
# manipulated w/ floating point numbers then converted back
# * cases where int32 tensors w/ rank >= 2 are manipulated before being
# used as a shape tensor
# * cases where int32 tensors too large to represent shapes are
# manipulated to a smaller size before being used as a shape tensor
inferred_value = array_ops.ones(shape=tensor).shape
if inferred_value.dims:
inferred_value = inferred_value.as_list()
if len(inferred_value) > _MAX_TENSOR_RANK:
inferred_value = None
else:
inferred_value = None
return KerasTensor(type_spec, inferred_value=inferred_value, name=name)
else:
# Fallback to the generic arbitrary-typespec KerasTensor
name = getattr(tensor, 'name', None)
type_spec = type_spec_module.type_spec_from_value(tensor)
return cls(type_spec, name=name)
@classmethod
def from_type_spec(cls, type_spec, name=None):
return cls(type_spec=type_spec, name=name)
def _to_placeholder(self):
"""Convert this KerasTensor to a placeholder in a graph."""
# If there is an inferred value for this tensor, inject the inferred value
if self._inferred_value is not None:
# If we suspect this KerasTensor might be representing a shape tensor,
# and we were able to extract value information with TensorFlow's shape
# handling when making the KerasTensor, we construct the placeholder by
# re-injecting the inferred value information into the graph. We
# do this injection through the shape of a placeholder, because that
# allows us to specify partially-unspecified shape values.
#
# See the comment on value extraction inside `from_tensor` for more info.
inferred_value = array_ops.shape(
array_ops.placeholder(
shape=self._inferred_value, dtype=dtypes.int32))
if self.type_spec.shape.rank == 0:
# `tf.shape` always returns a rank-1, we may need to turn it back to a
# scalar.
inferred_value = inferred_value[0]
return inferred_value
# Use the generic conversion from typespec to a placeholder.
def component_to_placeholder(component):
return array_ops.placeholder(component.dtype, component.shape)
return nest.map_structure(
component_to_placeholder, self.type_spec, expand_composites=True)
def get_shape(self):
return self.shape
def __len__(self):
raise TypeError('Keras symbolic inputs/outputs do not '
'implement `__len__`. You may be '
'trying to pass Keras symbolic inputs/outputs '
'to a TF API that does not register dispatching, '
'preventing Keras from automatically '
'converting the API call to a lambda layer '
'in the Functional Model. This error will also get raised '
'if you try asserting a symbolic input/output directly.')
@property
def op(self):
raise TypeError('Keras symbolic inputs/outputs do not '
'implement `op`. You may be '
'trying to pass Keras symbolic inputs/outputs '
'to a TF API that does not register dispatching, '
'preventing Keras from automatically '
'converting the API call to a lambda layer '
'in the Functional Model.')
def __hash__(self):
    raise TypeError('Tensors are unhashable. (%s) '
                    'Instead, use tensor.ref() as the key.' % self)
# Note: This enables the KerasTensor's overloaded "right" binary
# operators to run when the left operand is an ndarray, because it
# accords the Tensor class higher priority than an ndarray, or a
# numpy matrix.
  # In the future explore changing this to using numpy's __numpy_ufunc__
# mechanism, which allows more control over how Tensors interact
# with ndarrays.
__array_priority__ = 100
def __array__(self):
raise TypeError(
'Cannot convert a symbolic Keras input/output to a numpy array. '
'This error may indicate that you\'re trying to pass a symbolic value '
'to a NumPy call, which is not supported. Or, '
'you may be trying to pass Keras symbolic inputs/outputs '
'to a TF API that does not register dispatching, '
'preventing Keras from automatically '
'converting the API call to a lambda layer '
'in the Functional Model.')
@property
def is_tensor_like(self):
return True
def set_shape(self, shape):
"""Updates the shape of this KerasTensor. Mimics `tf.Tensor.set_shape()`."""
if not isinstance(shape, tensor_shape.TensorShape):
shape = tensor_shape.TensorShape(shape)
if shape.dims is not None:
dim_list = [dim.value for dim in shape.dims]
for dim in range(len(dim_list)):
if dim_list[dim] is None and self.shape.dims is not None:
dim_list[dim] = self.shape.dims[dim]
shape = tensor_shape.TensorShape(dim_list)
if not self.shape.is_compatible_with(shape):
raise ValueError(
"Keras symbolic input/output's shape %s is not"
"compatible with supplied shape %s" %
(self.shape, shape))
else:
self._type_spec._shape = shape # pylint: disable=protected-access
def __str__(self):
symbolic_description = ''
inferred_value_string = ''
name_string = ''
if hasattr(self, '_keras_history'):
layer = self._keras_history.layer
symbolic_description = (
', description="created by layer \'%s\'"' % (layer.name,))
if self._inferred_value is not None:
inferred_value_string = (
', inferred_value=%s' % self._inferred_value)
if self.name is not None:
name_string = ', name=\'%s\'' % self._name
return 'KerasTensor(type_spec=%s%s%s%s)' % (
self.type_spec, inferred_value_string,
name_string, symbolic_description)
def __repr__(self):
symbolic_description = ''
inferred_value_string = ''
if isinstance(self.type_spec, tensor_spec.TensorSpec):
type_spec_string = 'shape=%s dtype=%s' % (self.shape, self.dtype.name)
else:
type_spec_string = 'type_spec=%s' % self.type_spec
if hasattr(self, '_keras_history'):
layer = self._keras_history.layer
symbolic_description = ' (created by layer \'%s\')' % (layer.name,)
if self._inferred_value is not None:
inferred_value_string = (
' inferred_value=%s' % self._inferred_value)
return '<KerasTensor: %s%s%s>' % (
type_spec_string, inferred_value_string, symbolic_description)
@property
def dtype(self):
"""Returns the `dtype` symbolically inferred for this Keras output."""
# TODO(kaftan): This is only valid for normal/sparse/ragged tensors.
# may need to raise an error when it's not valid for a type_spec,
# but some keras code (e.g. build-related stuff) will likely fail when
# it can't access shape or dtype
return self._type_spec._dtype # pylint: disable=protected-access
def ref(self):
"""Returns a hashable reference object to this KerasTensor.
The primary use case for this API is to put KerasTensors in a
set/dictionary. We can't put tensors in a set/dictionary as
`tensor.__hash__()` is not available and tensor equality (`==`) is supposed
to produce a tensor representing if the two inputs are equal.
See the documentation of `tf.Tensor.ref()` for more info.
"""
return object_identity.Reference(self)
def __iter__(self):
shape = None
if self.shape.ndims is not None:
shape = [dim.value for dim in self.shape.dims]
if shape is None:
raise TypeError('Cannot iterate over a Tensor with unknown shape.')
if not shape:
raise TypeError('Cannot iterate over a scalar.')
if shape[0] is None:
raise TypeError(
'Cannot iterate over a Tensor with unknown first dimension.')
return _KerasTensorIterator(self, shape[0])
@property
def name(self):
"""Returns the (non-unique, optional) name of this symbolic Keras value."""
return self._name
@classmethod
def _overload_all_operators(cls, tensor_class): # pylint: disable=invalid-name
"""Register overloads for all operators."""
for operator in ops.Tensor.OVERLOADABLE_OPERATORS:
cls._overload_operator(tensor_class, operator)
# We include `experimental_ref` for versions of TensorFlow that
# still include the deprecated method in Tensors.
if hasattr(tensor_class, 'experimental_ref'):
cls._overload_operator(tensor_class, 'experimental_ref')
@classmethod
def _overload_operator(cls, tensor_class, operator): # pylint: disable=invalid-name
"""Overload an operator with the same implementation as a base Tensor class.
We pull the operator out of the class dynamically to avoid ordering issues.
Args:
tensor_class: The (Composite)Tensor to get the method from.
operator: string. The operator name.
"""
tensor_oper = getattr(tensor_class, operator)
# Compatibility with Python 2:
# Python 2 unbound methods have type checks for the first arg,
# so we need to extract the underlying function
tensor_oper = getattr(tensor_oper, '__func__', tensor_oper)
setattr(cls, operator, tensor_oper)
KerasTensor._overload_all_operators(ops.Tensor) # pylint: disable=protected-access
class SparseKerasTensor(KerasTensor):
"""A specialized KerasTensor representation for `tf.sparse.SparseTensor`s.
Specifically, it specializes the conversion to a placeholder in order
to maintain dense shape information.
"""
def _to_placeholder(self):
spec = self.type_spec
# nest.map_structure loses dense shape information for sparse tensors.
# So, we special-case sparse placeholder creation.
# This only preserves shape information for top-level sparse tensors;
# not for sparse tensors that are nested inside another composite
# tensor.
return array_ops.sparse_placeholder(dtype=spec.dtype, shape=spec.shape)
class RaggedKerasTensor(KerasTensor):
"""A specialized KerasTensor representation for `tf.RaggedTensor`s.
Specifically, it:
1. Specializes the conversion to a placeholder in order
to maintain shape information for non-ragged dimensions.
2. Overloads the KerasTensor's operators with the RaggedTensor versions
when they don't match the `tf.Tensor` versions
3. Exposes some of the instance method/attribute that are unique to
the RaggedTensor API (such as ragged_rank).
"""
def _to_placeholder(self):
ragged_spec = self.type_spec
if ragged_spec.ragged_rank == 0 or ragged_spec.shape.rank is None:
return super(RaggedKerasTensor, self)._to_placeholder()
flat_shape = ragged_spec.shape[ragged_spec.ragged_rank:]
result = array_ops.placeholder(ragged_spec.dtype, flat_shape)
known_num_splits = []
prod = 1
for axis_size in ragged_spec.shape:
if prod is not None:
if axis_size is None or (
getattr(axis_size, 'value', True) is None):
prod = None
else:
prod = prod * axis_size
known_num_splits.append(prod)
for axis in range(ragged_spec.ragged_rank, 0, -1):
axis_size = ragged_spec.shape[axis]
if axis_size is None or (getattr(axis_size, 'value', True) is None):
num_splits = known_num_splits[axis-1]
if num_splits is not None:
num_splits = num_splits + 1
splits = array_ops.placeholder(
ragged_spec.row_splits_dtype, [num_splits])
result = ragged_tensor.RaggedTensor.from_row_splits(
result, splits, validate=False)
else:
rowlen = constant_op.constant(axis_size, ragged_spec.row_splits_dtype)
result = ragged_tensor.RaggedTensor.from_uniform_row_length(
result, rowlen, validate=False)
return result
@property
def ragged_rank(self):
return self.type_spec.ragged_rank
RaggedKerasTensor._overload_operator(ragged_tensor.RaggedTensor, '__add__') # pylint: disable=protected-access
RaggedKerasTensor._overload_operator(ragged_tensor.RaggedTensor, '__radd__') # pylint: disable=protected-access
RaggedKerasTensor._overload_operator(ragged_tensor.RaggedTensor, '__mul__') # pylint: disable=protected-access
RaggedKerasTensor._overload_operator(ragged_tensor.RaggedTensor, '__rmul__') # pylint: disable=protected-access
# TODO(b/161487382):
# Special-case user-registered symbolic objects (registered by the
# private `register_symbolic_tensor_type` method) by passing them between
# scratch graphs directly.
# This is needed to not break Tensorflow probability
# while they finish migrating to composite tensors.
class UserRegisteredSpec(type_spec_module.TypeSpec):
"""TypeSpec to represent user-registered symbolic objects."""
def __init__(self, shape, dtype):
self.shape = shape
self._dtype = dtype
self.dtype = dtype
def _component_specs(self):
raise NotImplementedError
def _from_components(self, components):
raise NotImplementedError
def _serialize(self):
raise NotImplementedError
def _to_components(self, value):
raise NotImplementedError
def value_type(self):
raise NotImplementedError
# TODO(b/161487382):
# Special-case user-registered symbolic objects (registered by the
# private `register_symbolic_tensor_type` method) by passing them between
# scratch graphs directly.
# This is needed to not break Tensorflow probability
# while they finish migrating to composite tensors.
class UserRegisteredTypeKerasTensor(KerasTensor):
"""KerasTensor that represents legacy register_symbolic_tensor_type."""
def __init__(self, user_registered_symbolic_object):
x = user_registered_symbolic_object
self._user_registered_symbolic_object = x
type_spec = UserRegisteredSpec(x.shape, x.dtype)
name = getattr(x, 'name', None)
super(UserRegisteredTypeKerasTensor, self).__init__(type_spec, name)
@classmethod
def from_tensor(cls, tensor):
return cls(tensor)
@classmethod
def from_type_spec(cls, type_spec, name=None):
raise NotImplementedError('You cannot instantiate a KerasTensor '
'directly from TypeSpec: %s' % type_spec)
def _to_placeholder(self):
return self._user_registered_symbolic_object
class _KerasTensorIterator(object):
"""Iterates over the leading dim of a KerasTensor. Performs 0 error checks."""
def __init__(self, tensor, dim0):
self._tensor = tensor
self._index = 0
self._limit = dim0
def __iter__(self):
return self
def __next__(self):
if self._index == self._limit:
raise StopIteration
result = self._tensor[self._index]
self._index += 1
return result
next = __next__ # python2.x compatibility.
# Specify the mappings of tensor class to KerasTensor class.
# This is specifically a list instead of a dict for now because
# 1. we do a check w/ isinstance because a key lookup based on class
# would miss subclasses
# 2. a list allows us to control lookup ordering
# We include ops.Tensor -> KerasTensor in the first position as a fastpath,
# *and* include object -> KerasTensor at the end as a catch-all.
# We can re-visit these choices in the future as needed.
keras_tensor_classes = [
(ops.Tensor, KerasTensor),
(sparse_tensor.SparseTensor, SparseKerasTensor),
(ragged_tensor.RaggedTensor, RaggedKerasTensor),
(object, KerasTensor)
]
def register_keras_tensor_specialization(cls, keras_tensor_subclass):
"""Register a specialized KerasTensor subclass for a Tensor type."""
# We always leave (object, KerasTensor) at the end as a generic fallback
keras_tensor_classes.insert(-1, (cls, keras_tensor_subclass))
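# Hypothetical usage sketch: a project defining its own composite tensor type
# could pair it with a matching KerasTensor subclass (both names below are
# made up, not real TF symbols):
#
#   register_keras_tensor_specialization(MaskedTensor, MaskedKerasTensor)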
def keras_tensor_to_placeholder(x):
"""Construct a graph placeholder to represent a KerasTensor when tracing."""
if isinstance(x, KerasTensor):
return x._to_placeholder() # pylint: disable=protected-access
else:
return x
def keras_tensor_from_tensor(tensor):
"""Convert a traced (composite)tensor to a representative KerasTensor."""
# Create a specialized KerasTensor that supports instance methods,
# operators, and additional value inference if possible
keras_tensor_cls = None
for tensor_type, cls in keras_tensor_classes:
if isinstance(tensor, tensor_type):
keras_tensor_cls = cls
break
out = keras_tensor_cls.from_tensor(tensor)
if hasattr(tensor, '_keras_mask'):
out._keras_mask = keras_tensor_from_tensor(tensor._keras_mask) # pylint: disable=protected-access
return out
def keras_tensor_from_type_spec(type_spec, name=None):
"""Convert a TypeSpec to a representative KerasTensor."""
# Create a specialized KerasTensor that supports instance methods,
# operators, and additional value inference if possible
keras_tensor_cls = None
value_type = type_spec.value_type
for tensor_type, cls in keras_tensor_classes:
if issubclass(value_type, tensor_type):
keras_tensor_cls = cls
break
return keras_tensor_cls.from_type_spec(type_spec, name=name)
| petewarden/tensorflow | tensorflow/python/keras/engine/keras_tensor.py | Python | apache-2.0 | 25,155 |
#!/usr/bin/python
import sys, os, os.path, subprocess
import re
class Settings:
def bin(self):
return self._bin
class LinuxSettings(Settings):
def __init__(self, dump_syms_dir):
self._bin = os.path.join(dump_syms_dir, 'dump_syms')
def offset(self, path):
return path
def run_file_command(self, f):
return subprocess.Popen(['file', '-Lb', f], stdout=subprocess.PIPE).communicate()[0]
def should_process(self, f):
return (f.endswith('.so') or os.access(f, os.X_OK)) and self.run_file_command(f).startswith("ELF")
def canonicalize_bin_name(self, path):
return path
class Win32Settings(Settings):
def __init__(self, dump_syms_dir):
self._bin = os.path.join(dump_syms_dir, 'dump_syms.exe')
def should_process(self, f):
sans_ext = os.path.splitext(f)[0]
return f.endswith('.pdb') and (os.path.isfile(sans_ext + '.exe') or os.path.isfile(sans_ext + '.dll'))
def offset(self, path):
return os.path.join(path, 'RelWithDebInfo')
def canonicalize_bin_name(self, path):
return re.sub("\.pdb$", "", path)
def process_file(settings, f):
dump_syms_bin = settings.bin()
bin_name = os.path.basename(f)
syms_dump = subprocess.Popen([dump_syms_bin, f], stdout=subprocess.PIPE).communicate()[0]
if not syms_dump:
print 'Failed to dump symbols for', bin_name
return
(bin_hash,bin_file_name) = syms_dump.split()[3:5] #MODULE Linux x86_64 HASH binary
canonical_bin_name = settings.canonicalize_bin_name(bin_file_name)
bin_dir = os.path.join('symbols', bin_file_name, bin_hash)
if not os.path.exists(bin_dir): os.makedirs(bin_dir)
syms_filename = os.path.join(bin_dir, canonical_bin_name + '.sym')
syms_file = open(syms_filename, 'wb')
syms_file.write(syms_dump)
syms_file.close()
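# Illustrative result: for a binary whose MODULE line reports file name "app"
# and hash "0123ABCD", the symbols land in (paths are examples only):
#   symbols/app/0123ABCD/app.sym          on Linux
#   symbols/app.pdb/0123ABCD/app.sym      on Windows (".pdb" stripped for the .sym name)
# which is the directory layout Breakpad's processor expects.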
def generate_syms(settings, path = None):
"""Scans through files, generating a symbols directory with breakpad symbol files for them."""
path = path or os.getcwd()
path = settings.offset(path)
generate_for = [os.path.join(path, path_file) for path_file in os.listdir(path)]
generate_for = [path_file for path_file in generate_for if settings.should_process(path_file)]
for bin_idx in range(len(generate_for)):
bin = generate_for[bin_idx]
process_file(settings, bin)
print bin_idx+1, '/', len(generate_for)
if __name__ == '__main__':
# Default offset for symbol dumper is just the offset from
# build/cmake to the dump_syms binary. Without specifying the
# second parameter, it will use cwd, so run from with build/cmake.
platform_settings = {
'win32' : Win32Settings,
'linux2' : LinuxSettings
}
generate_syms(
platform_settings[sys.platform]('../../dependencies/installed-breakpad/bin')
)
| atiti/crashcollector | symbolstore.py | Python | gpl-2.0 | 2,873 |
# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
# Copyright 2015-2017 Florian Bruhin (The Compiler) <[email protected]>
#
# This file is part of qutebrowser.
#
# qutebrowser is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# qutebrowser is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
"""Management of sessions - saved tabs/windows."""
import os
import os.path
import sip
from PyQt5.QtCore import pyqtSignal, QUrl, QObject, QPoint, QTimer
from PyQt5.QtWidgets import QApplication
import yaml
try:
from yaml import CSafeLoader as YamlLoader, CSafeDumper as YamlDumper
except ImportError: # pragma: no cover
from yaml import SafeLoader as YamlLoader, SafeDumper as YamlDumper
from qutebrowser.utils import (standarddir, objreg, qtutils, log, usertypes,
message, utils)
from qutebrowser.commands import cmdexc, cmdutils
from qutebrowser.config import config
default = object() # Sentinel value
def init(parent=None):
"""Initialize sessions.
Args:
parent: The parent to use for the SessionManager.
"""
base_path = os.path.join(standarddir.data(), 'sessions')
try:
os.mkdir(base_path)
except FileExistsError:
pass
session_manager = SessionManager(base_path, parent)
objreg.register('session-manager', session_manager)
class SessionError(Exception):
"""Exception raised when a session failed to load/save."""
class SessionNotFoundError(SessionError):
"""Exception raised when a session to be loaded was not found."""
class TabHistoryItem:
"""A single item in the tab history.
Attributes:
url: The QUrl of this item.
original_url: The QUrl of this item which was originally requested.
title: The title as string of this item.
active: Whether this item is the item currently navigated to.
user_data: The user data for this item.
"""
def __init__(self, url, title, *, original_url=None, active=False,
user_data=None):
self.url = url
if original_url is None:
self.original_url = url
else:
self.original_url = original_url
self.title = title
self.active = active
self.user_data = user_data
def __repr__(self):
return utils.get_repr(self, constructor=True, url=self.url,
original_url=self.original_url, title=self.title,
active=self.active, user_data=self.user_data)
class SessionManager(QObject):
"""Manager for sessions.
Attributes:
_base_path: The path to store sessions under.
_last_window_session: The session data of the last window which was
closed.
_current: The name of the currently loaded session, or None.
did_load: Set when a session was loaded.
Signals:
update_completion: Emitted when the session completion should get
updated.
"""
update_completion = pyqtSignal()
def __init__(self, base_path, parent=None):
super().__init__(parent)
self._current = None
self._base_path = base_path
self._last_window_session = None
self.did_load = False
def _get_session_path(self, name, check_exists=False):
"""Get the session path based on a session name or absolute path.
Args:
name: The name of the session.
check_exists: Whether it should also be checked if the session
exists.
"""
path = os.path.expanduser(name)
if os.path.isabs(path) and ((not check_exists) or
os.path.exists(path)):
return path
else:
path = os.path.join(self._base_path, name + '.yml')
if check_exists and not os.path.exists(path):
raise SessionNotFoundError(path)
else:
return path
def exists(self, name):
"""Check if a named session exists."""
try:
self._get_session_path(name, check_exists=True)
except SessionNotFoundError:
return False
else:
return True
def _save_tab_item(self, tab, idx, item):
"""Save a single history item in a tab.
Args:
tab: The tab to save.
idx: The index of the current history item.
item: The history item.
Return:
A dict with the saved data for this item.
"""
data = {
'url': bytes(item.url().toEncoded()).decode('ascii'),
}
if item.title():
data['title'] = item.title()
else:
# https://github.com/qutebrowser/qutebrowser/issues/879
if tab.history.current_idx() == idx:
data['title'] = tab.title()
else:
data['title'] = data['url']
if item.originalUrl() != item.url():
encoded = item.originalUrl().toEncoded()
data['original-url'] = bytes(encoded).decode('ascii')
if tab.history.current_idx() == idx:
data['active'] = True
try:
user_data = item.userData()
except AttributeError:
# QtWebEngine
user_data = None
if tab.history.current_idx() == idx:
pos = tab.scroller.pos_px()
data['zoom'] = tab.zoom.factor()
data['scroll-pos'] = {'x': pos.x(), 'y': pos.y()}
elif user_data is not None:
if 'zoom' in user_data:
data['zoom'] = user_data['zoom']
if 'scroll-pos' in user_data:
pos = user_data['scroll-pos']
data['scroll-pos'] = {'x': pos.x(), 'y': pos.y()}
data['pinned'] = tab.data.pinned
return data
def _save_tab(self, tab, active):
"""Get a dict with data for a single tab.
Args:
tab: The WebView to save.
active: Whether the tab is currently active.
"""
data = {'history': []}
if active:
data['active'] = True
for idx, item in enumerate(tab.history):
qtutils.ensure_valid(item)
item_data = self._save_tab_item(tab, idx, item)
data['history'].append(item_data)
return data
def _save_all(self, *, only_window=None, with_private=False):
"""Get a dict with data for all windows/tabs."""
data = {'windows': []}
if only_window is not None:
winlist = [only_window]
else:
winlist = objreg.window_registry
for win_id in sorted(winlist):
tabbed_browser = objreg.get('tabbed-browser', scope='window',
window=win_id)
main_window = objreg.get('main-window', scope='window',
window=win_id)
# We could be in the middle of destroying a window here
if sip.isdeleted(main_window):
continue
if tabbed_browser.private and not with_private:
continue
win_data = {}
active_window = QApplication.instance().activeWindow()
if getattr(active_window, 'win_id', None) == win_id:
win_data['active'] = True
win_data['geometry'] = bytes(main_window.saveGeometry())
win_data['tabs'] = []
if tabbed_browser.private:
win_data['private'] = True
for i, tab in enumerate(tabbed_browser.widgets()):
active = i == tabbed_browser.currentIndex()
win_data['tabs'].append(self._save_tab(tab, active))
data['windows'].append(win_data)
return data
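    # For orientation, the dict built above is what save() dumps to YAML; a
    # session file looks roughly like this (a sketch, not an exhaustive schema):
    #
    #   windows:
    #   - active: true
    #     geometry: !!binary ...
    #     tabs:
    #     - active: true
    #       history:
    #       - url: https://example.com/
    #         title: Example
    #         scroll-pos: {x: 0, y: 0}
    #         zoom: 1.0
    #         pinned: false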
def _get_session_name(self, name):
"""Helper for save to get the name to save the session to.
Args:
name: The name of the session to save, or the 'default' sentinel
object.
"""
if name is default:
name = config.get('general', 'session-default-name')
if name is None:
if self._current is not None:
name = self._current
else:
name = 'default'
return name
def save(self, name, last_window=False, load_next_time=False,
only_window=None, with_private=False):
"""Save a named session.
Args:
name: The name of the session to save, or the 'default' sentinel
object.
last_window: If set, saves the saved self._last_window_session
instead of the currently open state.
load_next_time: If set, prepares this session to be load next time.
only_window: If set, only tabs in the specified window is saved.
with_private: Include private windows.
Return:
The name of the saved session.
"""
name = self._get_session_name(name)
path = self._get_session_path(name)
log.sessions.debug("Saving session {} to {}...".format(name, path))
if last_window:
data = self._last_window_session
if data is None:
log.sessions.error("last_window_session is None while saving!")
return
else:
data = self._save_all(only_window=only_window,
with_private=with_private)
log.sessions.vdebug("Saving data: {}".format(data))
try:
with qtutils.savefile_open(path) as f:
yaml.dump(data, f, Dumper=YamlDumper, default_flow_style=False,
encoding='utf-8', allow_unicode=True)
except (OSError, UnicodeEncodeError, yaml.YAMLError) as e:
raise SessionError(e)
else:
self.update_completion.emit()
if load_next_time:
state_config = objreg.get('state-config')
state_config['general']['session'] = name
return name
def save_autosave(self):
"""Save the autosave session."""
try:
self.save('_autosave')
except SessionError as e:
log.sessions.error("Failed to save autosave session: {}".format(e))
def delete_autosave(self):
"""Delete the autosave session."""
try:
self.delete('_autosave')
except SessionNotFoundError:
# Exiting before the first load finished
pass
except SessionError as e:
log.sessions.error("Failed to delete autosave session: {}"
.format(e))
def save_last_window_session(self):
"""Temporarily save the session for the last closed window."""
self._last_window_session = self._save_all()
def _load_tab(self, new_tab, data):
"""Load yaml data into a newly opened tab."""
entries = []
for histentry in data['history']:
user_data = {}
if 'zoom' in data:
# The zoom was accidentally stored in 'data' instead of per-tab
# earlier.
# See https://github.com/qutebrowser/qutebrowser/issues/728
user_data['zoom'] = data['zoom']
elif 'zoom' in histentry:
user_data['zoom'] = histentry['zoom']
if 'scroll-pos' in data:
# The scroll position was accidentally stored in 'data' instead
# of per-tab earlier.
# See https://github.com/qutebrowser/qutebrowser/issues/728
pos = data['scroll-pos']
user_data['scroll-pos'] = QPoint(pos['x'], pos['y'])
elif 'scroll-pos' in histentry:
pos = histentry['scroll-pos']
user_data['scroll-pos'] = QPoint(pos['x'], pos['y'])
if 'pinned' in histentry:
new_tab.data.pinned = histentry['pinned']
active = histentry.get('active', False)
url = QUrl.fromEncoded(histentry['url'].encode('ascii'))
if 'original-url' in histentry:
orig_url = QUrl.fromEncoded(
histentry['original-url'].encode('ascii'))
else:
orig_url = url
entry = TabHistoryItem(url=url, original_url=orig_url,
title=histentry['title'], active=active,
user_data=user_data)
entries.append(entry)
if active:
new_tab.title_changed.emit(histentry['title'])
try:
new_tab.history.load_items(entries)
except ValueError as e:
raise SessionError(e)
def load(self, name, temp=False):
"""Load a named session.
Args:
name: The name of the session to load.
temp: If given, don't set the current session.
"""
from qutebrowser.mainwindow import mainwindow
path = self._get_session_path(name, check_exists=True)
try:
with open(path, encoding='utf-8') as f:
data = yaml.load(f, Loader=YamlLoader)
except (OSError, UnicodeDecodeError, yaml.YAMLError) as e:
raise SessionError(e)
log.sessions.debug("Loading session {} from {}...".format(name, path))
for win in data['windows']:
window = mainwindow.MainWindow(geometry=win['geometry'],
private=win.get('private', None))
window.show()
tabbed_browser = objreg.get('tabbed-browser', scope='window',
window=window.win_id)
tab_to_focus = None
for i, tab in enumerate(win['tabs']):
new_tab = tabbed_browser.tabopen()
self._load_tab(new_tab, tab)
if tab.get('active', False):
tab_to_focus = i
if new_tab.data.pinned:
tabbed_browser.set_tab_pinned(
i, new_tab.data.pinned, loading=True)
if tab_to_focus is not None:
tabbed_browser.setCurrentIndex(tab_to_focus)
if win.get('active', False):
QTimer.singleShot(0, tabbed_browser.activateWindow)
if data['windows']:
self.did_load = True
if not name.startswith('_') and not temp:
self._current = name
def delete(self, name):
"""Delete a session."""
path = self._get_session_path(name, check_exists=True)
os.remove(path)
self.update_completion.emit()
def list_sessions(self):
"""Get a list of all session names."""
sessions = []
for filename in os.listdir(self._base_path):
base, ext = os.path.splitext(filename)
if ext == '.yml':
sessions.append(base)
return sessions
@cmdutils.register(instance='session-manager')
@cmdutils.argument('name', completion=usertypes.Completion.sessions)
def session_load(self, name, clear=False, temp=False, force=False):
"""Load a session.
Args:
name: The name of the session.
clear: Close all existing windows.
temp: Don't set the current session for :session-save.
force: Force loading internal sessions (starting with an
                   underscore).
"""
if name.startswith('_') and not force:
raise cmdexc.CommandError("{} is an internal session, use --force "
"to load anyways.".format(name))
old_windows = list(objreg.window_registry.values())
try:
self.load(name, temp=temp)
except SessionNotFoundError:
raise cmdexc.CommandError("Session {} not found!".format(name))
except SessionError as e:
raise cmdexc.CommandError("Error while loading session: {}"
.format(e))
else:
if clear:
for win in old_windows:
win.close()
@cmdutils.register(name=['session-save', 'w'], instance='session-manager')
@cmdutils.argument('name', completion=usertypes.Completion.sessions)
@cmdutils.argument('win_id', win_id=True)
@cmdutils.argument('with_private', flag='p')
def session_save(self, name: str = default, current=False, quiet=False,
force=False, only_active_window=False, with_private=False,
win_id=None):
"""Save a session.
Args:
name: The name of the session. If not given, the session configured
in general -> session-default-name is saved.
current: Save the current session instead of the default.
quiet: Don't show confirmation message.
            force: Force saving internal sessions (starting with an underscore).
only_active_window: Saves only tabs of the currently active window.
with_private: Include private windows.
"""
if name is not default and name.startswith('_') and not force:
raise cmdexc.CommandError("{} is an internal session, use --force "
"to save anyways.".format(name))
if current:
if self._current is None:
raise cmdexc.CommandError("No session loaded currently!")
name = self._current
assert not name.startswith('_')
try:
if only_active_window:
name = self.save(name, only_window=win_id,
with_private=with_private)
else:
name = self.save(name, with_private=with_private)
except SessionError as e:
raise cmdexc.CommandError("Error while saving session: {}"
.format(e))
else:
if not quiet:
message.info("Saved session {}.".format(name))
@cmdutils.register(instance='session-manager')
@cmdutils.argument('name', completion=usertypes.Completion.sessions)
def session_delete(self, name, force=False):
"""Delete a session.
Args:
name: The name of the session.
force: Force deleting internal sessions (starting with an
underline).
"""
if name.startswith('_') and not force:
raise cmdexc.CommandError("{} is an internal session, use --force "
"to delete anyways.".format(name))
try:
self.delete(name)
except SessionNotFoundError:
raise cmdexc.CommandError("Session {} not found!".format(name))
except (OSError, SessionError) as e:
log.sessions.exception("Error while deleting session!")
raise cmdexc.CommandError("Error while deleting session: {}"
.format(e))
else:
log.sessions.debug("Deleted session {}.".format(name))
| lahwaacz/qutebrowser | qutebrowser/misc/sessions.py | Python | gpl-3.0 | 19,672 |
# Reverse Polish Notation (RPN) writes each operator after its operands.
# For example, 1 2 + 5 4 + * means (1 + 2) * (5 + 4).
# 1 2 + 3 4 - *
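# Example session (illustrative): entering 1, 2, +, 5, 4, +, *, end one value
# per line leaves 27 on the stack, since (1 + 2) * (5 + 4) = 27.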
#input_line = input()
RPN_list = []
while True:
n = input()
if n == '+':
print('enter "+" to the stack')
a = RPN_list.pop(len(RPN_list) - 1)
b = RPN_list.pop(len(RPN_list) - 1)
RPN_list.append(int(a) + int(b))
print(RPN_list)
elif n == '-':
print('enter "-" to the stack')
a = RPN_list.pop(len(RPN_list) - 1)
b = RPN_list.pop(len(RPN_list) - 1)
RPN_list.append(int(b) - int(a))
print(RPN_list)
elif n == '*':
print('enter "*" to the stack')
a = RPN_list.pop(len(RPN_list) - 1)
b = RPN_list.pop(len(RPN_list) - 1)
RPN_list.append(int(a) * int(b))
print(RPN_list)
elif n == 'end':
break
else:
if n.isdigit():
print('enter "' + n + '" to the stack')
RPN_list.append(n)
print(RPN_list)
else:
print('You can enter digit only.')
print('the answer is ')
print(RPN_list.pop(len(RPN_list) - 1))
print('finished.')
| shofujimoto/examples | until_201803/python/src/algorithm/sort/stack.py | Python | mit | 1,220 |
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
from paddle.fluid.framework import convert_np_dtype_to_dtype_, Program, program_guard
import paddle.fluid.core as core
import numpy as np
import copy
import unittest
import sys
sys.path.append("../")
from op_test import OpTest
class SequenceMaskTestBase(OpTest):
def initDefaultParameters(self):
self.op_type = 'sequence_mask'
self.maxlen = 10
self.mask_dtype = 'int64'
self.x = [[0, 3, 4], [5, 7, 9]]
def initParameters(self):
pass
def setUp(self):
self.initDefaultParameters()
self.initParameters()
if not isinstance(self.x, np.ndarray):
self.x = np.array(self.x)
self.inputs = {'X': self.x}
self.outputs = {'Y': self.calc_ground_truth_mask()}
self.attrs = {
'maxlen': self.maxlen,
'out_dtype': convert_np_dtype_to_dtype_(self.mask_dtype)
}
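    # The expected mask has shape x.shape + (maxlen,), where mask[..., j] == 1
    # exactly when j < x[...], which is the semantics of the sequence_mask op
    # under test.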
def calc_ground_truth_mask(self):
maxlen = np.max(self.x) if self.maxlen < 0 else self.maxlen
shape = self.x.shape + (maxlen, )
index_broadcast = np.broadcast_to(
np.reshape(
range(maxlen), newshape=[1] * self.x.ndim + [-1]),
shape=shape)
x_broadcast = np.broadcast_to(
np.reshape(
self.x, newshape=self.x.shape + (-1, )), shape=shape)
return (index_broadcast < x_broadcast).astype(self.mask_dtype)
def test_check_output(self):
self.check_output()
class SequenceMaskTest1(SequenceMaskTestBase):
def initParameters(self):
self.mask_dtype = 'bool'
class SequenceMaskTest2(SequenceMaskTestBase):
def initParameters(self):
self.mask_dtype = 'uint8'
class SequenceMaskTest3(SequenceMaskTestBase):
def initParameters(self):
self.mask_dtype = 'int32'
class SequenceMaskTest4(SequenceMaskTestBase):
def initParameters(self):
self.mask_dtype = 'float32'
class SequenceMaskTest5(SequenceMaskTestBase):
def initParameters(self):
self.mask_dtype = 'float64'
class SequenceMaskTest6(SequenceMaskTestBase):
def initParameters(self):
self.maxlen = -1
class SequenceMaskTestBase_tensor_attr(OpTest):
def initDefaultParameters(self):
self.op_type = 'sequence_mask'
self.maxlen = 10
self.maxlen_tensor = np.ones((1), 'int32') * 10
self.mask_dtype = 'int64'
self.x = [[0, 3, 4], [5, 7, 9]]
def initParameters(self):
pass
def setUp(self):
self.initDefaultParameters()
self.initParameters()
if not isinstance(self.x, np.ndarray):
self.x = np.array(self.x)
self.inputs = {'X': self.x, 'MaxLenTensor': self.maxlen_tensor}
self.outputs = {'Y': self.calc_ground_truth_mask()}
self.attrs = {'out_dtype': convert_np_dtype_to_dtype_(self.mask_dtype)}
def calc_ground_truth_mask(self):
maxlen = np.max(self.x) if self.maxlen < 0 else self.maxlen
shape = self.x.shape + (maxlen, )
index_broadcast = np.broadcast_to(
np.reshape(
range(maxlen), newshape=[1] * self.x.ndim + [-1]),
shape=shape)
x_broadcast = np.broadcast_to(
np.reshape(
self.x, newshape=self.x.shape + (-1, )), shape=shape)
return (index_broadcast < x_broadcast).astype(self.mask_dtype)
def test_check_output(self):
self.check_output()
class SequenceMaskTest1_tensor_attr(SequenceMaskTestBase_tensor_attr):
def initParameters(self):
self.mask_dtype = 'bool'
class SequenceMaskTest2_tensor_attr(SequenceMaskTestBase_tensor_attr):
def initParameters(self):
self.mask_dtype = 'uint8'
class SequenceMaskTest3_tensor_attr(SequenceMaskTestBase_tensor_attr):
def initParameters(self):
self.mask_dtype = 'int32'
class SequenceMaskTest4_tensor_attr(SequenceMaskTestBase_tensor_attr):
def initParameters(self):
self.mask_dtype = 'float32'
class SequenceMaskTest5_tensor_attr(SequenceMaskTestBase_tensor_attr):
def initParameters(self):
self.mask_dtype = 'float64'
class TestSequenceMaskOpError(unittest.TestCase):
def test_errors(self):
with program_guard(Program(), Program()):
input_data = np.random.uniform(1, 5, [4]).astype("float32")
def test_Variable():
# the input must be Variable
fluid.layers.sequence_mask(input_data, maxlen=4)
self.assertRaises(TypeError, test_Variable)
if __name__ == '__main__':
unittest.main()
| luotao1/Paddle | python/paddle/fluid/tests/unittests/sequence/test_sequence_mask.py | Python | apache-2.0 | 5,203 |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
""" Lector: lector.py
Copyright (C) 2011-2014 Davide Setti, Zdenko Podobný
Website: http://code.google.com/p/lector
This program is released under the GNU GPLv2
"""
#pylint: disable-msg=C0103
# System
import sys
import os
from PyQt5.QtGui import QIcon
from PyQt5.QtCore import (QSettings, QPoint, QSize, QTime, qsrand, pyqtSlot,
QLocale, QTranslator)
from PyQt5.QtWidgets import (QApplication, QMainWindow, QFileDialog,
QMessageBox)
# Lector
from ui.ui_lector import Ui_Lector
from settingsdialog import Settings
from ocrwidget import QOcrWidget
from editor.textwidget import TextWidget, EditorBar
from utils import get_tesseract_languages
from utils import settings
__version__ = "1.0.0dev"
class Window(QMainWindow):
""" MainWindow
"""
ocrAvailable = True
thread = None
def __init__(self, hasScanner=True):
QMainWindow.__init__(self)
self.curDir = ""
self.ui = Ui_Lector()
self.ui.setupUi(self)
self.ocrWidget = QOcrWidget("eng", 1, self.statusBar())
self.textEditor = TextWidget()
self.textEditorBar = EditorBar()
self.textEditorBar.saveDocAsSignal.connect(self.textEditor.saveAs)
self.textEditorBar.spellSignal.connect(self.textEditor.toggleSpell)
self.textEditorBar.whiteSpaceSignal.connect(
self.textEditor.togglewhiteSpace)
self.textEditorBar.boldSignal.connect(self.textEditor.toggleBold)
self.textEditorBar.italicSignal.connect(self.textEditor.toggleItalic)
self.textEditorBar.underlineSignal.connect(
self.textEditor.toggleUnderline)
self.textEditorBar.strikethroughSignal.connect(
self.textEditor.toggleStrikethrough)
self.textEditorBar.subscriptSignal.connect(
self.textEditor.toggleSubscript)
self.textEditorBar.superscriptSignal.connect(
self.textEditor.toggleSuperscript)
self.textEditor.fontFormatSignal.connect(
self.textEditorBar.toggleFormat)
self.ui.mwTextEditor.addToolBar(self.textEditorBar)
self.ui.mwTextEditor.setCentralWidget(self.textEditor)
self.ocrWidget.textEditor = self.textEditor
self.setCentralWidget(self.ocrWidget)
self.ui.actionRotateRight.triggered.connect(self.ocrWidget.rotateRight)
self.ui.actionRotateLeft.triggered.connect(self.ocrWidget.rotateLeft)
self.ui.actionRotateFull.triggered.connect(self.ocrWidget.rotateFull)
self.ui.actionZoomIn.triggered.connect(self.ocrWidget.zoomIn)
self.ui.actionZoomOut.triggered.connect(self.ocrWidget.zoomOut)
self.ui.actionOcr.triggered.connect(self.ocrWidget.doOcr)
self.ocrWidget.scene().changedSelectedAreaType.connect(
self.changedSelectedAreaType)
try:
languages = list(get_tesseract_languages())
except TypeError: # tesseract is not installed
# TODO: replace QMessageBox.warning with QErrorMessage (but we need
# to keep state
# dialog = QErrorMessage(self)
# dialog.showMessage(
            #     self.tr("tesseract not available. Please check requirements"))
QMessageBox.warning(self, "Tesseract",
self.tr("Tessaract is not available. "
"Please check requirements."))
self.ocrAvailable = False
self.on_actionSettings_triggered(2)
else:
languages.sort()
languages_ext = {
'bul': self.tr('Bulgarian'),
'cat': self.tr('Catalan'),
'ces': self.tr('Czech'),
'chi_tra': self.tr('Chinese (Traditional)'),
'chi_sim': self.tr('Chinese (Simplified)'),
'dan': self.tr('Danish'),
'dan-frak': self.tr('Danish (Fraktur)'),
'nld': self.tr('Dutch'),
'eng': self.tr('English'),
'fin': self.tr('Finnish'),
'fra': self.tr('French'),
'deu': self.tr('German'),
'deu-frak': self.tr('German (Fraktur)'),
'ell': self.tr('Greek'),
'hun': self.tr('Hungarian'),
'ind': self.tr('Indonesian'),
'ita': self.tr('Italian'),
'jpn': self.tr('Japanese'),
'kor': self.tr('Korean'),
'lav': self.tr('Latvian'),
'lit': self.tr('Lithuanian'),
'nor': self.tr('Norwegian'),
'pol': self.tr('Polish'),
'por': self.tr('Portuguese'),
'ron': self.tr('Romanian'),
'rus': self.tr('Russian'),
'slk': self.tr('Slovak'),
'slk-frak': self.tr('Slovak (Fraktur)'),
'slv': self.tr('Slovenian'),
'spa': self.tr('Spanish'),
'srp': self.tr('Serbian'),
'swe': self.tr('Swedish'),
'swe-frak': self.tr('Swedish (Fraktur)'),
'tgl': self.tr('Tagalog'),
'tur': self.tr('Turkish'),
'ukr': self.tr('Ukrainian'),
'vie': self.tr('Vietnamese')
}
for lang in languages:
try:
lang_ext = languages_ext[lang]
except KeyError:
continue
self.ui.rbtn_lang_select.addItem(lang_ext, lang)
self.ui.rbtn_lang_select.currentIndexChanged.connect(
self.changeLanguage)
#disable useless actions until a file has been opened
self.enableActions(False)
## load saved settings
self.readSettings()
self.ui.actionScan.setEnabled(False)
if hasScanner:
self.on_actionChangeDevice_triggered()
if not self.statusBar().currentMessage():
self.statusBar().showMessage(self.tr("Ready"), 2000)
@pyqtSlot()
def on_actionChangeDevice_triggered(self):
##SANE
message = ''
try:
import sane
except ImportError:
# sane found no scanner - disable scanning;
message = self.tr("Sane not found! Scanning is disabled.")
else:
from .scannerselect import ScannerSelect
sane.init()
sane_list = sane.get_devices()
saved_device = settings.get('scanner:device')
if saved_device in [x[0] for x in sane_list]:
message = self.tr("Sane found configured device...")
self.scannerSelected()
elif not sane_list:
message = self.tr("Sane dit not find any device! "
"Scanning is disabled.")
else:
                # there is no configured device => run the configuration
ss = ScannerSelect(sane_list, parent=self)
ss.accepted.connect(self.scannerSelected)
ss.show()
self.statusBar().showMessage(message, 2000)
def scannerSelected(self):
self.ui.actionScan.setEnabled(True)
if self.thread is None:
from .scannerthread import ScannerThread
self.thread = ScannerThread(self, settings.get('scanner:device'))
self.thread.scannedImage.connect(self.on_scannedImage)
def on_scannedImage(self):
self.ocrWidget.scene().im = self.thread.im
fn = self.tr("Unknown")
self.ocrWidget.filename = fn
self.ocrWidget.prepareDimensions()
self.setWindowTitle("Lector: " + fn)
self.enableActions()
@pyqtSlot()
def on_actionSettings_triggered(self, tabIndex = 0):
settings_dialog = Settings(self, tabIndex)
settings_dialog.accepted.connect(self.updateTextEditor)
settings_dialog.show()
def updateTextEditor(self):
self.textEditor.setEditorFont()
@pyqtSlot()
def on_actionOpen_triggered(self):
fn, _ = QFileDialog.getOpenFileName(self,
self.tr("Open image"), self.curDir,
self.tr("Images (*.tif *.tiff *.png *.jpg *.xpm)")
)
if not fn: return
self.ocrWidget.filename = fn
self.curDir = os.path.dirname(fn)
self.ocrWidget.changeImage()
self.setWindowTitle("Lector: " + fn)
self.enableActions(True)
def enableActions(self, enable=True):
for action in (self.ui.actionRotateRight,
self.ui.actionRotateLeft,
self.ui.actionRotateFull,
self.ui.actionZoomIn,
self.ui.actionZoomOut,
self.ui.actionSaveDocumentAs,
self.ui.actionSaveImageAs,):
action.setEnabled(enable)
self.ui.actionOcr.setEnabled(enable and self.ocrAvailable)
@pyqtSlot()
def on_actionScan_triggered(self):
self.thread.run()
##TODO: check thread end before the submission of a new task
#self.thread.wait()
def changeLanguage(self, row):
lang = self.sender().itemData(row)
self.ocrWidget.language = lang
@pyqtSlot()
def on_rbtn_text_clicked(self):
self.ocrWidget.areaType = 1
@pyqtSlot()
def on_rbtn_image_clicked(self):
self.ocrWidget.areaType = 2
    # Events for changing the type of the selected box:
    # from image type to text,
    # or from text type to image.
@pyqtSlot()
def on_rbtn_areato_text_clicked(self):
self.ocrWidget.scene().changeSelectedAreaType(1)
@pyqtSlot()
def on_rbtn_areato_image_clicked(self):
self.ocrWidget.scene().changeSelectedAreaType(2)
def readSettings(self):
""" Read settings
"""
setting = QSettings("Davide Setti", "Lector")
pos = setting.value("pos", QPoint(50, 50))
size = setting.value("size", QSize(800, 500))
self.curDir = setting.value("file_dialog_dir", '~/')
self.resize(size)
self.move(pos)
self.restoreGeometry(setting.value("mainWindowGeometry"))
self.restoreState(setting.value("mainWindowState"))
## load saved language
lang = setting.value("rbtn/lang", "")
try:
currentIndex = self.ui.rbtn_lang_select.findData(lang)
self.ui.rbtn_lang_select.setCurrentIndex(currentIndex)
self.ocrWidget.language = lang
except KeyError:
pass
def writeSettings(self):
""" Store settings
"""
settings.set("pos", self.pos())
settings.set("size", self.size())
settings.set("file_dialog_dir", self.curDir)
settings.set("mainWindowGeometry", self.saveGeometry())
settings.set("mainWindowState", self.saveState())
## save language
settings.set("rbtn/lang", self.ocrWidget.language)
def closeEvent(self, event):
""" Action before closing app
"""
if (not self.ocrWidget.scene().isModified) or self.areYouSureToExit():
self.writeSettings()
event.accept()
else:
event.ignore()
def areYouSureToExit(self):
ret = QMessageBox(self.tr("Lector"),
self.tr("Are you sure you want to exit?"),
QMessageBox.Warning,
QMessageBox.Yes | QMessageBox.Default,
QMessageBox.No | QMessageBox.Escape,
QMessageBox.NoButton)
ret.setWindowIcon(QIcon(":/icons/icons/L.png"))
ret.setButtonText(QMessageBox.Yes, self.tr("Yes"))
ret.setButtonText(QMessageBox.No, self.tr("No"))
return ret.exec_() == QMessageBox.Yes
@pyqtSlot()
def on_actionSaveDocumentAs_triggered(self):
self.textEditor.saveAs()
@pyqtSlot()
def on_actionSaveImageAs_triggered(self):
        fn, _ = QFileDialog.getSaveFileName(self,
self.tr("Save image"), self.curDir,
self.tr("PNG image (*.png);;"
"TIFF image (*.tif *.tiff);;"
"BMP image (*.bmp)")
                )
if not fn:
return
self.curDir = os.path.dirname(fn)
## TODO: move this to the Scene?
self.ocrWidget.scene().im.save(fn)
@pyqtSlot()
def on_actionAbout_Lector_triggered(self):
QMessageBox.about(self, self.tr("About Lector"), self.tr(
"<p>The <b>Lector</b> is a graphical ocr solution for GNU/"
"Linux and Windows based on Python, Qt4 and tessaract OCR.</p>"
"<p>Scanning option is available only on GNU/Linux via SANE.</p>"
"<p></p>"
"<p><b>Author:</b> Davide Setti</p><p></p>"
"<p><b>Contributors:</b> chopinX04, filip.dominec, zdposter</p>"
"<p><b>Web site:</b> http://code.google.com/p/lector</p>"
"<p><b>Source code:</b> "
"http://sourceforge.net/projects/lector-ocr/</p>"
"<p><b>Version:</b> %s</p>" % __version__)
)
def changedSelectedAreaType(self, _type):
if _type in (1, 2):
self.ui.rbtn_areato_text.setCheckable(True)
self.ui.rbtn_areato_image.setCheckable(True)
if _type == 1:
self.ui.rbtn_areato_text.setChecked(True)
else: #_type = 2
self.ui.rbtn_areato_image.setChecked(True)
else:
self.ui.rbtn_areato_text.setCheckable(False)
self.ui.rbtn_areato_text.update()
self.ui.rbtn_areato_image.setCheckable(False)
self.ui.rbtn_areato_image.update()
## MAIN
def main():
if settings.get('log:errors'):
log_filename = settings.get('log:filename')
if log_filename:
try:
log_file = open(log_filename,"w")
print ('Redirecting stderr/stdout... to %s' % log_filename)
sys.stderr = log_file
sys.stdout = log_file
except IOError:
print("Lector could not open log file '%s'!\n" % log_filename \
+ " Redirecting will not work.")
else:
print("Log file is not set. Please set it in settings.")
app = QApplication(sys.argv)
opts = [str(arg) for arg in app.arguments()[1:]]
if '--no-scanner' in opts:
scanner = False
else:
scanner = True
qsrand(QTime(0, 0, 0).secsTo(QTime.currentTime()))
locale = settings.get('ui:lang')
if not locale:
locale = QLocale.system().name()
qtTranslator = QTranslator()
if qtTranslator.load(":/translations/ts/lector_" + locale, 'ts'):
app.installTranslator(qtTranslator)
window = Window(scanner)
window.show()
app.exec_()
if __name__ == "__main__":
main()
| zdenop/lector | lector/lector.py | Python | gpl-2.0 | 14,955 |
# coding=utf-8
# Copyright 2022 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
r"""Links the concepts in the questions and reformat the dataset to OpenCSR.
Example usage:
ORI_DATA_DIR=/path/to/datasets/CSQA/; \
DATA_DIR=/path/to/nscskg_data/xm_drfact_output_bert200/; \
VOCAB=/path/to/nscskg_data/gkb_best.vocab.txt \
SPLIT=csqa_train NUM_CHOICES=1; \
link_questions \
--csqa_file $ORI_DATA_DIR/${SPLIT}_processed.jsonl \
--output_file $DATA_DIR/linked_${SPLIT}.jsonl \
--indexed_concept_file ${VOCAB} \
--do_filtering ${NUM_CHOICES} \
--alsologtostderr
SPLIT=csqa_dev; \
link_questions \
--csqa_file $ORI_DATA_DIR/${SPLIT}_processed.jsonl \
--output_file $DATA_DIR/linked_${SPLIT}.jsonl \
--indexed_concept_file ${VOCAB} \
--disable_overlap --alsologtostderr
"""
import itertools
import json
from absl import app
from absl import flags
from absl import logging
from language.google.drfact import index_corpus
import tensorflow.compat.v1 as tf
from tqdm import tqdm
FLAGS = flags.FLAGS
flags.DEFINE_string("csqa_file", None, "Path to dataset file.")
flags.DEFINE_string("index_data_dir", None,
"Path to Entity co-occurrence directory.")
flags.DEFINE_string("vocab_file", None, "Path to vocab for tokenizer.")
flags.DEFINE_string("indexed_concept_file", None, "Path to indexed vocab.")
flags.DEFINE_string("output_file", None, "Path to Output file.")
flags.DEFINE_boolean("tfidf", False, "Whether to use tfidf for linking.")
flags.DEFINE_boolean("disable_overlap", None,
"Whether to use tfidf for linking.")
flags.DEFINE_integer(
"do_filtering", -1,
"Whether to ignore the examples where at least one choice"
" is not in the vocab.")
entity2id = None
def remove_intersection(dict_of_sets):
"""Removes the intersection of any two sets in a list."""
for i, j in itertools.combinations(list(dict_of_sets.keys()), 2):
set_i = dict_of_sets[i]
set_j = dict_of_sets[j]
dict_of_sets[i] = set_i - set_j
dict_of_sets[j] = set_j - set_i
return dict_of_sets
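# Illustrative example of the behaviour above (not from the original module):
# {"a": {1, 2}, "b": {2, 3}} becomes {"a": {1}, "b": {3}} -- a concept shared
# by two choices is dropped from both of them.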
def main(_):
global entity2id
logging.set_verbosity(logging.INFO)
logging.info("Reading CSQA(-formatted) data...")
with tf.gfile.Open(FLAGS.csqa_file) as f:
jsonlines = f.read().split("\n")
data = [json.loads(jsonline) for jsonline in jsonlines if jsonline]
logging.info("Done.")
logging.info("Entity linking %d questions...", len(data))
all_questions = []
entity2id = index_corpus.load_concept_vocab(
FLAGS.indexed_concept_file)
linked_question_entities = []
for item in tqdm(data, desc="Matching concepts in the questions."):
concept_mentions = index_corpus.simple_match(
lemmas=item["question"]["lemmas"],
concept_vocab=entity2id,
max_len=4,
disable_overlap=True) # Note: we want to limit size of init facts.
qry_concept_set = set()
qry_concept_list = []
for m in concept_mentions:
c = m["mention"].lower()
if c not in qry_concept_set:
qry_concept_set.add(c)
qry_concept_list.append({"kb_id": c, "name": c})
linked_question_entities.append(qry_concept_list)
num_empty_questions = 0
num_empty_choices = 0
num_empty_answers = 0
for ii, item in tqdm(enumerate(data), desc="Processing", total=len(data)):
# pylint: disable=g-complex-comprehension
if item["answerKey"] in "ABCDE":
truth_choice = ord(item["answerKey"]) - ord("A") # e.g., A-->0
elif item["answerKey"] in "12345":
truth_choice = int(item["answerKey"]) - 1
choices = item["question"]["choices"]
assert choices[truth_choice]["label"] == item["answerKey"]
correct_answer = choices[truth_choice]["text"].lower()
# Check the mentioned concepts in each choice.
choice2concepts = {}
for c in choices:
mentioned_concepts = []
for m in index_corpus.simple_match(
c["lemmas"],
concept_vocab=entity2id,
max_len=4,
disable_overlap=FLAGS.disable_overlap):
mentioned_concepts.append(m["mention"])
choice2concepts[c["text"].lower()] = set(mentioned_concepts)
choice2concepts = remove_intersection(choice2concepts)
non_empty_choices = sum([bool(co) for _, co in choice2concepts.items()])
num_empty_choices += len(choices) - non_empty_choices
if not linked_question_entities[ii]:
num_empty_questions += 1
continue
if not choice2concepts[correct_answer]:
# the correct answer does not contain any concepts, skip it.
num_empty_answers += 1
continue
if FLAGS.do_filtering > 0:
if non_empty_choices < FLAGS.do_filtering:
continue
choice2concepts = {
k: sorted(list(v), key=lambda x: -len(x)) # Sort concepts by len.
for k, v in choice2concepts.items()
}
sup_facts = choice2concepts[correct_answer]
all_questions.append({
"question": item["question"]["stem"],
"entities": linked_question_entities[ii],
"answer": correct_answer,
"_id": item["id"],
"level": "N/A", # hotpotQA-specific keys: hard/medium/easy
"type": "N/A", # hotpotQA-specific keys: comparison, bridge, etc.
"supporting_facts": [{
"kb_id": c,
"name": c
} for c in sup_facts], # hotpotQA-specific keys
"choice2concepts": choice2concepts,
})
with tf.gfile.Open(FLAGS.output_file, "w") as f_out:
logging.info("Writing questions to output file...%s", f_out.name)
logging.info("Number of questions %d", len(all_questions))
f_out.write("\n".join(json.dumps(q) for q in all_questions))
logging.info("===============================================")
logging.info("%d questions without entities (out of %d)", num_empty_questions,
len(data))
logging.info("%d answers not IN entities (out of %d)", num_empty_answers,
len(data))
logging.info("%d choices not IN entities (out of %d)", num_empty_choices,
5 * len(data))
if __name__ == "__main__":
app.run(main)
| google-research/google-research | drfact/link_questions.py | Python | apache-2.0 | 6,579 |
# -*- encoding: utf-8 -*-
from abjad.tools import indicatortools
from abjad.tools import markuptools
from abjad.tools import pitchtools
from abjad.tools.instrumenttools.Instrument import Instrument
class Marimba(Instrument):
r'''A marimba.
::
>>> staff = Staff("c'4 d'4 e'4 fs'4")
>>> marimba = instrumenttools.Marimba()
>>> attach(marimba, staff)
>>> show(staff) # doctest: +SKIP
.. doctest::
>>> print(format(staff))
\new Staff {
\set Staff.instrumentName = \markup { Marimba }
\set Staff.shortInstrumentName = \markup { Mb. }
c'4
d'4
e'4
fs'4
}
'''
### CLASS VARIABLES ###
__slots__ = ()
### INITIALIZER ###
def __init__(
self,
instrument_name='marimba',
short_instrument_name='mb.',
instrument_name_markup=None,
short_instrument_name_markup=None,
allowable_clefs=('treble', 'bass'),
pitch_range='[F2, C7]',
sounding_pitch_of_written_middle_c=None,
):
Instrument.__init__(
self,
instrument_name=instrument_name,
short_instrument_name=short_instrument_name,
instrument_name_markup=instrument_name_markup,
short_instrument_name_markup=short_instrument_name_markup,
allowable_clefs=allowable_clefs,
pitch_range=pitch_range,
sounding_pitch_of_written_middle_c=\
sounding_pitch_of_written_middle_c,
)
self._performer_names.extend([
'percussionist',
])
### PUBLIC PROPERTIES ###
@property
def allowable_clefs(self):
r'''Gets marimba's allowable clefs.
.. container:: example
::
>>> marimba.allowable_clefs
ClefInventory([Clef(name='treble'), Clef(name='bass')])
::
>>> show(marimba.allowable_clefs) # doctest: +SKIP
Returns clef inventory.
'''
return Instrument.allowable_clefs.fget(self)
@property
def instrument_name(self):
r'''Gets marimba's name.
.. container:: example
::
>>> marimba.instrument_name
'marimba'
Returns string.
'''
return Instrument.instrument_name.fget(self)
@property
def instrument_name_markup(self):
r'''Gets marimba's instrument name markup.
.. container:: example
::
>>> marimba.instrument_name_markup
Markup(contents=('Marimba',))
::
>>> show(marimba.instrument_name_markup) # doctest: +SKIP
Returns markup.
'''
return Instrument.instrument_name_markup.fget(self)
@property
def pitch_range(self):
r'''Gets marimba's range.
.. container:: example
::
>>> marimba.pitch_range
PitchRange(range_string='[F2, C7]')
::
>>> show(marimba.pitch_range) # doctest: +SKIP
Returns pitch range.
'''
return Instrument.pitch_range.fget(self)
@property
def short_instrument_name(self):
r'''Gets marimba's short instrument name.
.. container:: example
::
>>> marimba.short_instrument_name
'mb.'
Returns string.
'''
return Instrument.short_instrument_name.fget(self)
@property
def short_instrument_name_markup(self):
r'''Gets marimba's short instrument name markup.
.. container:: example
::
>>> marimba.short_instrument_name_markup
Markup(contents=('Mb.',))
::
>>> show(marimba.short_instrument_name_markup) # doctest: +SKIP
Returns markup.
'''
return Instrument.short_instrument_name_markup.fget(self)
@property
def sounding_pitch_of_written_middle_c(self):
r'''Gets sounding pitch of marimba's written middle C.
.. container:: example
::
>>> marimba.sounding_pitch_of_written_middle_c
NamedPitch("c'")
::
>>> show(marimba.sounding_pitch_of_written_middle_c) # doctest: +SKIP
Returns named pitch.
'''
return Instrument.sounding_pitch_of_written_middle_c.fget(self)
| mscuthbert/abjad | abjad/tools/instrumenttools/Marimba.py | Python | gpl-3.0 | 4,520 |
from django.conf import settings
from django.template.defaulttags import register
from survey.models import Record
def display_violation(value):
try:
return dict(Record.VIOLATIONS_CHOICES)[value]
except KeyError:
return settings.TEMPLATE_STRING_IF_INVALID
register.filter('display_violation', display_violation)
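# Illustrative template usage (the variable name is an assumption, not taken
# from this app): {{ record.violation|display_violation }}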
| simonspa/django-datacollect | datacollect/questionnaire/templatetags/custom_tags.py | Python | gpl-3.0 | 304 |
# coding: utf-8
from __future__ import unicode_literals
import re
import json
import base64
import zlib
from hashlib import sha1
from math import pow, sqrt, floor
from .common import InfoExtractor
from ..compat import (
compat_etree_fromstring,
compat_urllib_parse_urlencode,
compat_urllib_request,
compat_urlparse,
)
from ..utils import (
ExtractorError,
bytes_to_intlist,
intlist_to_bytes,
int_or_none,
lowercase_escape,
remove_end,
sanitized_Request,
unified_strdate,
urlencode_postdata,
xpath_text,
extract_attributes,
)
from ..aes import (
aes_cbc_decrypt,
)
class CrunchyrollBaseIE(InfoExtractor):
_LOGIN_URL = 'https://www.crunchyroll.com/login'
_LOGIN_FORM = 'login_form'
_NETRC_MACHINE = 'crunchyroll'
def _login(self):
(username, password) = self._get_login_info()
if username is None:
return
login_page = self._download_webpage(
self._LOGIN_URL, None, 'Downloading login page')
def is_logged(webpage):
return '<title>Redirecting' in webpage
# Already logged in
if is_logged(login_page):
return
login_form_str = self._search_regex(
r'(?P<form><form[^>]+?id=(["\'])%s\2[^>]*>)' % self._LOGIN_FORM,
login_page, 'login form', group='form')
post_url = extract_attributes(login_form_str).get('action')
if not post_url:
post_url = self._LOGIN_URL
elif not post_url.startswith('http'):
post_url = compat_urlparse.urljoin(self._LOGIN_URL, post_url)
login_form = self._form_hidden_inputs(self._LOGIN_FORM, login_page)
login_form.update({
'login_form[name]': username,
'login_form[password]': password,
})
response = self._download_webpage(
post_url, None, 'Logging in', 'Wrong login info',
data=urlencode_postdata(login_form),
headers={'Content-Type': 'application/x-www-form-urlencoded'})
# Successful login
if is_logged(response):
return
error = self._html_search_regex(
'(?s)<ul[^>]+class=["\']messages["\'][^>]*>(.+?)</ul>',
response, 'error message', default=None)
if error:
raise ExtractorError('Unable to login: %s' % error, expected=True)
raise ExtractorError('Unable to log in')
def _real_initialize(self):
self._login()
def _download_webpage(self, url_or_request, *args, **kwargs):
request = (url_or_request if isinstance(url_or_request, compat_urllib_request.Request)
else sanitized_Request(url_or_request))
# Accept-Language must be set explicitly to accept any language to avoid issues
# similar to https://github.com/rg3/youtube-dl/issues/6797.
# Along with IP address Crunchyroll uses Accept-Language to guess whether georestriction
# should be imposed or not (from what I can see it just takes the first language
# ignoring the priority and requires it to correspond the IP). By the way this causes
# Crunchyroll to not work in georestriction cases in some browsers that don't place
# the locale lang first in header. However allowing any language seems to workaround the issue.
request.add_header('Accept-Language', '*')
return super(CrunchyrollBaseIE, self)._download_webpage(request, *args, **kwargs)
@staticmethod
def _add_skip_wall(url):
parsed_url = compat_urlparse.urlparse(url)
qs = compat_urlparse.parse_qs(parsed_url.query)
# Always force skip_wall to bypass maturity wall, namely 18+ confirmation message:
# > This content may be inappropriate for some people.
# > Are you sure you want to continue?
# since it's not disabled by default in crunchyroll account's settings.
# See https://github.com/rg3/youtube-dl/issues/7202.
qs['skip_wall'] = ['1']
return compat_urlparse.urlunparse(
parsed_url._replace(query=compat_urllib_parse_urlencode(qs, True)))
class CrunchyrollIE(CrunchyrollBaseIE):
_VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.(?:com|fr)/(?:media(?:-|/\?id=)|[^/]*/[^/?&]*?)(?P<video_id>[0-9]+))(?:[/?&]|$)'
_TESTS = [{
'url': 'http://www.crunchyroll.com/wanna-be-the-strongest-in-the-world/episode-1-an-idol-wrestler-is-born-645513',
'info_dict': {
'id': '645513',
'ext': 'mp4',
'title': 'Wanna be the Strongest in the World Episode 1 – An Idol-Wrestler is Born!',
'description': 'md5:2d17137920c64f2f49981a7797d275ef',
'thumbnail': 'http://img1.ak.crunchyroll.com/i/spire1-tmb/20c6b5e10f1a47b10516877d3c039cae1380951166_full.jpg',
'uploader': 'Yomiuri Telecasting Corporation (YTV)',
'upload_date': '20131013',
'url': 're:(?!.*&)',
},
'params': {
# rtmp
'skip_download': True,
},
}, {
'url': 'http://www.crunchyroll.com/media-589804/culture-japan-1',
'info_dict': {
'id': '589804',
'ext': 'flv',
'title': 'Culture Japan Episode 1 – Rebuilding Japan after the 3.11',
'description': 'md5:2fbc01f90b87e8e9137296f37b461c12',
'thumbnail': r're:^https?://.*\.jpg$',
'uploader': 'Danny Choo Network',
'upload_date': '20120213',
},
'params': {
# rtmp
'skip_download': True,
},
'skip': 'Video gone',
}, {
'url': 'http://www.crunchyroll.com/rezero-starting-life-in-another-world-/episode-5-the-morning-of-our-promise-is-still-distant-702409',
'info_dict': {
'id': '702409',
'ext': 'mp4',
'title': 'Re:ZERO -Starting Life in Another World- Episode 5 – The Morning of Our Promise Is Still Distant',
'description': 'md5:97664de1ab24bbf77a9c01918cb7dca9',
'thumbnail': r're:^https?://.*\.jpg$',
'uploader': 'TV TOKYO',
'upload_date': '20160508',
},
'params': {
# m3u8 download
'skip_download': True,
},
}, {
'url': 'http://www.crunchyroll.com/konosuba-gods-blessing-on-this-wonderful-world/episode-1-give-me-deliverance-from-this-judicial-injustice-727589',
'info_dict': {
'id': '727589',
'ext': 'mp4',
'title': "KONOSUBA -God's blessing on this wonderful world! 2 Episode 1 – Give Me Deliverance from this Judicial Injustice!",
'description': 'md5:cbcf05e528124b0f3a0a419fc805ea7d',
'thumbnail': r're:^https?://.*\.jpg$',
'uploader': 'Kadokawa Pictures Inc.',
'upload_date': '20170118',
'series': "KONOSUBA -God's blessing on this wonderful world!",
'season': "KONOSUBA -God's blessing on this wonderful world! 2",
'season_number': 2,
'episode': 'Give Me Deliverance from this Judicial Injustice!',
'episode_number': 1,
},
'params': {
# m3u8 download
'skip_download': True,
},
}, {
'url': 'http://www.crunchyroll.fr/girl-friend-beta/episode-11-goodbye-la-mode-661697',
'only_matching': True,
}, {
# geo-restricted (US), 18+ maturity wall, non-premium available
'url': 'http://www.crunchyroll.com/cosplay-complex-ova/episode-1-the-birth-of-the-cosplay-club-565617',
'only_matching': True,
}, {
# A description with double quotes
'url': 'http://www.crunchyroll.com/11eyes/episode-1-piros-jszaka-red-night-535080',
'info_dict': {
'id': '535080',
'ext': 'mp4',
'title': '11eyes Episode 1 – Piros éjszaka - Red Night',
'description': 'Kakeru and Yuka are thrown into an alternate nightmarish world they call "Red Night".',
'uploader': 'Marvelous AQL Inc.',
'upload_date': '20091021',
},
'params': {
# Just test metadata extraction
'skip_download': True,
},
}, {
# make sure we can extract an uploader name that's not a link
'url': 'http://www.crunchyroll.com/hakuoki-reimeiroku/episode-1-dawn-of-the-divine-warriors-606899',
'info_dict': {
'id': '606899',
'ext': 'mp4',
'title': 'Hakuoki Reimeiroku Episode 1 – Dawn of the Divine Warriors',
'description': 'Ryunosuke was left to die, but Serizawa-san asked him a simple question "Do you want to live?"',
'uploader': 'Geneon Entertainment',
'upload_date': '20120717',
},
'params': {
# just test metadata extraction
'skip_download': True,
},
}, {
# A video with a vastly different season name compared to the series name
'url': 'http://www.crunchyroll.com/nyarko-san-another-crawling-chaos/episode-1-test-590532',
'info_dict': {
'id': '590532',
'ext': 'mp4',
'title': 'Haiyoru! Nyaruani (ONA) Episode 1 – Test',
'description': 'Mahiro and Nyaruko talk about official certification.',
'uploader': 'TV TOKYO',
'upload_date': '20120305',
'series': 'Nyarko-san: Another Crawling Chaos',
'season': 'Haiyoru! Nyaruani (ONA)',
},
'params': {
# Just test metadata extraction
'skip_download': True,
},
}]
_FORMAT_IDS = {
'360': ('60', '106'),
'480': ('61', '106'),
'720': ('62', '106'),
'1080': ('80', '108'),
}
def _decrypt_subtitles(self, data, iv, id):
data = bytes_to_intlist(base64.b64decode(data.encode('utf-8')))
iv = bytes_to_intlist(base64.b64decode(iv.encode('utf-8')))
id = int(id)
def obfuscate_key_aux(count, modulo, start):
output = list(start)
for _ in range(count):
output.append(output[-1] + output[-2])
# cut off start values
output = output[2:]
output = list(map(lambda x: x % modulo + 33, output))
return output
def obfuscate_key(key):
num1 = int(floor(pow(2, 25) * sqrt(6.9)))
num2 = (num1 ^ key) << 5
num3 = key ^ num1
num4 = num3 ^ (num3 >> 3) ^ num2
prefix = intlist_to_bytes(obfuscate_key_aux(20, 97, (1, 2)))
shaHash = bytes_to_intlist(sha1(prefix + str(num4).encode('ascii')).digest())
# Extend 160 Bit hash to 256 Bit
return shaHash + [0] * 12
key = obfuscate_key(id)
decrypted_data = intlist_to_bytes(aes_cbc_decrypt(data, key, iv))
return zlib.decompress(decrypted_data)
def _convert_subtitles_to_srt(self, sub_root):
output = ''
for i, event in enumerate(sub_root.findall('./events/event'), 1):
start = event.attrib['start'].replace('.', ',')
end = event.attrib['end'].replace('.', ',')
text = event.attrib['text'].replace('\\N', '\n')
output += '%d\n%s --> %s\n%s\n\n' % (i, start, end, text)
return output
def _convert_subtitles_to_ass(self, sub_root):
output = ''
def ass_bool(strvalue):
assvalue = '0'
if strvalue == '1':
assvalue = '-1'
return assvalue
output = '[Script Info]\n'
output += 'Title: %s\n' % sub_root.attrib['title']
output += 'ScriptType: v4.00+\n'
output += 'WrapStyle: %s\n' % sub_root.attrib['wrap_style']
output += 'PlayResX: %s\n' % sub_root.attrib['play_res_x']
output += 'PlayResY: %s\n' % sub_root.attrib['play_res_y']
output += """
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
"""
for style in sub_root.findall('./styles/style'):
output += 'Style: ' + style.attrib['name']
output += ',' + style.attrib['font_name']
output += ',' + style.attrib['font_size']
output += ',' + style.attrib['primary_colour']
output += ',' + style.attrib['secondary_colour']
output += ',' + style.attrib['outline_colour']
output += ',' + style.attrib['back_colour']
output += ',' + ass_bool(style.attrib['bold'])
output += ',' + ass_bool(style.attrib['italic'])
output += ',' + ass_bool(style.attrib['underline'])
output += ',' + ass_bool(style.attrib['strikeout'])
output += ',' + style.attrib['scale_x']
output += ',' + style.attrib['scale_y']
output += ',' + style.attrib['spacing']
output += ',' + style.attrib['angle']
output += ',' + style.attrib['border_style']
output += ',' + style.attrib['outline']
output += ',' + style.attrib['shadow']
output += ',' + style.attrib['alignment']
output += ',' + style.attrib['margin_l']
output += ',' + style.attrib['margin_r']
output += ',' + style.attrib['margin_v']
output += ',' + style.attrib['encoding']
output += '\n'
output += """
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""
for event in sub_root.findall('./events/event'):
output += 'Dialogue: 0'
output += ',' + event.attrib['start']
output += ',' + event.attrib['end']
output += ',' + event.attrib['style']
output += ',' + event.attrib['name']
output += ',' + event.attrib['margin_l']
output += ',' + event.attrib['margin_r']
output += ',' + event.attrib['margin_v']
output += ',' + event.attrib['effect']
output += ',' + event.attrib['text']
output += '\n'
return output
def _extract_subtitles(self, subtitle):
sub_root = compat_etree_fromstring(subtitle)
return [{
'ext': 'srt',
'data': self._convert_subtitles_to_srt(sub_root),
}, {
'ext': 'ass',
'data': self._convert_subtitles_to_ass(sub_root),
}]
def _get_subtitles(self, video_id, webpage):
subtitles = {}
for sub_id, sub_name in re.findall(r'\bssid=([0-9]+)"[^>]+?\btitle="([^"]+)', webpage):
sub_page = self._download_webpage(
'http://www.crunchyroll.com/xml/?req=RpcApiSubtitle_GetXml&subtitle_script_id=' + sub_id,
video_id, note='Downloading subtitles for ' + sub_name)
id = self._search_regex(r'id=\'([0-9]+)', sub_page, 'subtitle_id', fatal=False)
iv = self._search_regex(r'<iv>([^<]+)', sub_page, 'subtitle_iv', fatal=False)
data = self._search_regex(r'<data>([^<]+)', sub_page, 'subtitle_data', fatal=False)
if not id or not iv or not data:
continue
subtitle = self._decrypt_subtitles(data, iv, id).decode('utf-8')
lang_code = self._search_regex(r'lang_code=["\']([^"\']+)', subtitle, 'subtitle_lang_code', fatal=False)
if not lang_code:
continue
subtitles[lang_code] = self._extract_subtitles(subtitle)
return subtitles
def _real_extract(self, url):
mobj = re.match(self._VALID_URL, url)
video_id = mobj.group('video_id')
if mobj.group('prefix') == 'm':
mobile_webpage = self._download_webpage(url, video_id, 'Downloading mobile webpage')
webpage_url = self._search_regex(r'<link rel="canonical" href="([^"]+)" />', mobile_webpage, 'webpage_url')
else:
webpage_url = 'http://www.' + mobj.group('url')
webpage = self._download_webpage(
self._add_skip_wall(webpage_url), video_id,
headers=self.geo_verification_headers())
note_m = self._html_search_regex(
r'<div class="showmedia-trailer-notice">(.+?)</div>',
webpage, 'trailer-notice', default='')
if note_m:
raise ExtractorError(note_m)
mobj = re.search(r'Page\.messaging_box_controller\.addItems\(\[(?P<msg>{.+?})\]\)', webpage)
if mobj:
msg = json.loads(mobj.group('msg'))
if msg.get('type') == 'error':
raise ExtractorError('crunchyroll returned error: %s' % msg['message_body'], expected=True)
if 'To view this, please log in to verify you are 18 or older.' in webpage:
self.raise_login_required()
video_title = self._html_search_regex(
r'(?s)<h1[^>]*>((?:(?!<h1).)*?<span[^>]+itemprop=["\']title["\'][^>]*>(?:(?!<h1).)+?)</h1>',
webpage, 'video_title')
video_title = re.sub(r' {2,}', ' ', video_title)
video_description = self._parse_json(self._html_search_regex(
r'<script[^>]*>\s*.+?\[media_id=%s\].+?({.+?"description"\s*:.+?})\);' % video_id,
webpage, 'description', default='{}'), video_id).get('description')
if video_description:
video_description = lowercase_escape(video_description.replace(r'\r\n', '\n'))
video_upload_date = self._html_search_regex(
[r'<div>Availability for free users:(.+?)</div>', r'<div>[^<>]+<span>\s*(.+?\d{4})\s*</span></div>'],
webpage, 'video_upload_date', fatal=False, flags=re.DOTALL)
if video_upload_date:
video_upload_date = unified_strdate(video_upload_date)
video_uploader = self._html_search_regex(
# try looking for both an uploader that's a link and one that's not
[r'<a[^>]+href="/publisher/[^"]+"[^>]*>([^<]+)</a>', r'<div>\s*Publisher:\s*<span>\s*(.+?)\s*</span>\s*</div>'],
webpage, 'video_uploader', fatal=False)
available_fmts = []
for a, fmt in re.findall(r'(<a[^>]+token=["\']showmedia\.([0-9]{3,4})p["\'][^>]+>)', webpage):
attrs = extract_attributes(a)
href = attrs.get('href')
if href and '/freetrial' in href:
continue
available_fmts.append(fmt)
if not available_fmts:
for p in (r'token=["\']showmedia\.([0-9]{3,4})p"', r'showmedia\.([0-9]{3,4})p'):
available_fmts = re.findall(p, webpage)
if available_fmts:
break
video_encode_ids = []
formats = []
for fmt in available_fmts:
stream_quality, stream_format = self._FORMAT_IDS[fmt]
video_format = fmt + 'p'
streamdata_req = sanitized_Request(
'http://www.crunchyroll.com/xml/?req=RpcApiVideoPlayer_GetStandardConfig&media_id=%s&video_format=%s&video_quality=%s'
% (video_id, stream_format, stream_quality),
compat_urllib_parse_urlencode({'current_page': url}).encode('utf-8'))
streamdata_req.add_header('Content-Type', 'application/x-www-form-urlencoded')
streamdata = self._download_xml(
streamdata_req, video_id,
note='Downloading media info for %s' % video_format)
stream_info = streamdata.find('./{default}preload/stream_info')
video_encode_id = xpath_text(stream_info, './video_encode_id')
if video_encode_id in video_encode_ids:
continue
video_encode_ids.append(video_encode_id)
video_file = xpath_text(stream_info, './file')
if not video_file:
continue
if video_file.startswith('http'):
formats.extend(self._extract_m3u8_formats(
video_file, video_id, 'mp4', entry_protocol='m3u8_native',
m3u8_id='hls', fatal=False))
continue
video_url = xpath_text(stream_info, './host')
if not video_url:
continue
metadata = stream_info.find('./metadata')
format_info = {
'format': video_format,
'format_id': video_format,
'height': int_or_none(xpath_text(metadata, './height')),
'width': int_or_none(xpath_text(metadata, './width')),
}
if '.fplive.net/' in video_url:
video_url = re.sub(r'^rtmpe?://', 'http://', video_url.strip())
parsed_video_url = compat_urlparse.urlparse(video_url)
direct_video_url = compat_urlparse.urlunparse(parsed_video_url._replace(
netloc='v.lvlt.crcdn.net',
path='%s/%s' % (remove_end(parsed_video_url.path, '/'), video_file.split(':')[-1])))
if self._is_valid_url(direct_video_url, video_id, video_format):
format_info.update({
'url': direct_video_url,
})
formats.append(format_info)
continue
format_info.update({
'url': video_url,
'play_path': video_file,
'ext': 'flv',
})
formats.append(format_info)
self._sort_formats(formats)
metadata = self._download_xml(
'http://www.crunchyroll.com/xml', video_id,
note='Downloading media info', query={
'req': 'RpcApiVideoPlayer_GetMediaMetadata',
'media_id': video_id,
})
subtitles = self.extract_subtitles(video_id, webpage)
# webpage provide more accurate data than series_title from XML
series = self._html_search_regex(
r'id=["\']showmedia_about_episode_num[^>]+>\s*<a[^>]+>([^<]+)',
webpage, 'series', fatal=False)
season = xpath_text(metadata, 'series_title')
episode = xpath_text(metadata, 'episode_title')
episode_number = int_or_none(xpath_text(metadata, 'episode_number'))
season_number = int_or_none(self._search_regex(
r'(?s)<h4[^>]+id=["\']showmedia_about_episode_num[^>]+>.+?</h4>\s*<h4>\s*Season (\d+)',
webpage, 'season number', default=None))
return {
'id': video_id,
'title': video_title,
'description': video_description,
'thumbnail': xpath_text(metadata, 'episode_image_url'),
'uploader': video_uploader,
'upload_date': video_upload_date,
'series': series,
'season': season,
'season_number': season_number,
'episode': episode,
'episode_number': episode_number,
'subtitles': subtitles,
'formats': formats,
}
class CrunchyrollShowPlaylistIE(CrunchyrollBaseIE):
IE_NAME = 'crunchyroll:playlist'
_VALID_URL = r'https?://(?:(?P<prefix>www|m)\.)?(?P<url>crunchyroll\.com/(?!(?:news|anime-news|library|forum|launchcalendar|lineup|store|comics|freetrial|login))(?P<id>[\w\-]+))/?(?:\?|$)'
_TESTS = [{
'url': 'http://www.crunchyroll.com/a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi',
'info_dict': {
'id': 'a-bridge-to-the-starry-skies-hoshizora-e-kakaru-hashi',
'title': 'A Bridge to the Starry Skies - Hoshizora e Kakaru Hashi'
},
'playlist_count': 13,
}, {
# geo-restricted (US), 18+ maturity wall, non-premium available
'url': 'http://www.crunchyroll.com/cosplay-complex-ova',
'info_dict': {
'id': 'cosplay-complex-ova',
'title': 'Cosplay Complex OVA'
},
'playlist_count': 3,
'skip': 'Georestricted',
}, {
# geo-restricted (US), 18+ maturity wall, non-premium will be available since 2015.11.14
'url': 'http://www.crunchyroll.com/ladies-versus-butlers?skip_wall=1',
'only_matching': True,
}]
def _real_extract(self, url):
show_id = self._match_id(url)
webpage = self._download_webpage(
self._add_skip_wall(url), show_id,
headers=self.geo_verification_headers())
title = self._html_search_regex(
r'(?s)<h1[^>]*>\s*<span itemprop="name">(.*?)</span>',
webpage, 'title')
episode_paths = re.findall(
r'(?s)<li id="showview_videos_media_(\d+)"[^>]+>.*?<a href="([^"]+)"',
webpage)
entries = [
self.url_result('http://www.crunchyroll.com' + ep, 'Crunchyroll', ep_id)
for ep_id, ep in episode_paths
]
entries.reverse()
return {
'_type': 'playlist',
'id': show_id,
'title': title,
'entries': entries,
}
| Tithen-Firion/youtube-dl | youtube_dl/extractor/crunchyroll.py | Python | unlicense | 25,083 |
"""
AsyncWith astroid node
Subclass of With astroid node, which is used to simplify set up/tear down
actions for a block of code. Unlike a plain "with", an "async with" awaits the
context manager's asynchronous setup and teardown instead of blocking on them.
Only valid in the body of an AsyncFunctionDef astroid node.
Attributes:
- # Derives directly from "With" node; see "with" node for attributes.
Example:
AsyncWith(
items=[
[
Call(
func=Name(name='open'),
args=[Const(value='/foo/bar'), Const(value='r')],
keywords=None),
AssignName(name='f')]],
body=[Pass()])
"""
async def fun():
async with open("/foo/bar", "r") as f:
pass
| pyta-uoft/pyta | nodes/async_with.py | Python | gpl-3.0 | 756 |
# -*- coding: utf-8 -*-
#
# bcbio_nextgen documentation build configuration file, created by
# sphinx-quickstart on Tue Jan 1 13:33:31 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'bcbio-nextgen'
copyright = u'2015, bcbio-nextgen contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.9.1'
# The full version, including alpha/beta/rc tags.
release = '0.9.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
"index": ["sidebar-links.html", "searchbox.html"]}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'bcbio_nextgendoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'bcbio_nextgen.tex', u'bcbio\\_nextgen Documentation',
u'Brad Chapman', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'bcbio_nextgen', u'bcbio_nextgen Documentation',
[u'Brad Chapman'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'bcbio_nextgen', u'bcbio_nextgen Documentation',
u'Brad Chapman', 'bcbio_nextgen', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
| hjanime/bcbio-nextgen | docs/conf.py | Python | mit | 7,875 |
"""
On a 2 dimensional grid with R rows and C columns, we start at (r0, c0) facing east.
Here, the north-west corner of the grid is at the first row and column, and the south-east corner of the grid is at the last row and column.
Now, we walk in a clockwise spiral shape to visit every position in this grid.
Whenever we would move outside the boundary of the grid, we continue our walk outside the grid (but may return to the grid boundary later.)
Eventually, we reach all R * C spaces of the grid.
Return a list of coordinates representing the positions of the grid in the order they were visited.
Example 1:
Input: R = 1, C = 4, r0 = 0, c0 = 0
Output: [[0,0],[0,1],[0,2],[0,3]]

Example 2:
Input: R = 5, C = 6, r0 = 1, c0 = 4
Output: [[1,4],[1,5],[2,5],[2,4],[2,3],[1,3],[0,3],[0,4],[0,5],[3,5],[3,4],[3,3],[3,2],[2,2],[1,2],[0,2],[4,5],[4,4],[4,3],[4,2],[4,1],[3,1],[2,1],[1,1],[0,1],[4,0],[3,0],[2,0],[1,0],[0,0]]

Note:
1 <= R <= 100
1 <= C <= 100
0 <= r0 < R
0 <= c0 < C
"""
class Solution(object):
def spiralMatrixIII(self, R, C, r0, c0):
"""
:type R: int
:type C: int
:type r0: int
:type c0: int
:rtype: List[List[int]]
"""
def change_dire(dire):
if dire[0] == 0 and dire[1] == 1:
return [1, 0]
elif dire[0] == 1 and dire[1] == 0:
return [0, -1]
elif dire[0] == 0 and dire[1] == -1:
return [-1, 0]
else:
return [0, 1]
curr = [r0, c0]
dire = [0, 1]
step_b, acc = 2, 1
ll1, ll2, final = 1, 1, R * C
ret = [curr]
while ll2 < final:
if ll1 == step_b * step_b:
acc += 1
step_b += 1
elif acc == step_b:
dire = change_dire(dire)
acc = 2
else:
acc += 1
curr = [curr[0] + dire[0], curr[1] + dire[1]]
# print(dire, curr, acc, step_b)
ll1 += 1
if curr[0] < 0 or curr[0] >= R or curr[1] < 0 or curr[1] >= C:
continue
ret.append(curr)
ll2 += 1
return ret
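# A minimal usage sketch (editor's illustration, not part of the original
# solution), based on Example 1 from the problem statement above.
if __name__ == '__main__':
    sol = Solution()
    print(sol.spiralMatrixIII(1, 4, 0, 0)) # expected (per Example 1): [[0, 0], [0, 1], [0, 2], [0, 3]]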
| franklingu/leetcode-solutions | questions/spiral-matrix-iii/Solution.py | Python | mit | 2,400 |
# Read a short and a long binary string and compute the total number of
# mismatches between the short string and every substring of the long string
# of the same length (a sum of Hamming distances), using prefix counts.
klein = raw_input()
gross = raw_input()
# zahl[i] = (number of '0's, number of '1's) in gross[:i + 1]
zahl = []
if gross[0] == '0':
zahl.append((1,0))
else:
zahl.append((0,1))
for i in range(1,len(gross)):
if gross[i] == '0':
zahl.append((zahl[i - 1][0] + 1, zahl[i - 1][1]))
else:
zahl.append((zahl[i - 1][0], zahl[i - 1][1] + 1))
# klein[i] is compared against gross[i], gross[i + 1], ..., gross[len(gross) - len(klein) + i];
# use the prefix counts to add up the mismatching positions in that window.
plus = 0
for i in range(len(klein)):
if klein[i] == '0':
plus += zahl[len(gross) - len(klein) + i][1]
if i > 0:
plus -= zahl[i - 1][1]
else:
plus += zahl[len(gross) - len(klein) + i][0]
if i > 0:
plus -= zahl[i - 1][0]
print plus
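# Worked example (editor's note): klein = "01", gross = "0011" gives
# 1 + 0 + 1 = 2 mismatches over the three possible alignments.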
| clarammdantas/Online-Jugde-Problems | online_judge_solutions/python.py | Python | mit | 531 |
from django.db import models
from django.utils.translation import ugettext_lazy as _
from cmsplugins.baseplugin.models import BasePlugin
from filer.fields.image import FilerImageField
class Gallery(BasePlugin):
# TODO add more layouts
layout = models.CharField(
max_length=20,
blank=True,
default='',
verbose_name=_('type'),
)
abstract = models.TextField(
max_length=150,
blank=True,
default='',
verbose_name=_('short description'),
)
description = models.TextField(
blank=True,
default='',
verbose_name=_('long description '),
)
@property
def css_classes(self):
classes = [self.gallery_layout]
if self.height:
classes.append(self.height)
if self.width:
classes.append(self.width)
if self.css_class:
classes.append(self.css_class)
if classes:
return ' {0}'.format(' '.join(classes))
else:
return ''
@property
def gallery_layout(self):
        if not self.layout:
            return 'standard'
        return self.layout
class Picture(BasePlugin):
show_popup = models.BooleanField(
default=False,
verbose_name=_('show popup'),
)
image = FilerImageField(
null=True,
default=None,
on_delete=models.SET_NULL,
verbose_name=_('image'),
)
abstract = models.TextField(
blank=True,
default='',
verbose_name=_('short description'),
)
description = models.TextField(
blank=True,
default='',
verbose_name=_('long description '),
)
| rouxcode/django-cms-plugins | cmsplugins/pictures/models.py | Python | mit | 1,669 |
#!/usr/bin/python
"""
Copyright 2015 Ericsson AB
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
import numpy
import math
import datetime
import requests
import json
import re
from operator import itemgetter
from bson.objectid import ObjectId
from pyspark import SparkContext, SparkConf
from pymongo import MongoClient
from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt
from geopy.distance import vincenty
# Weights
W_1 = 1.2
W_2 = .8
DISTANCE_THRESHOLD = 0.3
NUM_OF_IT = 8
MIN_LATITUDE = 59.78
MAX_LATITUDE = 59.92
MIN_LONGITUDE = 17.53
MAX_LONGITUDE = 17.75
MIN_COORDINATE = -13750
MAX_COORDINATE = 13750
CIRCLE_CONVERTER = math.pi / 43200
NUMBER_OF_RECOMMENDATIONS = 5
client2 = MongoClient('130.238.15.114')
db2 = client2.monad1
client3 = MongoClient('130.238.15.114')
db3 = client3.monad1
start = datetime.datetime.now()
dontGoBehind = 0
def time_approximation(lat1, lon1, lat2, lon2):
point1 = (lat1, lon1)
point2 = (lat2, lon2)
distance = vincenty(point1, point2).kilometers
return int(round(distance / 10 * 60))
def retrieve_requests():
TravelRequest = db2.TravelRequest
return TravelRequest
def populate_requests(TravelRequest):
results = db2.TravelRequest.find()
for res in results:
dist = time_approximation(res['startPositionLatitude'],
res['startPositionLongitude'],
res['endPositionLatitude'],
res['endPositionLongitude'])
if res['startTime'] == "null":
users.append((res['userID'],(res['startPositionLatitude'],
res['startPositionLongitude'], res['endPositionLatitude'],
res['endPositionLongitude'],
(res['endTime'] - datetime.timedelta(minutes = dist)).time(),
(res['endTime']).time())))
elif res['endTime'] == "null":
users.append((res['userID'],(res['startPositionLatitude'],
res['startPositionLongitude'], res['endPositionLatitude'],
res['endPositionLongitude'], (res['startTime']).time(),
(res['startTime'] + datetime.timedelta(minutes = dist)).time())))
else:
users.append((res['userID'],(res['startPositionLatitude'],
res['startPositionLongitude'], res['endPositionLatitude'],
res['endPositionLongitude'], (res['startTime']).time(),
(res['endTime']).time())))
def get_today_timetable():
TimeTable = db2.TimeTable
first = datetime.datetime.today()
first = first.replace(hour = 0, minute = 0, second = 0, microsecond = 0)
route = TimeTable.find({'date': {'$gte': first}})
return route
def populate_timetable():
route = get_today_timetable()
waypoints = []
for res in route:
for res1 in res['timetable']:
for res2 in db2.BusTrip.find({'_id': res1}):
for res3 in res2['trajectory']:
for res4 in db2.BusStop.find({'_id':res3['busStop']}):
waypoints.append((res3['time'],res4['latitude'],
res4['longitude'], res4['name']))
routes.append((res1, waypoints))
waypoints = []
def iterator(waypoints):
Waypoints = []
for res in waypoints:
Waypoints.append((lat_normalizer(res[1]), lon_normalizer(res[2]),
time_normalizer(to_coordinates(to_seconds(res[0]))[0]),
time_normalizer(to_coordinates(to_seconds(res[0]))[1]),
res[3]))
return Waypoints
# Converting time object to seconds
def to_seconds(dt):
total_time = dt.hour * 3600 + dt.minute * 60 + dt.second
return total_time
# Mapping seconds value to (x, y) coordinates
def to_coordinates(secs):
angle = float(secs) * CIRCLE_CONVERTER
x = 13750 * math.cos(angle)
y = 13750 * math.sin(angle)
return x, y
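# Worked example (editor's note): 12:00:00 is 43200 seconds, so the angle is
# 43200 * pi / 43200 = pi and the point is (13750 * cos(pi), 13750 * sin(pi))
# = (-13750, 0); 06:00:00 (21600 seconds) maps to approximately (0, 13750).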
# Normalization functions
def time_normalizer(value):
new_value = float((float(value) - MIN_COORDINATE) /
(MAX_COORDINATE - MIN_COORDINATE))
return new_value /2
def lat_normalizer(value):
new_value = float((float(value) - MIN_LATITUDE) /
(MAX_LATITUDE - MIN_LATITUDE))
return new_value
def lon_normalizer(value):
new_value = float((float(value) - MIN_LONGITUDE) /
(MAX_LONGITUDE - MIN_LONGITUDE))
return new_value
# Function that applies the k-means algorithm (via MLlib) to group the user
# requests; returns the within-set sum of squared errors and the trained model
def kmeans(iterations, theRdd):
def error(point):
center = clusters.centers[clusters.predict(point)]
return sqrt(sum([x**2 for x in (point - center)]))
clusters = KMeans.train(theRdd, iterations, maxIterations=10,
runs=10, initializationMode="random")
WSSSE = theRdd.map(lambda point: error(point)).reduce(lambda x, y: x + y)
return WSSSE, clusters
# Function that runs the k-means algorithm iteratively to find the best number
# of clusters for grouping the user's requests, picking the "elbow" of the
# WSSSE curve via the largest second difference
def optimalk(theRdd):
results = []
for i in range(NUM_OF_IT):
results.append(kmeans(i+1, theRdd)[0])
optimal = []
for i in range(NUM_OF_IT-1):
optimal.append(results[i] - results[i+1])
optimal1 = []
for i in range(NUM_OF_IT-2):
optimal1.append(optimal[i] - optimal[i+1])
return (optimal1.index(max(optimal1)) + 2)
def back_to_coordinates(lat, lon):
new_lat = (lat * (MAX_LATITUDE - MIN_LATITUDE)) + MIN_LATITUDE
new_lon = (lon * (MAX_LONGITUDE - MIN_LONGITUDE)) + MIN_LONGITUDE
return new_lat, new_lon
def nearest_stops(lat, lon, dist):
stops = []
url = "http://130.238.15.114:9998/get_nearest_stops_from_coordinates"
data = {'lon': lon, 'lat': lat, 'distance': dist}
headers = {'Content-type': 'application/x-www-form-urlencoded'}
answer = requests.post(url, data = data, headers = headers)
p = re.compile("(u'\w*')")
answer = p.findall(answer.text)
answer = [x.encode('UTF8') for x in answer]
answer = [x[2:-1] for x in answer]
answer = list(set(answer))
return answer
# Function that calculates the distance from the given tuple to all the
# cluster centroids and returns the minimum distance (and its position)
def calculate_distance_departure(tup1):
dist_departure = []
pos_departure = []
cent_num = 0
for i in selected_centroids:
position = -1
min_value = 1000
min_position = 0
centroid_departure = (i[0]*W_1, i[1]*W_1,i[4]*W_2, i[5]*W_2)
centroid_departure = numpy.array(centroid_departure)
trajectory = []
for l in range(len(tup1)-1):
position = position + 1
if(tup1[l][4] in nearest_stops_dep[cent_num]):
current_stop = (numpy.array(tup1[l][:4])
* numpy.array((W_1,W_1,W_2,W_2)))
distance = numpy.linalg.norm(centroid_departure - current_stop)
if (distance < min_value):
min_value = distance
min_position = position
result = min_value
dist_departure.append(result)
pos_departure.append(min_position)
cent_num += 1
return {"dist_departure":dist_departure,"pos_departure":pos_departure}
def calculate_distance_arrival(tup1,pos_departure):
dist_arrival = []
pos_arrival = []
counter=-1
cent_num = 0
for i in selected_centroids:
min_value = 1000
min_position = 0
centroid_arrival = (i[2]*W_1, i[3]*W_1, i[6]*W_2, i[7]*W_2)
centroid_arrival = numpy.array(centroid_arrival)
counter = counter + 1
position = pos_departure[counter]
for l in range(pos_departure[counter]+1, len(tup1)):
position = position + 1
if(tup1[l][4] in nearest_stops_arr[cent_num]):
current_stop = (numpy.array(tup1[l][:4])
* numpy.array((W_1,W_1,W_2,W_2)))
distance = numpy.linalg.norm(centroid_arrival - current_stop)
if (distance < min_value):
min_value = distance
min_position = position
result = min_value
dist_arrival.append(result)
pos_arrival.append(min_position)
cent_num += 1
return {"dist_arrival":dist_arrival,"pos_arrival":pos_arrival}
def remove_duplicates(alist):
return list(set(map(lambda (w, x, y, z): (w, y, z), alist)))
def recommendations_to_return(alist):
for rec in alist:
trip = db2.BusTrip.find_one({'_id': rec[0]})
traj = trip['trajectory'][rec[2]:rec[3]+1]
trajectory = []
names_only = []
for stop in traj:
name_and_time = (db2.BusStop.find_one({"_id": stop['busStop']})
['name']), stop['time']
trajectory.append(name_and_time)
names_only.append(name_and_time[0])
busid = 1.0
line = trip['line']
result = (int(line), int(busid), names_only[0], names_only[-1],
names_only, trajectory[0][1], trajectory[-1][1], rec[0])
to_return.append(result)
def recommendations_to_db(user, alist):
rec_list = []
    for item in alist:
o_id = ObjectId()
line = item[0]
bus_id = item[1]
start_place = item[2]
end_place = item[3]
start_time = item[5]
end_time = item[6]
bus_trip_id = item[7]
request_time = "null"
feedback = -1
request_id = "null"
next_trip = "null"
booked = False
trajectory = item[4]
new_user_trip = {
"_id":o_id,
"userID" : user,
"line" : line,
"busID" : bus_id,
"startBusStop" : start_place,
"endBusStop" : end_place,
"startTime" : start_time,
"busTripID" : bus_trip_id,
"endTime" : end_time,
"feedback" : feedback,
"trajectory" : trajectory,
"booked" : booked
}
new_recommendation = {
"userID": user,
"userTrip": o_id
}
db3.UserTrip.insert(new_user_trip)
db3.TravelRecommendation.insert(new_recommendation)
def empty_past_recommendations():
db3.TravelRecommendation.drop()
if __name__ == "__main__":
user_ids = []
users = []
routes = []
user_ids = []
sc = SparkContext()
populate_timetable()
my_routes = sc.parallelize(routes, 8)
my_routes = my_routes.map(lambda (x,y): (x, iterator(y))).cache()
req = retrieve_requests()
populate_requests(req)
start = datetime.datetime.now()
initial_rdd = sc.parallelize(users, 4).cache()
user_ids_rdd = (initial_rdd.map(lambda (x,y): (x,1))
.reduceByKey(lambda a, b: a + b)
.collect())
'''
for user in user_ids_rdd:
user_ids.append(user[0])
'''
empty_past_recommendations()
user_ids = []
user_ids.append(1)
for userId in user_ids:
userId = 1
recommendations = []
transition = []
final_recommendation = []
selected_centroids = []
routes_distances = []
to_return = []
nearest_stops_dep = []
nearest_stops_arr = []
my_rdd = (initial_rdd.filter(lambda (x,y): x == userId)
.map(lambda (x,y): y)).cache()
my_rdd = (my_rdd.map(lambda x: (x[0], x[1], x[2], x[3],
to_coordinates(to_seconds(x[4])),
to_coordinates(to_seconds(x[5]))))
.map(lambda (x1, x2, x3, x4, (x5, x6), (x7, x8)):
(lat_normalizer(x1), lon_normalizer(x2),
lat_normalizer(x3), lon_normalizer(x4),
time_normalizer(x5), time_normalizer(x6),
time_normalizer(x7), time_normalizer(x8))))
selected_centroids = kmeans(4, my_rdd)[1].centers
for i in range(len(selected_centroids)):
cent_lat, cent_long = back_to_coordinates(selected_centroids[i][0],
selected_centroids[i][1])
nearest_stops_dep.append(nearest_stops(cent_lat, cent_long, 200))
cent_lat, cent_long = back_to_coordinates(selected_centroids[i][2],
selected_centroids[i][3])
nearest_stops_arr.append(nearest_stops(cent_lat, cent_long, 200))
routes_distances = my_routes.map(lambda x: (x[0],
calculate_distance_departure(x[1])['dist_departure'],
calculate_distance_arrival(x[1],
calculate_distance_departure(x[1])['pos_departure'])['dist_arrival'],
calculate_distance_departure(x[1])['pos_departure'],
calculate_distance_arrival(x[1],
calculate_distance_departure(x[1])['pos_departure'])['pos_arrival']))
for i in range(len(selected_centroids)):
sort_route = (routes_distances.map(lambda (v, w, x, y, z):
(v, w[i] + x[i], y[i], z[i]))
.sortBy(lambda x:x[1]))
final_recommendation.append((sort_route
.take(NUMBER_OF_RECOMMENDATIONS)))
for sug in final_recommendation:
for i in range(len(sug)):
temp = []
for j in range(len(sug[i])):
temp.append(sug[i][j])
recommendations.append(temp)
recommendations.sort(key=lambda x: x[1])
recommendations_final = []
for rec in recommendations:
if abs(rec[2] - rec[3]) > 1 and rec[1] < DISTANCE_THRESHOLD:
recommendations_final.append(rec)
recommendations = recommendations_final[:10]
recommendations_to_return(recommendations)
recommendations_to_db(userId, to_return)
| EricssonResearch/monad | TravelRecommendation/TravelRecommendation_faster.py | Python | apache-2.0 | 14,541 |
class Solution:
    # Define a helper that evaluates the expression inside a pair of parentheses
def helper(self,s):
length=len(s)
        # Check for '--' and '+-' sequences and normalize them to single signs
s=''.join(s)
s=s.replace('--','+')
s=s.replace('+-','-')
s=[x for x in s if x!='#']
s=['+']+s[1:-1]+['+']
tmp=0
lastsign=0
for i in range(1,len(s)):
if i==1 and s[i]=='-':
lastsign=1
continue
if s[i]=='+' or s[i]=='-':
tmp+=int(''.join(s[lastsign:i]))
lastsign=i
return (list(str(tmp)),(length-len(list(str(tmp)))))
def calculate(self, s):
"""
:type s: str
:rtype: int
"""
s='('+s+')'
s=[x for x in s if x!=' ']
stack=[]
i=0
while i<len(s):
if s[i]=='(':
stack.append(i)
i+=1
elif s[i]==')':
last=stack.pop()
tmp=self.helper(s[last:i+1])
s[last:i+1]=tmp[0]
i-=tmp[1]
else:
i+=1
return int(''.join(s)) | Hehwang/Leetcode-Python | code/224 Basic Calculator.py | Python | mit | 1,162 |
# -*- coding: utf-8 -*-
# Generated by Django 1.9 on 2016-02-03 18:03
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('assettypes', '0002_auto_20151223_1125'),
]
operations = [
migrations.CreateModel(
name='VideoCameraModel',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('model_name', models.CharField(max_length=100)),
('manufacturer', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='assettypes.Manufacturer')),
('other_accessories', models.ManyToManyField(blank=True, to='assettypes.GenericAccessoryModel')),
],
options={
'abstract': False,
},
),
migrations.AlterUniqueTogether(
name='videocameramodel',
unique_together=set([('manufacturer', 'model_name')]),
),
]
| nocarryr/AV-Asset-Manager | avam/assettypes/migrations/0003_auto_20160203_1203.py | Python | gpl-3.0 | 1,097 |
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2005 onwards University of Deusto
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution.
#
# This software consists of contributions made by many individuals,
# listed below:
#
# Author: Pablo Orduña <[email protected]>
#
from __future__ import print_function, unicode_literals
import base64
import os
import traceback
import StringIO
import weblab.data.command as Command
from voodoo.override import Override
from voodoo.gen import CoordAddress
from voodoo.representable import AbstractRepresentable, Representable
from voodoo.typechecker import typecheck
from weblab.core.file_storer import FileStorer
class ExperimentId(object):
__metaclass__ = Representable
@typecheck(basestring, basestring)
def __init__(self, exp_name, cat_name):
self.exp_name = unicode(exp_name)
self.cat_name = unicode(cat_name)
def __cmp__(self, other):
        if not isinstance(other, ExperimentId):
            return -1
if self.exp_name != other.exp_name:
return cmp(self.exp_name, other.exp_name)
return cmp(self.cat_name, other.cat_name)
def to_dict(self):
return {'exp_name': self.exp_name, 'cat_name': self.cat_name}
def to_weblab_str(self):
return '%s@%s' % (self.exp_name, self.cat_name)
def __hash__(self):
return hash(self.to_weblab_str())
@staticmethod
def parse(weblab_str):
pos = weblab_str.find("@")
experiment_name = weblab_str[:pos]
category_name = weblab_str[pos + 1 :]
return ExperimentId(experiment_name, category_name)
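# Illustrative round-trip (editor's note; the experiment and category names
# below are made-up examples): ExperimentId.parse(u'my-exp@My category') yields
# ExperimentId(u'my-exp', u'My category'), whose to_weblab_str() is
# u'my-exp@My category' again.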
class ExperimentInstanceId(object):
__metaclass__ = Representable
@typecheck(basestring, basestring, basestring)
def __init__(self, inst_name, exp_name, cat_name):
self.inst_name = unicode(inst_name)
self.exp_name = unicode(exp_name)
self.cat_name = unicode(cat_name)
def to_experiment_id(self):
return ExperimentId(self.exp_name, self.cat_name)
def to_weblab_str(self):
return "%s:%s@%s" % (self.inst_name, self.exp_name, self.cat_name)
@staticmethod
def parse(weblab_str):
if ':' not in weblab_str:
raise ValueError("Malformed weblab_str: ':' not found: %r" % weblab_str)
pos = weblab_str.find(":")
instance_name = weblab_str[:pos]
experiment_id_str = weblab_str[pos + 1 :]
experiment_id = ExperimentId.parse(experiment_id_str)
return ExperimentInstanceId(instance_name, experiment_id.exp_name, experiment_id.cat_name)
def __cmp__(self, other):
return cmp(str(self), str(other))
def __hash__(self):
return hash(self.inst_name) * 31 ** 3 + hash(self.exp_name) * 31 ** 2 + hash(self.cat_name) * 31 + hash("ExperimentInstanceId")
class CommandSent(object):
__metaclass__ = Representable
@typecheck(Command.Command, float, Command.Command, (float, type(None)))
def __init__(self, command, timestamp_before, response = None, timestamp_after = None):
self.command = command # Command
self.timestamp_before = timestamp_before # seconds.millis since 1970 in GMT
if response == None:
self.response = Command.NullCommand()
else:
self.response = response
self.timestamp_after = timestamp_after
class LoadedFileSent(object):
__metaclass__ = Representable
@typecheck(basestring, float, Command.Command, (float, type(None)), (basestring, type(None)))
def __init__(self, file_content, timestamp_before, response, timestamp_after, file_info):
self.file_content = file_content
self.timestamp_before = timestamp_before
self.response = response
self.timestamp_after = timestamp_after
self.file_info = file_info
# Just in case
def load(self, storage_path):
return self
def is_loaded(self):
return True
@typecheck(basestring)
def save(self, cfg_manager, reservation_id):
content = base64.decodestring(self.file_content)
storer = FileStorer(cfg_manager, reservation_id)
file_stored = storer.store_file(content, self.file_info)
file_path = file_stored.file_path
file_hash = file_stored.file_hash
return FileSent(file_path, file_hash, self.timestamp_before, self.response, self.timestamp_after, self.file_info)
class FileSent(object):
__metaclass__ = Representable
@typecheck(basestring, basestring, float, Command.Command, (float, type(None)), (basestring, type(None)))
def __init__(self, file_path, file_hash, timestamp_before, response = None, timestamp_after = None, file_info = None):
self.file_path = file_path
self.file_hash = file_hash
self.file_info = file_info
self.timestamp_before = timestamp_before
if response == None:
self.response = Command.NullCommand()
else:
self.response = response
self.timestamp_after = timestamp_after
def is_loaded(self):
return False
@typecheck(basestring)
def load(self, storage_path):
try:
content = open(os.path.join(storage_path, self.file_path), 'rb').read()
except:
sio = StringIO.StringIO()
traceback.print_exc(file=sio)
content = "ERROR:File could not be retrieved. Reason: %s" % sio.getvalue()
content_serialized = base64.encodestring(content)
return LoadedFileSent(content_serialized, self.timestamp_before, self.response, self.timestamp_after, self.file_info)
# Just in case
@typecheck(basestring)
def save(self, cfg_manager, reservation_id):
return self
class ExperimentUsage(object):
__metaclass__ = Representable
@typecheck(int, float, float, basestring, ExperimentId, basestring, CoordAddress, dict, list, list)
def __init__(self, experiment_use_id = None, start_date = None, end_date = None, from_ip = u"unknown", experiment_id = None, reservation_id = None, coord_address = None, request_info = None, commands = None, sent_files = None):
self.experiment_use_id = experiment_use_id # int
self.start_date = start_date # seconds.millis since 1970 in GMT
self.end_date = end_date # seconds.millis since 1970 in GMT
self.from_ip = from_ip
self.experiment_id = experiment_id # weblab.data.experiments.ExperimentId
self.reservation_id = reservation_id # string, the reservation identifier
self.coord_address = coord_address # voodoo.gen.CoordAddress
if request_info is None:
self.request_info = {}
else:
self.request_info = request_info
if commands is None:
self.commands = [] # [CommandSent]
else:
self.commands = commands
if sent_files is None:
self.sent_files = [] # [FileSent]
else:
self.sent_files = sent_files
@typecheck(CommandSent)
def append_command(self, command_sent):
"""
append_command(command_sent)
Appends the specified command to the local list of commands,
so that later the commands that were sent during the session
can be retrieved for logging or other purposes.
@param command_sent The command that was just sent, which we will register
@return The index of the command we just added in the internal list. Mostly,
for identification purposes.
"""
# isinstance(command_sent, CommandSent)
self.commands.append(command_sent)
return len(self.commands) - 1
@typecheck(int, CommandSent)
def update_command(self, command_id, command_sent):
self.commands[command_id] = command_sent
@typecheck((FileSent, LoadedFileSent))
def append_file(self, file_sent):
self.sent_files.append(file_sent)
return len(self.sent_files) - 1
@typecheck(int, FileSent)
def update_file(self, file_id, file_sent):
self.sent_files[file_id] = file_sent
@typecheck(basestring)
def load_files(self, path):
loaded_sent_files = []
for sent_file in self.sent_files:
loaded_sent_file = sent_file.load(path)
loaded_sent_files.append(loaded_sent_file)
self.sent_files = loaded_sent_files
return self
@typecheck(basestring)
def save_files(self, cfg_manager):
saved_sent_files = []
for sent_file in self.sent_files:
saved_sent_file = sent_file.save(cfg_manager, self.reservation_id)
saved_sent_files.append(saved_sent_file)
self.sent_files = saved_sent_files
return self
class ReservationResult(object):
__metaclass__ = AbstractRepresentable
ALIVE = 'alive'
CANCELLED = 'cancelled'
FINISHED = 'finished'
FORBIDDEN = 'forbidden'
def __init__(self, status):
self.status = status
def is_alive(self):
return False
def is_finished(self):
return False
def is_cancelled(self):
return False
def is_forbidden(self):
return False
class AliveReservationResult(ReservationResult):
def __init__(self, running):
super(AliveReservationResult, self).__init__(ReservationResult.ALIVE)
self.running = running
self.waiting = not running
@Override(ReservationResult)
def is_alive(self):
return True
class RunningReservationResult(AliveReservationResult):
def __init__(self):
super(RunningReservationResult, self).__init__(True)
class WaitingReservationResult(AliveReservationResult):
def __init__(self):
        super(WaitingReservationResult, self).__init__(False)
class CancelledReservationResult(ReservationResult):
def __init__(self):
super(CancelledReservationResult, self).__init__(ReservationResult.CANCELLED)
@Override(ReservationResult)
def is_cancelled(self):
return True
class ForbiddenReservationResult(ReservationResult):
def __init__(self):
super(ForbiddenReservationResult, self).__init__(ReservationResult.FORBIDDEN)
@Override(ReservationResult)
def is_forbidden(self):
return True
class FinishedReservationResult(ReservationResult):
@typecheck(ExperimentUsage)
def __init__(self, experiment_use):
super(FinishedReservationResult, self).__init__(ReservationResult.FINISHED)
self.experiment_use = experiment_use
@Override(ReservationResult)
def is_finished(self):
return True
| morelab/weblabdeusto | server/src/weblab/data/experiments.py | Python | bsd-2-clause | 10,874 |
# -*- coding: utf-8 -*-
# Generated by Django 1.10.1 on 2018-03-12 01:48
from __future__ import unicode_literals
from django.db import migrations, models
import django.utils.timezone
import jsonfield.fields
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='WebhookTransaction',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('date_generated', models.DateTimeField()),
('date_received', models.DateTimeField(default=django.utils.timezone.now)),
('body', jsonfield.fields.JSONField(default={})),
('request_meta', jsonfield.fields.JSONField(default={})),
('status', models.CharField(choices=[(1, 'Unprocessed'), (2, 'Processed'), (3, 'Error')], default=1, max_length=250)),
],
),
]
| superfluidity/RDCL3D | code/webhookhandler/migrations/0001_initial.py | Python | apache-2.0 | 983 |
"""The tests for the Switch component."""
# pylint: disable=too-many-public-methods,protected-access
import unittest
from homeassistant.bootstrap import setup_component
from homeassistant import loader
from homeassistant.components import switch
from homeassistant.const import STATE_ON, STATE_OFF, CONF_PLATFORM
from tests.common import get_test_home_assistant
class TestSwitch(unittest.TestCase):
"""Test the switch module."""
def setUp(self): # pylint: disable=invalid-name
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
platform = loader.get_component('switch.test')
platform.init()
# Switch 1 is ON, switch 2 is OFF
self.switch_1, self.switch_2, self.switch_3 = \
platform.DEVICES
def tearDown(self): # pylint: disable=invalid-name
"""Stop everything that was started."""
self.hass.stop()
def test_methods(self):
"""Test is_on, turn_on, turn_off methods."""
self.assertTrue(setup_component(
self.hass, switch.DOMAIN, {switch.DOMAIN: {CONF_PLATFORM: 'test'}}
))
self.assertTrue(switch.is_on(self.hass))
self.assertEqual(
STATE_ON,
self.hass.states.get(switch.ENTITY_ID_ALL_SWITCHES).state)
self.assertTrue(switch.is_on(self.hass, self.switch_1.entity_id))
self.assertFalse(switch.is_on(self.hass, self.switch_2.entity_id))
self.assertFalse(switch.is_on(self.hass, self.switch_3.entity_id))
switch.turn_off(self.hass, self.switch_1.entity_id)
switch.turn_on(self.hass, self.switch_2.entity_id)
self.hass.block_till_done()
self.assertTrue(switch.is_on(self.hass))
self.assertFalse(switch.is_on(self.hass, self.switch_1.entity_id))
self.assertTrue(switch.is_on(self.hass, self.switch_2.entity_id))
# Turn all off
switch.turn_off(self.hass)
self.hass.block_till_done()
self.assertFalse(switch.is_on(self.hass))
self.assertEqual(
STATE_OFF,
self.hass.states.get(switch.ENTITY_ID_ALL_SWITCHES).state)
self.assertFalse(switch.is_on(self.hass, self.switch_1.entity_id))
self.assertFalse(switch.is_on(self.hass, self.switch_2.entity_id))
self.assertFalse(switch.is_on(self.hass, self.switch_3.entity_id))
# Turn all on
switch.turn_on(self.hass)
self.hass.block_till_done()
self.assertTrue(switch.is_on(self.hass))
self.assertEqual(
STATE_ON,
self.hass.states.get(switch.ENTITY_ID_ALL_SWITCHES).state)
self.assertTrue(switch.is_on(self.hass, self.switch_1.entity_id))
self.assertTrue(switch.is_on(self.hass, self.switch_2.entity_id))
self.assertTrue(switch.is_on(self.hass, self.switch_3.entity_id))
def test_setup_two_platforms(self):
"""Test with bad configuration."""
# Test if switch component returns 0 switches
test_platform = loader.get_component('switch.test')
test_platform.init(True)
loader.set_component('switch.test2', test_platform)
test_platform.init(False)
self.assertTrue(setup_component(
self.hass, switch.DOMAIN, {
switch.DOMAIN: {CONF_PLATFORM: 'test'},
'{} 2'.format(switch.DOMAIN): {CONF_PLATFORM: 'test2'},
}
))
| hexxter/home-assistant | tests/components/switch/test_init.py | Python | mit | 3,433 |
from django.db.backends.postgresql.schema import DatabaseSchemaEditor as PostgresDatabaseSchemaEditor
from db import deletion
class DatabaseSchemaEditor(PostgresDatabaseSchemaEditor):
ON_DELETE_DEFAULT = 'NO ACTION'
ON_UPDATE_DEFAULT = 'NO ACTION'
sql_create_fk = (
'ALTER TABLE {table} ADD CONSTRAINT {name} FOREIGN KEY ({column}) '
'REFERENCES {to_table} ({to_column}) ON DELETE {on_delete} ON UPDATE {on_update}{deferrable}' # deferrable happens to always have a space in front of it
)
def _create_fk_sql(self, model, field, suffix):
from_table = model._meta.db_table
from_column = field.column
to_table = field.target_field.model._meta.db_table
to_column = field.target_field.column
suffix = suffix.format(to_table=to_table, to_column=to_column)
return self.sql_create_fk.format(
table=self.quote_name(from_table),
name=self.quote_name(self._create_index_name(model, [from_column], suffix=suffix)) % {
'to_table': to_table,
'to_column': to_column,
},
column=self.quote_name(from_column),
to_table=self.quote_name(to_table),
to_column=self.quote_name(to_column),
on_delete=field.rel.on_delete.clause if isinstance(field.rel.on_delete, deletion.DatabaseOnDelete) else self.ON_DELETE_DEFAULT,
on_update=field.rel.on_delete.clause if isinstance(field.rel.on_delete, deletion.DatabaseOnDelete) else self.ON_UPDATE_DEFAULT,
deferrable=self.connection.ops.deferrable_sql(),
)
| laurenbarker/SHARE | db/backends/postgresql/schema.py | Python | apache-2.0 | 1,613 |
from py2neo.ext.spatial.util import parse_lat_long
from .basetest import TestBase
class TestBasic(TestBase):
def test_create_and_fetch_point(self, spatial):
geometry_name = 'basic_test'
layer_name = 'basic_layer'
spatial.create_layer(layer_name)
point = (5.5, -4.5)
shape = parse_lat_long(point)
assert shape.type == 'Point'
spatial.create_geometry(
geometry_name=geometry_name, wkt_string=shape.wkt,
layer_name=layer_name)
application_node = self.get_application_node(spatial, geometry_name)
node_properties = application_node.properties
assert node_properties['_py2neo_geometry_name'] == geometry_name
geometry_node = self.get_geometry_node(spatial, geometry_name)
node_properties = geometry_node.properties
assert node_properties['wkt'] == 'POINT (5.5 -4.5)'
assert node_properties['bbox'] == [5.5, -4.5, 5.5, -4.5]
def test_precision(self, graph, spatial, layer):
x, y = 51.513845, -0.098351
shape = parse_lat_long((x, y))
expected_wkt_string = 'POINT ({x} {y})'.format(x=x, y=y)
assert shape.x == x
assert shape.y == y
assert shape.wkt == 'POINT (51.513845 -0.09835099999999999)'
spatial.create_geometry(
geometry_name='tricky', wkt_string=shape.wkt,
layer_name=layer)
application_node = self.get_application_node(spatial, 'tricky')
assert application_node
# get the geometry node and inspect the WKT string
query = (
"MATCH (l { layer:{layer_name} })<-[r_layer:LAYER]-"
"(root { name:'spatial_root' }), "
"(bbox)-[r_root:RTREE_ROOT]-(l), "
"(geometry_node)-[r_ref:RTREE_REFERENCE]-(bbox) "
"RETURN geometry_node"
)
params = {
'layer_name': layer,
}
result = graph.cypher.execute(query, params)
record = result[0]
geometry_node = record[0]
properties = geometry_node.properties
wkt = properties['wkt']
assert wkt == expected_wkt_string
| fpieper/py2neo | test/ext/spatial/test_basic.py | Python | apache-2.0 | 2,162 |
# -*- encoding: utf-8 -*-
##############################################################################
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses/.
#
##############################################################################
from openerp import models, fields, api
from openerp.addons import decimal_precision as dp
class MrpWorkcenter(models.Model):
_inherit = 'mrp.workcenter'
@api.one
@api.depends('operators')
def _operators_number_avg_cost(self):
self.op_number = len(self.operators)
op_avg_cost = 0.0
for operator in self.operators:
op_avg_cost += operator.employee_ids[:1].product_id.standard_price
self.op_avg_cost = op_avg_cost / (self.op_number or 1)
pre_op_product = fields.Many2one('product.product',
string='Pre-operation costing product')
post_op_product = fields.Many2one('product.product',
string='Post-operation costing product')
rt_operations = fields.Many2many(
'mrp.routing.operation', 'mrp_operation_workcenter_rel', 'workcenter',
'operation', 'Routing Operations')
operators = fields.Many2many('res.users', 'mrp_wc_operator_rel',
'workcenter_id', 'operator_id', 'Operators')
op_number = fields.Integer(
string='# Operators', compute=_operators_number_avg_cost)
op_avg_cost = fields.Float(
string='Operator average hour cost',
digits=dp.get_precision('Product Price'))
| alhashash/odoomrp-wip | mrp_operations_extension/models/mrp_workcenter.py | Python | agpl-3.0 | 2,159 |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
class Migration(migrations.Migration):
dependencies = [
]
operations = [
migrations.CreateModel(
name='App',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField(unique=True)),
('video', models.URLField()),
('thumbnail', models.ImageField(upload_to='home/img/apps/thumbnails/')),
('description', models.TextField()),
('posted', models.DateField(auto_now_add=True)),
],
options={
'verbose_name': 'App',
'verbose_name_plural': 'Apps',
},
),
migrations.CreateModel(
name='Art',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
],
options={
'verbose_name': 'Art',
'verbose_name_plural': 'Art',
},
),
migrations.CreateModel(
name='Contributor',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('name', models.CharField(max_length=50, unique=True)),
('link', models.URLField()),
],
options={
'verbose_name': 'Contributor',
'verbose_name_plural': 'Contributors',
},
),
migrations.CreateModel(
name='Game',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField(unique=True)),
('video', models.URLField()),
('thumbnail', models.ImageField(upload_to='home/img/games/thumbnails/')),
('description', models.TextField()),
('posted', models.DateField(auto_now_add=True)),
('contributors', models.ManyToManyField(blank=True, to='home.Contributor')),
],
options={
'verbose_name': 'Game',
'verbose_name_plural': 'Games',
},
),
migrations.CreateModel(
name='Library',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField(unique=True)),
('thumbnail', models.ImageField(upload_to='home/img/libraries/thumbnails/')),
('description', models.TextField()),
('posted', models.DateField(auto_now_add=True)),
('contributors', models.ManyToManyField(blank=True, to='home.Contributor')),
],
options={
'verbose_name': 'Library',
'verbose_name_plural': 'Libraries',
},
),
migrations.CreateModel(
name='Link',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('name', models.CharField(max_length=50, unique=True)),
('link', models.URLField()),
],
options={
'verbose_name': 'Link',
'verbose_name_plural': 'Links',
},
),
migrations.CreateModel(
name='Tool',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField(unique=True)),
('video', models.URLField()),
('thumbnail', models.ImageField(upload_to='home/img/tools/thumbnails/')),
('description', models.TextField()),
('posted', models.DateField(auto_now_add=True)),
('contributors', models.ManyToManyField(blank=True, to='home.Contributor')),
('links', models.ManyToManyField(blank=True, to='home.Link')),
],
options={
'verbose_name': 'Tool',
'verbose_name_plural': 'Tools',
},
),
migrations.CreateModel(
name='Tutorial',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('slug', models.SlugField(unique=True)),
('posted', models.DateField(auto_now_add=True)),
],
options={
'verbose_name': 'Tutorial',
'verbose_name_plural': 'Tutorials',
},
),
migrations.CreateModel(
name='Website',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, verbose_name='ID', serialize=False)),
('title', models.CharField(max_length=50, unique=True)),
('thumbnail', models.ImageField(upload_to='home/img/sites/thumbnails/')),
('link', models.URLField()),
],
options={
'verbose_name': 'Site',
'verbose_name_plural': 'Sites',
},
),
migrations.AddField(
model_name='library',
name='links',
field=models.ManyToManyField(blank=True, to='home.Link'),
),
migrations.AddField(
model_name='game',
name='links',
field=models.ManyToManyField(blank=True, to='home.Link'),
),
migrations.AddField(
model_name='app',
name='contributors',
field=models.ManyToManyField(blank=True, to='home.Contributor'),
),
migrations.AddField(
model_name='app',
name='links',
field=models.ManyToManyField(blank=True, to='home.Link'),
),
]
| Amaranthos/jhcom | home/migrations/0001_initial.py | Python | mit | 6,580 |
#!/usr/bin/env python
import os
import sys
from setuptools import setup
if sys.argv[-1] == "publish":
os.system("python setup.py sdist bdist_wheel upload")
sys.exit()
# Hackishly inject a constant into builtins to enable importing of the
# package before the dependencies are installed.
if sys.version_info[0] < 3:
import __builtin__ as builtins
else:
import builtins
builtins.__CORNER_SETUP__ = True
import corner # NOQA
setup(
name="corner",
version=corner.__version__,
author="Daniel Foreman-Mackey",
author_email="[email protected]",
url="https://github.com/dfm/corner.py",
packages=["corner"],
description="Make some beautiful corner plots of samples.",
long_description=open("README.rst").read(),
package_data={"": ["README.rst", "LICENSE"]},
include_package_data=True,
classifiers=[
"Development Status :: 5 - Production/Stable",
"License :: OSI Approved :: BSD License",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"Operating System :: OS Independent",
"Programming Language :: Python",
],
install_requires=["numpy", "matplotlib"],
)
| mattpitkin/corner.py | setup.py | Python | bsd-2-clause | 1,203 |
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""moderate_subreddit tests"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import unittest
from moderate_subreddit import (
remove_quotes, check_rules, create_mod_comment_output_record)
from perspective_rule import Rule
from test_mocks import MockAuthor, MockComment
class ModerateSubredditTest(unittest.TestCase):
def test_remove_quotes(self):
comment_without_quoting = 'hi\nyay'
self.assertEqual(comment_without_quoting,
remove_quotes(comment_without_quoting))
comment_with_quoting = '> hi\nhello there'
self.assertEqual('hello there',
remove_quotes(comment_with_quoting))
comment_with_lots_of_quoting = '''> hi
>blah
hello there
> yuck
gross'''
self.assertEqual('hello there\ngross',
remove_quotes(comment_with_lots_of_quoting))
def test_check_rules_simple(self):
comment = None # Comment features aren't used by this test.
scores = { 'TOXICITY': 0.9 }
rules = [
Rule('hi_tox', {'TOXICITY': '> 0.5'}, {}, 'report'),
]
actions = check_rules(comment, rules, scores)
self.assertEqual(['report'], actions.keys())
self.assertEqual(['hi_tox'], [r.name for r in actions['report']])
def test_check_rules_multiple_triggered_rules(self):
comment = None # Comment features aren't used by this test.
# hi_tox and hi_spam are triggered, but not hi_threat.
scores = { 'TOXICITY': 0.9, 'SPAM': 0.9, 'THREAT': 0.2 }
rules = [
Rule('hi_tox', {'TOXICITY': '> 0.5'}, {}, 'report'),
Rule('hi_spam', {'SPAM': '> 0.5'}, {}, 'report'),
Rule('hi_threat', {'THREAT': '> 0.5'}, {}, 'report'),
]
actions = check_rules(comment, rules, scores)
self.assertEqual(['report'], actions.keys())
self.assertEqual(['hi_tox', 'hi_spam'],
[r.name for r in actions['report']])
def test_check_rules_multiple_actions(self):
comment = None # Comment features aren't used by this test.
scores = { 'TOXICITY': 0.9, 'THREAT': 0.2 }
rules = [
Rule('hi_tox', {'TOXICITY': '> 0.5'}, {}, 'report'),
Rule('hi_threat', {'THREAT': '> 0.9'}, {}, 'report'),
Rule('med_threat', {'THREAT': '> 0.1'}, {}, 'noop'),
]
actions = check_rules(comment, rules, scores)
self.assertEqual(['noop', 'report'], sorted(actions.keys()))
self.assertEqual(['hi_tox'],
[r.name for r in actions['report']])
self.assertEqual(['med_threat'],
[r.name for r in actions['noop']])
    # TODO: These tests are a little complex, mirroring the complexity of
    # these functions (create_mod_comment_output_record, check_rules). Perhaps these can be
# refactored to simplify the control/data flow.
def test_create_mod_comment_output_record_basic(self):
comment = MockComment('hello')
scores = { 'TOXICITY': 0.8 }
hi_tox_rule = Rule('hi_tox', {'TOXICITY': '> 0.5'}, {}, 'report')
rules = [ hi_tox_rule ]
action_dict = check_rules(comment, rules, scores)
record = create_mod_comment_output_record(comment, 'hello', scores,
action_dict, rules)
self.assertEqual('hello', record['orig_comment_text'])
# This field is only present when different from the comment body.
self.assertFalse('scored_comment_text' in record)
self.assertEqual(0.8, record['score:TOXICITY'])
self.assertEqual('report', record['rule:hi_tox'])
def test_create_mod_comment_output_record_more_rules(self):
comment = MockComment('hello')
scores = { 'TOXICITY': 0.8 }
hi_tox_rule = Rule('hi_tox', {'TOXICITY': '> 0.9'}, {}, 'report')
med_tox_rule = Rule('med_tox', {'TOXICITY': '> 0.5'}, {}, 'report')
lo_tox_rule = Rule('lo_tox', {'TOXICITY': '> 0.1'}, {}, 'noop')
rules = [ hi_tox_rule, med_tox_rule, lo_tox_rule ]
action_dict = check_rules(comment, rules, scores)
record = create_mod_comment_output_record(comment, 'hello', scores,
action_dict, rules)
self.assertEqual('rule-not-triggered', record['rule:hi_tox'])
self.assertEqual('report', record['rule:med_tox'])
self.assertEqual('noop', record['rule:lo_tox'])
if __name__ == '__main__':
unittest.main()
| conversationai/conversationai-moderator-reddit | perspective_reddit_bot/moderate_subreddit_test.py | Python | apache-2.0 | 4,897 |
"""General tests for the web interface."""
from __future__ import print_function
from django.core import mail
from django.test import TransactionTestCase
from django.test.utils import override_settings
from django.urls import reverse
from pykeg.backend import get_kegbot_backend
from pykeg.core import models
from pykeg.core import defaults
@override_settings(KEGBOT_BACKEND="pykeg.core.testutils.TestBackend")
class KegwebTestCase(TransactionTestCase):
def setUp(self):
self.client.logout()
defaults.set_defaults(set_is_setup=True, create_controller=True)
def testBasicEndpoints(self):
for endpoint in ("/kegs/", "/stats/", "/drinkers/guest/", "/drinkers/guest/sessions/"):
response = self.client.get(endpoint)
self.assertEqual(200, response.status_code)
for endpoint in ("/sessions/",):
response = self.client.get(endpoint)
self.assertEqual(404, response.status_code)
b = get_kegbot_backend()
keg = b.start_keg(
"kegboard.flow0",
beverage_name="Unknown",
producer_name="Unknown",
beverage_type="beer",
style_name="Unknown",
)
self.assertIsNotNone(keg)
response = self.client.get("/kegs/")
self.assertEqual(200, response.status_code)
d = b.record_drink("kegboard.flow0", ticks=100)
drink_id = d.id
response = self.client.get("/d/%s" % drink_id, follow=True)
self.assertRedirects(response, "/drinks/%s" % drink_id, status_code=301)
session_id = d.session.id
response = self.client.get("/s/%s" % session_id, follow=True)
self.assertRedirects(response, d.session.get_absolute_url(), status_code=301)
def testShout(self):
b = get_kegbot_backend()
b.start_keg(
"kegboard.flow0",
beverage_name="Unknown",
producer_name="Unknown",
beverage_type="beer",
style_name="Unknown",
)
d = b.record_drink("kegboard.flow0", ticks=123, shout="_UNITTEST_")
response = self.client.get(d.get_absolute_url())
self.assertContains(response, "<p>_UNITTEST_</p>", status_code=200)
def test_privacy(self):
b = get_kegbot_backend()
keg = b.start_keg(
"kegboard.flow0",
beverage_name="Unknown",
producer_name="Unknown",
beverage_type="beer",
style_name="Unknown",
)
self.assertIsNotNone(keg)
d = b.record_drink("kegboard.flow0", ticks=100)
# URLs to expected contents
urls = {
"/kegs/": "Keg List",
"/stats/": "System Stats",
"/sessions/": "All Sessions",
"/kegs/{}".format(keg.id): "Keg {}".format(keg.id),
"/drinks/{}".format(d.id): "Drink {}".format(d.id),
}
def test_urls(expect_fail, urls=urls):
for url, expected_content in list(urls.items()):
response = self.client.get(url)
if expect_fail:
self.assertNotContains(
response, expected_content, status_code=401, msg_prefix=url
)
else:
self.assertContains(response, expected_content, status_code=200, msg_prefix=url)
b = get_kegbot_backend()
user = b.create_new_user("testuser", "[email protected]", password="1234")
kbsite = models.KegbotSite.get()
self.client.logout()
# Public mode.
test_urls(expect_fail=False)
# Members-only.
kbsite.privacy = "members"
kbsite.save()
test_urls(expect_fail=True)
logged_in = self.client.login(username="testuser", password="1234")
self.assertTrue(logged_in)
test_urls(expect_fail=False)
# Staff-only
kbsite.privacy = "staff"
kbsite.save()
test_urls(expect_fail=True)
user.is_staff = True
user.save()
test_urls(expect_fail=False)
self.client.logout()
test_urls(expect_fail=True)
def test_whitelisted_urls(self):
"""Verify always-accessible URLs."""
urls = (
"/accounts/password/reset/",
"/accounts/register/",
"/accounts/login/",
)
for url in urls:
response = self.client.get(url)
self.assertNotContains(response, "denied", status_code=200, msg_prefix=url)
def test_activation(self):
b = get_kegbot_backend()
kbsite = models.KegbotSite.get()
self.assertEqual("public", kbsite.privacy)
user = b.create_new_user("testuser", "[email protected]")
self.assertIsNotNone(user.activation_key)
self.assertFalse(user.has_usable_password())
activation_key = user.activation_key
self.assertIsNotNone(activation_key)
activation_url = reverse(
"activate-account", args=(), kwargs={"activation_key": activation_key}
)
# Activation works regardless of privacy settings.
self.client.logout()
response = self.client.get(activation_url)
self.assertContains(response, "Choose a Password", status_code=200)
kbsite.privacy = "staff"
kbsite.save()
response = self.client.get(activation_url)
self.assertContains(response, "Choose a Password", status_code=200)
kbsite.privacy = "members"
kbsite.save()
response = self.client.get(activation_url)
self.assertContains(response, "Choose a Password", status_code=200)
# Activate the account.
form_data = {
"password": "123",
"password2": "123",
}
response = self.client.post(activation_url, data=form_data, follow=True)
self.assertContains(response, "Your account has been activated!", status_code=200)
user = models.User.objects.get(pk=user.id)
self.assertIsNone(user.activation_key)
@override_settings(EMAIL_BACKEND="django.core.mail.backends.locmem.EmailBackend")
@override_settings(DEFAULT_FROM_EMAIL="test-from@example")
def test_registration(self):
kbsite = models.KegbotSite.get()
self.assertEqual("public", kbsite.privacy)
self.assertEqual("public", kbsite.registration_mode)
response = self.client.get("/accounts/register/")
self.assertContains(response, "Register New Account", status_code=200)
response = self.client.post(
"/accounts/register/",
data={
"username": "newuser",
"password1": "1234",
"password2": "1234",
"email": "[email protected]",
},
follow=True,
)
self.assertRedirects(response, "/account/")
self.assertContains(response, "Hello, newuser")
self.assertEqual(1, len(mail.outbox))
msg = mail.outbox[0]
self.assertEqual(["[email protected]"], msg.to)
self.assertTrue("To log in to your account, please click here" in msg.body)
response = self.client.post(
"/accounts/register/",
data={
"username": "newuser",
"password1": "1234",
"password2": "1234",
"email": "[email protected]",
},
follow=False,
)
self.assertContains(response, "User with this Username already exists", status_code=200)
response = self.client.post(
"/accounts/register/",
data={
"username": "newuser 2",
"password1": "1234",
"password2": "1234",
"email": "[email protected]",
},
follow=False,
)
self.assertContains(response, "Enter a valid username", status_code=200)
response = self.client.post(
"/accounts/register/",
data={
"username": "newuser2",
"password1": "1234",
"password2": "1235",
"email": "[email protected]",
},
follow=False,
)
print(response)
self.assertContains(response, "The two password fields didn't match.", status_code=200)
@override_settings(EMAIL_BACKEND="django.core.mail.backends.locmem.EmailBackend")
@override_settings(DEFAULT_FROM_EMAIL="test-from@example")
def test_registration_with_invite(self):
kbsite = models.KegbotSite.get()
kbsite.registration_mode = "staff-invite-online"
kbsite.save()
response = self.client.get("/accounts/register/")
self.assertContains(response, "Invitation Required", status_code=401)
response = self.client.get("/accounts/register/?invite_code=1234")
self.assertContains(response, "Invitation Expired", status_code=401)
models.Invitation.objects.create(invite_code="test", for_email="[email protected]")
self.assertEqual(1, models.Invitation.objects.all().count())
response = self.client.get("/accounts/register/?invite_code=test")
self.assertContains(response, "Register New Account", status_code=200)
response = self.client.post(
"/accounts/register/",
data={
"username": "newuser2",
"password1": "1234",
"password2": "1234",
"email": "[email protected]",
},
follow=True,
)
self.assertRedirects(response, "/account/")
self.assertContains(response, "Hello, newuser2")
self.assertEqual(0, models.Invitation.objects.all().count())
response = self.client.get("/accounts/register/?invite_code=test")
self.assertContains(response, "Invitation Expired", status_code=401)
def test_upgrade_bouncer(self):
kbsite = models.KegbotSite.get()
response = self.client.get("/")
self.assertContains(response, "My Kegbot", status_code=200)
old_version = kbsite.server_version
kbsite.server_version = "0.0.1"
kbsite.save()
response = self.client.get("/")
self.assertContains(response, "Upgrade Required", status_code=403)
kbsite.server_version = old_version
kbsite.is_setup = False
kbsite.save()
response = self.client.get("/")
self.assertContains(response, "Kegbot Offline", status_code=403)
| Kegbot/kegbot-server | pykeg/web/kegweb/kegweb_test.py | Python | gpl-2.0 | 10,527 |
# Copyright (C) 2016 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
#
# This file is part of Kitty.
#
# Kitty is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# Kitty is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Kitty. If not, see <http://www.gnu.org/licenses/>.
'''
Managers for the test list used by the fuzzer
'''
import re
from kitty.core import KittyException
class StartEndList(object):
def __init__(self, start, end):
self._start = start
self._end = end
self._current = self._start
def set_last(self, last):
if self.open_ended() or (self._end > last):
self._end = last + 1
def next(self):
if self._current < self._end:
self._current += 1
def current(self):
if self._current < self._end:
return self._current
return None
def reset(self):
self._current = self._start
def skip(self, count):
if count < self._end - self._current:
self._current += count
skipped = count
else:
skipped = self._end - self._current
self._current = self._end
return skipped
def get_count(self):
return self._end - self._start
def get_progress(self):
if self.current():
return self._current - self._start
else:
return self.get_count()
def as_test_list_str(self):
res = '%d-' % self._start
if not self.open_ended():
res += '%d' % (self._end - 1)
return res
def open_ended(self):
return self._end is None
class RangesList(object):
def __init__(self, ranges_str):
self._ranges_str = ranges_str
self._lists = []
self._idx = 0
self._list_idx = 0
self._count = None
self._parse()
def _parse(self):
'''
        Check and parse the range list string: a comma-separated list of
        entries, each of which is a single test number ("5"), a left-open
        range ("-5"), a right-open range ("5-") or a closed range ("5-10");
        ranges must not overlap.
'''
if not self._ranges_str:
self._lists = [StartEndList(0, None)]
else:
lists = []
p_single = re.compile(r'(\d+)$')
p_open_left = re.compile(r'-(\d+)$')
p_open_right = re.compile(r'(\d+)-$')
p_closed = re.compile(r'(\d+)-(\d+)$')
for entry in self._ranges_str.split(','):
entry = entry.strip()
# single number
match = p_single.match(entry)
if match:
num = int(match.groups()[0])
lists.append(StartEndList(num, num + 1))
continue
# open left
match = p_open_left.match(entry)
if match:
end = int(match.groups()[0])
lists.append(StartEndList(0, end + 1))
continue
# open right
match = p_open_right.match(entry)
if match:
start = int(match.groups()[0])
self._open_end_start = start
lists.append(StartEndList(start, None))
continue
# closed range
match = p_closed.match(entry)
if match:
start = int(match.groups()[0])
end = int(match.groups()[1])
lists.append(StartEndList(start, end + 1))
continue
# invalid expression
raise KittyException('Invalid range found: %s' % entry)
lists = sorted(lists, key=lambda x: x._start)
for i in range(len(lists) - 1):
if lists[i]._end is None:
# there is an open end which is not the last in our lists
# this is a clear overlap with the last one ...
raise KittyException('Overlapping ranges in range list')
elif lists[i]._end > lists[i + 1]._start:
raise KittyException('Overlapping ranges in range list')
self._lists = lists
def set_last(self, last):
exceeds = False
last_list = self._lists[-1]
if last <= last_list._start:
exceeds = True
elif last_list.open_ended():
last_list.set_last(last)
if exceeds:
raise KittyException('Specified test range exceeds the maximum mutation count')
def next(self):
if self._idx < self.get_count():
if self._list_idx < len(self._lists):
curr_list = self._lists[self._list_idx]
curr_list.next()
if curr_list.current() is None:
self._list_idx += 1
self._idx += 1
def current(self):
if self._idx < self.get_count():
return self._lists[self._list_idx].current()
return None
def reset(self):
self._idx = 0
self._list_idx = 0
for l in self._lists:
l.reset()
def skip(self, count):
while count > 0:
skipped = self._lists[self._list_idx].skip(count)
self._idx += skipped
count -= skipped
if count > 0:
self._list_idx += 1
def get_count(self):
if self._count is None:
self._count = sum(l.get_count() for l in self._lists)
return self._count
def get_progress(self):
if self.current():
            return self._idx
else:
return self.get_count()
def as_test_list_str(self):
return self._ranges_str
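# A minimal usage sketch appended for illustration (not part of the original
# module): parse a mixed range string, clamp the open-ended chunk, and walk it.
if __name__ == '__main__':
    demo = RangesList('3,10-12,20-')
    demo.set_last(24)  # clamp the open-ended "20-" chunk at test index 24
    picked = []
    while demo.current() is not None:
        picked.append(demo.current())
        demo.next()
    print(picked)  # expected: [3, 10, 11, 12, 20, 21, 22, 23, 24]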
| cisco-sas/kitty | kitty/fuzzers/test_list.py | Python | gpl-2.0 | 6,045 |
import pickle
import os
new_man = []
try:
with open('../File/sketch.txt', 'rb') as man_file:
new_man = pickle.load(man_file)
except IOError as err:
print("File error:" + str(err))
except pickle.PickleError as perr:
print("PickleError error:" + str(perr))
| wxmylife/Python-Study-Tour | HeadPython/chapter4/Demo/Demo5.py | Python | apache-2.0 | 280 |
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import fnmatch
from ansible import constants as C
from ansible.errors import *
from ansible.playbook.block import Block
from ansible.playbook.task import Task
from ansible.utils.boolean import boolean
__all__ = ['PlayIterator']
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class HostState:
def __init__(self, blocks):
self._blocks = blocks[:]
self.cur_block = 0
self.cur_regular_task = 0
self.cur_rescue_task = 0
self.cur_always_task = 0
self.cur_role = None
self.run_state = PlayIterator.ITERATING_SETUP
self.fail_state = PlayIterator.FAILED_NONE
self.pending_setup = False
self.tasks_child_state = None
self.rescue_child_state = None
self.always_child_state = None
def __repr__(self):
return "HOST STATE: block=%d, task=%d, rescue=%d, always=%d, role=%s, run_state=%d, fail_state=%d, pending_setup=%s, tasks child state? %s, rescue child state? %s, always child state? %s" % (
self.cur_block,
self.cur_regular_task,
self.cur_rescue_task,
self.cur_always_task,
self.cur_role,
self.run_state,
self.fail_state,
self.pending_setup,
self.tasks_child_state,
self.rescue_child_state,
self.always_child_state,
)
def get_current_block(self):
return self._blocks[self.cur_block]
def copy(self):
new_state = HostState(self._blocks)
new_state.cur_block = self.cur_block
new_state.cur_regular_task = self.cur_regular_task
new_state.cur_rescue_task = self.cur_rescue_task
new_state.cur_always_task = self.cur_always_task
new_state.cur_role = self.cur_role
new_state.run_state = self.run_state
new_state.fail_state = self.fail_state
new_state.pending_setup = self.pending_setup
if self.tasks_child_state is not None:
new_state.tasks_child_state = self.tasks_child_state.copy()
if self.rescue_child_state is not None:
new_state.rescue_child_state = self.rescue_child_state.copy()
if self.always_child_state is not None:
new_state.always_child_state = self.always_child_state.copy()
return new_state
class PlayIterator:
# the primary running states for the play iteration
ITERATING_SETUP = 0
ITERATING_TASKS = 1
ITERATING_RESCUE = 2
ITERATING_ALWAYS = 3
ITERATING_COMPLETE = 4
# the failure states for the play iteration, which are powers
# of 2 as they may be or'ed together in certain circumstances
FAILED_NONE = 0
FAILED_SETUP = 1
FAILED_TASKS = 2
FAILED_RESCUE = 4
FAILED_ALWAYS = 8
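    # (Illustrative note added here, not original commentary.) For example, a
    # host that fails a task and then also fails its rescue section ends up
    # with fail_state == FAILED_TASKS | FAILED_RESCUE == 6.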
def __init__(self, inventory, play, play_context, all_vars):
self._play = play
self._blocks = []
for block in self._play.compile():
new_block = block.filter_tagged_tasks(play_context, all_vars)
if new_block.has_tasks():
self._blocks.append(new_block)
self._host_states = {}
for host in inventory.get_hosts(self._play.hosts):
self._host_states[host.name] = HostState(blocks=self._blocks)
# if we're looking to start at a specific task, iterate through
# the tasks for this host until we find the specified task
if play_context.start_at_task is not None:
while True:
(s, task) = self.get_next_task_for_host(host, peek=True)
if s.run_state == self.ITERATING_COMPLETE:
break
if task.name == play_context.start_at_task or fnmatch.fnmatch(task.name, play_context.start_at_task):
break
else:
self.get_next_task_for_host(host)
# finally, reset the host's state to ITERATING_SETUP
self._host_states[host.name].run_state = self.ITERATING_SETUP
# Extend the play handlers list to include the handlers defined in roles
self._play.handlers.extend(play.compile_roles_handlers())
def get_host_state(self, host):
try:
return self._host_states[host.name].copy()
except KeyError:
raise AnsibleError("invalid host (%s) specified for playbook iteration" % host)
def get_next_task_for_host(self, host, peek=False):
display.debug("getting the next task for host %s" % host.name)
s = self.get_host_state(host)
task = None
if s.run_state == self.ITERATING_COMPLETE:
display.debug("host %s is done iterating, returning" % host.name)
return (None, None)
elif s.run_state == self.ITERATING_SETUP:
s.run_state = self.ITERATING_TASKS
s.pending_setup = True
# Gather facts if the default is 'smart' and we have not yet
# done it for this host; or if 'explicit' and the play sets
# gather_facts to True; or if 'implicit' and the play does
# NOT explicitly set gather_facts to False.
gathering = C.DEFAULT_GATHERING
implied = self._play.gather_facts is None or boolean(self._play.gather_facts)
if (gathering == 'implicit' and implied) or \
(gathering == 'explicit' and boolean(self._play.gather_facts)) or \
(gathering == 'smart' and implied and not host._gathered_facts):
if not peek:
# mark the host as having gathered facts
host.set_gathered_facts(True)
task = Task()
task.action = 'setup'
task.args = {}
task.set_loader(self._play._loader)
else:
s.pending_setup = False
if not task:
(s, task) = self._get_next_task_from_state(s, peek=peek)
if task and task._role:
# if we had a current role, mark that role as completed
if s.cur_role and task._role != s.cur_role and host.name in s.cur_role._had_task_run and not peek:
s.cur_role._completed[host.name] = True
s.cur_role = task._role
if not peek:
self._host_states[host.name] = s
display.debug("done getting next task for host %s" % host.name)
display.debug(" ^ task is: %s" % task)
display.debug(" ^ state is: %s" % s)
return (s, task)
def _get_next_task_from_state(self, state, peek):
task = None
# try and find the next task, given the current state.
while True:
# try to get the current block from the list of blocks, and
# if we run past the end of the list we know we're done with
# this block
try:
block = state._blocks[state.cur_block]
except IndexError:
state.run_state = self.ITERATING_COMPLETE
return (state, None)
if state.run_state == self.ITERATING_TASKS:
# clear the pending setup flag, since we're past that and it didn't fail
if state.pending_setup:
state.pending_setup = False
if state.fail_state & self.FAILED_TASKS == self.FAILED_TASKS:
state.run_state = self.ITERATING_RESCUE
elif state.cur_regular_task >= len(block.block):
state.run_state = self.ITERATING_ALWAYS
else:
task = block.block[state.cur_regular_task]
# if the current task is actually a child block, we dive into it
if isinstance(task, Block) or state.tasks_child_state is not None:
if state.tasks_child_state is None:
state.tasks_child_state = HostState(blocks=[task])
state.tasks_child_state.run_state = self.ITERATING_TASKS
state.tasks_child_state.cur_role = state.cur_role
(state.tasks_child_state, task) = self._get_next_task_from_state(state.tasks_child_state, peek=peek)
if task is None:
state.tasks_child_state = None
state.cur_regular_task += 1
continue
else:
state.cur_regular_task += 1
elif state.run_state == self.ITERATING_RESCUE:
if state.fail_state & self.FAILED_RESCUE == self.FAILED_RESCUE:
state.run_state = self.ITERATING_ALWAYS
elif state.cur_rescue_task >= len(block.rescue):
if len(block.rescue) > 0:
state.fail_state = self.FAILED_NONE
state.run_state = self.ITERATING_ALWAYS
else:
task = block.rescue[state.cur_rescue_task]
if isinstance(task, Block) or state.rescue_child_state is not None:
if state.rescue_child_state is None:
state.rescue_child_state = HostState(blocks=[task])
state.rescue_child_state.run_state = self.ITERATING_TASKS
state.rescue_child_state.cur_role = state.cur_role
(state.rescue_child_state, task) = self._get_next_task_from_state(state.rescue_child_state, peek=peek)
if task is None:
state.rescue_child_state = None
state.cur_rescue_task += 1
continue
else:
state.cur_rescue_task += 1
elif state.run_state == self.ITERATING_ALWAYS:
if state.cur_always_task >= len(block.always):
if state.fail_state != self.FAILED_NONE:
state.run_state = self.ITERATING_COMPLETE
else:
state.cur_block += 1
state.cur_regular_task = 0
state.cur_rescue_task = 0
state.cur_always_task = 0
state.run_state = self.ITERATING_TASKS
state.child_state = None
else:
task = block.always[state.cur_always_task]
if isinstance(task, Block) or state.always_child_state is not None:
if state.always_child_state is None:
state.always_child_state = HostState(blocks=[task])
state.always_child_state.run_state = self.ITERATING_TASKS
state.always_child_state.cur_role = state.cur_role
(state.always_child_state, task) = self._get_next_task_from_state(state.always_child_state, peek=peek)
if task is None:
state.always_child_state = None
state.cur_always_task += 1
continue
else:
state.cur_always_task += 1
elif state.run_state == self.ITERATING_COMPLETE:
return (state, None)
# if something above set the task, break out of the loop now
if task:
break
return (state, task)
def _set_failed_state(self, state):
if state.pending_setup:
state.fail_state |= self.FAILED_SETUP
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_TASKS:
if state.tasks_child_state is not None:
state.tasks_child_state = self._set_failed_state(state.tasks_child_state)
else:
state.fail_state |= self.FAILED_TASKS
state.run_state = self.ITERATING_RESCUE
elif state.run_state == self.ITERATING_RESCUE:
if state.rescue_child_state is not None:
state.rescue_child_state = self._set_failed_state(state.rescue_child_state)
else:
state.fail_state |= self.FAILED_RESCUE
state.run_state = self.ITERATING_ALWAYS
elif state.run_state == self.ITERATING_ALWAYS:
if state.always_child_state is not None:
state.always_child_state = self._set_failed_state(state.always_child_state)
else:
state.fail_state |= self.FAILED_ALWAYS
state.run_state = self.ITERATING_COMPLETE
return state
def mark_host_failed(self, host):
s = self.get_host_state(host)
s = self._set_failed_state(s)
self._host_states[host.name] = s
def get_failed_hosts(self):
return dict((host, True) for (host, state) in self._host_states.iteritems() if state.run_state == self.ITERATING_COMPLETE and state.fail_state != self.FAILED_NONE)
def get_original_task(self, host, task):
'''
Finds the task in the task list which matches the UUID of the given task.
The executor engine serializes/deserializes objects as they are passed through
the different processes, and not all data structures are preserved. This method
allows us to find the original task passed into the executor engine.
'''
def _search_block(block, task):
'''
helper method to check a block's task lists (block/rescue/always)
for a given task uuid. If a Block is encountered in the place of a
task, it will be recursively searched (this happens when a task
include inserts one or more blocks into a task list).
'''
for b in (block.block, block.rescue, block.always):
for t in b:
if isinstance(t, Block):
res = _search_block(t, task)
if res:
return res
elif t._uuid == task._uuid:
return t
return None
def _search_state(state, task):
for block in state._blocks:
res = _search_block(block, task)
if res:
return res
for child_state in (state.tasks_child_state, state.rescue_child_state, state.always_child_state):
res = _search_state(child_state, task)
if res:
return res
return None
s = self.get_host_state(host)
res = _search_state(s, task)
if res:
return res
for block in self._play.handlers:
res = _search_block(block, task)
if res:
return res
return None
def _insert_tasks_into_state(self, state, task_list):
if state.run_state == self.ITERATING_TASKS:
if state.tasks_child_state:
state.tasks_child_state = self._insert_tasks_into_state(state.tasks_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy(exclude_parent=True)
before = target_block.block[:state.cur_regular_task]
after = target_block.block[state.cur_regular_task:]
target_block.block = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == self.ITERATING_RESCUE:
if state.rescue_child_state:
state.rescue_child_state = self._insert_tasks_into_state(state.rescue_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy(exclude_parent=True)
before = target_block.rescue[:state.cur_rescue_task]
after = target_block.rescue[state.cur_rescue_task:]
target_block.rescue = before + task_list + after
state._blocks[state.cur_block] = target_block
elif state.run_state == self.ITERATING_ALWAYS:
if state.always_child_state:
state.always_child_state = self._insert_tasks_into_state(state.always_child_state, task_list)
else:
target_block = state._blocks[state.cur_block].copy(exclude_parent=True)
before = target_block.always[:state.cur_always_task]
after = target_block.always[state.cur_always_task:]
target_block.always = before + task_list + after
state._blocks[state.cur_block] = target_block
return state
def add_tasks(self, host, task_list):
self._host_states[host.name] = self._insert_tasks_into_state(self.get_host_state(host), task_list)
| youprofit/ansible | lib/ansible/executor/play_iterator.py | Python | gpl-3.0 | 17,898 |
# -*- coding: utf-8 -*-
#
# Copyright (c) 2013 the BabelFish authors. All rights reserved.
# Use of this source code is governed by the 3-clause BSD license
# that can be found in the LICENSE file.
#
from __future__ import unicode_literals
from collections import namedtuple
from pkg_resources import resource_stream # @UnresolvedImport
#: Script code to script name mapping
SCRIPTS = {}
#: List of scripts in the ISO-15924 as namedtuple of code, number, name, french_name, pva and date
SCRIPT_MATRIX = []
#: The namedtuple used in the :data:`SCRIPT_MATRIX`
IsoScript = namedtuple('IsoScript', ['code', 'number', 'name', 'french_name', 'pva', 'date'])
f = resource_stream('babelfish', 'data/iso15924-utf8-20131012.txt')
f.readline()
for l in f:
l = l.decode('utf-8').strip()
if not l or l.startswith('#'):
continue
script = IsoScript._make(l.split(';'))
SCRIPT_MATRIX.append(script)
SCRIPTS[script.code] = script.name
f.close()
class Script(object):
"""A human writing system
A script is represented by a 4-letter code from the ISO-15924 standard
:param string script: 4-letter ISO-15924 script code
"""
def __init__(self, script):
if script not in SCRIPTS:
raise ValueError('%r is not a valid script' % script)
#: ISO-15924 4-letter script code
self.code = script
@property
def name(self):
"""English name of the script"""
return SCRIPTS[self.code]
def __hash__(self):
return hash(self.code)
def __eq__(self, other):
return self.code == other.code
def __ne__(self, other):
return not self == other
def __repr__(self):
return '<Script [%s]>' % self
def __str__(self):
return self.code
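# Minimal usage sketch appended for illustration (not part of the original
# module); it assumes the bundled ISO-15924 data file loaded correctly above.
if __name__ == '__main__':
    latin = Script('Latn')
    print(latin.name)    # 'Latin'
    print(repr(latin))   # '<Script [Latn]>'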
| Hellowlol/PyTunes | libs/babelfish/script.py | Python | gpl-3.0 | 1,773 |
# Copyright (c) by it's authors.
# Some rights reserved. See LICENSE, AUTHORS.
from peer import *
from viewer import Viewer
class Observer(Peer):
from editor import Editor
from documentChanger import DocumentChanger
Sending = [
Viewer.In.Document
]
Routings = [
(Editor.Out.FieldChanged, Viewer.In.Refresh),
(DocumentChanger.Out.SelectionChanged, Viewer.In.Refresh)
]
def __init__(self, room):
Peer.__init__(self, room)
def initialize(self):
from wallaby.pf.room import House
observer = House.observer()
from wallaby.common.document import Document
doc = Document()
peerNames = sorted(observer.allPeers().keys())
peers = []
for peer in peerNames:
ibp = observer.inBoundPillows(peer)
obp = observer.outBoundPillows(peer)
peers.append({
"name": peer,
"inBound": ibp,
"outBound": obp
})
doc.set("peers", peers)
self._throw(Viewer.In.Document, doc)
# Wildcard credentials
from credentials import Credentials
self._throw(Credentials.Out.Credential, Document())
| FreshXOpenSource/wallaby-base | wallaby/pf/peer/observer.py | Python | bsd-2-clause | 1,235 |
from __future__ import absolute_import
from scipy import sparse
from scipy.sparse.linalg import spsolve
import numpy as np
from ._utils import _maybe_get_pandas_wrapper
def hpfilter(X, lamb=1600):
"""
Hodrick-Prescott filter
Parameters
----------
X : array-like
The 1d ndarray timeseries to filter of length (nobs,) or (nobs,1)
lamb : float
The Hodrick-Prescott smoothing parameter. A value of 1600 is
suggested for quarterly data. Ravn and Uhlig suggest using a value
of 6.25 (1600/4**4) for annual data and 129600 (1600*3**4) for monthly
data.
Returns
-------
cycle : array
The estimated cycle in the data given lamb.
trend : array
The estimated trend in the data given lamb.
Examples
---------
>>> import statsmodels.api as sm
>>> import pandas as pd
>>> dta = sm.datasets.macrodata.load_pandas().data
>>> index = pd.DatetimeIndex(start='1959Q1', end='2009Q4', freq='Q')
>>> dta.set_index(index, inplace=True)
>>> cycle, trend = sm.tsa.filters.hpfilter(dta.realgdp, 1600)
>>> gdp_decomp = dta[['realgdp']]
>>> gdp_decomp["cycle"] = cycle
>>> gdp_decomp["trend"] = trend
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax,
... fontsize=16);
>>> plt.show()
.. plot:: plots/hpf_plot.py
Notes
-----
    The HP filter removes a smooth trend, `T`, from the data `X` by solving
min sum((X[t] - T[t])**2 + lamb*((T[t+1] - T[t]) - (T[t] - T[t-1]))**2)
T t
Here we implemented the HP filter as a ridge-regression rule using
scipy.sparse. In this sense, the solution can be written as
    T = inv(I + lamb*K'K)X
where I is a nobs x nobs identity matrix, and K is a (nobs-2) x nobs matrix
such that
    K[i,j] = 1 if i == j or j == i + 2
    K[i,j] = -2 if j == i + 1
K[i,j] = 0 otherwise
See Also
--------
statsmodels.tsa.filters.bk_filter.bkfilter
statsmodels.tsa.filters.cf_filter.cffilter
statsmodels.tsa.seasonal.seasonal_decompose
References
----------
Hodrick, R.J, and E. C. Prescott. 1980. "Postwar U.S. Business Cycles: An
        Empirical Investigation." `Carnegie Mellon University discussion
paper no. 451`.
    Ravn, M.O. and H. Uhlig. 2002. "On Adjusting the Hodrick-Prescott
Filter for the Frequency of Observations." `The Review of Economics and
Statistics`, 84(2), 371-80.
"""
_pandas_wrapper = _maybe_get_pandas_wrapper(X)
X = np.asarray(X, float)
if X.ndim > 1:
X = X.squeeze()
nobs = len(X)
I = sparse.eye(nobs,nobs)
offsets = np.array([0,1,2])
data = np.repeat([[1.],[-2.],[1.]], nobs, axis=1)
K = sparse.dia_matrix((data, offsets), shape=(nobs-2,nobs))
use_umfpack = True
trend = spsolve(I+lamb*K.T.dot(K), X, use_umfpack=use_umfpack)
cycle = X-trend
if _pandas_wrapper is not None:
return _pandas_wrapper(cycle), _pandas_wrapper(trend)
return cycle, trend
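# Illustrative usage sketch appended here (not part of the original module):
# run the filter on a noisy linear trend and confirm the decomposition is exact.
if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 200) + 0.1 * np.random.standard_normal(200)
    cycle, trend = hpfilter(x, lamb=1600)
    assert np.allclose(cycle + trend, x)
    print("trend captures %.1f%% of the variance" % (100 * trend.var() / x.var()))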
| bert9bert/statsmodels | statsmodels/tsa/filters/hp_filter.py | Python | bsd-3-clause | 3,157 |
# -*- coding: utf-8 -*-
"""
======================================
Show noise levels from empty room data
======================================
This shows how to use :meth:`mne.io.Raw.plot_psd` to examine noise levels
of systems. See [1]_ for an example.
References
----------
.. [1] Khan S, Cohen D (2013). Note: Magnetic noise from the inner wall of
a magnetically shielded room. Review of Scientific Instruments 84:56101.
https://doi.org/10.1063/1.4802845
"""
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import mne
data_path = mne.datasets.sample.data_path()
raw_erm = mne.io.read_raw_fif(op.join(data_path, 'MEG', 'sample',
'ernoise_raw.fif'), preload=True)
###############################################################################
# We can plot the absolute noise levels:
raw_erm.plot_psd(tmax=10., average=True, spatial_colors=False,
dB=False, xscale='log')
| Teekuningas/mne-python | examples/visualization/plot_sensor_noise_level.py | Python | bsd-3-clause | 991 |
#!/usr/bin/python
# Copyright 2014 BitPay, Inc.
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
from __future__ import division,print_function,unicode_literals
import os
import bctest
import buildenv
if __name__ == '__main__':
bctest.bctester(os.environ["srcdir"] + "/test/data",
"navcoin-util-test.json",buildenv)
| navcoindev/navcoin-core | src/test/navcoin-util-test.py | Python | mit | 410 |
#!/usr/bin/env python
# encoding: utf-8
#
# This file is part of BeRTOS.
#
# Bertos is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
#
# As a special exception, you may use this file as part of a free software
# library without restriction. Specifically, if other files instantiate
# templates or use macros or inline functions from this file, or you compile
# this file and link it with other files to produce an executable, this
# file does not by itself cause the resulting executable to be covered by
# the GNU General Public License. This exception does not however
# invalidate any other reasons why the executable file might be covered by
# the GNU General Public License.
#
# Copyright 2008 Develer S.r.l. (http://www.develer.com/)
#
#
# Author: Lorenzo Berni <[email protected]>
#
import os
from PyQt4.QtGui import *
from BWizardPage import *
import bertos_utils
from const import *
class BFinalPage(BWizardPage):
"""
Last page of the wizard. It creates the project and show a success message.
"""
def __init__(self):
BWizardPage.__init__(self, UI_LOCATION + "/final_page.ui")
self.setTitle(self.tr("Project created successfully!"))
## Overloaded BWizardPage methods ##
    def reloadData(self, previous_id=None):
        """
        Overload of the BWizardPage reloadData method.
        """
        self.setVisible(False)
try:
QApplication.instance().setOverrideCursor(Qt.WaitCursor)
try:
# This operation can throw WindowsError, if the directory is
# locked.
self.project.createBertosProject()
except OSError, e:
QMessageBox.critical(
self,
self.tr("Error removing destination directory"),
self.tr("Error removing the destination directory. This directory or a file in it is in use by another user or application.\nClose the application which is using the directory and retry."))
self.wizard().back()
return
finally:
QApplication.instance().restoreOverrideCursor()
self.setVisible(True)
self._plugin_dict = {}
if os.name == "nt":
output = self.projectInfo("OUTPUT")
import winreg_importer
command_lines = winreg_importer.getCommandLines()
self.setProjectInfo("COMMAND_LINES", command_lines)
layout = QVBoxLayout()
for plugin in output:
if plugin in command_lines:
module = bertos_utils.loadPlugin(plugin)
check = QCheckBox(self.tr("Open project in %s" %module.PLUGIN_NAME))
if len(output) == 1:
check.setCheckState(Qt.Checked)
else:
check.setCheckState(Qt.Unchecked)
layout.addWidget(check)
self._plugin_dict[check] = plugin
widget = QWidget()
widget.setLayout(layout)
if len(self._plugin_dict) > 0:
self.pageContent.scrollArea.setVisible(True)
self.pageContent.scrollArea.setWidget(widget)
for plugin in self._plugin_dict:
self.connect(plugin, SIGNAL("stateChanged(int)"), self.modeChecked)
self.modeChecked()
def setupUi(self):
"""
Overload of the BWizardPage setupUi method.
"""
self.pageContent.scrollArea.setVisible(False)
####
## Slots ##
def modeChecked(self):
to_be_opened = []
for check, plugin in self._plugin_dict.items():
if check.checkState() == Qt.Checked:
to_be_opened.append(plugin)
self.setProjectInfo("TO_BE_OPENED", to_be_opened)
####
| dereks/bertos | wizard/BFinalPage.py | Python | gpl-2.0 | 4,486 |
#! /usr/bin/python
#
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Tests for google.protobuf.pyext behavior."""
__author__ = '[email protected] (Anuraag Agrawal)'
import os
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION'] = 'cpp'
os.environ['PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION'] = '2'
# We must set the implementation version above before the google3 imports.
# pylint: disable=g-import-not-at-top
from google.apputils import basetest
from google.protobuf.internal import api_implementation
# Run all tests from the original module by putting them in our namespace.
# pylint: disable=wildcard-import
from google.protobuf.internal.descriptor_test import *
class ConfirmCppApi2Test(basetest.TestCase):
def testImplementationSetting(self):
self.assertEqual('cpp', api_implementation.Type())
self.assertEqual(2, api_implementation.Version())
if __name__ == '__main__':
basetest.main()
| cherrishes/weilai | xingxing/protobuf/python/lib/Python3.4/google/protobuf/pyext/descriptor_cpp2_test.py | Python | apache-2.0 | 2,506 |
#!/usr/bin/env python3
import sys
sys.setrecursionlimit(10000)
def longest_common_subsequence_recursive(seq1, seq2):
strings_not_lists = None
if isinstance(seq1, str) and isinstance(seq2, str):
strings_not_lists = True
elif isinstance(seq1, list) and isinstance(seq2, list):
strings_not_lists = False
assert strings_not_lists == True or strings_not_lists == False
# Initiate results table
l1, l2 = len(seq1), len(seq2)
tmp, results_table = [], []
for i in range(l2 + 1):
tmp.append(None)
for i in range(l1 + 1):
results_table.append(tmp[:])
def longest(seq1, seq2):
return seq1 if len(seq1) > len(seq2) else seq2
# Recursive search, results are cached in results_table
def LCS(seq1, seq2):
l1, l2 = len(seq1), len(seq2)
if results_table[l1][l2] != None:
pass
elif 0 == l1 or 0 == l2:
results_table[l1][l2] = '' if strings_not_lists else []
elif seq1[-1] == seq2[-1]:
if strings_not_lists:
results_table[l1][l2] = LCS(seq1[:-1], seq2[:-1]) + seq1[-1]
else:
results_table[l1][l2] = LCS(seq1[:-1], seq2[:-1])
results_table[l1][l2].append(seq1[-1])
else:
results_table[l1][l2] = longest(LCS(seq1, seq2[:-1]), LCS(seq1[:-1], seq2))
return results_table[l1][l2][:]
return LCS(seq1, seq2)
# end of longest_common_subsequence_recursive
def get_sub_indices(string, sub):
if len(sub) == 0:
return []
for i in range(len(string)):
if string[i] == sub[0]:
indices = get_sub_indices(string[i+1:], sub[1:])
return [i] + [num + i + 1 for num in indices]
    assert False, 'sub is not a subsequence of string'
def get_insertions(string, sub):
result = []
start = 0
for end in get_sub_indices(string, sub):
result += [string[start:end]]
start = end + 1
result += [string[start:]]
return result
def apply_insertions(ins, sub):
return ''.join(ins[i] + sub[i] for i in range(len(sub))) + ins[-1]
def shortest_common_supersequence(str1, str2):
sub = longest_common_subsequence_recursive(str1, str2)
insertions = [z[0] + z[1] for z in zip(get_insertions(str1, sub), get_insertions(str2, sub))]
return apply_insertions(insertions, sub)
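# Small self-check added for illustration (not part of the original solution):
# "cabac" is what this implementation returns for the inputs below, and it is
# a valid length-5 common supersequence of "abac" and "cab".
assert shortest_common_supersequence("abac", "cab") == "cabac"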
def main():
with open("Input.txt") as input_file:
strings = list(line.strip() for line in input_file)
print(shortest_common_supersequence(strings[0], strings[1]))
main()
| Daerdemandt/Learning-bioinformatics | SCSP/Solution.py | Python | apache-2.0 | 2,256 |
# -*- coding: utf-8 -*-
#
#
# TheVirtualBrain-Scientific Package. This package holds all simulators, and
# analysers necessary to run brain-simulations. You can use it stand alone or
# in conjunction with TheVirtualBrain-Framework Package. See content of the
# documentation-folder for more details. See also http://www.thevirtualbrain.org
#
# (c) 2012-2013, Baycrest Centre for Geriatric Care ("Baycrest")
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License version 2 as published by the Free
# Software Foundation. This program is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General
# Public License along with this program; if not, you can download it here
# http://www.gnu.org/licenses/old-licenses/gpl-2.0
#
#
# CITATION:
# When using The Virtual Brain for scientific publications, please cite it as follows:
#
# Paula Sanz Leon, Stuart A. Knock, M. Marmaduke Woodman, Lia Domide,
# Jochen Mersmann, Anthony R. McIntosh, Viktor Jirsa (2013)
# The Virtual Brain: a simulator of primate brain network dynamics.
# Frontiers in Neuroinformatics (7:10. doi: 10.3389/fninf.2013.00010)
#
#
"""
Test for tvb.simulator.models module
.. moduleauthor:: Paula Sanz Leon <[email protected]>
"""
if __name__ == "__main__":
from tvb.tests.library import setup_test_console_env
setup_test_console_env()
import unittest
from tvb.simulator import models
from tvb.tests.library.base_testcase import BaseTestCase
import numpy
class ModelsTest(BaseTestCase):
"""
Define test cases for models:
- initialise each class
TODO - check default parameters (should correspond to those used in the original work with the aim to
reproduce at least one figure)
- check that initial conditions are always in range
"""
def _validate_initialization(self, model, expected_sv, expected_models=1):
model.configure()
dt = 2 ** -4
history_shape = (1, model._nvar, 1, model.number_of_modes)
model_ic = model.initial(dt, history_shape)
self.assertEqual(expected_sv, model._nvar)
self.assertEqual(expected_models, model.number_of_modes)
svr = model.state_variable_range
sv = model.state_variables
for i, (lo, hi) in enumerate([svr[sv[i]] for i in range(model._nvar)]):
for val in model_ic[:, i, :].flatten():
self.assertTrue(lo < val < hi)
state = numpy.zeros((expected_sv, 10, model.number_of_modes))
obser = model.observe(state)
self.assertEqual((len(model.variables_of_interest), 10, model.number_of_modes), obser.shape)
return state, obser
def test_wilson_cowan(self):
"""
Default parameters are taken from figure 4 of [WC_1972]_, pag. 10
"""
model = models.WilsonCowan()
self._validate_initialization(model, 2)
def test_g2d(self):
"""
Default parameters:
+---------------------------+
| SanzLeonetAl 2013 |
+--------------+------------+
|Parameter | Value |
+==============+============+
| a | - 0.5 |
+--------------+------------+
| b | -10.0 |
+--------------+------------+
| c | 0.0 |
+--------------+------------+
| d | 0.02 |
+--------------+------------+
| I | 0.0 |
+--------------+------------+
|* excitable regime if |
|* intrinsic frequency is |
| approx 10 Hz |
+---------------------------+
"""
model = models.Generic2dOscillator()
state, obser = self._validate_initialization(model, 2)
numpy.testing.assert_allclose(obser[0], state[0])
def test_g2d_voi(self):
model = models.Generic2dOscillator(
variables_of_interest = ['W', 'W - V']
)
(V, W), (voi_W, voi_WmV) = self._validate_initialization(model, 2)
numpy.testing.assert_allclose(voi_W, W)
numpy.testing.assert_allclose(voi_WmV, W - V)
def test_jansen_rit(self):
"""
"""
model = models.JansenRit()
self._validate_initialization(model, 6)
def test_sj2d(self):
"""
"""
model = models.ReducedSetFitzHughNagumo()
self._validate_initialization(model, 4, 3)
def test_sj3d(self):
"""
"""
model = models.ReducedSetHindmarshRose()
self._validate_initialization(model, 6, 3)
def test_reduced_wong_wang(self):
"""
"""
model = models.ReducedWongWang()
self._validate_initialization(model, 1)
def test_zetterberg_jansen(self):
"""
"""
model = models.ZetterbergJansen()
self._validate_initialization(model, 12)
def test_epileptor(self):
"""
"""
model = models.Epileptor()
self._validate_initialization(model, 6)
def test_hopfield(self):
"""
"""
model = models.Hopfield()
self._validate_initialization(model, 2)
def test_kuramoto(self):
"""
"""
model = models.Kuramoto()
self._validate_initialization(model, 1)
def test_larter(self):
"""
"""
model = models.LarterBreakspear()
self._validate_initialization(model, 3)
def test_linear(self):
model = models.Linear()
self._validate_initialization(model, 1)
def suite():
"""
Gather all the tests in a test suite.
"""
test_suite = unittest.TestSuite()
test_suite.addTest(unittest.makeSuite(ModelsTest))
return test_suite
if __name__ == "__main__":
#So you can run tests from this package individually.
TEST_RUNNER = unittest.TextTestRunner()
TEST_SUITE = suite()
TEST_RUNNER.run(TEST_SUITE)
| stuart-knock/tvb-library | tvb/tests/library/simulator/models_test.py | Python | gpl-2.0 | 6,216 |
# Generated by Django 2.0.1 on 2018-01-17 13:30
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('crypsis_tests', '0010_auto_20180117_1242'),
]
operations = [
migrations.AlterModelOptions(
name='item',
options={'ordering': ['sort_order', 'name']},
),
migrations.RenameField(
model_name='item',
old_name='ORDER',
new_name='sort_order',
),
migrations.RemoveField(
model_name='item',
name='list_order',
),
]
| sdolemelipone/django-crypsis | crypsis_tests/migrations/0011_auto_20180117_1330.py | Python | gpl-3.0 | 607 |
# SPDX-License-Identifier: Apache-2.0
# -*- coding: utf-8 -*-
#
# Devicetree Specification documentation build configuration file, created by
#
# This file does only contain a selection of the most common options. For a
# full list see the documentation:
# http://www.sphinx-doc.org/en/master/config
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import time
import subprocess
# sys.path.insert(0, os.path.abspath('.'))
sys.path.append(os.path.abspath('extensions'))
from DtsLexer import DtsLexer
def setup(app):
from sphinx.highlighting import lexers
lexers['dts'] = DtsLexer()
# -- Project information -----------------------------------------------------
project = u'Devicetree Specification'
copyright = u'2016,2017, devicetree.org'
author = u'devicetree.org'
# The short X.Y version
try:
version = str(subprocess.check_output(["git", "describe", "--dirty"]), 'utf-8').strip()
except:
version = "unknown-rev"
# The full version, including alpha/beta/rc tags
release = version
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
needs_sphinx = '1.2.3'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.todo',
'sphinx.ext.graphviz'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = '%d %B %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []
# Include at the beginning of every source file that is read
with open('rst_prolog', 'rb') as pr:
rst_prolog = pr.read().decode('utf-8')
rst_epilog = """
.. |SpecVersion| replace:: {versionnum}
""".format(
versionnum = version,
)
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
numfig = True
highlight_language = 'none'
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
'github_user': 'devicetree-org',
'github_repo': 'devicetree-specification',
}
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = "devicetree-logo.svg"
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "devicetree-favicon.png"
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# The default sidebars (for documents that don't match any pattern) are
# defined by theme itself. Builtin themes are using these templates by
# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
# 'searchbox.html']``.
#
# html_sidebars = {}
# -- Options for HTMLHelp output ---------------------------------------------
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'**': [
'about.html',
'navigation.html',
'relations.html',
'searchbox.html',
]
}
# Output file base name for HTML help builder.
htmlhelp_basename = 'DevicetreeSpecificationdoc'
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
'classoptions': ',oneside',
'babel': '\\usepackage[english]{babel}',
'sphinxsetup': 'hmargin=2cm',
# The paper size ('letterpaper' or 'a4paper').
#
'papersize': 'a4paper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
'figure_align': 'H',
}
# Release numbers with a qualifier (ex. '-rc', '-pre') get a watermark.
if '-' in release:
latex_elements['preamble'] = '\\usepackage{draftwatermark}\\SetWatermarkScale{.45}\\SetWatermarkText{%s}' % (release)
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'devicetree-specification.tex', u'Devicetree Specification',
u'devicetree.org', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
latex_logo = "devicetree-logo.png"
# -- Options for manual page output ------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'devicetree-specification', u'Devicetree Specification',
[author], 1)
]
# -- Options for Texinfo output ----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'devicetree-specification', u'Devicetree Specification',
author, 'DevicetreeSpecification', 'Devicetree hardware description language specification.',
'Miscellaneous'),
]
# -- Options for Epub output -------------------------------------------------
# Bibliographic Dublin Core info.
epub_title = project
# The unique identifier of the text. This can be a ISBN number
# or the project homepage.
#
# epub_identifier = ''
# A unique identification for the text.
#
# epub_uid = ''
# A list of files that should not be packed into the epub file.
epub_exclude_files = ['search.html']
# -- Extension configuration -------------------------------------------------
# -- Options for todo extension ----------------------------------------------
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
| devicetree-org/devicetree-specification | source/conf.py | Python | apache-2.0 | 7,522 |
# -*- coding: utf-8 -*-
'''
Genesis Add-on
Copyright (C) 2015 lambda
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
'''
import re
from resources.lib.libraries import client
def resolve(url):
try:
id = re.compile('//.+?/.+?/([\w]+)').findall(url)
id += re.compile('//.+?/.+?v=([\w]+)').findall(url)
id = id[0]
url = 'http://embed.novamov.com/embed.php?v=%s' % id
result = client.request(url)
key = re.compile('flashvars.filekey=(.+?);').findall(result)[-1]
try: key = re.compile('\s+%s="(.+?)"' % key).findall(result)[-1]
except: pass
url = 'http://www.novamov.com/api/player.api.php?key=%s&file=%s' % (key, id)
result = client.request(url)
url = re.compile('url=(.+?)&').findall(result)[0]
return url
except:
return
| hexpl0it/plugin.video.genesi-ita | resources/lib/resolvers/novamov.py | Python | gpl-3.0 | 1,500 |
import itertools
import numpy as np
import pandas as pd
import pytest
from vivarium.interpolation import (
Interpolation,
Order0Interp,
check_data_complete,
validate_parameters,
)
def make_bin_edges(data: pd.DataFrame, col: str) -> pd.DataFrame:
"""Given a dataframe and a column containing midpoints, construct
equally sized bins around midpoints.
"""
mid_pts = data[[col]].drop_duplicates().sort_values(by=col).reset_index(drop=True)
mid_pts["shift"] = mid_pts[col].shift()
mid_pts["left"] = mid_pts.apply(
lambda row: (row[col] if pd.isna(row["shift"]) else 0.5 * (row[col] + row["shift"])),
axis=1,
)
mid_pts["right"] = mid_pts["left"].shift(-1)
mid_pts["right"] = mid_pts.right.fillna(
mid_pts.right.max() + mid_pts.left.tolist()[-1] - mid_pts.left.tolist()[-2]
)
data = data.copy()
idx = data.index
data = data.set_index(col, drop=False)
mid_pts = mid_pts.set_index(col, drop=False)
data[[col, f"{col}_left", f"{col}_right"]] = mid_pts[[col, "left", "right"]]
return data.set_index(idx)
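# A small illustrative check added alongside the helper (not part of the
# original suite): midpoints 0, 1, 2 should yield bins [0, 0.5), [0.5, 1.5),
# [1.5, 2.5) around each midpoint.
def test_make_bin_edges_example():
    data = pd.DataFrame({"age": [0.0, 1.0, 2.0]})
    binned = make_bin_edges(data, "age")
    assert binned["age_left"].tolist() == [0.0, 0.5, 1.5]
    assert binned["age_right"].tolist() == [0.5, 1.5, 2.5]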
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_1d_interpolation():
df = pd.DataFrame({"a": np.arange(100), "b": np.arange(100), "c": np.arange(100, 0, -1)})
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(df, (), ("a",), 1, True)
query = pd.DataFrame({"a": np.arange(100, step=0.01)})
assert np.allclose(query.a, i(query).b)
assert np.allclose(100 - query.a, i(query).c)
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_age_year_interpolation():
years = list(range(1990, 2010))
ages = list(range(0, 90))
pops = np.array(ages) * 11.1
data = []
for age, pop in zip(ages, pops):
for year in years:
for sex in ["Male", "Female"]:
data.append({"age": age, "sex": sex, "year": year, "pop": pop})
df = pd.DataFrame(data)
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(df, ("sex", "age"), ("year",), 1, True)
query = pd.DataFrame({"year": [1990, 1990], "age": [35, 35], "sex": ["Male", "Female"]})
assert np.allclose(i(query), 388.5)
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_interpolation_called_missing_key_col():
a = [range(1990, 1995), range(25, 30), ["Male", "Female"]]
df = pd.DataFrame(list(itertools.product(*a)), columns=["year", "age", "sex"])
df["pop"] = df.age * 11.1
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(
df,
[
"sex",
],
["year", "age"],
1,
True,
)
query = pd.DataFrame({"year": [1990, 1990], "age": [35, 35]})
with pytest.raises(ValueError):
i(query)
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_interpolation_called_missing_param_col():
a = [range(1990, 1995), range(25, 30), ["Male", "Female"]]
df = pd.DataFrame(list(itertools.product(*a)), columns=["year", "age", "sex"])
df["pop"] = df.age * 11.1
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(
df,
[
"sex",
],
["year", "age"],
1,
True,
)
query = pd.DataFrame({"year": [1990, 1990], "sex": ["Male", "Female"]})
with pytest.raises(ValueError):
i(query)
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_2d_interpolation():
a = np.mgrid[0:5, 0:5][0].reshape(25)
b = np.mgrid[0:5, 0:5][1].reshape(25)
df = pd.DataFrame({"a": a, "b": b, "c": b, "d": a})
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(df, (), ("a", "b"), 1, True)
query = pd.DataFrame({"a": np.arange(4, step=0.01), "b": np.arange(4, step=0.01)})
assert np.allclose(query.b, i(query).c)
assert np.allclose(query.a, i(query).d)
@pytest.mark.skip(reason="only order 0 interpolation currently supported")
def test_interpolation_with_categorical_parameters():
a = ["one"] * 100 + ["two"] * 100
b = np.append(np.arange(100), np.arange(100))
c = np.append(np.arange(100), np.arange(100, 0, -1))
df = pd.DataFrame({"a": a, "b": b, "c": c})
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(df, ("a",), ("b",), 1, True)
query_one = pd.DataFrame({"a": "one", "b": np.arange(100, step=0.01)})
query_two = pd.DataFrame({"a": "two", "b": np.arange(100, step=0.01)})
assert np.allclose(np.arange(100, step=0.01), i(query_one).c)
assert np.allclose(np.arange(100, 0, step=-0.01), i(query_two).c)
def test_order_zero_2d():
a = np.mgrid[0:5, 0:5][0].reshape(25)
b = np.mgrid[0:5, 0:5][1].reshape(25)
df = pd.DataFrame({"a": a + 0.5, "b": b + 0.5, "c": b * 3, "garbage": ["test"] * len(a)})
df = make_bin_edges(df, "a")
df = make_bin_edges(df, "b")
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(
df,
("garbage",),
[("a", "a_left", "a_right"), ("b", "b_left", "b_right")],
order=0,
extrapolate=True,
validate=True,
)
column = np.arange(0.5, 4, step=0.011)
query = pd.DataFrame({"a": column, "b": column, "garbage": ["test"] * (len(column))})
assert np.allclose(query.b.astype(int) * 3, i(query).c)
def test_order_zero_2d_fails_on_extrapolation():
a = np.mgrid[0:5, 0:5][0].reshape(25)
b = np.mgrid[0:5, 0:5][1].reshape(25)
df = pd.DataFrame({"a": a + 0.5, "b": b + 0.5, "c": b * 3, "garbage": ["test"] * len(a)})
df = make_bin_edges(df, "a")
df = make_bin_edges(df, "b")
df = df.sample(frac=1) # Shuffle table to assure interpolation works given unsorted input
i = Interpolation(
df,
("garbage",),
[("a", "a_left", "a_right"), ("b", "b_left", "b_right")],
order=0,
extrapolate=False,
validate=True,
)
column = np.arange(4, step=0.011)
query = pd.DataFrame({"a": column, "b": column, "garbage": ["test"] * (len(column))})
with pytest.raises(ValueError) as error:
i(query)
message = error.value.args[0]
assert "Extrapolation" in message and "a" in message
def test_order_zero_1d_no_extrapolation():
s = pd.Series({0: 0, 1: 1}).reset_index()
s = make_bin_edges(s, "index")
f = Interpolation(
s,
tuple(),
[["index", "index_left", "index_right"]],
order=0,
extrapolate=False,
validate=True,
)
assert f(pd.DataFrame({"index": [0]}))[0][0] == 0, "should be precise at index values"
assert f(pd.DataFrame({"index": [0.999]}))[0][0] == 1
with pytest.raises(ValueError) as error:
f(pd.DataFrame({"index": [1]}))
message = error.value.args[0]
assert "Extrapolation" in message and "index" in message
def test_order_zero_1d_constant_extrapolation():
s = pd.Series({0: 0, 1: 1}).reset_index()
s = make_bin_edges(s, "index")
f = Interpolation(
s,
tuple(),
[["index", "index_left", "index_right"]],
order=0,
extrapolate=True,
validate=True,
)
assert f(pd.DataFrame({"index": [1]}))[0][0] == 1
assert (
f(pd.DataFrame({"index": [2]}))[0][0] == 1
), "should be constant extrapolation outside of input range"
assert f(pd.DataFrame({"index": [-1]}))[0][0] == 0
def test_validate_parameters__empty_data():
with pytest.raises(ValueError) as error:
validate_parameters(
pd.DataFrame(
columns=["age_left", "age_right", "sex", "year_left", "year_right", "value"]
),
["sex"],
[("age", "age_left", "age_right"), ["year", "year_left", "year_right"]],
)
message = error.value.args[0]
assert "empty" in message
def test_check_data_complete_gaps():
data = pd.DataFrame(
{
"year_start": [1990, 1990, 1995, 1995],
"year_end": [1995, 1995, 2000, 2000],
"age_start": [16, 10, 10, 16],
"age_end": [20, 15, 15, 20],
}
)
with pytest.raises(NotImplementedError) as error:
check_data_complete(
data, [("year", "year_start", "year_end"), ["age", "age_start", "age_end"]]
)
message = error.value.args[0]
assert "age_start" in message and "age_end" in message
def test_check_data_complete_overlap():
data = pd.DataFrame(
{
"year_start": [1995, 1995, 2000, 2005, 2010],
"year_end": [2000, 2000, 2005, 2010, 2015],
}
)
with pytest.raises(ValueError) as error:
check_data_complete(data, [("year", "year_start", "year_end")])
message = error.value.args[0]
assert "year_start" in message and "year_end" in message
def test_check_data_missing_combos():
data = pd.DataFrame(
{
"year_start": [1990, 1990, 1995],
"year_end": [1995, 1995, 2000],
"age_start": [10, 15, 10],
"age_end": [15, 20, 15],
}
)
with pytest.raises(ValueError) as error:
check_data_complete(
data, [["year", "year_start", "year_end"], ("age", "age_start", "age_end")]
)
message = error.value.args[0]
assert "combination" in message
def test_order0interp():
data = pd.DataFrame(
{
"year_start": [1990, 1990, 1990, 1990, 1995, 1995, 1995, 1995],
"year_end": [1995, 1995, 1995, 1995, 2000, 2000, 2000, 2000],
"age_start": [15, 10, 10, 15, 10, 10, 15, 15],
"age_end": [20, 15, 15, 20, 15, 15, 20, 20],
"height_start": [140, 160, 140, 160, 140, 160, 140, 160],
"height_end": [160, 180, 160, 180, 160, 180, 160, 180],
"value": [5, 3, 1, 7, 8, 6, 4, 2],
}
)
interp = Order0Interp(
data,
[
("age", "age_start", "age_end"),
("year", "year_start", "year_end"),
("height", "height_start", "height_end"),
],
["value"],
True,
True,
)
interpolants = pd.DataFrame(
{
"age": [12, 17, 8, 24, 12],
"year": [1992, 1998, 1985, 1992, 1992],
"height": [160, 145, 140, 179, 160],
}
)
result = interp(interpolants)
assert result.equals(pd.DataFrame({"value": [3, 4, 1, 7, 3]}))
def test_order_zero_1d_with_key_column():
data = pd.DataFrame(
{
"year_start": [1990, 1990, 1995, 1995],
"year_end": [1995, 1995, 2000, 2000],
"sex": ["Male", "Female", "Male", "Female"],
"value_1": [10, 7, 2, 12],
"value_2": [1200, 1350, 1476, 1046],
}
)
i = Interpolation(
data,
[
"sex",
],
[
("year", "year_start", "year_end"),
],
0,
True,
True,
)
query = pd.DataFrame(
{
"year": [
1992,
1993,
],
"sex": ["Male", "Female"],
}
)
expected_result = pd.DataFrame({"value_1": [10.0, 7.0], "value_2": [1200.0, 1350.0]})
assert i(query).equals(expected_result)
def test_order_zero_non_numeric_values():
data = pd.DataFrame(
{
"year_start": [1990, 1990],
"year_end": [1995, 1995],
"age_start": [
15,
24,
],
"age_end": [24, 30],
"value_1": ["blue", "red"],
}
)
i = Interpolation(
data,
tuple(),
[("year", "year_start", "year_end"), ("age", "age_start", "age_end")],
0,
True,
True,
)
query = pd.DataFrame(
{
"year": [1990, 1990],
"age": [
15,
24,
],
},
index=[1, 0],
)
expected_result = pd.DataFrame({"value_1": ["blue", "red"]}, index=[1, 0])
assert i(query).equals(expected_result)
def test_order_zero_3d_with_key_col():
data = pd.DataFrame(
{
"year_start": [1990, 1990, 1990, 1990, 1995, 1995, 1995, 1995] * 2,
"year_end": [1995, 1995, 1995, 1995, 2000, 2000, 2000, 2000] * 2,
"age_start": [15, 10, 10, 15, 10, 10, 15, 15] * 2,
"age_end": [20, 15, 15, 20, 15, 15, 20, 20] * 2,
"height_start": [140, 160, 140, 160, 140, 160, 140, 160] * 2,
"height_end": [160, 180, 160, 180, 160, 180, 160, 180] * 2,
"sex": ["Male"] * 8 + ["Female"] * 8,
"value": [5, 3, 1, 7, 8, 6, 4, 2, 6, 4, 2, 8, 9, 7, 5, 3],
}
)
interp = Interpolation(
data,
("sex",),
[
("age", "age_start", "age_end"),
("year", "year_start", "year_end"),
("height", "height_start", "height_end"),
],
0,
True,
True,
)
interpolants = pd.DataFrame(
{
"age": [12, 17, 8, 24, 12],
"year": [1992, 1998, 1985, 1992, 1992],
"height": [160, 145, 140, 185, 160],
"sex": ["Male", "Female", "Female", "Male", "Male"],
},
index=[10, 4, 7, 0, 9],
)
result = interp(interpolants)
assert result.equals(
pd.DataFrame({"value": [3.0, 5.0, 2.0, 7.0, 3.0]}, index=[10, 4, 7, 0, 9])
)
def test_order_zero_diff_bin_sizes():
data = pd.DataFrame(
{
"year_start": [
1990,
1995,
1996,
2005,
2005.5,
],
"year_end": [1995, 1996, 2005, 2005.5, 2010],
"value": [1, 5, 2.3, 6, 100],
}
)
i = Interpolation(data, tuple(), [("year", "year_start", "year_end")], 0, False, True)
query = pd.DataFrame({"year": [2007, 1990, 2005.4, 1994, 2004, 1995, 2002, 1995.5, 1996]})
expected_result = pd.DataFrame({"value": [100, 1, 6, 1, 2.3, 5, 2.3, 5, 2.3]})
assert i(query).equals(expected_result)
def test_order_zero_given_call_column():
data = pd.DataFrame(
{
"year_start": [
1990,
1995,
1996,
2005,
2005.5,
],
"year_end": [1995, 1996, 2005, 2005.5, 2010],
"year": [1992.5, 1995.5, 2000, 2005.25, 2007.75],
"value": [1, 5, 2.3, 6, 100],
}
)
i = Interpolation(data, tuple(), [("year", "year_start", "year_end")], 0, False, True)
query = pd.DataFrame({"year": [2007, 1990, 2005.4, 1994, 2004, 1995, 2002, 1995.5, 1996]})
expected_result = pd.DataFrame({"value": [100, 1, 6, 1, 2.3, 5, 2.3, 5, 2.3]})
assert i(query).equals(expected_result)
@pytest.mark.parametrize("validate", [True, False])
def test_interpolation_init_validate_option_invalid_data(validate):
if validate:
with pytest.raises(
ValueError, match="You must supply non-empty data to create the interpolation."
):
i = Interpolation(pd.DataFrame(), [], [], 0, True, validate)
else:
i = Interpolation(pd.DataFrame(), [], [], 0, True, validate)
@pytest.mark.parametrize("validate", [True, False])
def test_interpolation_init_validate_option_valid_data(validate):
s = pd.Series({0: 0, 1: 1}).reset_index()
s = make_bin_edges(s, "index")
i = Interpolation(s, tuple(), [["index", "index_left", "index_right"]], 0, True, validate)
@pytest.mark.parametrize("validate", [True, False])
def test_interpolation_call_validate_option_invalid_data(validate):
s = pd.Series({0: 0, 1: 1}).reset_index()
s = make_bin_edges(s, "index")
i = Interpolation(s, tuple(), [["index", "index_left", "index_right"]], 0, True, validate)
if validate:
with pytest.raises(
TypeError, match=r"Interpolations can only be called on pandas.DataFrames.*"
):
result = i(1)
else:
with pytest.raises(AttributeError):
result = i(1)
@pytest.mark.parametrize("validate", [True, False])
def test_interpolation_call_validate_option_valid_data(validate):
data = pd.DataFrame(
{
"year_start": [
1990,
1995,
1996,
2005,
2005.5,
],
"year_end": [1995, 1996, 2005, 2005.5, 2010],
"value": [1, 5, 2.3, 6, 100],
}
)
i = Interpolation(data, tuple(), [("year", "year_start", "year_end")], 0, False, validate)
query = pd.DataFrame({"year": [2007, 1990, 2005.4, 1994, 2004, 1995, 2002, 1995.5, 1996]})
result = i(query)
@pytest.mark.parametrize("validate", [True, False])
def test_order0interp_validate_option_invalid_data(validate):
data = pd.DataFrame(
{
"year_start": [1995, 1995, 2000, 2005, 2010],
"year_end": [2000, 2000, 2005, 2010, 2015],
}
)
if validate:
with pytest.raises(ValueError) as error:
interp = Order0Interp(
data, [("year", "year_start", "year_end")], [], True, validate
)
message = error.value.args[0]
assert "year_start" in message and "year_end" in message
else:
interp = Order0Interp(data, [("year", "year_start", "year_end")], [], True, validate)
@pytest.mark.parametrize("validate", [True, False])
def test_order0interp_validate_option_valid_data(validate):
data = pd.DataFrame(
{"year_start": [1990, 1995], "year_end": [1995, 2000], "value": [5, 3]}
)
interp = Order0Interp(
data, [("year", "year_start", "year_end")], ["value"], True, validate
)
| ihmeuw/vivarium | tests/test_interpolation.py | Python | bsd-3-clause | 18,136 |
import bpy
from ..common import *
from .widgets import *
from .. import tools
class RevoltLightPanel(bpy.types.Panel):
bl_label = "Light and Shadow"
bl_space_type = "VIEW_3D"
bl_region_type = "TOOLS"
bl_context = "objectmode"
bl_category = "Re-Volt"
bl_options = {"DEFAULT_CLOSED"}
@classmethod
def poll(self, context):
return context.object and len(context.selected_objects) >= 1 and context.object.type == "MESH"
def draw_header(self, context):
self.layout.label("", icon="RENDER_STILL")
def draw(self, context):
view = context.space_data
obj = context.object
props = context.scene.revolt
# Warns if texture mode is not enabled
widget_texture_mode(self)
if obj and obj.select:
# Checks if the object has a vertex color layer
if widget_vertex_color_channel(self, obj):
pass
else:
# Light orientation selection
box = self.layout.box()
box.label(text="Shade Object")
row = box.row()
row.prop(props, "light_orientation", text="Orientation")
if props.light_orientation == "X":
dirs = ["Left", "Right"]
if props.light_orientation == "Y":
dirs = ["Front", "Back"]
if props.light_orientation == "Z":
dirs = ["Top", "Bottom"]
# Headings
row = box.row()
row.label(text="Direction")
row.label(text="Light")
row.label(text="Intensity")
# Settings for the first light
row = box.row(align=True)
row.label(text=dirs[0])
row.prop(props, "light1", text="")
row.prop(props, "light_intensity1", text="")
# Settings for the second light
row = box.row(align=True)
row.label(text=dirs[1])
row.prop(props, "light2", text="")
row.prop(props, "light_intensity2", text="")
# Bake button
row = box.row()
row.operator("lighttools.bakevertex",
text="Generate Shading",
icon="LIGHTPAINT")
# Shadow tool
box = self.layout.box()
box.label(text="Generate Shadow Texture")
row = box.row()
row.prop(props, "shadow_method")
col = box.column(align=True)
col.prop(props, "shadow_quality")
col.prop(props, "shadow_softness")
col.prop(props, "shadow_resolution")
row = box.row()
row.operator("lighttools.bakeshadow",
text="Generate Shadow",
icon="LAMP_SPOT")
row = box.row()
row.prop(props, "shadow_table", text="Table")
# Batch baking tool
box = self.layout.box()
box.label(text="Batch Bake Light")
box.prop(props, "batch_bake_model_rgb")
box.prop(props, "batch_bake_model_env")
box.operator("helpers.batch_bake_model")
class ButtonBakeShadow(bpy.types.Operator):
bl_idname = "lighttools.bakeshadow"
bl_label = "Bake Shadow"
bl_description = "Creates a shadow plane beneath the selected object"
def execute(self, context):
tools.bake_shadow(self, context)
return{"FINISHED"}
class ButtonBakeLightToVertex(bpy.types.Operator):
bl_idname = "lighttools.bakevertex"
bl_label = "Bake light"
bl_description = "Bakes the light to the active vertex color layer"
def execute(self, context):
tools.bake_vertex(self, context)
return{"FINISHED"} | Yethiel/re-volt-addon | io_revolt/ui/light.py | Python | gpl-3.0 | 3,842 |
#!/usr/bin/python2.6
import sys, string, os, time, fnmatch, imgFG, markup, re
from markup import oneliner as o
from numpy import *
rootDir = 'html/'
pngDir = 'png/'
pathwayNameDict = {}
entityDict = {}
entityFile = {}
imgFG.printPDF = True
def getPathwayName(pid):
pid = pid.split('_')
if len(pid) != 2:
return "N/A"
pid = pid[1]
pid = re.sub("\.","", pid)
try:
name = pathwayNameDict[pid]
except:
name = "N/A"
return name
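# Hedged usage sketch (hypothetical ids, not called anywhere in this script):
# getPathwayName() strips the 'pid_' prefix and any dots from the id, then looks the
# remainder up in pathwayNameDict, falling back to "N/A" for anything unparseable.
def _demo_get_pathway_name():
    pathwayNameDict['200114'] = 'example pathway' # hypothetical entry
    assert getPathwayName('pid_200114') == 'example pathway'
    assert getPathwayName('not-a-pid') == 'N/A'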
def initEntityDict(file_name):
inFile = open(file_name)
lineCount = 0
for line in inFile:
lineCount+=1
data = line[:-1].split('\t')
if len(data) == 2:
type = data[0]
name = data[1]
if name in entityDict:
if entityDict[name] != type and file_name == entityFile[name]:
print "on line ", lineCount, name, "cannot be assigned ",type, "when it is", entityDict[name] , "in", file_name , entityFile[name]
assert(entityDict[name] == type)
elif entityDict[name] != type:
if type != 'protein' and entityFile[name] == 'protein':
print "WARNING", lineCount, name, "has multiple types ",type, "and", entityDict[name] , "in", file_name , entityFile[name]
type = 'protein'
entityDict[name] = type
entityFile[name] = file_name
inFile.close()
def initPathwayNameDict(path_file="pathway_pids.tab"):
inFile = open(path_file)
for line in inFile:
data = line[:-1].split('\t')
pid = data[0]
name = data[1]
pathwayNameDict[pid] = name
inFile.close()
def getFilesMatching(baseDir, patterns):
list = []
for root, dirs, files in os.walk(baseDir):
for file in files:
ptr = os.path.join(root, file)
for pattern in patterns:
if fnmatch.fnmatch(ptr, pattern):
list.append(ptr)
return list
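# Hedged sketch (hypothetical paths, not used by the pipeline): getFilesMatching()
# walks baseDir recursively and keeps any path that matches at least one fnmatch pattern.
def _demo_get_files_matching():
    return getFilesMatching('.', ['*.tab', '*.spf'])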
def writePageToFile(page, fname):
outFile = open(fname, 'w')
outFile.write(str(page))
outFile.close()
def initializePage(t, h, sort_list = "[[9,1]]"):
currentTime = time.localtime()
dateTime = str(currentTime[1]) + '/' + str(currentTime[2]) + '/' + str(currentTime[0]) + " "
dateTime += str(currentTime[3]) + ":" + str(currentTime[4]) + ":" + str(currentTime[5])
csses = "style.css"
tsStr = '\n$(document).ready(function()\n'
tsStr += ' {\n'
tsStr += ' $("table").tablesorter({\n'
tsStr += ' // sort on the tenth column , order desc \n'
tsStr += ' sortList: '+sort_list+' \n'
tsStr += ' }); \n'
tsStr += ' }\n'
tsStr += ');\n'
scripts = [('js/jquery-latest.js',['javascript','']),
('js/jquery.tablesorter.min.js',['javascript','']),
('js/jquery.metadata.js',['javascript','']),
('',['javascript',tsStr])]
page = markup.page()
page.init(title = t,
header = h,
script=scripts,
css = (csses, 'print, projection, screen'),
footer = "Last modified on " + dateTime)
return page
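# Hedged sketch (not invoked): sort_list follows the jQuery tablesorter convention
# [[column, direction]] with 0 = ascending and 1 = descending, so "[[0,0]]" sorts the
# first column ascending.
def _demo_initialize_page():
    return initializePage(t='Demo report', h='Demo header', sort_list='[[0,0]]')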
def putSummaryTable(p, b, data, id):
labels = data["sample"]["labels"]
p.table(border=b, id=id, class_='tablesorter')
p.thead()
p.tr()
p.th("Entity")
p.th(labels, class_="{sorter:'digit'}")
p.tr.close()
p.thead.close()
p.tbody()
for d in data["sample"]:
if d == "labels":
continue
vals = data["sample"][d]
p.tr()
p.td(d)
tmp = [round(v, 3) for v in vals]
p.td(tmp)
p.tr.close()
p.tbody.close()
p.table.close()
def getPathwayByFilename(f):
i = f.find("pid")
if i == -1:
print "string 'pid' not found in file name", f
sys.exit(0)
tmp = f[i:-3].split('_')
pid = tmp[0] + '_' + tmp[1]
pid = re.sub("\.","", pid)
print "pid:",pid
return pid, getPathwayName(pid)
def summarizePathway(samples, data, entitySummary):
sampleIndex = []
nwIndex = []
naIndex = []
for i in range(len(samples)):
s = samples[i]
if s.startswith("nw_"):
nwIndex.append(i)
elif s.startswith("na_"):
naIndex.append(i)
else:
sampleIndex.append(i)
totalOutliers = 0
totalActivity = 0
count = 0
geneCount = 0
for d in entitySummary["sample"]:
if d == "labels":
continue
vals = entitySummary["sample"][d]
totalOutliers += vals[6]
try:
totalActivity += vals[7]
except:
print "error: no activity for ",d
sys.exit(2)
totalActivity += 0
try:
if entityDict[d] == 'protein':
geneCount += 1
except:
pass
count += 1
    if geneCount > 0:
        avgOutliers = totalOutliers / geneCount
        normalizedActivity = 100 * totalActivity / geneCount
    else:
        avgOutliers = 0
        normalizedActivity = 0
    print "entities", count, "genes", geneCount
minMean = 1000
maxMean = -1000
minMeanNw = 1000
maxMeanNw = -1000
minMeanNa = 1000
maxMeanNa = -1000
for d in data:
vals = data[d]
tmp = [vals[i] for i in sampleIndex]
m = mean(tmp)
if m < minMean:
minMean = m
elif m > maxMean:
maxMean = m
tmp = [vals[i] for i in nwIndex]
m = mean(tmp)
if m < minMeanNw:
minMeanNw = m
elif m > maxMeanNw:
maxMeanNw = m
tmp = [vals[i] for i in naIndex]
m = mean(tmp)
if m < minMeanNa:
minMeanNa = m
elif m > maxMeanNa:
maxMeanNa = m
summary = {}
summary["Avg Num Perturbations"] = avgOutliers
summary["Total Perturbations"] = totalOutliers
summary["Num Genes"] = geneCount
summary["Min Mean Truth"] = minMean
summary["Max Mean Truth"] = maxMean
summary["Min Mean Any"] = minMeanNa
summary["Max Mean Any"] = maxMeanNa
summary["Normalized Activity"] = 100 * totalActivity / geneCount
print "summary Normalized Activity", 100 * totalActivity / geneCount
summary["order"] = ("Avg Num Perturbations", "Total Perturbations",
"Num Genes",
"Min Mean Truth", "Max Mean Truth",
"Min Mean Any", "Max Mean Any", "Normalized Activity")
return summary
def fileData(fname):
inFile = open(fname)
line = inFile.readline()
header = line[:-1].split('\t')
sample_names = header[1:]
fData = {}
for line in inFile:
data = line[:-1].split('\t')
name = data[0]
data = data[1:]
if len(name.split("__")) > 1:
continue
try:
vals = [float(d) for d in data]
fData[name] = vals
except:
continue
return sample_names, fData
def createResultsPage(f, parametric, uniqueName):
samples, data = fileData(f)
pid, pathwayName = getPathwayByFilename(f)
print "pathway:", pathwayName
if parametric:
imgFilename = rootDir + pngDir + uniqueName + '_' + pid + "_p_summary.png"
else:
imgFilename = rootDir + pngDir + uniqueName + '_' + pid + "_np_summary.png"
imgSize = (12,5)
pathwayName, entitySummary, pngFile = imgFG.createPlotFromData(pathwayName, imgSize,
imgFilename, parametric,
samples, data)
#print "pathway:", pathwayName, entitySummary['sample']
print "pathway:", pathwayName
basePNG = os.path.basename(pngFile)
page = initializePage(t = pathwayName + " -- " + uniqueName,
h = "", sort_list = "[[8,1]]")
page.img(src=pngDir+basePNG, alt="Summary Plot")
page.p("Result table")
putSummaryTable(p=page, b="1", data=entitySummary, id="result_table")
fname = basePNG[:-4] + ".html"
writePageToFile(page, rootDir + fname)
summary = summarizePathway(samples, data, entitySummary)
return fname, pathwayName, summary
def putResultsTable(p, b, data, id):
r = data[0]
summaryVals = r[2]
header = summaryVals["order"]
p.table(border=b, id=id, class_='tablesorter')
p.thead()
p.tr()
p.th("Image")
p.th("Name")
p.th(header, class_="{sorter:'digit'}")
p.tr.close()
p.thead.close()
p.tbody()
rowCount = 0
rowSum = [0 for h in header]
for r in data:
htmlFile = r[0]
pathwayName = r[1]
summaryVals = r[2]
p.tr()
base = os.path.basename(htmlFile)
p.td(o.a(o.img(src = pngDir + base[:-5] + ".png", width=100), href=base))
p.td(o.a(pathwayName, href=base))
vals = [round(summaryVals[h],3) for h in header]
p.td(vals)
i = 0
for h in header:
rowSum[i] += summaryVals[h]
i += 1
p.tr.close()
p.tbody.close()
p.tbody()
p.tr()
p.td('')
p.td('Total')
p.td(rowSum)
p.tr.close()
p.tbody.close()
p.table.close()
def createIndexPage(pResults, npResults):
page = initializePage(t = "Factor Graph Results",
h = "")
page.p("Parametric Results")
putResultsTable(p=page, b="1", data=pResults, id="result_table1")
#page.p("Non-Parametric Results")
#putResultsTable(p=page, b="1", data=npResults, id="result_table2")
writePageToFile(page, rootDir + "index.html")
def createTopPathwaysPage(pResults):
page = initializePage(t = "Per-pathway summoray of activity", h = "")
page.p("Per-pathway summary of activity")
page.p('<a href="index.html">Click here for all pathways</a>')
putResultsTable(p=page, b="1", data=pResults[0:10], id="results")
page.p('<a href="index.html">Click here for all pathways</a>')
writePageToFile(page, rootDir + "summary.html")
def main(directory, pathway_directory):
# create all html pages for each individual run, including images
# collect objects containing html page, whatever pathway-level summary info (in 2d dict)
# use objects to generate root level index.html
pathways = getFilesMatching(pathway_directory, ["*pid*tab","*pid*spf"])
for fname in pathways:
initEntityDict(fname)
print "reading ipls", directory
files = getFilesMatching(directory, ["*out"])
pResults = []
parametric = True
datasetName = os.path.basename(directory.strip('/'))
for f in files:
if f == "merged_transpose_pid_example.out":
continue
print "File: "+f, "dataset:", datasetName
r = createResultsPage(f, parametric, datasetName)
print "#triple", r[0], r[1], r[2]
pResults.append(r)
npResults = []
#parametric = False
#for f in files:
# if f == "merged_transpose_pid_example.out":
# continue
# r = createResultsPage(f, parametric, directory.strip('/'))
# npResults.append(r)
#pResults.sort(key=lambda x: -x[2]["Avg Num Perturbations"])
pResults.sort(key=lambda x: -x[2]["Normalized Activity"])
createIndexPage(pResults, npResults)
createTopPathwaysPage(pResults)
def usage():
print "usage: python htmlFG.py ipl_directory pathway_directory pathway_pids.tab"
print " ipl_directory contains one IPL matrix per pathway"
print " pathway_directory contains one spf file per pathway"
print " pathway_pids.tab is a 3 col file with list of pathways in pathway_directory: pid, description, source"
print " Note: pathway names must start with pid_ and end with _pathway.tab"
print
sys.exit(0)
if __name__ == "__main__":
if len(sys.argv) != 4:
usage()
directory = sys.argv[1]
pathways = sys.argv[2]
path_list = sys.argv[3]
initPathwayNameDict(path_file=path_list)
#import pdb ; pdb.set_trace()
main(directory, pathways)
| UCSC-MedBook/MedBook_ | tools/old-external-tools/shazam/old.htmlFG.py | Python | bsd-3-clause | 12,143 |
# Generated by Django 2.0 on 2018-03-23 11:19
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('backend', '0054_auto_20180322_1027'),
]
operations = [
migrations.CreateModel(
name='WeizhanItemView',
fields=[
('viewId', models.BigIntegerField(primary_key=True, serialize=False)),
('itemType', models.CharField(max_length=20)),
('ua', models.CharField(max_length=255)),
('itemId', models.BigIntegerField()),
('shareUserId', models.BigIntegerField()),
('partnerId', models.IntegerField()),
('time', models.DateTimeField()),
],
options={
'managed': False,
'db_table': 'datasystem_WeizhanItemView',
},
),
migrations.CreateModel(
name='ArticleDailyInfo',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('stat_date', models.CharField(max_length=15, verbose_name='统计日期')),
('item_id', models.BigIntegerField(verbose_name='分发的文章')),
('majia_id', models.BigIntegerField(verbose_name='分发马甲')),
('majia_type', models.IntegerField(default=0)),
('pv', models.IntegerField(default=0)),
('uv', models.IntegerField(default=0)),
('reshare', models.IntegerField(default=0)),
('down', models.IntegerField(default=0, verbose_name='下载页')),
('mobile_pv', models.IntegerField(default=0)),
('query_time', models.DateTimeField(default=0)),
('updated_at', models.DateTimeField(auto_now=True)),
('created_at', models.DateTimeField(auto_now_add=True)),
('app', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='backend.App')),
],
),
migrations.CreateModel(
name='RuntimeData',
fields=[
('name', models.CharField(max_length=20, primary_key=True, serialize=False)),
('value', models.CharField(max_length=255)),
],
),
]
| ourbest/sns_app | backend/migrations/0055_articledailyinfo_runtimedata_weizhanitemview.py | Python | lgpl-3.0 | 2,389 |
import logging
from mxcube3 import socketio
from mxcube3 import app as mxcube
from mxcube3.routes import Utils
from mxcube3.routes import qutils
from mxcube3.remote_access import safe_emit
from sample_changer.GenericSampleChanger import SampleChangerState
def last_queue_node():
node = mxcube.queue.queue_hwobj._current_queue_entries[-1].get_data_model()
if 'refdc' in node.get_name(): # link to the corresponding char
parent = node.get_parent() # same parent as char
node = parent._children[0] # but the rfdc children not here @#@#!!!
return qutils.node_index(node)
collect_signals = ['collectStarted', 'testSignal', 'warning']
collect_osc_signals = ['collectOscillationStarted', 'collectOscillationFailed', 'collectOscillationFinished']
beam_signals = ['beamPosChanged', 'beamInfoChanged']
queueSignals = ['queue_execution_finished', 'queue_paused', 'queue_stopped', 'testSignal', 'warning'] # 'centringAllowed',
microdiffSignals = ['centringInvalid', 'newAutomaticCentringPoint', 'centringStarted','centringAccepted','centringMoving',\
'centringFailed', 'centringSuccessful', 'progressMessage', 'centringSnapshots', 'warning',
'minidiffPhaseChanged', 'minidiffSampleIsLoadedChanged',\
'zoomMotorPredefinedPositionChanged', 'minidiffTransferModeChanged']
okSignals = ['Successful', 'Finished', 'finished', 'Ended', 'Accepted']
failedSignals = ['Failed', 'Invalid']
progressSignals = ['Started', 'Ready', 'paused', 'stopped',
'Moving', 'progress', 'centringAllowed']
warnSignals = ['warning']
error_signals = {}
logging_signals = {}
samplechanger_signals = {}
moveables_signals = {}
task_signals = { # missing egyscan, xrf, etc...
'collectStarted': 'Data collection has started',
'collectOscillationStarted': 'Data collection oscillation has started',
    'collectOscillationFailed': 'Data collection oscillation has failed',
    'collectOscillationFinished': 'Data collection oscillation has finished',
'collectEnded': 'Data collection has finished',
'warning': 'Data collection finished with a warning',
'collect_finished': 'Data collection has finished'
}
motor_signals = {
'actuatorStateChanged': 'actuatorStateChanged',
'minidiffPhaseChanged': 'minidiffPhaseChanged',
'minidiffTransferModeChanged': 'minidiffTransferModeChanged',
'minidiffSampleIsLoadedChanged': 'minidiffSampleIsLoadedChanged',
'zoomMotorPredefinedPositionChanged': 'zoomMotorPredefinedPositionChanged',
}
mach_info_signals = {
'mach_info_changed': 'mach_info_changed'
}
def get_signal_result(signal):
result = 0
for sig in progressSignals:
if sig in signal:
result = 1
for sig in okSignals:
if sig in signal:
result = 2
for sig in failedSignals:
if sig in signal:
result = 3
for sig in warnSignals:
if sig in signal:
result = 4
return result
def get_signal_progress(signal):
result = 0
for sig in progressSignals:
if sig in signal:
result = 50
for sig in okSignals:
if sig in signal:
result = 100
for sig in failedSignals:
if sig in signal:
result = 100
for sig in warnSignals:
if sig in signal:
result = 100
return result
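# Hedged illustration (not wired into any callback): the two helpers above classify a
# signal purely by substring, e.g. names containing 'Started' count as in-progress and
# names containing 'Failed' count as failures.
def _demo_signal_classification():
    assert get_signal_result('collectOscillationStarted') == 1
    assert get_signal_result('collectOscillationFailed') == 3
    assert get_signal_progress('collectOscillationFinished') == 100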
def sc_state_changed(*args):
new_state = args[0]
old_state = None
if len(args) == 2:
old_state = args[1]
location = ''
if mxcube.sample_changer.getLoadedSample():
location = mxcube.sample_changer.getLoadedSample().getAddress()
if location:
if new_state == SampleChangerState.Moving and old_state == None:
msg = {'signal': 'loadingSample',
'location': location,
'message': 'Please wait, operating sample changer'}
socketio.emit('sc', msg, namespace='/hwr')
elif new_state == SampleChangerState.Unloading and location:
msg = {'signal': 'loadingSample',
'location': location,
'message': 'Please wait, Unloading sample %s' % location}
socketio.emit('sc', msg, namespace='/hwr')
elif new_state == SampleChangerState.Ready and old_state == SampleChangerState.Loading:
msg = {'signal': 'loadedSample',
'location': location,
'message': 'Please wait, Loaded sample %s' % location}
socketio.emit('sc', msg, namespace='/hwr')
elif new_state == SampleChangerState.Ready and old_state == None:
msg = {'signal': 'loadReady',
'location': location}
socketio.emit('sc', msg, namespace='/hwr')
def centring_started(method, *args):
usr_msg = 'Using 3-click centring, please click the position on '
usr_msg += 'the sample you would like to center (three times)'
msg = {'signal': 'SampleCentringRequest',
'message': usr_msg}
socketio.emit('sample_centring', msg, namespace='/hwr')
def queue_execution_started(entry):
msg = {'Signal': qutils.queue_exec_state(),
'Message': 'Queue execution started',
'State': 1}
safe_emit('queue', msg, namespace='/hwr')
def queue_execution_finished(entry):
msg = {'Signal': qutils.queue_exec_state(),
'Message': 'Queue execution stopped',
'State': 1}
safe_emit('queue', msg, namespace='/hwr')
def queue_execution_failed(entry):
msg = {'Signal': qutils.queue_exec_state(),
'Message': 'Queue execution stopped',
'State': 2}
safe_emit('queue', msg, namespace='/hwr')
def collect_oscillation_started(*args):
msg = {'Signal': 'collectOscillationStarted',
'Message': task_signals['collectOscillationStarted'],
'taskIndex': last_queue_node()['idx'] ,
'sample': last_queue_node()['sample'],
'state': get_signal_result('collectOscillationStarted'),
'progress': 0}
logging.getLogger('HWR').debug('[TASK CALLBACK] ' + str(msg))
try:
safe_emit('task', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: ' + str(msg))
def collect_oscillation_failed(owner, status, state, lims_id, osc_id, params):
msg = {'Signal': 'collectOscillationFailed',
'Message': task_signals['collectOscillationFailed'],
'taskIndex' : last_queue_node()['idx'] ,
'sample': last_queue_node()['sample'],
           'limsResultData': mxcube.rest_lims.get_dc(lims_id),
'state': get_signal_result('collectOscillationFailed'),
'progress': 100}
logging.getLogger('HWR').debug('[TASK CALLBACK] ' + str(msg))
try:
safe_emit('task', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: ' + str(msg))
def collect_oscillation_finished(owner, status, state, lims_id, osc_id, params):
qutils.enable_entry(last_queue_node()['queue_id'], False)
msg = {'Signal': 'collectOscillationFinished',
'Message': task_signals['collectOscillationFinished'],
'taskIndex': last_queue_node()['idx'] ,
'sample': last_queue_node()['sample'],
'limsResultData': mxcube.rest_lims.get_dc(lims_id),
'state': 2,
'progress': 100}
logging.getLogger('HWR').debug('[TASK CALLBACK] ' + str(msg))
try:
safe_emit('task', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: ' + str(msg))
def collect_ended(owner, success, message):
state = 2 if success else 3
msg = {'Signal': 'collectOscillationFinished',
'Message': message,
'taskIndex': last_queue_node()['idx'] ,
'sample': last_queue_node()['sample'],
'state': state,
'progress': 100}
logging.getLogger('HWR').debug('[TASK CALLBACK] ' + str(msg))
try:
safe_emit('task', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: ' + str(msg))
def task_event_callback(*args, **kwargs):
logging.getLogger('HWR').debug('[TASK CALLBACK]')
logging.getLogger("HWR").debug(kwargs)
logging.getLogger("HWR").debug(args)
msg = {'Signal': kwargs['signal'],
'Message': task_signals[kwargs['signal']],
'taskIndex': last_queue_node()['idx'] ,
'sample': last_queue_node()['sample'],
'state': get_signal_result(kwargs['signal']),
'progress': get_signal_progress(kwargs['signal'])}
logging.getLogger('HWR').debug('[TASK CALLBACK] ' + str(msg))
try:
safe_emit('task', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: ' + str(msg))
# try:
# msg = {"message": sender + ':' + signal,
# "severity": 'INFO',
# "timestamp": time.asctime(),
# "logger": 'HWR',
# "stack_trace": ''}
# socketio.emit('log_record', msg, namespace='/logging')
# except Exception:
# logging.getLogger("HWR").error('error sending message: ' + str(msg))
def motor_position_callback(motor, pos):
socketio.emit('motor_position', { 'name': motor, 'position': pos }, namespace='/hwr')
def motor_state_callback(motor, state, sender=None, **kw):
centred_positions = dict()
for pos in mxcube.diffractometer.savedCentredPos:
centred_positions.update({pos['posId']: pos})
if state == 2:
# READY
motor_position_callback(motor, sender.getPosition())
socketio.emit('motor_state', { 'name': motor, 'state': state, 'centredPositions': centred_positions }, namespace='/hwr')
def motor_event_callback(*args, **kwargs):
# logging.getLogger('HWR').debug('[MOTOR CALLBACK]')
# logging.getLogger("HWR").debug(kwargs)
# logging.getLogger("HWR").debug(args)
signal = kwargs['signal']
sender = str(kwargs['sender'].__class__).split('.')[0]
motors_info = Utils.get_centring_motors_info()
motors_info.update(Utils.get_light_state_and_intensity())
motors_info['pixelsPerMm'] = mxcube.diffractometer.get_pixels_per_mm()
aux = {}
for pos in mxcube.diffractometer.savedCentredPos:
aux.update({pos['posId']: pos})
# sending all motors position/status, and the current centred positions
msg = {'Signal': signal,
'Message': signal,
'Motors': motors_info,
'CentredPositions': aux,
'Data': args[0] if len(args) == 1 else args}
# logging.getLogger('HWR').debug('[MOTOR CALLBACK] ' + str(msg))
try:
socketio.emit('Motors', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: %s' + str(msg))
# try:
# msg = {"message": sender + ':' + signal,
# "severity": 'INFO',
# "timestamp": time.asctime(),
# "logger": 'HWR',
# "stack_trace": ''}
# socketio.emit('log_record', msg, namespace='/logging')
# except Exception:
# logging.getLogger("HWR").error('error sending message: %s' + str(msg))
def beam_changed(*args, **kwargs):
ret = {}
signal = kwargs['signal']
beam_info = mxcube.beamline.getObjectByRole("beam_info")
if beam_info is None:
logging.getLogger('HWR').error("beamInfo is not defined")
        # Response is not imported in this module and this is a signal callback, so just bail out
        return
try:
beam_info_dict = beam_info.get_beam_info()
except Exception:
beam_info_dict = dict()
ret.update({'position': beam_info.get_beam_position(),
'shape': beam_info_dict.get("shape"),
'size_x': beam_info_dict.get("size_x"),
'size_y': beam_info_dict.get("size_y")
})
msg = {'Signal': signal, 'Message': signal, 'Data': ret}
# logging.getLogger('HWR').debug('[MOTOR CALLBACK] ' + str(msg))
try:
socketio.emit('beam_changed', msg, namespace='/hwr')
except Exception:
logging.getLogger("HWR").error('error sending message: %s' + str(msg))
def mach_info_changed(values):
try:
socketio.emit("mach_info_changed", values, namespace="/hwr")
except Exception:
logging.getLogger("HWR").error('error sending message: %s' + str(msg))
| amilan/mxcube3 | mxcube3/routes/signals.py | Python | gpl-2.0 | 12,584 |
"""test_models.py"""
import json
from mongoengine.errors import NotUniqueError
from models import Dashboard, Job, Event, ApiKey
from .base_testcase import ApiTestCase
class TestApiKey(ApiTestCase):
"""Test ApiKey related stuff"""
def test_api_key_default(self):
"""test key defaulting works"""
api_key = ApiKey.objects.create(user='me')
self.assertNotEqual(api_key.api_key, '')
def test_api_key_to_dict(self):
"""test to_dict output"""
api_key = ApiKey.objects.create(user='me')
self.assertEqual(
api_key.to_dict(),
{'user': 'me', 'api_key': api_key.api_key, '_id': str(api_key.id)}
)
def test_api_key_str(self):
"""test __str__ representation"""
api_key = ApiKey.objects.create(user='me')
self.assertEqual(
str(api_key),
'{} ({})'.format(api_key.api_key, 'me')
)
class TestModelCreation(ApiTestCase):
"""Test Model Creation"""
def test_dashboard(self):
"""test creating a Dashboard model."""
dash = Dashboard.objects.create(
slug='testing',
title='Testing Dashboard'
)
self.assertEqual(str(dash), 'Testing Dashboard')
dash2 = Dashboard.objects.create(
title='Testing Again The Dashböard'
)
self.assertEqual(dash2.slug, 'testing-again-the-dashboard')
def test_job(self):
"""test creating a Job model."""
job = Job.objects.create(
slug='test-job',
title='Test Job'
)
self.assertEqual(str(job), 'Test Job')
def test_event(self):
"""test creating an Event model."""
job = Job.objects.create(slug='test-e-job', title='Test E Job')
event = Event.objects.create(
job=job,
result=0,
extra_value=42.2,
text_value='särskild'
)
self.assertEqual(str(event), 'Test E Job {}: 0 (0)'.format(event.datetimestamp))
self.assertEqual(event.extra_value, 42.2)
self.assertEqual(event.text_value, 'särskild')
def test_display_field(self):
"""test display_field property method"""
job = Job.objects.create(
slug='test-d-job',
title='Test D Job',
config={'display_field': 'value'}
)
event = Event.objects.create(
job=job,
result=0,
value=42.0,
extra_text='something else'
)
self.assertEqual(job.display_field, 'value')
self.assertEqual(event.display_field, 42.0)
self.assertEqual(str(event), 'Test D Job {}: 0 (42.0)'.format(event.datetimestamp))
class TestDelete(ApiTestCase):
"""Test Deletion Logic"""
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.job = Job.objects.create(
slug='test-job',
title='Test Job',
config={'display_field': 'value'}
)
cls.e1 = Event.objects.create(job=cls.job, result=0, value=40)
cls.e2 = Event.objects.create(job=cls.job, result=0, value=42, extra_text='särskild')
cls.e3 = Event.objects.create(job=cls.job, result=0, value=64)
cls.dashboard = Dashboard.objects.create(
slug='test-dashboard',
title='Test Dashboard'
)
cls.dashboard.jobs.append(cls.job)
cls.dashboard.save()
def test_delete(self):
"""test delete cascade"""
# dashboard deletes should not cascade
self.assertEqual(1, Job.objects.all().count())
self.dashboard.delete()
self.assertEqual(1, Job.objects.all().count())
# job deletes *should* cascade
self.assertEqual(3, Event.objects.all().count())
self.job.delete()
self.assertEqual(0, Event.objects.all().count())
class TestToDict(ApiTestCase):
"""Test `to_dict` logic on Models"""
@classmethod
def setUpClass(cls):
super().setUpClass()
cls.job = Job.objects.create(
slug='test-job',
title='Test Job',
config={'display_field': 'value'}
)
cls.e1 = Event.objects.create(job=cls.job, result=0, value=40)
cls.e2 = Event.objects.create(job=cls.job, result=0, value=42, extra_text='särskild')
cls.e3 = Event.objects.create(job=cls.job, result=0, value=64)
cls.dashboard = Dashboard.objects.create(
slug='test-dashboard',
title='Test Dashboard'
)
cls.dashboard.jobs.append(cls.job)
def test_job(self):
"""test job.to_dict() method"""
self.assertEqual(Event.objects.filter(job=self.job).count(), 3)
self.assertEqual(json.loads(self.job.to_json())['title'], 'Test Job')
job_dict = self.job.to_dict()
self.assertEqual(job_dict['events'][0]['value'], self.e3.value)
self.assertEqual(job_dict['title'], 'Test Job')
self.assertEqual(len(job_dict['events']), 3)
job_dict = self.job.to_dict(page_size=2)
self.assertEqual(len(job_dict['events']), 2)
self.assertEqual(job_dict['events'][0]['value'], self.e3.value)
job_dict = self.job.to_dict(page_size=2, page_number=2)
self.assertEqual(len(job_dict['events']), 1)
self.assertEqual(job_dict['events'][0]['value'], self.e1.value)
def test_no_job_dupes(self):
"""test that we can't create duplicate jobs"""
with self.assertRaises(NotUniqueError):
Job.objects.create(
title='Another test job',
slug='test-job', # dupe!
description='ha ha'
)
def test_dashboard(self):
"""test dashboard.to_dict() method"""
dash_dict = self.dashboard.to_dict()
self.assertEqual(dash_dict['title'], 'Test Dashboard')
self.assertEqual(len(dash_dict['jobs']), 1)
self.assertEqual(dash_dict['jobs'][0], self.job.to_dict())
def test_no_dashboard_dupes(self):
"""test that we can't create a duplicate"""
with self.assertRaises(NotUniqueError):
Dashboard.objects.create(
title='Dupe Dashboard',
slug='test-dashboard', # dupe!
description='Blah'
)
def test_event(self):
"""test event.to_dict() method"""
event_dict = Event.objects.filter(job=self.job)[1].to_dict()
self.assertEqual(len(event_dict['_id']), 24)
# TODO: resolve differences between mongomock and regular mongo
# self.assertEqual(len(event_dict['datetimestamp']), 27)
self.assertEqual(event_dict['job'], 'test-job')
self.assertEqual(event_dict['result'], 0)
self.assertEqual(event_dict['value'], 42)
self.assertEqual(event_dict['extra_text'], 'särskild')
| swilcox/badash | badash-api/tests/test_models.py | Python | mit | 6,841 |
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import base64
from keystone.common import pemutils
from keystone import tests
from six import moves
# List of 2-tuples, (pem_type, pem_header)
headers = pemutils.PEM_TYPE_TO_HEADER.items()
def make_data(size, offset=0):
return ''.join([chr(x % 255) for x in moves.range(offset, size + offset)])
def make_base64_from_data(data):
return base64.b64encode(data)
def wrap_base64(base64_text):
wrapped_text = '\n'.join([base64_text[x:x + 64]
for x in moves.range(0, len(base64_text), 64)])
wrapped_text += '\n'
return wrapped_text
def make_pem(header, data):
base64_text = make_base64_from_data(data)
wrapped_text = wrap_base64(base64_text)
result = '-----BEGIN %s-----\n' % header
result += wrapped_text
result += '-----END %s-----\n' % header
return result
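# Hedged sketch (not part of the test suite): make_pem() base64-encodes the payload,
# wraps it at 64 characters and frames it with BEGIN/END markers.
def _demo_make_pem():
    data = make_data(8)
    pem = make_pem('CERTIFICATE', data)
    assert pem.startswith('-----BEGIN CERTIFICATE-----\n')
    assert pem.endswith('-----END CERTIFICATE-----\n')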
class PEM(object):
"""PEM text and it's associated data broken out, used for testing.
"""
def __init__(self, pem_header='CERTIFICATE', pem_type='cert',
data_size=70, data_offset=0):
self.pem_header = pem_header
self.pem_type = pem_type
self.data_size = data_size
self.data_offset = data_offset
self.data = make_data(self.data_size, self.data_offset)
self.base64_text = make_base64_from_data(self.data)
self.wrapped_base64 = wrap_base64(self.base64_text)
self.pem_text = make_pem(self.pem_header, self.data)
class TestPEMParseResult(tests.TestCase):
def test_pem_types(self):
for pem_type in pemutils.pem_types:
pem_header = pemutils.PEM_TYPE_TO_HEADER[pem_type]
r = pemutils.PEMParseResult(pem_type=pem_type)
self.assertEqual(pem_type, r.pem_type)
self.assertEqual(pem_header, r.pem_header)
pem_type = 'xxx'
self.assertRaises(ValueError,
pemutils.PEMParseResult, pem_type=pem_type)
def test_pem_headers(self):
for pem_header in pemutils.pem_headers:
pem_type = pemutils.PEM_HEADER_TO_TYPE[pem_header]
r = pemutils.PEMParseResult(pem_header=pem_header)
self.assertEqual(pem_type, r.pem_type)
self.assertEqual(pem_header, r.pem_header)
pem_header = 'xxx'
self.assertRaises(ValueError,
pemutils.PEMParseResult, pem_header=pem_header)
class TestPEMParse(tests.TestCase):
def test_parse_none(self):
text = ''
text += 'bla bla\n'
text += 'yada yada yada\n'
text += 'burfl blatz bingo\n'
parse_results = pemutils.parse_pem(text)
self.assertEqual(len(parse_results), 0)
self.assertEqual(pemutils.is_pem(text), False)
def test_parse_invalid(self):
p = PEM(pem_type='xxx',
pem_header='XXX')
text = p.pem_text
self.assertRaises(ValueError,
pemutils.parse_pem, text)
def test_parse_one(self):
data_size = 70
count = len(headers)
pems = []
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
p = pems[i]
text = p.pem_text
parse_results = pemutils.parse_pem(text)
self.assertEqual(len(parse_results), 1)
r = parse_results[0]
self.assertEqual(p.pem_type, r.pem_type)
self.assertEqual(p.pem_header, r.pem_header)
self.assertEqual(p.pem_text,
text[r.pem_start:r.pem_end])
self.assertEqual(p.wrapped_base64,
text[r.base64_start:r.base64_end])
self.assertEqual(p.data, r.binary_data)
def test_parse_one_embedded(self):
p = PEM(data_offset=0)
text = ''
text += 'bla bla\n'
text += 'yada yada yada\n'
text += p.pem_text
text += 'burfl blatz bingo\n'
parse_results = pemutils.parse_pem(text)
self.assertEqual(len(parse_results), 1)
r = parse_results[0]
self.assertEqual(p.pem_type, r.pem_type)
self.assertEqual(p.pem_header, r.pem_header)
self.assertEqual(p.pem_text,
text[r.pem_start:r.pem_end])
self.assertEqual(p.wrapped_base64,
text[r.base64_start: r.base64_end])
self.assertEqual(p.data, r.binary_data)
    def test_parse_multiple(self):
data_size = 70
count = len(headers)
pems = []
text = ''
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
text += pems[i].pem_text
parse_results = pemutils.parse_pem(text)
self.assertEqual(len(parse_results), count)
for i in moves.range(count):
r = parse_results[i]
p = pems[i]
self.assertEqual(p.pem_type, r.pem_type)
self.assertEqual(p.pem_header, r.pem_header)
self.assertEqual(p.pem_text,
text[r.pem_start:r.pem_end])
self.assertEqual(p.wrapped_base64,
text[r.base64_start: r.base64_end])
self.assertEqual(p.data, r.binary_data)
    def test_parse_multiple_find_specific(self):
data_size = 70
count = len(headers)
pems = []
text = ''
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
text += pems[i].pem_text
for i in moves.range(count):
parse_results = pemutils.parse_pem(text, pem_type=headers[i][0])
self.assertEqual(len(parse_results), 1)
r = parse_results[0]
p = pems[i]
self.assertEqual(p.pem_type, r.pem_type)
self.assertEqual(p.pem_header, r.pem_header)
self.assertEqual(p.pem_text,
text[r.pem_start:r.pem_end])
self.assertEqual(p.wrapped_base64,
text[r.base64_start:r.base64_end])
self.assertEqual(p.data, r.binary_data)
    def test_parse_multiple_embedded(self):
data_size = 75
count = len(headers)
pems = []
text = ''
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
text += 'bla bla\n'
text += 'yada yada yada\n'
text += pems[i].pem_text
text += 'burfl blatz bingo\n'
parse_results = pemutils.parse_pem(text)
self.assertEqual(len(parse_results), count)
for i in moves.range(count):
r = parse_results[i]
p = pems[i]
self.assertEqual(p.pem_type, r.pem_type)
self.assertEqual(p.pem_header, r.pem_header)
self.assertEqual(p.pem_text,
text[r.pem_start:r.pem_end])
self.assertEqual(p.wrapped_base64,
text[r.base64_start:r.base64_end])
self.assertEqual(p.data, r.binary_data)
def test_get_pem_data_none(self):
text = ''
text += 'bla bla\n'
text += 'yada yada yada\n'
text += 'burfl blatz bingo\n'
data = pemutils.get_pem_data(text)
self.assertEqual(None, data)
def test_get_pem_data_invalid(self):
p = PEM(pem_type='xxx',
pem_header='XXX')
text = p.pem_text
self.assertRaises(ValueError,
pemutils.get_pem_data, text)
def test_get_pem_data(self):
data_size = 70
count = len(headers)
pems = []
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
p = pems[i]
text = p.pem_text
data = pemutils.get_pem_data(text, p.pem_type)
self.assertEqual(p.data, data)
def test_is_pem(self):
data_size = 70
count = len(headers)
pems = []
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
p = pems[i]
text = p.pem_text
self.assertTrue(pemutils.is_pem(text, pem_type=p.pem_type))
self.assertFalse(pemutils.is_pem(text,
pem_type=p.pem_type + 'xxx'))
def test_base64_to_pem(self):
data_size = 70
count = len(headers)
pems = []
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
p = pems[i]
pem = pemutils.base64_to_pem(p.base64_text, p.pem_type)
self.assertEqual(pemutils.get_pem_data(pem, p.pem_type), p.data)
def test_binary_to_pem(self):
data_size = 70
count = len(headers)
pems = []
for i in moves.range(count):
pems.append(PEM(pem_type=headers[i][0],
pem_header=headers[i][1],
data_size=data_size + i,
data_offset=i))
for i in moves.range(count):
p = pems[i]
pem = pemutils.binary_to_pem(p.data, p.pem_type)
self.assertEqual(pemutils.get_pem_data(pem, p.pem_type), p.data)
| dsiddharth/access-keys | keystone/tests/test_pemutils.py | Python | apache-2.0 | 11,150 |
import pickle
import pylab as pl
from operator import itemgetter
import scipy.io as io
import numpy as np
import sys
# next:
# implement function passing
# implement multiple variables
# implement multiple variable arrays
# implement wildcard at the end
# implement warning messages:
# - the rwchunksize
def nctypecode(dtype):
    # purpose: netcdf typecode from an array dtype (accepts a numpy dtype or its name)
    if ((dtype == np.dtype('float32')) or (dtype == 'float32')):
        return 'f'
    elif ((dtype == np.dtype('float64')) or (dtype == 'float64')):
        return 'd'
    elif ((dtype == np.dtype('int32')) or (dtype == 'int32')):
        return 'i'
    elif ((dtype == np.dtype('int64')) or (dtype == 'int64')):
        return 'l'
def ncdtype(typecode):
    # purpose: get the array dtype from a netcdf typecode
    if typecode == 'f':
        return np.dtype('float32')
    elif typecode == 'd':
        return np.dtype('float64')
    elif typecode == 'i':
        return np.dtype('int32')
    elif typecode == 'l':
        return np.dtype('int64')
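# Hedged sketch (never called): the two helpers above are meant to round-trip between
# netcdf typecodes and numpy dtypes.
def _demo_typecode_roundtrip():
    for code in ('f', 'd', 'i', 'l'):
        assert nctypecode(ncdtype(code)) == code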
# print rwicecube(fin,(1,35,52),(5,),(3,))
def rwicecube(filestream,shp,dimiterref,dimiter,dimpos,dimnoiterref,dimnoiter,icecube,vtype,vsize,voffset,rwchsize,mode):
"""
read or write data icecube from binary data and put it in an array
filestream: binary file reference
shp: shape of the filestream
dimiterref: reference to dimensions over which no slice is performed
pos: current index position of the non-sliced dimensions
"""
# e.g. shp = (200,100,50,50,20)
# dimiterref = (1,3,4)
# dimpos = (5,10,9)
# extend so that structured arrays are read at once
# dimiter = []
# dimnoiter = []
lennoiter = long(1)
# for i in range(len(shp)):
# if i in dimiterref:
# dimiter.append(shp[i])
# if dimnoiterref == None:
# dimnoiterref = []
# for i in range(len(shp)):
# if i not in dimiterref:
# dimnoiterref.append(i)
# dimnoiter.append(shp[i])
# lennoiter = lennoiter*shp[i]
# # the following is not really needed for application, but we implement it for debugging
# else:
for idimnoiterref,edimnoiterref in enumerate(dimnoiterref):
# dimnoiter.append(shp[edimnoiterref])
lennoiter = lennoiter*dimnoiter[idimnoiterref]
# print 'lennoiter',shp,dimnoiterref,dimiterref,lennoiter
fpos = 0
# e.g. fpos = (9)+ 20*(10) + 50*50*20*(5)
for idimpos,edimpos in enumerate(dimpos):
curadd = edimpos
#e.g. if edimpos == (5): curadd = 50*50*20*(5)
# exclude trivial special case of only 1 iteration step
# --> in that case fpos is just zero.
if dimiterref != [-1]:
if ((dimiterref[idimpos] + 1) < len(shp)):
for i in range(dimiterref[idimpos] + 1,len(shp)) :
curadd = curadd * shp[i]
fpos = fpos + curadd
# print 'fpos',fpos,dimiterref,dimiter,leniter
# e.g. dimnoiterref = (0,2)
# dimnoiterpos = (5,20)
# j = based on (0,2) and (5,20)
# print 'lennoiter:', lennoiter
# Initialize (for reading) or prepare (for writing) icecube array
if mode == 'read':
icecube = np.zeros((lennoiter,),dtype=vtype)*np.nan
elif mode == 'write':
# print lennoiter
# print icecube.shape, dimnoiter # should be the same
icecube = np.reshape(icecube,(lennoiter,))
# print dataout
# get the maximum size of continuous data chunks for more efficient IO
# # zou goed moeten zijn.
# found = False
# idimnoiterref = 0
# while ((found == False) & (idimnoiterref < len(dimnoiterref))):
# cont = True
# for ishp in range(len(shp) - (len(dimnoiterref) - idimnoiterref),len(shp)):
# if ishp in dimnoiterref[idimnoiterref:]:
# cont == True
# else:
# cont == False
# if cont == True: found = idimnoiterref
# idimnoiterref = idimnoiterref+1
# print 'found',found,dimnoiterref[found]
# if found != False:
# for ishp in range(dimnoiterref[found],len(shp)):
# rwchunksize = rwchunksize * shp[ishp]
# rwchunksizeout = rwchunksizeout * shpout[ishp]
# print 'rwchunksize',rwchunksize
# while dimnoiterref[idimnoiterref] in range(ishp,len(ishp)):
# # print ishp,idimnoiterref,dimnoiterref[idimnoiterref],shp[ishp],dimnoiter[idimnoiterref]
# rwchunksize = rwchunksize * shp[ishp]
# # # or
# # rwchunksize = rwchunksize * dimnoiter[idimnoiterref]
# idimnoiterref = idimnoiterref - 1
# ishp = ishp -1
# # get the maximum size of continuous data chunks for more efficient IO
# rwchunksize = 1
# idimnoiterref = len(dimnoiterref) - 1
# ishp = len(shp)-1
# while dimnoiterref[idimnoiterref] in range(ishp,len(ishp)):
# # print ishp,idimnoiterref,dimnoiterref[idimnoiterref],shp[ishp],dimnoiter[idimnoiterref]
# rwchunksize = rwchunksize * shp[ishp]
# # # or
# # rwchunksize = rwchunksize * dimnoiter[idimnoiterref]
# # idimnoiterref = idimnoiterref - 1
# ishp = ishp -1
# print 'rwchunksize',rwchunksize
dimnoiterpos = [0]*len(dimnoiter)
# print icecube,dimnoiterpos
j = 0
while j < lennoiter:
fposicecube = fpos
for idimpos,edimpos in enumerate(dimnoiterpos):
curadd = edimpos
# e.g. fposicecube = (1)*52
# e.g. fposicecube = (9)+ 20*(10) + 50*50*20*(5)
if ((dimnoiterref[idimpos] + 1) < len(shp)):
for i in range(dimnoiterref[idimpos] + 1,len(shp)) :
curadd = curadd * shp[i]
fposicecube = fposicecube + curadd
filestream.seek(voffset+vsize*fposicecube)
if mode == 'read':
# rwchsize=rwchunksize
#print 'test',j,rwchunksize,j+rwchunksize,icecube.shape,voffset,vsize,fposicecube,voffset+vsize*fposicecube
icecube[j:(j+rwchsize)] = np.fromfile(filestream,dtype=vtype,count=rwchsize)
# print '_rwchunksize',rwchunksize,icecube[j:(j+rwchunksize)].shape,rwchunksize*vsize
# icecube[j:(j+1)] = np.fromstring(filestream.read(vsize),dtype=vtype)
elif mode == 'write':
# rwchsize=rwchunksizeout
filestream.seek(vsize*fposicecube)
# print vtype
filestream.write(icecube[j:(j+rwchsize)])
#print 'reading icecube with length / position: ', fposicecube,'/',1,icecube[j]
# print j, dimnoiterpos,fposicecube,j == fposicecube,icecube[j]
# go to next data strip
if dimnoiterpos != []:
# rwchsize: allow reading of chunks for the inner dimensions
dimnoiterpos[-1] = dimnoiterpos[-1] + rwchsize
for idimidx,edimidx in enumerate(reversed(dimnoiterpos)):
if idimidx > 0:
while dimnoiterpos[idimidx] >= dimnoiter[idimidx]:
#print idimidx,dimnoiter[idimidx]
dimnoiterpos[idimidx-1] = dimnoiterpos[idimidx-1] + 1
dimnoiterpos[idimidx] -= dimnoiter[idimidx]
j = j+rwchsize
icecube.shape = dimnoiter
if mode == 'read':
return icecube
def readicecubeps(fstream,shp,dimiterref,dimiter,dimiterpos,dimnoiterref,dimnoiter,vtype,vsize,voffset,rwchsize):
"""
read an icecube by sorting the indices (highest at the back).
perform an in-memory Post Swap of dimensions (very fast) to compensate for the sorting.
    we allow reading in chunks according to the inner dimensions. They will mostly be there because we allow a max-icecubesize
"""
# print 'trns:',zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1]
icecube =rwicecube(fstream,shp,dimiterref,dimiter,dimiterpos,sorted(dimnoiterref),sorted(dimnoiter),None,vtype,vsize,voffset,rwchsize,'read')
# print 'shape',icecube.shape
# print 'shape tr',np.transpose(icecube,zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1]).shape
print icecube.shape,zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1]
trns = zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1]
# print 'test write',data.shape,trns
    # build the 'inverse permutation' operator to undo the sorted read order (transposition after the read)
inv = range(len(trns))
for itrns, etrns in enumerate(trns):
inv[etrns] = itrns
return np.transpose(icecube,inv)
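# Hedged toy example (not used by the script): transposing by a permutation and then by
# its inverse, built exactly as above, restores the original axis order.
def _demo_inverse_permutation():
    a = np.arange(24).reshape(2, 3, 4)
    trns = (2, 0, 1)
    inv = range(len(trns))
    for itrns, etrns in enumerate(trns):
        inv[etrns] = itrns
    assert np.transpose(np.transpose(a, trns), inv).shape == a.shape
    assert (np.transpose(np.transpose(a, trns), inv) == a).all()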
def writeicecubeps(fstream,shp,dimiterref,dimiter,dimiterpos,dimnoiterref,dimnoiter,data,vtype,vsize,voffset,rwchsize):
"""
write an icecube and perform an in-memory Post Swap of dimensions before (very fast)
hereby, we acquire the order of the icecube dimensions
"""
# print 'shape',icecube,icecube.shape
# print icecube.shape,zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))
# if dimnoiterref == None:
# return icecube
# else:
# return np.transpose(icecube,zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1])
    trns = zip(*sorted(zip(dimnoiterref,range(len(dimnoiterref))),key=itemgetter(0,1)))[1]
    # unlike readicecubeps, the forward permutation 'trns' is applied directly to the data
    # before writeout (no inverse permutation is needed here)
rwicecube(fstream,shp,dimiterref,dimiter,dimiterpos,sorted(dimnoiterref),sorted(dimnoiter),np.transpose(data,trns),vtype,vsize,voffset,rwchsize,'write')
self = io.netcdf.netcdf_file('/home/hendrik/data/belgium_aq/rcm/aq09/stage2/int2lm/laf2009010100_urb_ahf.nc','r')
# self = io.netcdf.netcdf_file('/home/hendrik/data/global/AHF_2005_2.5min.nc','r')
self.fp.seek(0)
magic = self.fp.read(3)
self.__dict__['version_byte'] = np.fromstring(self.fp.read(1), '>b')[0]
# Read file headers and set data.
# stolen from scipy: /usr/lib/python2.7/dist-packages/scipy/io/netcdf.py
self._read_numrecs()
self._read_dim_array()
self._read_gatt_array()
header = self.fp.read(4)
count = self._unpack_int()
vars = []
for ic in range(count):
vars.append(list(self._read_var()))
var = 'T'
ivar = np.where(np.array(vars) == var)[0][0]
fin = self.fp;
shp = self.variables[var].shape; vtype = vars[ivar][6]; vsize = self.variables[var].itemsize(); voffset = long(vars[ivar][7])
# shp = self.variables[var].shape; vtype = 'float32'; vsize = 4; voffset = vars[ivar][7]
# fin = open('/home/hendrik/data/belgium_aq/rcm/aq09/stage1/aurorabc/hour16_beleuros.bin','r')
# shp = (4,36,52);vtype=np.float32; vsize=4; voffset=0
fout = open('/home/hendrik/data/global/test.bin','wb')
# fout = open('/home/hendrik/data/belgium_aq/rcm/aq09/stage1/aurorabc/hour16_beleuros2.bin','wb')
# def readicecube(filestream,shp,dimiterref,dimpos,dimnoiterref=None):
# testdat = readicecubeps( fin, shp,(1,), (2,),dimnoiterref=(1,0))
# def shake(fin,shp,dimapplyref,fout,vtype,vsize,voffset,dimiterref=None,maxicecubesize=10000):
# shake( fin,shp,(1,2),offset,vtype,vsize,voffset,dimiterref=None,maxicecubesize=10000)
dimapplyref = (0,1,)
dimiterref = None
maxicecubesize=100000000
func = lambda x: [[np.mean(x)]]# *(1.+np.zeros(x.shape))
# def shake(fin,shp,dimapplyref,dimiterref=None,maxicecubesize=10000):
for tt in range(1):
"""
purpose
-------
    read the data in icecubes, apply the function over the specified dimensions and write the result out
input parameters
----------------
fin: binary file input stream
fout: binary file output stream
shp: shape of the data stream
dimapplyref: dimensions over which the function is applied
    dimiterref (optional): reference to dimensions that are swapped to the front. The order of those indices is
    taken into account. If not specified, it is guessed from the residual dimensions (defined in shp) that are not in dimnoiterref
"""
# the total length of data passed to function
lenapply = 1
# the dimension of data passed to function
dimapply = []
# we want to read the data in chunks (icecubes) as big as possible. In the first place, the data chunks contain of course the dimensions on which the functions are applied. Afterwards, the chunk dimensions is extended (in the outer(!) direction) to make the icecubes bigger.
# dimnoiterref: reference to dimensions that are swapped to the back. In any case, this needs to include all dimapplyrefs. Data in these dimensions are read in icecubes. The order of those indices are taken into account. We also add in front those dimensions that can be read at once (still needs to be tested!).
dimnoiterref = []
# the total length of the numpy array as IO memory buffer ('icecubes'). The programm will try to read this in chunks (cfr. rwchunksize- as large as possible. An in-memory transposition may be applied after read or before writing.
lennoiter = 1
# the dimension of the IO buffer array
dimnoiter = []
for edimapplyref in dimapplyref:
# dimapplyref.append(edimapplyref)
dimapply.append(shp[edimapplyref])
lenapply = lenapply*shp[edimapplyref]
dimnoiterref.append(edimapplyref)
dimnoiter.append(shp[edimapplyref])
lennoiter = lennoiter*shp[edimapplyref]
if lenapply > maxicecubesize:
print 'Warning, the function data input length of',lenapply,' (dimensions: ',dimapply,') exceeds the maximum icecubesize of '+str(maxicecubesize)+'.'
else:
idim = len(shp)-1
edim = shp[idim]
while ((idim >= 0) & ((lennoiter*edim) < maxicecubesize)):
if (idim not in dimnoiterref):
dimnoiterref.insert(0,idim)
dimnoiter.insert(0,edim)
lennoiter = lennoiter*edim
# print 'yeeps',idim,edim,dimnoiterref,dimnoiter,lennoiter, maxicecubesize
idim = idim - 1
edim = shp[idim]
print 'Icecubesize is: ',lennoiter,dimnoiter,dimnoiterref
lenapply = long(1)
dimapply = []
for idimapplyref in range(len(dimapplyref)):
dimapply.append(shp[dimapplyref[idimapplyref]])
lenapply = lenapply*dimapply[-1]
dimapplyout = np.array(func(np.zeros(dimapply))).shape
# dimnoiterrefout = list(dimnoiterref)
dimnoiterout = list(dimnoiter)
dimnoiterout[(len(dimnoiterout) - len(dimapply)):] = list(dimapplyout)
lennoiterout = 1
for edimnoiterout in dimnoiterout:
lennoiterout = lennoiterout*edimnoiterout
lenapplyout = long(1)
for edim in dimapplyout:
lenapplyout = lenapplyout*edim
dimiter = []
leniter = long(1)
# guess from residual dimensions that are not in dimnoiterref
if dimiterref == None:
dimiterref = []
for ishp,eshp in enumerate(shp):
if ishp not in dimnoiterref:
dimiterref.append(ishp)
for edimiterref in dimiterref:
dimiter.append(shp[edimiterref])
leniter = leniter*dimiter[-1]
#print dimiter
# the trivial case of only one iteration
if dimiter == []:
dimiter = [1]
dimiterpos = [0]
dimiterref = [-1]
else:
dimiterpos = [0]*len(dimiterref)
shpout = [None]*len(shp)
if dimiterref != [-1]:
for idimiterref,edimiterref in enumerate(dimiterref):
shpout[edimiterref] = dimiter[idimiterref]
for idimnoiterref,edimnoiterref in enumerate(dimnoiterref):
# print 'hello',idimnoiterref,edimnoiterref,dimnoiterout[idimnoiterref]
shpout[edimnoiterref] = dimnoiterout[idimnoiterref]
rwchunksize = 1
rwchunksizeout = 1
ishp = len(shp)-1
while ishp in dimnoiterref:
rwchunksize = rwchunksize*shp[ishp]
rwchunksizeout = rwchunksizeout*shpout[ishp]
ishp = ishp-1
print ' rwchunksize ',rwchunksize
print ' rwchunksizeout',rwchunksizeout
# # or
# for ishp,eshp in enumerate(shp):
# if ishp not in dimiterref:
# shpout.append(shp[i])
for j in range(leniter):
if j>0: sys.stdout.write ('\b'*(len(str(j-1)+'/'+str(leniter))+1))
print str(j)+'/'+str(leniter),
# print j,leniter,dimnoiterref,dimapplyref
# actually, this is just the end of the file output already written
# # read data from file
# fin.seek(voffset + vsize*fpos)
# reading icecube, rearranged in the order of dimensions specified by dimnoiterref
dataicecube = np.array(readicecubeps(fin,shp,dimiterref,dimiter,dimiterpos,dimnoiterref,dimnoiter,vtype,vsize,voffset,rwchunksize),dtype=vtype).ravel()
dataicecubeout = np.zeros((lennoiterout,),dtype=vtype)
# crush the ice
# dimnoiterref = (6 ,7 ,8 ,4 ,5)
# dimiter = (30,20,15,20,15)
# dimapplyref = (8 ,4 ,5)
# # guess from residual dimensions that are not in dimnoiterref
# if dimiterref == None:
# dimiterref = []
# for ishp,eshp in enumerate(shp):
# if ishp not in dimnoiterref:
# dimiterref.append(ishp)
# we know that the function apply dimensions are at the inner data
dimnoapply = []
lennoapply = long(1)
for idimnoiterref in range(len(dimnoiterref)-len(dimapplyref)):
dimnoapply.append(dimnoiter[idimnoiterref])
lennoapply = lennoapply*dimnoapply[-1]
if dimnoapply == []:
dimnoapply = [1]
dimnoapplypos = [0]*len(dimnoapply)
for k in range(lennoapply):
# print j,k
# actually, this is just the end of the file output already written
apos = 0
# e.g. apos = (9)+ 20*(10) + 50*50*20*(5)
for idimpos,edimpos in enumerate(dimnoapplypos):
curadd = edimpos
#e.g. if edimpos == (5): curadd = 50*50*20*(5)
if ((idimpos + 1) < len(dimnoiterref)):
for i in range(idimpos + 1,len(dimnoiterref)) :
curadd = curadd * dimnoiter[i]
# curaddout = curaddout * dimnoiteroutref[i]
apos = apos + curadd
aposout = 0
# e.g. aposout = (9)+ 20*(10) + 50*50*20*(5)
for idimpos,edimpos in enumerate(dimnoapplypos):
curadd = edimpos
#e.g. if edimpos == (5): curadd = 50*50*20*(5)
if ((idimpos + 1) < len(dimnoiterref)):
for i in range(idimpos + 1,len(dimnoiterref)) :
curadd = curadd * dimnoiterout[i]
# curaddout = curaddout * dimnoiteroutref[i]
aposout = aposout + curadd
hunk = dataicecube[apos:(apos+lenapply)]
hunk.shape = dimapply
# apply the function
hunkout = np.array(func(hunk)) #np.array((np.zeros(hunk.shape) + 1)*np.mean(hunk),dtype=vtype)
# print 'hunk ',apos, hunk.shape, lenapply
# print 'hunkout',aposout, hunkout.shape, lenapplyout
dataicecubeout[aposout:(aposout+lenapplyout)] = np.array(hunkout[:].ravel(),dtype=vtype)
# print aposout, aposout+lenapplyout,lenapplyout,dataicecubeout
# go to next data slice
dimnoapplypos[-1] = dimnoapplypos[-1] + 1
for idimidx,edimidx in enumerate(reversed(dimnoapplypos)):
# # alternative (makes 'dimiter' redundant)
# if dimiterpos[idimidx] == shp[dimiterref[idimidx]]:
if idimidx > 0:
if dimnoapplypos[idimidx] == dimnoapply[idimidx]:
dimnoapplypos[idimidx-1] = dimnoapplypos[idimidx-1] + 1
dimnoapplypos[idimidx] = 0
# print "hello",dataicecubeout.shape, dimnoiter
dataicecubeout.shape = dimnoiterout
# print dataicecubeout
writeicecubeps(fout,shpout,dimiterref,dimiter,dimiterpos,dimnoiterref,dimnoiterout,dataicecubeout,vtype,vsize,voffset,rwchunksizeout)
#print dimiterpos
# go to next data slice
dimiterpos[-1] = dimiterpos[-1] + 1
for idimidx,edimidx in enumerate(reversed(dimiterpos)):
# # alternative (makes 'dimiter' redundant)
# if dimiterpos[idimidx] == shp[dimiterref[idimidx]]:
if dimiterpos[idimidx] == dimiter[idimidx]:
if idimidx > 0:
dimiterpos[idimidx-1] = dimiterpos[idimidx-1] + 1
dimiterpos[idimidx] = 0
#print leniter
#
# def swapindcs(fin,shp,dimnoiterref,fout,dimiterref=None):
# """
# purpose
# -------
#
# swap specified dimensions to the back efficiently in a specified order
#
# input parameters
# ----------------
#
# fin: binary file input stream
# fout: binary file output stream
# shp: shape of the filestream
# dimnoiterref: reference to dimensions that are swapped to the back. Data in these dimensions are treated as icecubes. The order of those indices are taken into account
# dimiterref (optional): reference to dimensions that are swapped to the front. The order of those indices are
# taken into account. Of not specified, it is guessed from the residual dimensions (defined in shp) that are not in dimnoiterref
# """
#
# dimiter = []
# leniter = long(1)
#
# # guess from residual dimensions that are not in dimnoiterref
# if dimiterref == None:
# dimiterref = []
# for ishp,eshp in enumerate(shp):
# if ishp not in dimnoiterref:
# dimiterref.append(ishp)
# for edimiterref in dimiterref:
# dimiter.append(shp[edimiterref])
# leniter = leniter*dimiter[-1]
#
# dimiterpos = [0]*len(dimiter)
#
#
# shpout = []
# for edimiterref in dimiterref:
# shpout.append(shp[edimiterref])
#
# for edimnoiterref in dimnoiterref:
# shpout.append(shp[edimnoiterref])
# # # or
# # for ishp,eshp in enumerate(shp):
# # if ishp not in dimiterref:
# # shpout.append(shp[i])
#
# for j in range(leniter):
#
# # actually, this is just the end of the file output already written
# fposout = 0
# # e.g. fposout = (9)+ 20*(10) + 50*50*20*(5)
# for idimpos,edimpos in enumerate(dimiterpos):
# curadd = edimpos
# #e.g. if edimpos == (5): curadd = 50*50*20*(5)
# if ((idimpos + 1) < len(shpout)):
# for i in range(idimpos + 1,len(shpout)) :
# curadd = curadd * shpout[i]
#
# fposout = fposout + curadd
#
# # drop data to file in reordered way
# fout.seek(4*fposout)
# np.array(readicecubeps(fin,shp,dimiterref,dimiterpos,dimnoiterref),dtype='float32').tofile(fout)
# # go to next data slice
# dimiterpos[-1] = dimiterpos[-1] + 1
# for idimidx,edimidx in enumerate(reversed(dimiterpos)):
# # # alternative (makes 'dimiter' redundant)
# # if dimiterpos[idimidx] == shp[dimiterref[idimidx]]:
# if dimiterpos[idimidx] == dimiter[idimidx]:
# if idimidx > 0:
# dimiterpos[idimidx-1] = dimiterpos[idimidx-1] + 1
# dimiterpos[idimidx] = 0
# print leniter
#
#
# def outerloop(fin,shp,dimiterref):
# """
# loop over the dimensions over which we want to iterate and that are within the icecubes
# filestream: binary file refence
# shp: shape of the filestream
# dimiterref: reference to dimensions over which no slice is performed
# """
#
# dimiter = []
# leniter = long(1)
# for edimiterref in dimiterref:
# dimiter.append(shp[edimiterref])
# leniter = leniter*dimiter[-1]
#
# dimiterpos = [0]*len(dimiter)
#
# for j in range(leniter):
# print readicecube(fin,shp,dimiterref,dimiterpos)
#
# # go to next data slice
# dimiterpos[-1] = dimiterpos[-1] + 1
# for idimidx,edimidx in enumerate(reversed(dimiterpos)):
# # # alternative (makes 'dimiter' redundant)
# # if dimiterpos[idimidx] == shp[dimiterref[idimidx]]:
# if dimiterpos[idimidx] == dimiter[idimidx]:
# if idimidx > 0:
# dimiterpos[idimidx-1] = dimiterpos[idimidx-1] + 1
# dimiterpos[idimidx] = 0
fout.close()
fin.close()
fread = open('/home/hendrik/data/global/test.bin','r')
ipol = 0
nx = shpout[3]
ny = shpout[2]
nz = 1# shp[0]
iz = 0
fig = pl.figure()
fread.seek((ipol*nz + iz)*ny*nx*vsize,0)
field = np.fromfile(fread,dtype=vtype,count=ny*nx)
field.shape = (ny,nx)
pl.imshow(field)
fig.show()
# pl.imshow(testdat)
# fig.show()
# fread.close()
fread.close()
| hendrikwout/pynacolada | trash/pynacolada-20130925-1.py | Python | gpl-3.0 | 25,078 |
# This script showcases lines and parked runnables
#
# The script creates a line object between the positions of the Earth and the Moon. Then,
# it parks a runnable which updates the line points with the new positions of the
# objects, so that the line is always up to date, even when the objects move. Finally,
# time is started to showcase the line movement.
# Created by Toni Sagrista
from py4j.java_gateway import JavaGateway, GatewayParameters, CallbackServerParameters
class LineUpdaterRunnable(object):
def __init__(self, polyline):
self.polyline = polyline
def run(self):
earthp = gs.getObjectPosition("Earth")
moonp = gs.getObjectPosition("Moon")
pl = self.polyline.getPointCloud()
pl.setX(0, earthp[0])
pl.setY(0, earthp[1])
pl.setZ(0, earthp[2])
pl.setX(1, moonp[0])
pl.setY(1, moonp[1])
pl.setZ(1, moonp[2])
self.polyline.markForUpdate()
    def toString(self):
return "line-update-runnable"
class Java:
implements = ["java.lang.Runnable"]
gateway = JavaGateway(gateway_parameters=GatewayParameters(auto_convert=True),
callback_server_parameters=CallbackServerParameters())
gs = gateway.entry_point
gs.cameraStop()
gs.stopSimulationTime()
gs.setVisibility("element.orbits", True)
gs.setCameraLock(True)
gs.setFov(49)
gs.goToObject("Earth", 91.38e-2)
print("We will now add a line between the Earth and Moon")
earthp = gs.getObjectPosition("Earth")
moonp = gs.getObjectPosition("Moon")
gs.addPolyline("line-em", [earthp[0], earthp[1], earthp[2], moonp[0], moonp[1], moonp[2]], [ 1., .2, .2, .8 ], 1 )
gs.sleep(0.5)
# create line
line_em = gs.getObject("line-em")
# park the line updater
gs.parkRunnable("line-updater", LineUpdaterRunnable(line_em))
gs.setSimulationPace(1e5)
gs.startSimulationTime()
gs.sleep(20)
gs.stopSimulationTime()
# clean up and finish
print("Cleaning up and ending")
gs.unparkRunnable("line-updater")
gs.removeModelObject("line-em")
gs.cameraStop()
gs.maximizeInterfaceWindow()
gs.enableInput()
# close connection
gateway.close()
| ari-zah/gaiasky | assets/scripts/showcases/line-objects-update.py | Python | lgpl-3.0 | 2,126 |
from datetime import datetime, timedelta
from django import template
from package.models import Commit
from package.context_processors import used_packages_list
register = template.Library()
class ParticipantURLNode(template.Node):
def __init__(self, repo, participant):
self.repo = template.Variable(repo)
self.participant = template.Variable(participant)
def render(self, context):
repo = self.repo.resolve(context)
participant = self.participant.resolve(context)
if repo.user_url:
user_url = repo.user_url % participant
else:
user_url = '%s/%s' % (repo.url, participant)
return user_url
@register.tag
def participant_url(parser, token):
try:
tag_name, repo, participant = token.split_contents()
except ValueError:
raise template.TemplateSyntaxError, "%r tag requires exactly two arguments" % token.contents.split()[0]
return ParticipantURLNode(repo, participant)
@register.filter
def commits_over_52(package):
now = datetime.now()
commits = Commit.objects.filter(
package=package,
commit_date__gt=now - timedelta(weeks=52),
).values_list('commit_date', flat=True)
weeks = [0] * 52
for cdate in commits:
age_weeks = (now - cdate).days // 7
if age_weeks < 52:
weeks[age_weeks] += 1
return ','.join(map(str, reversed(weeks)))
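
# Illustrative template usage for the tag and filter defined above:
#
#   {% load package_tags %}
#   <a href="{% participant_url repo participant %}">{{ participant }}</a>
#   {{ package|commits_over_52 }}  {# 52 comma-separated weekly commit counts, oldest week first #}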
@register.inclusion_tag('package/templatetags/_usage_button.html', takes_context=True)
def usage_button(context):
response = used_packages_list(context['request'])
response['STATIC_URL'] = context['STATIC_URL']
response['package'] = context['package']
if context['package'].pk in response['used_packages_list']:
response['usage_action'] = "remove"
response['image'] = "usage_triangle_filled"
else:
response['usage_action'] = "add"
response['image'] = "usage_triangle_hollow"
return response
| audreyr/opencomparison | package/templatetags/package_tags.py | Python | mit | 1,974 |
# encoding: UTF-8
"""
Contains utility functions commonly used in CTA factor mining.
"""
from __future__ import division
from visFunction import *
from calcFunction import *
| moonnejs/uiKLine | ctaFunction/__init__.py | Python | mit | 159 |
import sys
sys.path.insert(1, "../../../")
import h2o
######################################################
#
# Sample Running GBM on prostate.csv
def prostateGBM(ip,port):
# Connect to a pre-existing cluster
# connect to localhost:54321
df = h2o.import_file(path=h2o.locate("smalldata/logreg/prostate.csv"))
df.describe()
# Remove ID from training frame
train = df.drop("ID")
# For VOL & GLEASON, a zero really means "missing"
vol = train['VOL']
vol[vol == 0] = None
gle = train['GLEASON']
gle[gle == 0] = None
# Convert CAPSULE to a logical factor
train['CAPSULE'] = train['CAPSULE'].asfactor()
# See that the data is ready
train.describe()
# Run GBM
my_gbm = h2o.gbm( y=train["CAPSULE"],
validation_y=train["CAPSULE"],
x=train[1:],
validation_x=train[1:],
ntrees=50,
learn_rate=0.1,
distribution="bernoulli")
my_gbm.show()
my_gbm_metrics = my_gbm.model_performance(train)
my_gbm_metrics.show()
my_gbm_metrics #.show(criterion=my_gbm_metrics.theCriteria.PRECISION)
if __name__ == "__main__":
h2o.run_test(sys.argv, prostateGBM)
| PawarPawan/h2o-v3 | h2o-py/tests/testdir_algos/gbm/pyunit_prostateGBM.py | Python | apache-2.0 | 1,224 |
from qxpacker.Container import Container , ContainerFileType
from qxpacker.DdContainer import DdContainer
from qxpacker.EchoContainer import EchoContainer
import os , tempfile
import tarfile
# OPTIONS:
# name : possible values : description
#------------------------------------------------------------
# compress : gzip none : Set compression type
# loader : dd echo : Select bootloader (dd by default)
class TarContainer(Container):
compression=''
bootloader = DdContainer()
def shell_module_required(self):
return []
def __init__(self, ctx = None):
self.compression = ''
self.bootloader = DdContainer()
Container.__init__(self, ctx)
def _control_code_flush(self, shellctx):
shellctx.constant("CONTAINER_TYPE","tar")
def _data_flush(self, to, callnext, data_extraction_fname = "extract_data", after_extraction = "", control_code_flush=True):
targz_tmpfile = tempfile.mktemp()
tf = tarfile.open(targz_tmpfile, 'w:' + self.compression)
for f in self.search(recurse_datalist = True):
absname = os.path.abspath(f.name)
dirname=os.path.dirname(absname)
filename=os.path.basename(absname)
tf.add(absname , arcname = f.target_path)
tf.close()
to.write("tarcontainer_extract() { \n")
to.write("tar -xf")
if self.compression == "gz":
to.write("z")
to.write(" $TMPFILENAME\n");
if after_extraction != "":
to.write("\t%s $@" % after_extraction)
to.write(" }\n")
self.bootloader.add(targz_tmpfile, tpath = "$TMPFILENAME")
self.bootloader.flush(to, callnext, data_extraction_fname = data_extraction_fname, after_extraction = "tarcontainer_extract", control_code_flush = False)
if os.path.isfile(targz_tmpfile): os.remove(targz_tmpfile)
def set_opt(self,opt,val):
if opt == 'compress':
if val == 'none': self.compression = ''
elif val == 'gzip': self.compression = 'gz'
else: raise Exception('Bad option value ' + opt + ' = ' + val)
elif opt == 'loader':
if val == 'dd': self.bootloader = DdContainer()
elif val == 'echo': self.bootloader = EchoContainer()
else: raise Exception('Bad option value ' + opt + ' = ' + val)
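
# Illustrative usage sketch (not part of the original module). It assumes the Container
# base-class API used above -- add(path, tpath=...) to register files and
# flush(to, callnext, ...) to emit the self-extracting shell code; 'callnext' stands in
# for whatever continuation the surrounding qxpacker pipeline normally passes.
#
#   import sys
#   container = TarContainer()
#   container.set_opt('compress', 'gzip')   # or 'none'
#   container.set_opt('loader', 'dd')       # or 'echo'
#   container.add('payload/data.bin', tpath='data.bin')
#   container.flush(sys.stdout, None)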
| TODOTomorrow/qxpacker | qxpacker/TarContainer.py | Python | mit | 2,442 |
"""
Copyright 2010 Rusty Klophaus <[email protected]>
Copyright 2010 Justin Sheehy <[email protected]>
Copyright 2009 Jay Baird <[email protected]>
This file is provided to you under the Apache License,
Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
"""
MD_CTYPE = "content-type"
MD_CHARSET = "charset"
MD_ENCODING = "content-encoding"
MD_VTAG = "vtag"
MD_LINKS = "links"
MD_LASTMOD = "lastmod"
MD_LASTMOD_USECS = "lastmod-usecs"
MD_USERMETA = "usermeta"
MD_INDEX = "index"
| richleland/riak-python-client | riak/metadata.py | Python | apache-2.0 | 930 |
from itertools import count
def main(j, args, params, tags, tasklet):
page = args.page
page.addCSS(cssContent='''
.comparison-block{
border: 1px solid #CCE4E2;
margin-bottom: 10px;
}
.comparison-block:hover{
border: 1px solid #B0D7D5;
}
.comparison-block:hover .title{
background-color: #62a29e;
color: #fff;
}
.comparison-block:hover .title *{
color: #fff;
}
.comparison-footer{
padding: 10px 0;
border-top: 1px solid #CCE4E2;
margin-top: 10px;
}
.comparison-footer button{
margin-top: 8px;
}
.text-center{
text-align: center;
}
.comparison-block .title{
background: #C7E1E0;
padding: 15px;
}
.comparison-block .title small, .price small, .comparison-footer small{
color: #8D8A8A;
}
.comparison-block .title p{
margin-bottom: 5px;
color: #4F918D;
font-weight: bold;
}
.comparison-block .title p.small{
font-size: 95%;
}
.comparison-block .title p.medium{
font-size: 18px;
}
.comparison-block .title p.large{
font-size: 180%;
}
.comparison-block .price{
padding-top: 15px;
background-color: #F1F0F0;
border-top: 1px solid #CCE4E2;
border-bottom: 1px solid #CCE4E2;
margin-bottom: 10px;
padding-bottom: 10px;
}
.comparison-block .price p{
font-size: 30px;
color: #767677;
margin-bottom: 0;
}
.comparison-block .property{
padding: 3px;
font-size: 90%;
padding-left: 8px;
cursor: default;
}
.comparison-block .property:hover{
background-color: #62a29e;
color: #fff;
}
.comparison-block .currency{
font-size: 60%;
}''')
hrd = j.data.hrd.get(content=args.cmdstr)
currency = hrd.getStr('currency', '')
blocks = []
for i in count(1):
block = {}
block['Title'] = hrd.getStr('block.{}.title.text'.format(i), '').replace(r'\n', '<br />')
if not block['Title']:
break
block['TitleSize'] = hrd.getStr('block.{}.title.size'.format(i), '')
block['SubtitleText'] = hrd.getStr('block.{}.subtitle.text'.format(i), '').replace(r'\n', '<br />')
block['SubtitleSize'] = hrd.getStr('block.{}.subtitle.size'.format(i), '')
block['Price'] = hrd.getStr('block.{}.price'.format(i), '')
block['PriceSubtitle'] = hrd.getStr('block.{}.price.subtitle'.format(i), '').replace(r'\n', '<br />')
block['Property1'] = hrd.getStr('block.{}.property.1'.format(i), '').replace(r'\n', '<br />')
block['Property2'] = hrd.getStr('block.{}.property.2'.format(i), '').replace(r'\n', '<br />')
block['Property3'] = hrd.getStr('block.{}.property.3'.format(i), '').replace(r'\n', '<br />')
block['Property4'] = hrd.getStr('block.{}.property.4'.format(i), '').replace(r'\n', '<br />')
block['OrderButtonText'] = hrd.getStr('block.{}.order.button.text'.format(i), '').replace(r'\n', '<br />')
block['OrderButtonStyle'] = hrd.getStr('block.{}.order.button.style'.format(i), '').lower()
block['OrderButtonSubtext'] = hrd.getStr('block.{}.order.button.subtext'.format(i), '').replace(r'\n', '<br />')
block['OrderButtonSubLink'] = hrd.getStr('block.{}.order.button.link'.format(i), '')
blocks.append(block)
page.addMessage('''
<div class="container">
''')
for block in blocks:
block['i'] = 12 / len(blocks)
page.addMessage('''
<div class="span{i} comparison-block">
<div class="title text-center">
<p class="{TitleSize}">{Title}</p>
<small>{SubtitleText}</small>
</div>
<div class="price text-center">
<p><small class="currency">{currency}</small>{Price}</p>
<small>{PriceSubtitle}</small>
</div>
<div class="property-container">
'''.format(currency=currency, **block))
if(block['Property1']):
page.addMessage('''
<div class="property">
{Property1}
</div>
'''.format(**block))
if(block['Property2']):
page.addMessage('''
<div class="property">
{Property2}
</div>
'''.format(**block))
if(block['Property3']):
page.addMessage('''
<div class="property">
{Property3}
</div>
'''.format(**block))
if(block['Property4']):
page.addMessage('''
<div class="property">
{Property4}
</div>
'''.format(**block))
page.addMessage('''
</div>
<div class="comparison-footer text-center">
<small>{OrderButtonSubtext}</small>
<br/>
<a href="{OrderButtonSubLink}" class="btn btn-{OrderButtonStyle}">{OrderButtonText}</a>
</div>
</div>
'''.format(**block))
page.addMessage('''</div>''')
params.result = page
return params
def match(j, args, params, tags, tasklet):
return True
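
# Illustrative macro configuration (HRD); the values below are made up, but the keys
# match the hrd.getStr() lookups performed in main() above:
#
#   currency = $
#   block.1.title.text = Basic
#   block.1.title.size = large
#   block.1.subtitle.text = For small projects
#   block.1.price = 10
#   block.1.price.subtitle = per month
#   block.1.property.1 = 1 CPU
#   block.1.property.2 = 2 GB RAM
#   block.1.order.button.text = Order now
#   block.1.order.button.style = primary
#   block.1.order.button.link = /order/basic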
| Jumpscale/jumpscale_portal8 | apps/portalbase/macros/page/Comparison/1_comparison.py | Python | apache-2.0 | 4,616 |
"""
This file contains general functions for swiftbrowser that don't merit their
own file.
"""
# -*- coding: utf-8 -*-
#pylint:disable=E1101
import os
import logging
from swiftclient import client
from django.shortcuts import render_to_response, redirect
from django.template import RequestContext
from django.contrib import messages
from django.conf import settings
from django.http import JsonResponse
from django.views.decorators.http import require_POST
from django.utils.translation import ugettext as _
from jfu.http import JFUResponse
from swiftbrowser.models import Photo
from swiftbrowser.forms import PseudoFolderForm, LoginForm, TimeForm
from swiftbrowser.utils import *
from swiftbrowser.views.containers import containerview
from swiftbrowser.views.objects import objectview
from swiftbrowser.views.limited_users import limited_users_login, \
limited_users_containerview
from openstack_auth.utils import get_session
import keystoneauth1.identity
from keystoneclient.v2_0 import client as v2_client
logger = logging.getLogger(__name__)
def login(request):
""" Tries to login user and sets session data """
request.session.flush()
#Process the form if there is a POST request.
if (request.POST):
form = LoginForm(request.POST)
if form.is_valid():
user = form.cleaned_data['username']
password = form.cleaned_data['password']
try:
# Users can login with "username" or "tenant:username"
tenant_specified = user.find(":") > 0
if tenant_specified:
                    # The user has specified the tenant they want to log in to
tenant_name, user = split_tenant_user_names(user)
# Authenticate with keystone
unscoped_auth = keystoneauth1.identity.v2.Password(
auth_url=settings.SWIFT_AUTH_URL,
username=user,
password=password)
unscoped_auth_ref = unscoped_auth.get_access(get_session())
keystone_token = unscoped_auth_ref.auth_token
keystoneclient = v2_client.Client(
token=keystone_token,
endpoint="https://olrcdev.scholarsportal.info:5000/v2.0/")
tenant_manager = keystoneclient.tenants
projects = tenant_manager.list()
# Save the tenants the user is part of.
request.session["tenants"] = \
[project.name for project in projects]
# When the user does not specify a tenant on login, use the
# first tenant as a default.
if not tenant_specified:
tenant_name = request.session["tenants"][0]
# Authenticate with swift
username = tenant_name + ":" + user
auth_version = settings.SWIFT_AUTH_VERSION or 1
(storage_url, auth_token) = client.get_auth(
settings.SWIFT_AUTH_URL, username, password,
auth_version=auth_version)
request.session['auth_token'] = auth_token
request.session['storage_url'] = storage_url
request.session['password'] = password
request.session['user'] = user
request.session['username'] = username
request.session['tenant_name'] = tenant_name
# Upon successful retrieval of a token, if we're unable to
# head the account, then the user is not an admin or
# swiftoperator and has access to only a container.
try:
client.head_account(storage_url, auth_token)
except:
request.session['norole'] = True
return redirect(limited_users_login)
return redirect(containerview)
# Swift authentication error.
except client.ClientException, e:
messages.error(request, _("Login failed: {0}".format(e)))
# Other error.
except Exception, e:
messages.error(request, _("Login failed: {0}").format(e))
# Login failure on invalid forms.
else:
user = ""
messages.error(request, _("Login failed."))
else:
form = LoginForm(None)
user = ""
return render_to_response(
'login.html',
{'form': form,
'username': user},
context_instance=RequestContext(request)
)
@session_valid
def toggle_public(request, container):
""" Sets/unsets '.r:*,.rlistings' container read ACL """
storage_url = request.session.get('storage_url', '')
auth_token = request.session.get('auth_token', '')
try:
meta = client.head_container(storage_url, auth_token, container)
except client.ClientException:
messages.add_message(request, messages.ERROR, _("Access denied."))
return redirect(containerview)
read_acl = meta.get('x-container-read', '')
    if '.r:*' in read_acl and '.rlistings' in read_acl:
read_acl = read_acl.replace('.r:*', '')
read_acl = read_acl.replace('.rlistings', '')
read_acl = read_acl.replace(',,', ',')
else:
read_acl += '.r:*,.rlistings'
headers = {'X-Container-Read': read_acl, }
try:
client.post_container(storage_url, auth_token, container, headers)
except client.ClientException:
messages.add_message(request, messages.ERROR, _("Access denied."))
return redirect(objectview, container=container)
@session_valid
def create_pseudofolder(request, container, prefix=None):
""" Creates a pseudofolder (empty object of type application/directory) """
storage_url = request.session.get('storage_url', '')
auth_token = request.session.get('auth_token', '')
form = PseudoFolderForm(request.POST)
if form.is_valid():
foldername = request.POST.get('foldername', None)
if prefix:
foldername = prefix + '/' + foldername
foldername = os.path.normpath(foldername)
foldername = foldername.strip('/')
foldername += '/'
content_type = 'application/directory'
obj = None
try:
client.put_object(storage_url, auth_token,
container, foldername, obj,
content_type=content_type)
messages.add_message(
request,
messages.SUCCESS,
_(
"The folder " +
request.POST.get('foldername', None) + " was created.")
)
except client.ClientException:
messages.add_message(request, messages.ERROR, _("Access denied."))
return JsonResponse({})
@session_valid
def settings_view(request):
""" Render the settings page with options to update the tenants default
tempurl time. """
storage_url = request.session.get('storage_url', '')
auth_token = request.session.get('auth_token', '')
# If temp url is set, display it, else, inform user the default is 7 days.
default_temp_time = get_default_temp_time(storage_url, auth_token)
if not default_temp_time:
default_temp_time = 604800 # 7 days in seconds
days_to_expiry = int(default_temp_time) / (24 * 3600)
hours_to_expiry = (int(default_temp_time) % (24 * 3600)) / 3600
tempurl_form = TimeForm(
initial={
'days': days_to_expiry,
'hours': hours_to_expiry,
}
)
return render_to_response(
"settings.html",
{
'session': request.session,
'tempurl_form': tempurl_form,
},
context_instance=RequestContext(request)
)
def get_version(request):
""" Render the verison information page as found on version.info. """
return render_to_response('version.html',
context_instance=RequestContext(request))
def switch_tenant(request, tenant, login=False):
""" Switch the user to another tenant. Authenticate under the new tenant
and redirect to the container view.
Redirect to login if login is True.
"""
user = request.session.get('user', '')
password = request.session.get('password', '')
username = tenant + ":" + user
auth_version = settings.SWIFT_AUTH_VERSION or 1
(storage_url, auth_token) = client.get_auth(
settings.SWIFT_AUTH_URL, username, password,
auth_version=auth_version)
request.session['auth_token'] = auth_token
request.session['storage_url'] = storage_url
request.session['tenant_name'] = tenant
if 'norole' in request.session:
if login:
return redirect(limited_users_login)
else:
return redirect(limited_users_containerview)
return redirect(containerview)
| bkawula/django-swiftbrowser | swiftbrowser/views/main.py | Python | apache-2.0 | 8,929 |
import json
try:
from django.contrib.gis.geos import GEOSGeometry
except Exception as e:
pass
from django.db.models import F, Q
class Query():
def __init__(self, json_query):
"""
Initializes the sub_tree of this query starting from the json_query parameter
:param json_query: dict
"""
pass
def applyOnQuerySet(self, queryset):
"""
Applies the sub_tree of this query to the queryset passed as parameter
:param queryset: Queryset
:return: Queryset
"""
return queryset
class CompositeQuery(Query):
order_by = None
def __init__(self, json_query):
self._sub_queries = []
for query in json_query['_sub_queries']:
if QueryDecoder.decodeJSONQuery(query):
self._sub_queries.append(QueryDecoder.decodeJSONQuery(query))
def applyOnQuerySet(self, queryset):
for query in self._sub_queries:
queryset = query.applyOnQuerySet(queryset)
return queryset
def get_sub_queries(self):
return self._sub_queries
class OrQuery(CompositeQuery):
def applyOnQuerySet(self, queryset):
qs = queryset.none()
for query in self._sub_queries:
qs = qs | query.applyOnQuerySet(queryset)
class All(Query):
def __init__(self, json_query):
pass
def applyOnQuerySet(self, queryset):
return queryset.all()
class Filter(Query):
"""
Applies the .filter( ... ) method on the queryset
"""
def __init__(self, json_query):
"""
Builds the query,
Syntax:
json_query = {
"_query_class":"filter",
"_filter_condition": [string] | string,
"_values": [*] | *
}
:param json_query: dict
"""
self.filter_condition = json_query['_condition']
conditions = []
values = []
if isinstance(self.filter_condition, (list, tuple)):
for cond in self.filter_condition:
conditions.append(cond)
else:
conditions.append(self.filter_condition)
try:
value = GEOSGeometry(json.dumps(json_query['_value']))
except Exception as e:
value = json_query['_value']
self.filter_value = value
        if isinstance(self.filter_condition, (list, tuple)):
            if isinstance(self.filter_value, (list, tuple)):
                for v in self.filter_value:
                    values.append(v)
            else:
                values.append(self.filter_value)
        else:
            # single condition: take the (possibly list-valued) value as-is,
            # otherwise values[idx] below would fail with an IndexError
            values.append(self.filter_value)
self.query = {}
for idx, condition in enumerate(conditions):
self.query[condition] = values[idx]
def applyOnQuerySet(self, queryset):
return queryset.filter(**self.query)
class OrderBy(Query):
def __init__(self, json_query):
self.order_by = json_query['_order_by']
def applyOnQuerySet(self, queryset):
queryset = queryset.order_by(*self.order_by)
return queryset
class Exclude(Query):
def __init__(self, json_query):
self.filter_condition = json_query['_condition']
conditions = []
values = []
if isinstance(self.filter_condition, (list, tuple)):
for cond in self.filter_condition:
conditions.append(cond)
else:
conditions.append(self.filter_condition)
try:
value = GEOSGeometry(json.dumps(json_query['_value']))
except Exception as e:
value = json_query['_value']
self.filter_value = value
        if isinstance(self.filter_condition, (list, tuple)):
            if isinstance(self.filter_value, (list, tuple)):
                for v in self.filter_value:
                    values.append(v)
            else:
                values.append(self.filter_value)
        else:
            # single condition: take the (possibly list-valued) value as-is,
            # otherwise values[idx] below would fail with an IndexError
            values.append(self.filter_value)
self.query = {}
for idx, condition in enumerate(conditions):
self.query[condition] = values[idx]
def applyOnQuerySet(self, queryset):
return queryset.exclude(**self.query)
class SelfFieldFilter(Filter):
def __init__(self, json_query):
self.filter_condition = json_query['_condition']
self.inner_condition = json_query['_field_to_compare']
self.query = {self.filter_condition: F(self.inner_condition)}
class SelfFieldExclude(Exclude):
def __init__(self, json_query):
self.filter_condition = json_query['_condition']
self.inner_condition = json_query['_field_to_compare']
self.query = {self.filter_condition: F(self.inner_condition)}
class QueryFilter(Filter):
def __init__(self, json_query):
self.filter_condition = json_query['_condition']
self.inner_condition = json_query['_inner_condition']
self.filter_value = json_query['_value']
self.inner_query = {self.inner_condition: self.filter_value}
self.query = {self.filter_condition: Q(**self.inner_query)}
class QueryExclude(Exclude):
def __init__(self, json_query):
self.filter_condition = json_query['_condition']
self.inner_condition = json_query['_inner_condition']
self.filter_value = json_query['_value']
self.inner_query = {self.inner_condition: self.filter_value}
self.query = {self.filter_condition: Q(**self.inner_query)}
class Distinct(Query):
def applyOnQuerySet(self, queryset):
return queryset.distinct()
class QueryDecoder():
@staticmethod
def decodeJSONQuery(json_query):
query = None
if json_query is not None:
if '_query_class' in json_query:
try:
query = DRQ_QUERY_GLOSSARY[str(json_query['_query_class'])](json_query)
except:
query = None
return query
DRQ_QUERY_GLOSSARY = {
Query.__name__.lower(): Query,
Filter.__name__.lower(): Filter,
Exclude.__name__.lower(): Exclude,
OrderBy.__name__.lower(): OrderBy,
CompositeQuery.__name__.lower(): CompositeQuery,
OrQuery.__name__.lower(): OrQuery,
SelfFieldFilter.__name__.lower(): SelfFieldFilter,
SelfFieldExclude.__name__.lower(): SelfFieldExclude,
Distinct.__name__.lower(): Distinct
}
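
# Illustrative usage ('MyModel' is a placeholder model): decode a JSON query and apply
# it to a queryset. The dict matches the syntax documented in Filter.__init__ above.
#
#   json_query = {
#       '_query_class': 'filter',
#       '_condition': 'name__icontains',
#       '_value': 'foo',
#   }
#   query = QueryDecoder.decodeJSONQuery(json_query)
#   results = query.applyOnQuerySet(MyModel.objects.all())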
| nicfix/django-remote-queryset | django_remote_queryset/queries.py | Python | mit | 6,270 |
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'NouabookItem'
db.create_table(u'elections_nouabookitem', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('title', self.gf('django.db.models.fields.CharField')(max_length=250, null=True, blank=True)),
('text', self.gf('django.db.models.fields.TextField')()),
('url', self.gf('django.db.models.fields.URLField')(max_length=300, null=True, blank=True)),
('urlVideo', self.gf('django.db.models.fields.URLField')(max_length=300, null=True, blank=True)),
('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
('updated', self.gf('django.db.models.fields.DateTimeField')(auto_now=True, null=True, blank=True)),
('candidate', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['candideitorg.Candidate'])),
('category', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['elections.NouabookCategory'])),
))
db.send_create_signal(u'elections', ['NouabookItem'])
# Adding model 'Attachment'
db.create_table(u'elections_attachment', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('modelName', self.gf('django.db.models.fields.CharField')(max_length=50, null=True, blank=True)),
('file', self.gf('django.db.models.fields.files.FileField')(max_length=100)),
('messageId', self.gf('django.db.models.fields.IntegerField')(max_length=10, null=True)),
('author_id', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
))
db.send_create_signal(u'elections', ['Attachment'])
# Adding model 'NouabookCategory'
db.create_table(u'elections_nouabookcategory', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=255)),
))
db.send_create_signal(u'elections', ['NouabookCategory'])
# Adding field 'VotaInteligenteMessage.nouabookItem'
db.add_column(u'elections_votainteligentemessage', 'nouabookItem',
self.gf('django.db.models.fields.related.ForeignKey')(to=orm['elections.NouabookItem'], null=True, on_delete=models.SET_NULL, blank=True),
keep_default=False)
# Adding field 'CandidatePerson.canUsername'
db.add_column(u'elections_candidateperson', 'canUsername',
self.gf('django.db.models.fields.related.OneToOneField')(to=orm['auth.User'], unique=True, null=True),
keep_default=False)
# Adding field 'CandidatePerson.ranking'
db.add_column(u'elections_candidateperson', 'ranking',
self.gf('django.db.models.fields.IntegerField')(default=0),
keep_default=False)
def backwards(self, orm):
# Deleting model 'NouabookItem'
db.delete_table(u'elections_nouabookitem')
# Deleting model 'Attachment'
db.delete_table(u'elections_attachment')
# Deleting model 'NouabookCategory'
db.delete_table(u'elections_nouabookcategory')
# Deleting field 'VotaInteligenteMessage.nouabookItem'
db.delete_column(u'elections_votainteligentemessage', 'nouabookItem_id')
# Deleting field 'CandidatePerson.canUsername'
db.delete_column(u'elections_candidateperson', 'canUsername_id')
# Deleting field 'CandidatePerson.ranking'
db.delete_column(u'elections_candidateperson', 'ranking')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Group']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "u'user_set'", 'blank': 'True', 'to': u"orm['auth.Permission']"}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'candideitorg.answer': {
'Meta': {'object_name': 'Answer'},
'caption': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'question': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['candideitorg.Question']"}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'default': '12'}),
'resource_uri': ('django.db.models.fields.CharField', [], {'default': "'resource empty'", 'max_length': '255'})
},
u'candideitorg.candidate': {
'Meta': {'object_name': 'Candidate'},
'answers': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'to': u"orm['candideitorg.Answer']", 'null': 'True', 'blank': 'True'}),
'election': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['candideitorg.Election']"}),
'has_answered': ('django.db.models.fields.BooleanField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'photo': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'default': '12'}),
'resource_uri': ('django.db.models.fields.CharField', [], {'default': "'resource empty'", 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'})
},
u'candideitorg.category': {
'Meta': {'object_name': 'Category'},
'election': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['candideitorg.Election']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'order': ('django.db.models.fields.IntegerField', [], {}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'default': '12'}),
'resource_uri': ('django.db.models.fields.CharField', [], {'default': "'resource empty'", 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'})
},
u'candideitorg.election': {
'Meta': {'object_name': 'Election'},
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'information_source': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'logo': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'default': '12'}),
'resource_uri': ('django.db.models.fields.CharField', [], {'default': "'resource empty'", 'max_length': '255'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '255'}),
'use_default_media_naranja_option': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
u'candideitorg.question': {
'Meta': {'object_name': 'Question'},
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['candideitorg.Category']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'question': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'default': '12'}),
'resource_uri': ('django.db.models.fields.CharField', [], {'default': "'resource empty'", 'max_length': '255'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'elections.attachment': {
'Meta': {'object_name': 'Attachment'},
'author_id': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'file': ('django.db.models.fields.files.FileField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'messageId': ('django.db.models.fields.IntegerField', [], {'max_length': '10', 'null': 'True'}),
'modelName': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True', 'blank': 'True'})
},
u'elections.candidateperson': {
'Meta': {'object_name': 'CandidatePerson'},
'canUsername': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['auth.User']", 'unique': 'True', 'null': 'True'}),
'candidate': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'relation'", 'unique': 'True', 'to': u"orm['candideitorg.Candidate']"}),
'custom_ribbon': ('django.db.models.fields.CharField', [], {'max_length': '18', 'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'default': "''", 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'person': ('django.db.models.fields.related.OneToOneField', [], {'related_name': "'relation'", 'unique': 'True', 'to': u"orm['popit.Person']"}),
'portrait_photo': ('django.db.models.fields.CharField', [], {'max_length': '256', 'null': 'True', 'blank': 'True'}),
'ranking': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'reachable': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
u'elections.election': {
'Meta': {'object_name': 'Election'},
'can_election': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['candideitorg.Election']", 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'extra_info_content': ('django.db.models.fields.TextField', [], {'max_length': '3000', 'null': 'True', 'blank': 'True'}),
'extra_info_title': ('django.db.models.fields.CharField', [], {'max_length': '50', 'null': 'True', 'blank': 'True'}),
'highlighted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'popit_api_instance': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['popit.ApiInstance']", 'null': 'True', 'blank': 'True'}),
'searchable': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'slug': ('autoslug.fields.AutoSlugField', [], {'unique': 'True', 'max_length': '50', 'populate_from': "'name'", 'unique_with': '()'}),
'uses_face_to_face': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'uses_preguntales': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'uses_questionary': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'uses_ranking': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'uses_soul_mate': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'writeitinstance': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['writeit.WriteItInstance']", 'null': 'True', 'blank': 'True'})
},
u'elections.nouabookcategory': {
'Meta': {'object_name': 'NouabookCategory'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
u'elections.nouabookitem': {
'Meta': {'object_name': 'NouabookItem'},
'candidate': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['candideitorg.Candidate']"}),
'category': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['elections.NouabookCategory']"}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'text': ('django.db.models.fields.TextField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '250', 'null': 'True', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'null': 'True', 'blank': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'max_length': '300', 'null': 'True', 'blank': 'True'}),
'urlVideo': ('django.db.models.fields.URLField', [], {'max_length': '300', 'null': 'True', 'blank': 'True'})
},
u'elections.votainteligenteanswer': {
'Meta': {'object_name': 'VotaInteligenteAnswer'},
'content': ('django.db.models.fields.TextField', [], {}),
'created': ('django.db.models.fields.DateTimeField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'message': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'answers'", 'to': u"orm['elections.VotaInteligenteMessage']"}),
'person': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'answers'", 'to': u"orm['popit.Person']"})
},
u'elections.votainteligentemessage': {
'Meta': {'object_name': 'VotaInteligenteMessage', '_ormbases': [u'writeit.Message']},
'author_ville': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '35'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'fbshared': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'first_moderation': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_video': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
u'message_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['writeit.Message']", 'unique': 'True', 'primary_key': 'True'}),
'moderated': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'moderated_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'nouabookItem': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['elections.NouabookItem']", 'null': 'True', 'on_delete': 'models.SET_NULL', 'blank': 'True'}),
'pending_status': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'rejected_status': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
},
u'popit.apiinstance': {
'Meta': {'object_name': 'ApiInstance'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'url': ('popit.fields.ApiInstanceURLField', [], {'unique': 'True', 'max_length': '200'})
},
u'popit.person': {
'Meta': {'object_name': 'Person'},
'api_instance': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['popit.ApiInstance']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'popit_url': ('popit.fields.PopItURLField', [], {'default': "''", 'max_length': '200', 'unique': 'True', 'null': 'True', 'blank': 'True'}),
'summary': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'writeit.message': {
'Meta': {'object_name': 'Message', '_ormbases': [u'writeit.WriteItDocument']},
'author_email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'author_name': ('django.db.models.fields.CharField', [], {'max_length': '512'}),
'content': ('django.db.models.fields.TextField', [], {}),
'people': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'messages'", 'symmetrical': 'False', 'to': u"orm['popit.Person']"}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '512'}),
'subject': ('django.db.models.fields.CharField', [], {'max_length': '512'}),
u'writeitdocument_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['writeit.WriteItDocument']", 'unique': 'True', 'primary_key': 'True'}),
'writeitinstance': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['writeit.WriteItInstance']"})
},
u'writeit.writeitapiinstance': {
'Meta': {'object_name': 'WriteItApiInstance'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'url': ('django.db.models.fields.URLField', [], {'unique': 'True', 'max_length': '200'})
},
u'writeit.writeitdocument': {
'Meta': {'object_name': 'WriteItDocument'},
'api_instance': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['writeit.WriteItApiInstance']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'remote_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'url': ('django.db.models.fields.CharField', [], {'max_length': '256'})
},
u'writeit.writeitinstance': {
'Meta': {'object_name': 'WriteItInstance', '_ormbases': [u'writeit.WriteItDocument']},
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
u'writeitdocument_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['writeit.WriteItDocument']", 'unique': 'True', 'primary_key': 'True'})
}
}
complete_apps = ['elections'] | lfalvarez/nouabook | elections/migrations/0015_auto__add_nouabookitem__add_attachment__add_nouabookcategory__add_fiel.py | Python | gpl-3.0 | 21,570 |
"""create countries table with data
Revision ID: da93beb77564
Revises:
Create Date: 2017-03-16 09:15:50.966604
"""
from alembic import op
import sqlalchemy as sa
from radar.models.patient_addresses import COUNTRIES
from radar.models.countries import Country
# revision identifiers, used by Alembic.
revision = 'da93beb77564'
down_revision = None
branch_labels = None
depends_on = None
from sqlalchemy.orm import sessionmaker
Session = sessionmaker()
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('countries',
sa.Column('code', sa.String(length=2), nullable=False),
sa.Column('label', sa.String(length=100), nullable=False),
sa.PrimaryKeyConstraint('code')
)
bind = op.get_bind()
session = Session(bind=bind)
countries = [Country(code=code, label=label) for code, label in COUNTRIES.items()]
session.add_all(countries)
session.commit()
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('countries')
# ### end Alembic commands ###
| renalreg/radar | migrations/versions/da93beb77564_create_countries_table_with_data.py | Python | agpl-3.0 | 1,122 |
from __future__ import unicode_literals
import logging
from django.db import migrations
from osf.utils.migrations import ensure_schemas, remove_schemas
logger = logging.getLogger(__file__)
class Migration(migrations.Migration):
dependencies = [
('osf', '0111_auto_20180605_1240'),
]
operations = [
migrations.RunPython(ensure_schemas, remove_schemas),
]
| erinspace/osf.io | osf/migrations/0112_ensure_schemas.py | Python | apache-2.0 | 394 |
# keywords define the Python language
keywordList = [
'False', 'class', 'finally', 'is', 'return',
'None', 'continue', 'for', 'lambda', 'try',
'True', 'def', 'from', 'nonlocal', 'while',
'and', 'del', 'global', 'not', 'with',
'as', 'elif', 'if', 'or', 'yield',
'assert', 'else', 'import', 'pass',
'break', 'except', 'in', 'raise' ]
# Scan a text file for Python keywords
def main():
fileName = input('Enter the name of a python file: ')
# open the file
pyFile = open( fileName )
# get a list of strings that hold the file contents
# we call the pyFile object function 'readlines()'
pyFileLineList = pyFile.readlines()
# print some info about the file
print("The file", fileName, "has", len(pyFileLineList), "lines.")
# print the first 3 lines
print("The first three lines are:")
for line in pyFileLineList[0:3] :
print(line)
# lets see if we can find any keywords in the file
for line in pyFileLineList :
for keyword in keywordList:
if line.find( keyword + ' ' ) > -1 :
print("Found keyword:", keyword)
# This tells Python to run the function called main()
if __name__ == "__main__":
main()
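
# Note: the find(keyword + ' ') check above misses a keyword at the very end of a line
# and also matches keywords inside strings or comments. A slightly more robust (still
# illustrative) variant uses a word-boundary regex:
#
#   import re
#   if re.search(r'\b' + keyword + r'\b', line):
#       print("Found keyword:", keyword)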
| SoftwareLiteracyFoundation/Python-Programs | program_7_Find_Keywords.py | Python | lgpl-3.0 | 1,350 |
from taskplus.core.shared.action import Action
from taskplus.core.shared.response import ResponseSuccess
from collections import Mapping
from taskplus.core.shared.request import Request
class ListTaskStatusesAction(Action):
def __init__(self, repo):
super().__init__()
self.statuses_repo = repo
def process_request(self, request):
self._call_before_execution_hooks(dict(request=request, statuses=None))
statuses = self.statuses_repo.list(filters=request.filters)
self._call_after_execution_hooks(dict(request=request, statuses=statuses))
return ResponseSuccess(statuses)
class ListTaskStatusesRequest(Request):
def __init__(self, filters=None):
super().__init__()
self.filters = filters
if not filters:
self.filters = None
self._validate()
def _validate(self):
self.errors = []
if self.filters is None:
return
if not isinstance(self.filters, Mapping):
parameter = 'filters'
message = 'is not iterable'
self._add_error(parameter, message)
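
# Illustrative usage (statuses_repo is a stand-in for whatever statuses repository the
# application wires in; process_request returns a ResponseSuccess wrapping the list):
#
#   request = ListTaskStatusesRequest(filters={'name': 'open'})
#   action = ListTaskStatusesAction(repo=statuses_repo)
#   response = action.process_request(request)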
| Himon-SYNCRAFT/taskplus | taskplus/core/actions/list_task_statuses.py | Python | bsd-3-clause | 1,131 |
from setuptools import setup, Extension
skein_hash_module = Extension('skein_hash',
sources = ['skeinmodule.c',
'skein.c',
'../../sph/skein.c',
'../../sph/sha2.c'],
include_dirs=['.', '../../sph'],
extra_compile_args=['-O1'])
setup (name = 'skein_hash',
version = '1.0',
description = 'Bindings for skein proof of work modified for Myriadcoin Skein',
test_suite = 'test',
ext_modules = [skein_hash_module])
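
# Typical workflow for building this extension (illustrative; assumes a C compiler and
# the Python development headers are available):
#
#   python setup.py build_ext --inplace   # compile the skein_hash module in place
#   python setup.py test                  # run the declared 'test' suite
#
# The compiled module is then importable as `import skein_hash`; the actual hashing
# entry point is defined in skeinmodule.c (coin-hash bindings like this one
# conventionally expose a getPoWHash(block_header) function, but check the C source).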
| bitbandi/all-hash-python | algorithm/skein-hash/setup.py | Python | mit | 656 |
# Copyright 2021 Binovo IT Human Project SL
# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).
{
"name": "TicketBAI (API) - Batuz - "
"declaración de todas las operaciones de venta realizadas por las personas "
" y entidades que desarrollan actividades económicas en Bizkaia",
"version": "11.0.2.0.1",
"category": "Accounting & Finance",
"website": "http://www.binovo.es",
"author": "Binovo,"
"Odoo Community Association (OCA)",
"license": "AGPL-3",
"application": False,
"installable": True,
"auto_install": False,
"development_status": "Alpha",
"maintainers": [
'Binovo'
],
"depends": [
"l10n_es_ticketbai_api",
],
"external_dependencies": {
"python": [
"xmltodict",
"requests_pkcs12"
],
},
"data": [
"security/ir.model.access.csv",
"data/tax_agency_data.xml",
"data/lroe_chapter_data.xml",
"views/res_company_views.xml",
"views/lroe_operation_views.xml"
],
}
| factorlibre/l10n-spain | l10n_es_ticketbai_api_batuz/__manifest__.py | Python | agpl-3.0 | 1,085 |
#
# This FLIP example combines narrow band flip, 2nd order wall boundary conditions, and
# adaptive time stepping.
#
from manta import *
dim = 3
res = 64
#res = 124
gs = vec3(res,res,res)
if (dim==2):
gs.z=1
s = Solver(name='main', gridSize = gs, dim=dim)
narrowBand = 3
minParticles = pow(2,dim)
saveParts = False
frames = 200
# Adaptive time stepping
s.frameLength = 0.8 # length of one frame (in "world time")
s.cfl = 3.0 # maximal velocity per cell and timestep, 3 is fairly strict
s.timestep = s.frameLength
s.timestepMin = s.frameLength / 4. # time step range
s.timestepMax = s.frameLength * 4.
# prepare grids and particles
flags = s.create(FlagGrid)
phi = s.create(LevelsetGrid)
phiParts = s.create(LevelsetGrid)
phiObs = s.create(LevelsetGrid)
vel = s.create(MACGrid)
velOld = s.create(MACGrid)
velParts = s.create(MACGrid)
#mapWeights= s.create(MACGrid)
pressure = s.create(RealGrid)
fractions = s.create(MACGrid)
tmpVec3 = s.create(VecGrid)
pp = s.create(BasicParticleSystem)
pVel = pp.create(PdataVec3)
mesh = s.create(Mesh)
# acceleration data for particle nbs
pindex = s.create(ParticleIndexSystem)
gpi = s.create(IntGrid)
# scene setup
bWidth=1
flags.initDomain(boundaryWidth=bWidth, phiWalls=phiObs )
fluidVel = 0
fluidSetVel = 0
phi.setConst(999.)
# standing dam
fluidbox1 = Box( parent=s, p0=gs*vec3(0,0,0), p1=gs*vec3(1.0,0.3,1))
phi.join( fluidbox1.computeLevelset() )
fluidbox2 = Box( parent=s, p0=gs*vec3(0.1,0,0), p1=gs*vec3(0.2,0.75,1))
phi.join( fluidbox2.computeLevelset() )
if 1:
sphere = Sphere( parent=s , center=gs*vec3(0.66,0.3,0.5), radius=res*0.2)
phiObs.join( sphere.computeLevelset() )
#obsbox = Box( parent=s, p0=gs*vec3(0.4,0.2,0), p1=gs*vec3(0.7,0.4,1))
#obsbox = Box( parent=s, p0=gs*vec3(0.3,0.2,0), p1=gs*vec3(0.7,0.6,1))
#phiObs.join( obsbox.computeLevelset() )
flags.updateFromLevelset(phi)
phi.subtract( phiObs );
sampleLevelsetWithParticles( phi=phi, flags=flags, parts=pp, discretization=2, randomness=0.05 )
if fluidVel!=0:
# set initial velocity
fluidVel.applyToGrid( grid=vel , value=fluidSetVel )
mapGridToPartsVec3(source=vel, parts=pp, target=pVel )
# also sets boundary flags for phiObs
updateFractions( flags=flags, phiObs=phiObs, fractions=fractions, boundaryWidth=bWidth )
setObstacleFlags(flags=flags, phiObs=phiObs, fractions=fractions)
lastFrame = -1
if 1 and (GUI):
gui = Gui()
gui.show()
#gui.pause()
# save any reference grid, to automatically determine the grid size
if saveParts:
pressure.save( 'ref_flipParts_0000.uni' );
#main loop
while s.frame < frames:
maxVel = vel.getMax()
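	# adaptTimestep scales s.timestep so that maxVel respects the CFL limit set above, clamped to [timestepMin, timestepMax]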
s.adaptTimestep( maxVel )
mantaMsg('\nFrame %i, time-step size %f' % (s.frame, s.timestep))
# FLIP
pp.advectInGrid(flags=flags, vel=vel, integrationMode=IntRK4, deleteInObstacle=False, stopInObstacle=False )
pushOutofObs( parts=pp, flags=flags, phiObs=phiObs )
advectSemiLagrange(flags=flags, vel=vel, grid=phi, order=1) # first order is usually enough
advectSemiLagrange(flags=flags, vel=vel, grid=vel, order=2)
# create level set of particles
gridParticleIndex( parts=pp , flags=flags, indexSys=pindex, index=gpi )
unionParticleLevelset( pp, pindex, flags, gpi, phiParts )
# combine level set of particles with grid level set
phi.addConst(1.); # shrink slightly
phi.join( phiParts );
extrapolateLsSimple(phi=phi, distance=narrowBand+2, inside=True )
extrapolateLsSimple(phi=phi, distance=3 )
phi.setBoundNeumann(1) # make sure no particles are placed at outer boundary
flags.updateFromLevelset(phi)
# combine particles velocities with advected grid velocities
mapPartsToMAC(vel=velParts, flags=flags, velOld=velOld, parts=pp, partVel=pVel, weight=tmpVec3)
extrapolateMACFromWeight( vel=velParts , distance=2, weight=tmpVec3 )
combineGridVel(vel=velParts, weight=tmpVec3 , combineVel=vel, phi=phi, narrowBand=(narrowBand-1), thresh=0)
velOld.copyFrom(vel)
# forces & pressure solve
addGravity(flags=flags, vel=vel, gravity=(0,-0.001,0))
extrapolateMACSimple( flags=flags, vel=vel , distance=2, intoObs=True )
setWallBcs(flags=flags, vel=vel, fractions=fractions, phiObs=phiObs)
solvePressure(flags=flags, vel=vel, pressure=pressure, phi=phi, fractions=fractions )
extrapolateMACSimple( flags=flags, vel=vel , distance=4, intoObs=True )
setWallBcs(flags=flags, vel=vel, fractions=fractions, phiObs=phiObs)
if (dim==3):
# mis-use phiParts as temp grid to close the mesh
phiParts.copyFrom(phi)
phiParts.setBound(0.5,0)
phiParts.createMesh(mesh)
# set source grids for resampling, used in adjustNumber!
pVel.setSource( vel, isMAC=True )
adjustNumber( parts=pp, vel=vel, flags=flags, minParticles=1*minParticles, maxParticles=2*minParticles, phi=phi, exclude=phiObs, narrowBand=narrowBand )
flipVelocityUpdate(vel=vel, velOld=velOld, flags=flags, parts=pp, partVel=pVel, flipRatio=0.97 )
s.step()
if (lastFrame!=s.frame):
# generate data for flip03_gen.py surface generation scene
if saveParts:
pp.save( 'flipParts_%04d.uni' % s.frame );
if 0 and (GUI):
gui.screenshot( 'flip06_%04d.png' % s.frame );
#s.printMemInfo()
lastFrame = s.frame;
| CoderDuan/mantaflow | scenes/flip06_obstacle.py | Python | gpl-3.0 | 5,213 |
#
# pyquaero - a Python library for Aquaero fan controllers
#
# Copyright (C) 2014 Richard "Shred" Körber
# https://github.com/shred/pyquaero
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
__author__ = 'Richard "Shred" Körber'
import usb.core
import usb.util
VENDOR_ID = 0x0c70
PRODUCT_ID = 0xf001
class AquaDevice(object):
"""Aquaero USB device object.
Connect to the Aquaero via USB and offer a set of low level access methods that
are firmware independent.
"""
def __init__(self, dev):
"""Initialize the AquaDevice object."""
self.dev = dev
self.interface = [self.dev[0][(x, 0)] for x in range(3)]
# claim the interfaces if held by the kernel
for intf in self.interface:
if dev.is_kernel_driver_active(intf.bInterfaceNumber):
self.dev.detach_kernel_driver(intf.bInterfaceNumber)
usb.util.claim_interface(self.dev, intf)
def close(self):
"""Close the AquaDevice object after usage.
Must be invoked to properly release the USB device!
"""
for intf in self.interface:
usb.util.release_interface(self.dev, intf)
self.dev.attach_kernel_driver(intf.bInterfaceNumber)
def send_report(self, reportId, data, wIndex=2):
"""Send a USBHID OUT report request to the AquaDevice."""
self.dev.ctrl_transfer(bmRequestType=0x21, bRequest=0x09,
wValue=(0x0200 | reportId), wIndex=wIndex,
data_or_wLength=data)
def receive_report(self, reportId, length, wIndex=2):
"""Send a USBHID IN report request to the AquaDevice and receive the answer."""
return self.dev.ctrl_transfer(bmRequestType=0xa1, bRequest=0x01,
wValue=(0x0300 | reportId), wIndex=wIndex,
data_or_wLength=length)
def write_endpoint(self, data, endpoint):
"""Send a data package to the given endpoint."""
ep = self.interface[endpoint - 1][0]
ep.write(data)
def read_endpoint(self, length, endpoint):
"""Reads a number of data from the given endpoint."""
ep = self.interface[endpoint - 1][0]
return ep.read(length)
def count_devices():
"""Count the number of Aquaero devices found."""
devices = list(usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID, find_all=True))
return len(devices)
def get_device(unit=0):
"""Return an AquaDevice instance for the given Aquaero device unit found."""
devices = list(usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID, find_all=True))
if unit >= len(devices):
raise IndexError('No Aquaero unit %d found' % unit)
return AquaDevice(devices[unit])
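# Hedged usage sketch (not part of the original module); report id 0x01 and the
# 512-byte length are illustrative assumptions, not documented values.
if __name__ == '__main__':
    device = get_device(0)
    try:
        report = device.receive_report(0x01, 512)
        print('%d bytes received' % len(report))
    finally:
        device.close()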
| shred/pyquaero | pyquaero/usb.py | Python | lgpl-3.0 | 3,381 |
# coding: utf-8
from setuptools import setup
setup(
name="declarative-fsm",
version="",
py_modules=["fsm"],
author="Thomas Quintana",
author_email="[email protected]",
description='',
license="PROPRIETARY",
url="",
)
| thomasquintana/declarative-fsm | setup.py | Python | apache-2.0 | 259 |
from setuptools import setup
from setuptools import find_packages
setup(name='mazeexp',
version='0.0.10',
author='Mr-Yellow',
author_email='[email protected]',
description='A maze exploration game engine',
packages=find_packages(),
url='https://github.com/mryellow/maze_explorer',
license='MIT',
install_requires=['cocos2d', 'pyglet'],
include_package_data=True,
keywords='maze, game, maze-explorer, openaigym, openai-gym',
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 3 - Alpha',
# Indicate who your project is intended for
'Intended Audience :: Science/Research',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
],
)
#package_dir={'gym_mazeexplorer' : 'gym_mazeexplorer/envs'},
| mryellow/maze_explorer | setup.py | Python | mit | 1,331 |
import os
from datetime import datetime
from django.forms.models import model_to_dict
from django.db.models.fields import DateTimeField
from django.db.models.fields.related import ManyToManyField, ForeignKey
from django.contrib.contenttypes.fields import GenericRelation
from celery.task import Task
from tendenci.apps.perms.models import TendenciBaseModel
from tendenci.apps.exports.utils import render_csv
class InvoiceExportTask(Task):
"""Export Task for Celery
This exports the entire queryset of a given TendenciBaseModel.
"""
def run(self, model, start_dt, end_dt, file_name, **kwargs):
"""Create the xls file"""
fields = (
'id',
'guid',
'object_type',
'object_id',
'title',
'tender_date',
'bill_to',
'bill_to_first_name',
'bill_to_last_name',
'bill_to_company',
'bill_to_address',
'bill_to_city',
'bill_to_state',
'bill_to_zip_code',
'bill_to_country',
'bill_to_phone',
'bill_to_fax',
'bill_to_email',
'ship_to',
'ship_to_first_name',
'ship_to_last_name',
'ship_to_company',
'ship_to_address',
'ship_to_city',
'ship_to_state',
'ship_to_zip_code',
'ship_to_country',
'ship_to_phone',
'ship_to_fax',
'ship_to_email',
'ship_to_address_type',
'receipt',
'gift',
'arrival_date_time',
'greeting',
'instructions',
'po',
'terms',
'due_date',
'ship_date',
'ship_via',
'fob',
'project',
'other',
'message',
'subtotal',
'shipping',
'shipping_surcharge',
'box_and_packing',
'tax_exempt',
'tax_exemptid',
'tax_rate',
'taxable',
'tax',
'variance',
'discount_amount',
'total',
'payments_credits',
'balance',
'disclaimer',
'variance_notes',
'admin_notes',
'create_dt',
'update_dt',
'creator',
'creator_username',
'owner',
'owner_username',
'status_detail',
'status',
)
start_dt_date = datetime.strptime(start_dt, "%Y-%m-%d")
end_dt_date = datetime.strptime(end_dt, "%Y-%m-%d")
items = model.objects.filter(status=True, tender_date__gte=start_dt_date, tender_date__lte=end_dt_date)
data_row_list = []
for item in items:
# get the available fields from the model's meta
opts = item._meta
d = {}
for f in opts.fields + opts.many_to_many:
if f.name in fields: # include specified fields only
                    if isinstance(f, ManyToManyField):
                        value = ["%s" % obj for obj in f.value_from_object(item)]
                    elif isinstance(f, ForeignKey):
                        value = getattr(item, f.name)
                    elif isinstance(f, GenericRelation):
                        generics = f.value_from_object(item).all()
                        value = ["%s" % obj for obj in generics]
                    elif isinstance(f, DateTimeField):
                        value = ''
                        if f.value_from_object(item):
                            value = f.value_from_object(item).strftime("%Y-%m-%d %H:%M")
                    else:
                        value = f.value_from_object(item)
d[f.name] = value
# append the accumulated values as a data row
# keep in mind the ordering of the fields
data_row = []
for field in fields:
# clean the derived values into unicode
value = unicode(d[field]).rstrip()
data_row.append(value)
data_row_list.append(data_row)
return render_csv(file_name, fields, data_row_list)
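# Hedged usage sketch (not part of the original module); the Invoice import
# path, the date strings and the file name are illustrative assumptions:
# from tendenci.apps.invoices.models import Invoice
# InvoiceExportTask().delay(Invoice, '2020-01-01', '2020-12-31', 'invoice_export.csv')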
| alirizakeles/tendenci | tendenci/apps/invoices/tasks.py | Python | gpl-3.0 | 4,238 |
#
# (c) 2018 Extreme Networks Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from mock import MagicMock
from units.compat import unittest
from ansible.plugins.terminal import slxos
from ansible.errors import AnsibleConnectionFailure
class TestPluginTerminalSLXOS(unittest.TestCase):
""" Test class for SLX-OS Terminal Module
"""
def setUp(self):
self._mock_connection = MagicMock()
self._terminal = slxos.TerminalModule(self._mock_connection)
def test_on_open_shell(self):
""" Test on_open_shell
"""
self._mock_connection.exec_command.side_effect = [
b'Looking out my window I see a brick building, and people. Cool.',
]
self._terminal.on_open_shell()
self._mock_connection.exec_command.assert_called_with(u'terminal length 0')
def test_on_open_shell_error(self):
""" Test on_open_shell with error
"""
self._mock_connection.exec_command.side_effect = [
AnsibleConnectionFailure
]
with self.assertRaises(AnsibleConnectionFailure):
self._terminal.on_open_shell()
| roadmapper/ansible | test/units/plugins/terminal/test_slxos.py | Python | gpl-3.0 | 1,830 |
from .base import ProgressBar, ProgressBarCounter
from .formatters import (
Bar,
Formatter,
IterationsPerSecond,
Label,
Percentage,
Progress,
Rainbow,
SpinningWheel,
Text,
TimeElapsed,
TimeLeft,
)
__all__ = [
"ProgressBar",
"ProgressBarCounter",
# Formatters.
"Formatter",
"Text",
"Label",
"Percentage",
"Bar",
"Progress",
"TimeElapsed",
"TimeLeft",
"IterationsPerSecond",
"SpinningWheel",
"Rainbow",
]
| jonathanslenders/python-prompt-toolkit | prompt_toolkit/shortcuts/progress_bar/__init__.py | Python | bsd-3-clause | 504 |
# encoding: utf8
from GhastlyLion import Ui_GhastlyLion | Ingener74/White-Albatross | GhastlyLion/__init__.py | Python | lgpl-3.0 | 56 |
# vim: expandtab ts=4 sw=4 sts=4 fileencoding=utf-8:
#
# Copyright (C) 2007-2010 GNS3 Development Team (http://www.gns3.net/team).
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
# published by the Free Software Foundation;
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
# http://www.gns3.net/contact
#
import GNS3.Globals as globals
import GNS3.Dynagen.dynamips_lib as lib
from PyQt4 import QtCore, QtGui
from GNS3.Utils import translate, debug
from GNS3.Node.IOSRouter import IOSRouter
from GNS3.Ui.Form_IDLEPCDialog import Ui_IDLEPCDialog
class IDLEPCDialog(QtGui.QDialog, Ui_IDLEPCDialog):
""" IDLEPCDialog class
"""
def __init__(self, router, idles, options):
QtGui.QDialog.__init__(self)
self.setupUi(self)
self.idles = idles
self.router = router
self.comboBox.addItems(options)
def apply(self, message=False):
""" Apply the IDLE PC to the router
"""
try:
selection = str(self.comboBox.currentText()).split(':')[0].strip('* ')
index = int(selection)
for node in globals.GApp.topology.nodes.values():
if isinstance(node, IOSRouter) and node.hostname == self.router.hostname:
dyn_router = node.get_dynagen_device()
if globals.GApp.systconf['dynamips'].HypervisorManager_binding == '0.0.0.0':
host = '0.0.0.0'
else:
host = dyn_router.dynamips.host
if globals.GApp.iosimages.has_key(host + ':' + dyn_router.image):
image = globals.GApp.iosimages[host + ':' + dyn_router.image]
debug("Register IDLE PC " + self.idles[index] + " for image " + image.filename)
image.idlepc = self.idles[index]
# Apply idle pc to devices with the same IOS image
for device in globals.GApp.topology.nodes.values():
if isinstance(device, IOSRouter) and device.config['image'] == image.filename:
debug("Apply IDLE PC " + self.idles[index] + " to " + device.hostname)
device.get_dynagen_device().idlepc = self.idles[index]
config = device.get_config()
config['idlepc'] = self.idles[index]
device.set_config(config)
device.setCustomToolTip()
break
if message:
QtGui.QMessageBox.information(self, translate("IDLEPCDialog", "IDLE PC"),
translate("IDLEPCDialog", "IDLE PC value %s has been applied on %s") % (self.idles[index], self.router.hostname))
except lib.DynamipsError, msg:
QtGui.QMessageBox.critical(self, translate("IDLEPCDialog", "Dynamips error"), unicode(msg))
return
def on_buttonBox_clicked(self, button):
""" Private slot called by a button of the button box clicked.
button: button that was clicked (QAbstractButton)
"""
if button == self.buttonBox.button(QtGui.QDialogButtonBox.Cancel):
QtGui.QDialog.reject(self)
elif button == self.buttonBox.button(QtGui.QDialogButtonBox.Apply):
self.apply()
elif button == self.buttonBox.button(QtGui.QDialogButtonBox.Help):
help_text = translate("IDLEPCDialog", "Finding the right idlepc value is a trial and error process, consisting of applying different idlepc values and monitoring the CPU usage.\n\nBest idlepc values are usually obtained when IOS is in idle state, the following message being displayed on the console: %s con0 is now available ... Press RETURN to get started.") % self.router.hostname
QtGui.QMessageBox.information(self, translate("IDLEPCDialog", "Hints for IDLE PC"), help_text)
else:
self.apply(message=True)
QtGui.QDialog.accept(self)
| dlintott/gns3 | src/GNS3/IDLEPCDialog.py | Python | gpl-2.0 | 4,564 |
import factory
from mozillians.api.models import APIApp
from mozillians.users.tests import UserFactory
class APIAppFactory(factory.DjangoModelFactory):
FACTORY_FOR = APIApp
name = factory.Sequence(lambda n: 'App {0}'.format(n))
description = factory.Sequence(lambda n: 'Description for App {0}'.format(n))
owner = factory.SubFactory(UserFactory)
is_active = True
| glogiotatidis/mozillians-new | mozillians/api/tests/__init__.py | Python | bsd-3-clause | 386 |
""" Classifier mixin offering functionality including:
- grid search
- named labels
- validation sets
- accuracy scores
- cross validation
"""
import numpy as np
import itertools
from random import shuffle
class ClassifierMixin:
""" Mixin for classifiers adding grid search functionality, as well
as named labels. The class should implement the following:
- train(samples, int_labels)
- predict(samples) -> int_labels
- verbose -> boolean
- a constructor which takes the arguments to perform grid search
on, and fully resets the classifier (ie. one can call __init__
multiple times without corrupting the state).
"""
def predict_named(self, samples):
int_labels = self.predict(samples)
return map(
lambda i: self.int_to_label[i],
            int_labels.tolist()
)
def name_to_int(self, labels):
""" Converts a collection of string labels to integer labels, storing
            the correspondence in the self.int_to_label list.
"""
self.int_to_label = list(set(labels))
self.label_to_int = {}
for i in range(len(self.int_to_label)):
self.label_to_int[self.int_to_label[i]] = i
int_labels = np.array(
map(lambda l: self.label_to_int[l], labels),
dtype=np.int32
)
return int_labels
def name_to_int_test(self, labels):
return np.array(
map(lambda l: self.label_to_int[l], labels),
dtype=np.int32
)
def train_named(self, samples, labels):
self.train(samples, self.name_to_int(labels))
def train_gs_named(self, samples, labels, k, **args):
""" Trains a classifier with grid search using named labels.
"""
return self.train_gs(samples, self.name_to_int(labels), k, **args)
def train_validation(self, samples, labels, valid_size=0.2):
""" Trains the classifier, picking a random validation set out of the training
data.
Arguments:
samples
full training set of samples.
labels
labels for the training samples.
valid_size
fraction of the samples of each class in the dataset to pick. This is
not simply picking a random subset of the samples, as we still would
like each class to be represented equally - and by at least one sample.
"""
nb_samples = len(samples)
nb_classes = np.unique(labels).size
assert nb_samples >= 2 * nb_classes
# Group the samples per class.
samples_per_class = []
for i in range(nb_classes):
samples_per_class.append([])
for i in range(nb_samples):
samples_per_class[labels[i]].append(samples[i])
# For each class, split into training and validation sets.
train_samples = []
train_labels = []
valid_samples = []
valid_labels = []
for i in range(nb_classes):
# We need at least one validation sample and one training sample.
nb_samples_class = len(samples_per_class[i])
assert nb_samples_class >= 2
nb_valid = min(
nb_samples_class - 1,
max(1,
int(round(valid_size * nb_samples_class))
)
)
nb_train = nb_samples_class - nb_valid
# Pick the sets randomly.
shflidxs = np.random.permutation(nb_samples_class)
j = 0
for k in range(nb_valid):
valid_samples.append(samples_per_class[i][shflidxs[j]])
valid_labels.append(i)
j += 1
for k in range(nb_train):
train_samples.append(samples_per_class[i][shflidxs[j]])
train_labels.append(i)
j += 1
# Run the actual training.
self.train(train_samples, np.array(train_labels, np.int32),
valid_samples, np.array(valid_labels, np.int32))
def train_validation_named(self, samples, labels, valid_size=0.2):
self.train_validation(samples, self.name_to_int(labels), valid_size)
def top_accuracy(self, samples, labels):
""" Computes top-1 to to top-k accuracy of the classifier on test data,
assuming it already has been trained, where k is the total number
of classes.
"""
probas = self.predict_proba(samples)
sorted_classes = np.fliplr(np.argsort(probas, axis=1))
nb_classes = probas.shape[1]
nb_samples = len(samples)
nb_correct_top = np.zeros([nb_classes], np.int32)
for i in range(nb_samples):
for j in range(nb_classes):
if labels[i] == sorted_classes[i,j]:
nb_correct_top[j:] += 1
break
return nb_correct_top.astype(np.float64) / nb_samples
def top_accuracy_named(self, samples, labels):
return self.top_accuracy(samples, self.name_to_int_test(labels))
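# Hedged usage sketch (not part of the original module): a minimal classifier
# fulfilling the contract described in the ClassifierMixin docstring; the
# majority-vote "model" is purely illustrative.
class MajorityClassifier(ClassifierMixin):
    def __init__(self, verbose=False):
        self.verbose = verbose
        self.majority = 0

    def train(self, samples, int_labels, *validation):
        # *validation absorbs the optional validation set that
        # train_validation() passes along.
        self.majority = int(np.argmax(np.bincount(int_labels)))

    def predict(self, samples):
        return np.full(len(samples), self.majority, dtype=np.int32)

# Example (assumed data):
# clf = MajorityClassifier()
# clf.train_named(train_samples, ['cat', 'dog', 'cat'])
# predictions = clf.predict_named(test_samples)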
| alexisVallet/dpm-identification | classifier.py | Python | gpl-2.0 | 5,173 |
from __future__ import absolute_import
import rules
from django.contrib.auth.models import User
from django.core.exceptions import ImproperlyConfigured
from django.shortcuts import reverse
from rest_framework.test import APITestCase
from testapp.models import Book
from testapp import views
class PermissionRequiredMixedAPIViewTests(APITestCase):
"""Tests the behavior of the mixin when used on an APIView
"""
def test_user_with_permission_gets_access(self):
user = User.objects.get(username='anton')
permissions = views.SinglePermissionView().get_permission_required()
self.assertTrue(all([user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='anton', password='secr3t'))
response = self.client.get(reverse('single_permission_view'))
self.assertEqual(200, response.status_code)
def test_user_without_permission_gets_no_access(self):
user = User.objects.get(username='beatrix')
permissions = views.SinglePermissionView().get_permission_required()
self.assertTrue(any([not user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='beatrix', password='secr3t'))
response = self.client.get(reverse('single_permission_view'))
self.assertEqual(403, response.status_code)
def test_user_with_permissions_gets_access(self):
user = User.objects.get(username='anton')
permissions = views.MultiplePermissionsView().get_permission_required()
self.assertTrue(all([user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='anton', password='secr3t'))
response = self.client.get(reverse('multiple_permissions_view'))
self.assertEqual(200, response.status_code)
def test_user_with_partial_permissions_gets_no_access(self):
user = User.objects.get(username='beatrix')
permissions = views.MultiplePermissionsView().get_permission_required()
self.assertTrue(any([not user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='beatrix', password='secr3t'))
response = self.client.get(reverse('multiple_permissions_view'))
self.assertEqual(403, response.status_code)
def test_user_without_permissions_gets_no_access(self):
user = User.objects.get(username='carlos')
permissions = views.MultiplePermissionsView().get_permission_required()
self.assertTrue(all([not user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='carlos', password='secr3t'))
response = self.client.get(reverse('multiple_permissions_view'))
self.assertEqual(403, response.status_code)
def test_improperly_configured_api_view_raises(self):
with self.assertRaises(ImproperlyConfigured):
response = self.client.get(reverse('improperly_configured_api_view'))
class PermissionRequiredMixedGenericAPIViewTests(APITestCase):
"""Tests the behavior of the mixin when used on a GenericAPIView
"""
def test_object_permission_falls_back_to_required_permissions(self):
view = views.GenericViewWithoutObjectPermissions()
self.assertEquals(None, view.object_permission_required)
self.assertEquals(view.get_permission_required(),
view.get_object_permission_required())
def test_user_with_object_permission_gets_access_to_object(self):
user = User.objects.get(username='anton')
permissions = views.SinglePermissionGenericView().get_object_permission_required()
self.assertTrue(all([user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='anton', password='secr3t'))
response = self.client.post(reverse('single_permission_generic_view', args=(1,)))
self.assertEqual(200, response.status_code)
def test_user_with_object_permissions_gets_access_to_object(self):
user = User.objects.get(username='anton')
permissions = views.MultiplePermissionsGenericView().get_object_permission_required()
self.assertTrue(all([user.has_perm(perm) for perm in permissions]))
self.assertTrue(self.client.login(username='anton', password='secr3t'))
response = self.client.post(reverse('multiple_permissions_generic_view', args=(1,)))
self.assertEqual(200, response.status_code)
def test_user_with_partial_object_permissions_gets_no_access_to_object(self):
user = User.objects.get(username='beatrix')
view = views.MultiplePermissionsGenericView()
permissions = view.get_object_permission_required()
obj = view.queryset.get(pk=1)
self.assertTrue(any([not user.has_perm(perm, obj) for perm in permissions]))
self.assertTrue(self.client.login(username='beatrix', password='secr3t'))
response = self.client.post(reverse('multiple_permissions_generic_view', args=(1,)))
self.assertEqual(403, response.status_code)
def test_user_without_object_permission_gets_no_access_to_object(self):
user = User.objects.get(username='carlos')
view = views.MultiplePermissionsGenericView()
permissions = view.get_object_permission_required()
obj = view.queryset.get(pk=1)
self.assertTrue(all([not user.has_perm(perm, obj) for perm in permissions]))
self.assertTrue(self.client.login(username='carlos', password='secr3t'))
response = self.client.post(reverse('multiple_permissions_generic_view', args=(1,)))
self.assertEqual(403, response.status_code)
| escodebar/django-rest-framework-rules | tests/testsuite/test_views/test_mixin.py | Python | mit | 5,668 |
"""
Galaxy web controllers.
"""
| mikel-egana-aranguren/SADI-Galaxy-Docker | galaxy-dist/lib/galaxy/webapps/galaxy/controllers/__init__.py | Python | gpl-3.0 | 32 |
import logging
from flexmock import flexmock
from should_dsl import should, matcher
from should_dsl.matchers import equal_to
@matcher
class LogMatcher(object):
# pylint: disable=R0903
"""A should-dsl matcher that checks whether a log
entry is being made at a certain log level.
Additional Dependency:
flexmock: <http://pypi.python.org/pypi/flexmock>
Example:
import unittest
logging.basicConfig()
logger = logging.getLogger('some.logger')
def log_something():
logger.error('A log message')
class LogTest(unittest.TestCase):
def test_log_something(self):
logger | should | log('CRITICAL', 'A log message')
log_something() # Will throw an exception because the log message
# has level 'ERROR' and not 'CRITICAL' as expected.
suite = unittest.TestLoader().loadTestsFromTestCase(LogTest)
unittest.TextTestRunner(verbosity=2).run(suite)
"""
name = 'log'
def __init__(self):
self._expected_level = ''
self._expected_message = ''
def __call__(self, expected_level, expected_message):
self._expected_level = expected_level
self._expected_message = expected_message
return self
def match(self, logger):
"""Implement should_dsl's match() method."""
handler = flexmock(logging.Handler())
def emit(record):
"""Hooks into the log handler."""
try:
# pylint: disable=E1121,W0106
record.levelname | should | equal_to(self._expected_level)
record.msg | should | equal_to(self._expected_message)
finally:
logger.removeHandler(handler)
handler.should_receive('emit').replace_with(emit).once()
logger.addHandler(handler)
# Always return true and let flexmock take care of
# actual matching.
return True
if __name__ == '__main__':
import unittest
logging.basicConfig()
logger = logging.getLogger('some.logger')
def log_something():
logger.error('A log message')
class LogTest(unittest.TestCase):
def test_log_something(self):
logger | should | log('CRITICAL', 'A log message')
log_something() # Will throw an exception because the log message
# has level 'ERROR' and not 'CRITICAL' as expected.
suite = unittest.TestLoader().loadTestsFromTestCase(LogTest)
unittest.TextTestRunner(verbosity=2).run(suite)
| rodrigomanhaes/matcher_crowd | matcher_crowd/matcher_logger.py | Python | mit | 2,609 |
from __future__ import absolute_import, division, print_function
import sys
import itertools
import os
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
ON_TRAVIS_CI = 'TRAVIS_PYTHON_VERSION' in os.environ
if sys.version_info[0] >= 3:
unicode = str
map = map
range = range
else:
unicode = unicode
map = itertools.imap
range = xrange
def skipif(cond, **kwargs):
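    # Note: when `cond` is true the decorated test is replaced by None (it is
    # dropped from the module) rather than reported as skipped.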
def _(func):
if cond:
return None
else:
return func
return _
try:
from urlparse import urlparse
except ImportError:
from urllib.parse import urlparse
try:
from urllib2 import urlopen
except ImportError:
from urllib.request import urlopen
| cowlicks/odo | odo/compatibility.py | Python | bsd-3-clause | 706 |
#!/usr/bin/env python3
# Copyright (c) 2015-2017 The Bitcoin Core developers
# Copyright (c) 2019 The Bitcoin developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test block processing."""
import copy
import struct
import time
from data import invalid_txs
from test_framework.blocktools import (
create_block,
create_coinbase,
create_tx_with_script,
make_conform_to_ctor,
)
from test_framework.cdefs import LEGACY_MAX_BLOCK_SIZE
from test_framework.key import ECKey
from test_framework.messages import (
COIN,
CBlock,
COutPoint,
CTransaction,
CTxIn,
CTxOut,
uint256_from_compact,
uint256_from_str,
)
from test_framework.p2p import P2PDataStore
from test_framework.script import (
OP_ELSE,
OP_ENDIF,
OP_FALSE,
OP_IF,
OP_INVALIDOPCODE,
OP_RETURN,
OP_TRUE,
SIGHASH_ALL,
SIGHASH_FORKID,
CScript,
SignatureHashForkId,
)
from test_framework.test_framework import BitcoinTestFramework
from test_framework.txtools import pad_tx
from test_framework.util import assert_equal
# Use this class for tests that require behavior other than normal p2p behavior.
# For now, it is used to serialize a bloated varint (b64).
class CBrokenBlock(CBlock):
def initialize(self, base_block):
self.vtx = copy.deepcopy(base_block.vtx)
self.hashMerkleRoot = self.calc_merkle_root()
def serialize(self):
r = b""
r += super(CBlock, self).serialize()
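        # 255 (0xff) followed by an 8-byte little-endian count is a legal but
        # non-canonical CompactSize encoding of the transaction count, which
        # bloats the serialized block.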
r += struct.pack("<BQ", 255, len(self.vtx))
for tx in self.vtx:
r += tx.serialize()
return r
def normal_serialize(self):
return super().serialize()
# Valid for block at height 120
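# (0x01 pushes a single byte, 0x78 = 120, matching the block height)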
DUPLICATE_COINBASE_SCRIPT_SIG = b'\x01\x78'
class FullBlockTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.setup_clean_chain = True
# This is a consensus block test, we don't care about tx policy
self.extra_args = [['-noparkdeepreorg',
'-maxreorgdepth=-1', '-acceptnonstdtxn=1']]
def run_test(self):
node = self.nodes[0] # convenience reference to the node
self.bootstrap_p2p() # Add one p2p connection to the node
self.block_heights = {}
self.coinbase_key = ECKey()
self.coinbase_key.generate()
self.coinbase_pubkey = self.coinbase_key.get_pubkey().get_bytes()
self.tip = None
self.blocks = {}
self.genesis_hash = int(self.nodes[0].getbestblockhash(), 16)
self.block_heights[self.genesis_hash] = 0
self.spendable_outputs = []
# Create a new block
b_dup_cb = self.next_block('dup_cb')
b_dup_cb.vtx[0].vin[0].scriptSig = DUPLICATE_COINBASE_SCRIPT_SIG
b_dup_cb.vtx[0].rehash()
duplicate_tx = b_dup_cb.vtx[0]
b_dup_cb = self.update_block('dup_cb', [])
self.send_blocks([b_dup_cb])
b0 = self.next_block(0)
self.save_spendable_output()
self.send_blocks([b0])
# These constants chosen specifically to trigger an immature coinbase spend
# at a certain time below.
NUM_BUFFER_BLOCKS_TO_GENERATE = 99
NUM_OUTPUTS_TO_COLLECT = 33
# Allow the block to mature
blocks = []
for i in range(NUM_BUFFER_BLOCKS_TO_GENERATE):
blocks.append(self.next_block("maturitybuffer.{}".format(i)))
self.save_spendable_output()
self.send_blocks(blocks)
# collect spendable outputs now to avoid cluttering the code later on
out = []
for _ in range(NUM_OUTPUTS_TO_COLLECT):
out.append(self.get_spendable_output())
# Start by building a couple of blocks on top (which output is spent is
# in parentheses):
# genesis -> b1 (0) -> b2 (1)
b1 = self.next_block(1, spend=out[0])
self.save_spendable_output()
b2 = self.next_block(2, spend=out[1])
self.save_spendable_output()
self.send_blocks([b1, b2], timeout=4)
# Select a txn with an output eligible for spending. This won't actually be spent,
# since we're testing submission of a series of blocks with invalid
# txns.
attempt_spend_tx = out[2]
# Submit blocks for rejection, each of which contains a single transaction
# (aside from coinbase) which should be considered invalid.
for TxTemplate in invalid_txs.iter_all_templates():
template = TxTemplate(spend_tx=attempt_spend_tx)
if template.valid_in_block:
continue
self.log.info(
"Reject block with invalid tx: %s",
TxTemplate.__name__)
blockname = "for_invalid.{}".format(TxTemplate.__name__)
badblock = self.next_block(blockname)
badtx = template.get_tx()
if TxTemplate != invalid_txs.InputMissing:
self.sign_tx(badtx, attempt_spend_tx)
badtx.rehash()
badblock = self.update_block(blockname, [badtx])
self.send_blocks(
[badblock], success=False,
reject_reason=(
template.block_reject_reason or template.reject_reason),
reconnect=True, timeout=2)
self.move_tip(2)
# Fork like this:
#
# genesis -> b1 (0) -> b2 (1)
# \-> b3 (1)
#
# Nothing should happen at this point. We saw b2 first so it takes
# priority.
self.log.info("Don't reorg to a chain of the same length")
self.move_tip(1)
b3 = self.next_block(3, spend=out[1])
txout_b3 = b3.vtx[1]
self.send_blocks([b3], False)
# Now we add another block to make the alternative chain longer.
#
# genesis -> b1 (0) -> b2 (1)
# \-> b3 (1) -> b4 (2)
self.log.info("Reorg to a longer chain")
b4 = self.next_block(4, spend=out[2])
self.send_blocks([b4])
# ... and back to the first chain.
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b3 (1) -> b4 (2)
self.move_tip(2)
b5 = self.next_block(5, spend=out[2])
self.save_spendable_output()
self.send_blocks([b5], False)
self.log.info("Reorg back to the original chain")
b6 = self.next_block(6, spend=out[3])
self.send_blocks([b6], True)
# Try to create a fork that double-spends
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b7 (2) -> b8 (4)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a chain with a double spend, even if it is longer")
self.move_tip(5)
b7 = self.next_block(7, spend=out[2])
self.send_blocks([b7], False)
b8 = self.next_block(8, spend=out[4])
self.send_blocks([b8], False, reconnect=True)
# Try to create a block that has too much fee
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b9 (4)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a block where the miner creates too much coinbase reward")
self.move_tip(6)
b9 = self.next_block(9, spend=out[4], additional_coinbase_value=1)
self.send_blocks([b9], success=False,
reject_reason='bad-cb-amount', reconnect=True)
# Create a fork that ends in a block with too much fee (the one that causes the reorg)
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b10 (3) -> b11 (4)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a chain where the miner creates too much coinbase reward, even if the chain is longer")
self.move_tip(5)
b10 = self.next_block(10, spend=out[3])
self.send_blocks([b10], False)
b11 = self.next_block(11, spend=out[4], additional_coinbase_value=1)
self.send_blocks([b11], success=False,
reject_reason='bad-cb-amount', reconnect=True)
# Try again, but with a valid fork first
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b14 (5)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a chain where the miner creates too much coinbase reward, even if the chain is longer (on a forked chain)")
self.move_tip(5)
b12 = self.next_block(12, spend=out[3])
self.save_spendable_output()
b13 = self.next_block(13, spend=out[4])
self.save_spendable_output()
b14 = self.next_block(14, spend=out[5], additional_coinbase_value=1)
self.send_blocks([b12, b13, b14], success=False,
reject_reason='bad-cb-amount', reconnect=True)
# New tip should be b13.
assert_equal(node.getbestblockhash(), b13.hash)
self.log.info("Skipped sigops tests")
# tests were moved to feature_block_sigops.py
self.move_tip(13)
b15 = self.next_block(15)
self.save_spendable_output()
self.send_blocks([b15], True)
# Attempt to spend a transaction created on a different fork
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5) -> b17 (b3.vtx[1])
# \-> b3 (1) -> b4 (2)
self.log.info("Reject a block with a spend from a re-org'ed out tx")
self.move_tip(15)
b17 = self.next_block(17, spend=txout_b3)
self.send_blocks([b17], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# Attempt to spend a transaction created on a different fork (on a fork this time)
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5)
# \-> b18 (b3.vtx[1]) -> b19 (6)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a block with a spend from a re-org'ed out tx (on a forked chain)")
self.move_tip(13)
b18 = self.next_block(18, spend=txout_b3)
self.send_blocks([b18], False)
b19 = self.next_block(19, spend=out[6])
self.send_blocks([b19], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# Attempt to spend a coinbase at depth too low
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5) -> b20 (7)
# \-> b3 (1) -> b4 (2)
self.log.info("Reject a block spending an immature coinbase.")
self.move_tip(15)
b20 = self.next_block(20, spend=out[7])
self.send_blocks(
[b20],
success=False,
reject_reason='bad-txns-premature-spend-of-coinbase',
reconnect=True)
# Attempt to spend a coinbase at depth too low (on a fork this time)
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5)
# \-> b21 (6) -> b22 (5)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a block spending an immature coinbase (on a forked chain)")
self.move_tip(13)
b21 = self.next_block(21, spend=out[6])
self.send_blocks([b21], False)
b22 = self.next_block(22, spend=out[5])
self.send_blocks(
[b22],
success=False,
reject_reason='bad-txns-premature-spend-of-coinbase',
reconnect=True)
# Create a block on either side of LEGACY_MAX_BLOCK_SIZE and make sure its accepted/rejected
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6)
# \-> b24 (6) -> b25 (7)
# \-> b3 (1) -> b4 (2)
self.log.info("Accept a block of size LEGACY_MAX_BLOCK_SIZE")
self.move_tip(15)
b23 = self.next_block(23, spend=out[6])
tx = CTransaction()
script_length = LEGACY_MAX_BLOCK_SIZE - len(b23.serialize()) - 69
script_output = CScript([b'\x00' * script_length])
tx.vout.append(CTxOut(0, script_output))
tx.vin.append(CTxIn(COutPoint(b23.vtx[1].sha256, 0)))
b23 = self.update_block(23, [tx])
# Make sure the math above worked out to produce a max-sized block
assert_equal(len(b23.serialize()), LEGACY_MAX_BLOCK_SIZE)
self.send_blocks([b23], True)
self.save_spendable_output()
# Create blocks with a coinbase input script size out of range
# genesis -> b1 (0) -> b2 (1) -> b5 (2) -> b6 (3)
# \-> b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7)
# \-> ... (6) -> ... (7)
# \-> b3 (1) -> b4 (2)
self.log.info(
"Reject a block with coinbase input script size out of range")
self.move_tip(15)
b26 = self.next_block(26, spend=out[6])
b26.vtx[0].vin[0].scriptSig = b'\x00'
b26.vtx[0].rehash()
# update_block causes the merkle root to get updated, even with no new
# transactions, and updates the required state.
b26 = self.update_block(26, [])
self.send_blocks([b26], success=False,
reject_reason='bad-cb-length', reconnect=True)
# Extend the b26 chain to make sure bitcoind isn't accepting b26
b27 = self.next_block(27, spend=out[7])
self.send_blocks([b27], False)
# Now try a too-large-coinbase script
self.move_tip(15)
b28 = self.next_block(28, spend=out[6])
b28.vtx[0].vin[0].scriptSig = b'\x00' * 101
b28.vtx[0].rehash()
b28 = self.update_block(28, [])
self.send_blocks([b28], success=False,
reject_reason='bad-cb-length', reconnect=True)
# Extend the b28 chain to make sure bitcoind isn't accepting b28
b29 = self.next_block(29, spend=out[7])
self.send_blocks([b29], False)
# b30 has a max-sized coinbase scriptSig.
self.move_tip(23)
b30 = self.next_block(30)
b30.vtx[0].vin[0].scriptSig = b'\x00' * 100
b30.vtx[0].rehash()
b30 = self.update_block(30, [])
self.send_blocks([b30], True)
self.save_spendable_output()
self.log.info("Skipped sigops tests")
# tests were moved to feature_block_sigops.py
b31 = self.next_block(31)
self.save_spendable_output()
b33 = self.next_block(33)
self.save_spendable_output()
b35 = self.next_block(35)
self.save_spendable_output()
self.send_blocks([b31, b33, b35], True)
# Check spending of a transaction in a block which failed to connect
#
# b6 (3)
# b12 (3) -> b13 (4) -> b15 (5) -> b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10)
# \-> b37 (11)
# \-> b38 (11/37)
#
# save 37's spendable output, but then double-spend out11 to invalidate
# the block
self.log.info(
"Reject a block spending transaction from a block which failed to connect")
self.move_tip(35)
b37 = self.next_block(37, spend=out[11])
txout_b37 = b37.vtx[1]
tx = self.create_and_sign_transaction(out[11], 0)
b37 = self.update_block(37, [tx])
self.send_blocks([b37], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# attempt to spend b37's first non-coinbase tx, at which point b37 was
# still considered valid
self.move_tip(35)
b38 = self.next_block(38, spend=txout_b37)
self.send_blocks([b38], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
self.log.info("Skipped sigops tests")
# tests were moved to feature_block_sigops.py
self.move_tip(35)
b39 = self.next_block(39)
self.save_spendable_output()
b41 = self.next_block(41)
self.send_blocks([b39, b41], True)
# Fork off of b39 to create a constant base again
#
# b23 (6) -> b30 (7) -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13)
# \-> b41 (12)
#
self.move_tip(39)
b42 = self.next_block(42, spend=out[12])
self.save_spendable_output()
b43 = self.next_block(43, spend=out[13])
self.save_spendable_output()
self.send_blocks([b42, b43], True)
# Test a number of really invalid scenarios
#
# -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b44 (14)
# \-> ??? (15)
# The next few blocks are going to be created "by hand" since they'll do funky things, such as having
# the first transaction be non-coinbase, etc. The purpose of b44 is to
# make sure this works.
self.log.info("Build block 44 manually")
height = self.block_heights[self.tip.sha256] + 1
coinbase = create_coinbase(height, self.coinbase_pubkey)
b44 = CBlock()
b44.nTime = self.tip.nTime + 1
b44.hashPrevBlock = self.tip.sha256
b44.nBits = 0x207fffff
b44.vtx.append(coinbase)
b44.hashMerkleRoot = b44.calc_merkle_root()
b44.solve()
self.tip = b44
self.block_heights[b44.sha256] = height
self.blocks[44] = b44
self.send_blocks([b44], True)
self.log.info("Reject a block with a non-coinbase as the first tx")
non_coinbase = self.create_tx(out[15], 0, 1)
b45 = CBlock()
b45.nTime = self.tip.nTime + 1
b45.hashPrevBlock = self.tip.sha256
b45.nBits = 0x207fffff
b45.vtx.append(non_coinbase)
b45.hashMerkleRoot = b45.calc_merkle_root()
b45.calc_sha256()
b45.solve()
self.block_heights[b45.sha256] = self.block_heights[
self.tip.sha256] + 1
self.tip = b45
self.blocks[45] = b45
self.send_blocks([b45], success=False,
reject_reason='bad-cb-missing', reconnect=True)
self.log.info("Reject a block with no transactions")
self.move_tip(44)
b46 = CBlock()
b46.nTime = b44.nTime + 1
b46.hashPrevBlock = b44.sha256
b46.nBits = 0x207fffff
b46.vtx = []
b46.hashMerkleRoot = 0
b46.solve()
self.block_heights[b46.sha256] = self.block_heights[b44.sha256] + 1
self.tip = b46
assert 46 not in self.blocks
self.blocks[46] = b46
self.send_blocks([b46], success=False,
reject_reason='bad-cb-missing', reconnect=True)
self.log.info("Reject a block with invalid work")
self.move_tip(44)
b47 = self.next_block(47)
target = uint256_from_compact(b47.nBits)
while b47.sha256 <= target:
# Rehash nonces until an invalid too-high-hash block is found.
b47.nNonce += 1
b47.rehash()
self.send_blocks(
[b47],
False,
force_send=True,
reject_reason='high-hash',
reconnect=True)
self.log.info("Reject a block with a timestamp >2 hours in the future")
self.move_tip(44)
b48 = self.next_block(48)
b48.nTime = int(time.time()) + 60 * 60 * 3
# Header timestamp has changed. Re-solve the block.
b48.solve()
self.send_blocks([b48], False, force_send=True,
reject_reason='time-too-new')
self.log.info("Reject a block with invalid merkle hash")
self.move_tip(44)
b49 = self.next_block(49)
b49.hashMerkleRoot += 1
b49.solve()
self.send_blocks([b49], success=False,
reject_reason='bad-txnmrklroot', reconnect=True)
self.log.info("Reject a block with incorrect POW limit")
self.move_tip(44)
b50 = self.next_block(50)
b50.nBits = b50.nBits - 1
b50.solve()
self.send_blocks(
[b50],
False,
force_send=True,
reject_reason='bad-diffbits',
reconnect=True)
self.log.info("Reject a block with two coinbase transactions")
self.move_tip(44)
b51 = self.next_block(51)
cb2 = create_coinbase(51, self.coinbase_pubkey)
b51 = self.update_block(51, [cb2])
self.send_blocks([b51], success=False,
reject_reason='bad-tx-coinbase', reconnect=True)
self.log.info("Reject a block with duplicate transactions")
self.move_tip(44)
b52 = self.next_block(52, spend=out[15])
b52 = self.update_block(52, [b52.vtx[1]])
self.send_blocks([b52], success=False,
reject_reason='tx-duplicate', reconnect=True)
# Test block timestamps
# -> b31 (8) -> b33 (9) -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15)
# \-> b54 (15)
#
self.move_tip(43)
b53 = self.next_block(53, spend=out[14])
self.send_blocks([b53], False)
self.save_spendable_output()
self.log.info("Reject a block with timestamp before MedianTimePast")
b54 = self.next_block(54, spend=out[15])
b54.nTime = b35.nTime - 1
b54.solve()
self.send_blocks(
[b54],
False,
force_send=True,
reject_reason='time-too-old',
reconnect=True)
# valid timestamp
self.move_tip(53)
b55 = self.next_block(55, spend=out[15])
b55.nTime = b35.nTime
self.update_block(55, [])
self.send_blocks([b55], True)
self.save_spendable_output()
# Test Merkle tree malleability
#
# -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57p2 (16)
# \-> b57 (16)
# \-> b56p2 (16)
# \-> b56 (16)
#
# Merkle tree malleability (CVE-2012-2459): repeating sequences of transactions in a block without
# affecting the merkle root of a block, while still invalidating it.
# See: src/consensus/merkle.h
#
# b57 has three txns: coinbase, tx, tx1. The merkle root computation will duplicate tx.
# Result: OK
#
# b56 copies b57 but duplicates tx1 and does not recalculate the block hash. So it has a valid merkle
# root but duplicate transactions.
# Result: Fails
#
# b57p2 has six transactions in its merkle tree:
# - coinbase, tx, tx1, tx2, tx3, tx4
# Merkle root calculation will duplicate as necessary.
# Result: OK.
#
# b56p2 copies b57p2 but adds both tx3 and tx4. The purpose of the test is to make sure the code catches
# duplicate txns that are not next to one another with the "bad-txns-duplicate" error (which indicates
# that the error was caught early, avoiding a DOS vulnerability.)
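        # With an odd number of leaves the last hash is paired with itself, so
        # [coinbase, tx, tx1] and [coinbase, tx, tx1, tx1] produce the same
        # merkle root even though the second list contains a duplicate.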
# b57 - a good block with 2 txs, don't submit until end
self.move_tip(55)
b57 = self.next_block(57)
tx = self.create_and_sign_transaction(out[16], 1)
tx1 = self.create_tx(tx, 0, 1)
b57 = self.update_block(57, [tx, tx1])
# b56 - copy b57, add a duplicate tx
self.log.info(
"Reject a block with a duplicate transaction in the Merkle Tree (but with a valid Merkle Root)")
self.move_tip(55)
b56 = copy.deepcopy(b57)
self.blocks[56] = b56
assert_equal(len(b56.vtx), 3)
b56 = self.update_block(56, [b57.vtx[2]])
assert_equal(b56.hash, b57.hash)
self.send_blocks([b56], success=False,
reject_reason='bad-txns-duplicate', reconnect=True)
# b57p2 - a good block with 6 tx'es, don't submit until end
self.move_tip(55)
b57p2 = self.next_block("57p2")
tx = self.create_and_sign_transaction(out[16], 1)
tx1 = self.create_tx(tx, 0, 1)
tx2 = self.create_tx(tx1, 0, 1)
tx3 = self.create_tx(tx2, 0, 1)
tx4 = self.create_tx(tx3, 0, 1)
b57p2 = self.update_block("57p2", [tx, tx1, tx2, tx3, tx4])
# b56p2 - copy b57p2, duplicate two non-consecutive tx's
self.log.info(
"Reject a block with two duplicate transactions in the Merkle Tree (but with a valid Merkle Root)")
self.move_tip(55)
b56p2 = copy.deepcopy(b57p2)
self.blocks["b56p2"] = b56p2
assert_equal(len(b56p2.vtx), 6)
b56p2 = self.update_block("b56p2", b56p2.vtx[4:6], reorder=False)
assert_equal(b56p2.hash, b57p2.hash)
self.send_blocks([b56p2], success=False,
reject_reason='bad-txns-duplicate', reconnect=True)
self.move_tip("57p2")
self.send_blocks([b57p2], True)
self.move_tip(57)
        # The tip is not updated because 57p2 was seen first
self.send_blocks([b57], False)
self.save_spendable_output()
# Test a few invalid tx types
#
# -> b35 (10) -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 ()
# \-> ??? (17)
#
# tx with prevout.n out of range
self.log.info(
"Reject a block with a transaction with prevout.n out of range")
self.move_tip(57)
b58 = self.next_block(58, spend=out[17])
tx = CTransaction()
assert(len(out[17].vout) < 42)
tx.vin.append(
CTxIn(COutPoint(out[17].sha256, 42), CScript([OP_TRUE]), 0xffffffff))
tx.vout.append(CTxOut(0, b""))
pad_tx(tx)
tx.calc_sha256()
b58 = self.update_block(58, [tx])
self.send_blocks([b58], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# tx with output value > input value
self.log.info(
"Reject a block with a transaction with outputs > inputs")
self.move_tip(57)
b59 = self.next_block(59)
tx = self.create_and_sign_transaction(out[17], 51 * COIN)
b59 = self.update_block(59, [tx])
self.send_blocks([b59], success=False,
reject_reason='bad-txns-in-belowout', reconnect=True)
# reset to good chain
self.move_tip(57)
b60 = self.next_block(60)
self.send_blocks([b60], True)
self.save_spendable_output()
# Test BIP30 (reject duplicate)
#
# -> b39 (11) -> b42 (12) -> b43 (13) -> b53 (14) -> b55 (15) -> b57 (16) -> b60 ()
# \-> b61 ()
#
# Blocks are not allowed to contain a transaction whose id matches that of an earlier,
# not-fully-spent transaction in the same chain. To test, make identical coinbases;
# the second one should be rejected. See also CVE-2012-1909.
#
self.log.info(
"Reject a block with a transaction with a duplicate hash of a previous transaction (BIP30)")
self.move_tip(60)
b61 = self.next_block(61)
b61.vtx[0].vin[0].scriptSig = DUPLICATE_COINBASE_SCRIPT_SIG
b61.vtx[0].rehash()
b61 = self.update_block(61, [])
assert_equal(duplicate_tx.serialize(), b61.vtx[0].serialize())
self.send_blocks([b61], success=False,
reject_reason='bad-txns-BIP30', reconnect=True)
# Test BIP30 (allow duplicate if spent)
#
# -> b57 (16) -> b60 ()
# \-> b_spend_dup_cb (b_dup_cb) -> b_dup_2 ()
#
self.move_tip(57)
b_spend_dup_cb = self.next_block('spend_dup_cb')
tx = CTransaction()
tx.vin.append(CTxIn(COutPoint(duplicate_tx.sha256, 0)))
tx.vout.append(CTxOut(0, CScript([OP_TRUE])))
self.sign_tx(tx, duplicate_tx)
tx.rehash()
b_spend_dup_cb = self.update_block('spend_dup_cb', [tx])
b_dup_2 = self.next_block('dup_2')
b_dup_2.vtx[0].vin[0].scriptSig = DUPLICATE_COINBASE_SCRIPT_SIG
b_dup_2.vtx[0].rehash()
b_dup_2 = self.update_block('dup_2', [])
assert_equal(duplicate_tx.serialize(), b_dup_2.vtx[0].serialize())
assert_equal(
self.nodes[0].gettxout(
txid=duplicate_tx.hash,
n=0)['confirmations'],
119)
self.send_blocks([b_spend_dup_cb, b_dup_2], success=True)
# The duplicate has less confirmations
assert_equal(
self.nodes[0].gettxout(
txid=duplicate_tx.hash,
n=0)['confirmations'],
1)
# Test tx.isFinal is properly rejected (not an exhaustive tx.isFinal test, that should be in data-driven transaction tests)
#
# -> b_spend_dup_cb (b_dup_cb) -> b_dup_2 ()
# \-> b62 (18)
#
self.log.info(
"Reject a block with a transaction with a nonfinal locktime")
self.move_tip('dup_2')
b62 = self.next_block(62)
tx = CTransaction()
tx.nLockTime = 0xffffffff # this locktime is non-final
# don't set nSequence
tx.vin.append(CTxIn(COutPoint(out[18].sha256, 0)))
tx.vout.append(CTxOut(0, CScript([OP_TRUE])))
assert tx.vin[0].nSequence < 0xffffffff
tx.calc_sha256()
b62 = self.update_block(62, [tx])
self.send_blocks(
[b62],
success=False,
reject_reason='bad-txns-nonfinal',
reconnect=True)
# Test a non-final coinbase is also rejected
#
# -> b_spend_dup_cb (b_dup_cb) -> b_dup_2 ()
# \-> b63 (-)
#
self.log.info(
"Reject a block with a coinbase transaction with a nonfinal locktime")
self.move_tip('dup_2')
b63 = self.next_block(63)
b63.vtx[0].nLockTime = 0xffffffff
b63.vtx[0].vin[0].nSequence = 0xDEADBEEF
b63.vtx[0].rehash()
b63 = self.update_block(63, [])
self.send_blocks(
[b63],
success=False,
reject_reason='bad-txns-nonfinal',
reconnect=True)
# This checks that a block with a bloated VARINT between the block_header and the array of tx such that
# the block is > LEGACY_MAX_BLOCK_SIZE with the bloated varint, but <= LEGACY_MAX_BLOCK_SIZE without the bloated varint,
# does not cause a subsequent, identical block with canonical encoding to be rejected. The test does not
# care whether the bloated block is accepted or rejected; it only cares that the second block is accepted.
#
# What matters is that the receiving node should not reject the bloated block, and then reject the canonical
# block on the basis that it's the same as an already-rejected block (which would be a consensus failure.)
#
# -> b_spend_dup_cb (b_dup_cb) -> b_dup_2 () -> b64 (18)
# \
# b64a (18)
# b64a is a bloated block (non-canonical varint)
# b64 is a good block (same as b64 but w/ canonical varint)
#
self.log.info(
"Accept a valid block even if a bloated version of the block has previously been sent")
self.move_tip('dup_2')
regular_block = self.next_block("64a", spend=out[18])
# make it a "broken_block," with non-canonical serialization
b64a = CBrokenBlock(regular_block)
b64a.initialize(regular_block)
self.blocks["64a"] = b64a
self.tip = b64a
tx = CTransaction()
# use canonical serialization to calculate size
script_length = LEGACY_MAX_BLOCK_SIZE - \
len(b64a.normal_serialize()) - 69
script_output = CScript([b'\x00' * script_length])
tx.vout.append(CTxOut(0, script_output))
tx.vin.append(CTxIn(COutPoint(b64a.vtx[1].sha256, 0)))
b64a = self.update_block("64a", [tx])
assert_equal(len(b64a.serialize()), LEGACY_MAX_BLOCK_SIZE + 8)
self.send_blocks([b64a], success=False,
reject_reason='non-canonical ReadCompactSize()')
# bitcoind doesn't disconnect us for sending a bloated block, but if we subsequently
# resend the header message, it won't send us the getdata message again. Just
# disconnect and reconnect and then call send_blocks.
# TODO: improve this test to be less dependent on P2P DOS behaviour.
node.disconnect_p2ps()
self.reconnect_p2p()
self.move_tip('dup_2')
b64 = CBlock(b64a)
b64.vtx = copy.deepcopy(b64a.vtx)
assert_equal(b64.hash, b64a.hash)
assert_equal(len(b64.serialize()), LEGACY_MAX_BLOCK_SIZE)
self.blocks[64] = b64
b64 = self.update_block(64, [])
self.send_blocks([b64], True)
self.save_spendable_output()
# Spend an output created in the block itself
#
# -> b_dup_2 () -> b64 (18) -> b65 (19)
#
self.log.info(
"Accept a block with a transaction spending an output created in the same block")
self.move_tip(64)
b65 = self.next_block(65)
tx1 = self.create_and_sign_transaction(out[19], out[19].vout[0].nValue)
tx2 = self.create_and_sign_transaction(tx1, 0)
b65 = self.update_block(65, [tx1, tx2])
self.send_blocks([b65], True)
self.save_spendable_output()
# Attempt to double-spend a transaction created in a block
#
# -> b64 (18) -> b65 (19)
# \-> b67 (20)
#
#
self.log.info(
"Reject a block with a transaction double spending a transaction created in the same block")
self.move_tip(65)
b67 = self.next_block(67)
tx1 = self.create_and_sign_transaction(out[20], out[20].vout[0].nValue)
tx2 = self.create_and_sign_transaction(tx1, 1)
tx3 = self.create_and_sign_transaction(tx1, 2)
b67 = self.update_block(67, [tx1, tx2, tx3])
self.send_blocks([b67], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# More tests of block subsidy
#
# -> b64 (18) -> b65 (19) -> b69 (20)
# \-> b68 (20)
#
# b68 - coinbase with an extra 10 satoshis,
# creates a tx that has 9 satoshis from out[20] go to fees
# this fails because the coinbase is trying to claim 1 satoshi too much in fees
#
# b69 - coinbase with extra 10 satoshis, and a tx that gives a 10 satoshi fee
# this succeeds
#
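        # Arithmetic behind the two cases: spending out[20] with an output of
        # value - 9 leaves a 9 satoshi fee, so a coinbase inflated by 10 claims
        # 1 satoshi more than subsidy + fees allows (b68, rejected); an output
        # of value - 10 leaves a 10 satoshi fee, matching the inflated coinbase
        # exactly (b69, accepted).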
self.log.info(
"Reject a block trying to claim too much subsidy in the coinbase transaction")
self.move_tip(65)
b68 = self.next_block(68, additional_coinbase_value=10)
tx = self.create_and_sign_transaction(
out[20], out[20].vout[0].nValue - 9)
b68 = self.update_block(68, [tx])
self.send_blocks([b68], success=False,
reject_reason='bad-cb-amount', reconnect=True)
self.log.info(
"Accept a block claiming the correct subsidy in the coinbase transaction")
self.move_tip(65)
b69 = self.next_block(69, additional_coinbase_value=10)
tx = self.create_and_sign_transaction(
out[20], out[20].vout[0].nValue - 10)
self.update_block(69, [tx])
self.send_blocks([b69], True)
self.save_spendable_output()
# Test spending the outpoint of a non-existent transaction
#
# -> b65 (19) -> b69 (20)
# \-> b70 (21)
#
self.log.info(
"Reject a block containing a transaction spending from a non-existent input")
self.move_tip(69)
b70 = self.next_block(70, spend=out[21])
bogus_tx = CTransaction()
bogus_tx.sha256 = uint256_from_str(
b"23c70ed7c0506e9178fc1a987f40a33946d4ad4c962b5ae3a52546da53af0c5c")
tx = CTransaction()
tx.vin.append(CTxIn(COutPoint(bogus_tx.sha256, 0), b"", 0xffffffff))
tx.vout.append(CTxOut(1, b""))
pad_tx(tx)
b70 = self.update_block(70, [tx])
self.send_blocks([b70], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
# Test accepting an invalid block which has the same hash as a valid one (via merkle tree tricks)
#
# -> b65 (19) -> b69 (20) -> b72 (21)
# \-> b71 (21)
#
# b72 is a good block.
# b71 is a copy of 72, but re-adds one of its transactions. However,
# it has the same hash as b72.
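        # Background: Bitcoin's merkle computation duplicates the last hash of a
        # level with an odd number of entries, so a 3-transaction block and the
        # same block with its final transaction repeated (4 leaves) produce the
        # same merkle root, and therefore the same block hash (CVE-2012-2459).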
self.log.info(
"Reject a block containing a duplicate transaction but with the same Merkle root (Merkle tree malleability")
self.move_tip(69)
b72 = self.next_block(72)
tx1 = self.create_and_sign_transaction(out[21], 2)
tx2 = self.create_and_sign_transaction(tx1, 1)
b72 = self.update_block(72, [tx1, tx2]) # now tip is 72
b71 = copy.deepcopy(b72)
# add duplicate last transaction
b71.vtx.append(b72.vtx[-1])
# b71 builds off b69
self.block_heights[b71.sha256] = self.block_heights[b69.sha256] + 1
self.blocks[71] = b71
assert_equal(len(b71.vtx), 4)
assert_equal(len(b72.vtx), 3)
assert_equal(b72.sha256, b71.sha256)
self.move_tip(71)
self.send_blocks([b71], success=False,
reject_reason='bad-txns-duplicate', reconnect=True)
self.move_tip(72)
self.send_blocks([b72], True)
self.save_spendable_output()
self.log.info("Skipped sigops tests")
# sigops tests were moved to feature_block_sigops.py,
# then deleted from Bitcoin ABC after the May 2020 upgrade
b75 = self.next_block(75)
self.save_spendable_output()
b76 = self.next_block(76)
self.save_spendable_output()
self.send_blocks([b75, b76], True)
# Test transaction resurrection
#
# -> b77 (24) -> b78 (25) -> b79 (26)
# \-> b80 (25) -> b81 (26) -> b82 (27)
#
# b78 creates a tx, which is spent in b79. After b82, both should be in mempool
#
# The tx'es must be unsigned and pass the node's mempool policy. It is unsigned for the
# rather obscure reason that the Python signature code does not distinguish between
# Low-S and High-S values (whereas the bitcoin code has custom code which does so);
# as a result of which, the odds are 50% that the python code will use the right
# value and the transaction will be accepted into the mempool. Until we modify the
# test framework to support low-S signing, we are out of luck.
#
# To get around this issue, we construct transactions which are not signed and which
# spend to OP_TRUE. If the standard-ness rules change, this test would need to be
# updated. (Perhaps to spend to a P2SH OP_TRUE script)
self.log.info("Test transaction resurrection during a re-org")
self.move_tip(76)
b77 = self.next_block(77)
tx77 = self.create_and_sign_transaction(out[24], 10 * COIN)
b77 = self.update_block(77, [tx77])
self.send_blocks([b77], True)
self.save_spendable_output()
b78 = self.next_block(78)
tx78 = self.create_tx(tx77, 0, 9 * COIN)
b78 = self.update_block(78, [tx78])
self.send_blocks([b78], True)
b79 = self.next_block(79)
tx79 = self.create_tx(tx78, 0, 8 * COIN)
b79 = self.update_block(79, [tx79])
self.send_blocks([b79], True)
# mempool should be empty
assert_equal(len(self.nodes[0].getrawmempool()), 0)
self.move_tip(77)
b80 = self.next_block(80, spend=out[25])
self.send_blocks([b80], False, force_send=True)
self.save_spendable_output()
b81 = self.next_block(81, spend=out[26])
# other chain is same length
self.send_blocks([b81], False, force_send=True)
self.save_spendable_output()
b82 = self.next_block(82, spend=out[27])
# now this chain is longer, triggers re-org
self.send_blocks([b82], True)
self.save_spendable_output()
# now check that tx78 and tx79 have been put back into the peer's
# mempool
mempool = self.nodes[0].getrawmempool()
assert_equal(len(mempool), 2)
assert tx78.hash in mempool
assert tx79.hash in mempool
# Test invalid opcodes in dead execution paths.
#
# -> b81 (26) -> b82 (27) -> b83 (28)
#
self.log.info(
"Accept a block with invalid opcodes in dead execution paths")
b83 = self.next_block(83)
op_codes = [OP_IF, OP_INVALIDOPCODE, OP_ELSE, OP_TRUE, OP_ENDIF]
script = CScript(op_codes)
tx1 = self.create_and_sign_transaction(
out[28], out[28].vout[0].nValue, script)
tx2 = self.create_and_sign_transaction(tx1, 0, CScript([OP_TRUE]))
tx2.vin[0].scriptSig = CScript([OP_FALSE])
tx2.rehash()
b83 = self.update_block(83, [tx1, tx2])
self.send_blocks([b83], True)
self.save_spendable_output()
# Reorg on/off blocks that have OP_RETURN in them (and try to spend them)
#
# -> b81 (26) -> b82 (27) -> b83 (28) -> b84 (29) -> b87 (30) -> b88 (31)
# \-> b85 (29) -> b86 (30) \-> b89a (32)
#
self.log.info("Test re-orging blocks with OP_RETURN in them")
b84 = self.next_block(84)
tx1 = self.create_tx(out[29], 0, 0, CScript([OP_RETURN]))
vout_offset = len(tx1.vout)
tx1.vout.append(CTxOut(0, CScript([OP_TRUE])))
tx1.vout.append(CTxOut(0, CScript([OP_TRUE])))
tx1.vout.append(CTxOut(0, CScript([OP_TRUE])))
tx1.vout.append(CTxOut(0, CScript([OP_TRUE])))
tx1.calc_sha256()
self.sign_tx(tx1, out[29])
tx1.rehash()
tx2 = self.create_tx(tx1, vout_offset, 0, CScript([OP_RETURN]))
tx2.vout.append(CTxOut(0, CScript([OP_RETURN])))
tx3 = self.create_tx(tx1, vout_offset + 1, 0, CScript([OP_RETURN]))
tx3.vout.append(CTxOut(0, CScript([OP_TRUE])))
tx4 = self.create_tx(tx1, vout_offset + 2, 0, CScript([OP_TRUE]))
tx4.vout.append(CTxOut(0, CScript([OP_RETURN])))
tx5 = self.create_tx(tx1, vout_offset + 3, 0, CScript([OP_RETURN]))
b84 = self.update_block(84, [tx1, tx2, tx3, tx4, tx5])
self.send_blocks([b84], True)
self.save_spendable_output()
self.move_tip(83)
b85 = self.next_block(85, spend=out[29])
self.send_blocks([b85], False) # other chain is same length
b86 = self.next_block(86, spend=out[30])
self.send_blocks([b86], True)
self.move_tip(84)
b87 = self.next_block(87, spend=out[30])
self.send_blocks([b87], False) # other chain is same length
self.save_spendable_output()
b88 = self.next_block(88, spend=out[31])
self.send_blocks([b88], True)
self.save_spendable_output()
# trying to spend the OP_RETURN output is rejected
b89a = self.next_block("89a", spend=out[32])
tx = self.create_tx(tx1, 0, 0, CScript([OP_TRUE]))
b89a = self.update_block("89a", [tx])
self.send_blocks([b89a], success=False,
reject_reason='bad-txns-inputs-missingorspent', reconnect=True)
self.log.info(
"Test a re-org of one week's worth of blocks (1088 blocks)")
self.move_tip(88)
LARGE_REORG_SIZE = 1088
blocks = []
spend = out[32]
for i in range(89, LARGE_REORG_SIZE + 89):
b = self.next_block(i, spend)
tx = CTransaction()
script_length = LEGACY_MAX_BLOCK_SIZE - len(b.serialize()) - 69
script_output = CScript([b'\x00' * script_length])
tx.vout.append(CTxOut(0, script_output))
tx.vin.append(CTxIn(COutPoint(b.vtx[1].sha256, 0)))
b = self.update_block(i, [tx])
assert_equal(len(b.serialize()), LEGACY_MAX_BLOCK_SIZE)
blocks.append(b)
self.save_spendable_output()
spend = self.get_spendable_output()
self.send_blocks(blocks, True, timeout=1920)
chain1_tip = i
# now create alt chain of same length
self.move_tip(88)
blocks2 = []
for i in range(89, LARGE_REORG_SIZE + 89):
blocks2.append(self.next_block("alt" + str(i)))
self.send_blocks(blocks2, False, force_send=True)
# extend alt chain to trigger re-org
block = self.next_block("alt" + str(chain1_tip + 1))
self.send_blocks([block], True, timeout=1920)
# ... and re-org back to the first chain
self.move_tip(chain1_tip)
block = self.next_block(chain1_tip + 1)
self.send_blocks([block], False, force_send=True)
block = self.next_block(chain1_tip + 2)
self.send_blocks([block], True, timeout=1920)
self.log.info("Reject a block with an invalid block header version")
b_v1 = self.next_block('b_v1', version=1)
self.send_blocks(
[b_v1],
success=False,
force_send=True,
reject_reason='bad-version(0x00000001)',
reconnect=True)
self.move_tip(chain1_tip + 2)
b_cb34 = self.next_block('b_cb34')
b_cb34.vtx[0].vin[0].scriptSig = b_cb34.vtx[0].vin[0].scriptSig[:-1]
b_cb34.vtx[0].rehash()
b_cb34.hashMerkleRoot = b_cb34.calc_merkle_root()
b_cb34.solve()
self.send_blocks(
[b_cb34],
success=False,
reject_reason='bad-cb-height',
reconnect=True)
# Helper methods
################
def add_transactions_to_block(self, block, tx_list):
        for tx in tx_list:
            tx.rehash()
block.vtx.extend(tx_list)
# this is a little handier to use than the version in blocktools.py
def create_tx(self, spend_tx, n, value, script=CScript([OP_TRUE])):
return create_tx_with_script(
spend_tx, n, amount=value, script_pub_key=script)
# sign a transaction, using the key we know about
# this signs input 0 in tx, which is assumed to be spending output n in
# spend_tx
def sign_tx(self, tx, spend_tx):
scriptPubKey = bytearray(spend_tx.vout[0].scriptPubKey)
if (scriptPubKey[0] == OP_TRUE): # an anyone-can-spend
tx.vin[0].scriptSig = CScript()
return
sighash = SignatureHashForkId(
spend_tx.vout[0].scriptPubKey, tx, 0, SIGHASH_ALL | SIGHASH_FORKID, spend_tx.vout[0].nValue)
tx.vin[0].scriptSig = CScript(
[self.coinbase_key.sign_ecdsa(sighash) + bytes(bytearray([SIGHASH_ALL | SIGHASH_FORKID]))])
def create_and_sign_transaction(
self, spend_tx, value, script=CScript([OP_TRUE])):
tx = self.create_tx(spend_tx, 0, value, script)
self.sign_tx(tx, spend_tx)
tx.rehash()
return tx
def next_block(self, number, spend=None, additional_coinbase_value=0,
script=CScript([OP_TRUE]), *, version=4):
if self.tip is None:
base_block_hash = self.genesis_hash
block_time = int(time.time()) + 1
else:
base_block_hash = self.tip.sha256
block_time = self.tip.nTime + 1
# First create the coinbase
height = self.block_heights[base_block_hash] + 1
coinbase = create_coinbase(height, self.coinbase_pubkey)
coinbase.vout[0].nValue += additional_coinbase_value
coinbase.rehash()
if spend is None:
block = create_block(
base_block_hash,
coinbase,
block_time,
version=version)
else:
# all but one satoshi to fees
coinbase.vout[0].nValue += spend.vout[0].nValue - 1
coinbase.rehash()
block = create_block(
base_block_hash,
coinbase,
block_time,
version=version)
# spend 1 satoshi
tx = self.create_tx(spend, 0, 1, script)
self.sign_tx(tx, spend)
self.add_transactions_to_block(block, [tx])
block.hashMerkleRoot = block.calc_merkle_root()
# Block is created. Find a valid nonce.
block.solve()
self.tip = block
self.block_heights[block.sha256] = height
assert number not in self.blocks
self.blocks[number] = block
return block
# save the current tip so it can be spent by a later block
def save_spendable_output(self):
self.log.debug("saving spendable output {}".format(self.tip.vtx[0]))
self.spendable_outputs.append(self.tip)
# get an output that we previously marked as spendable
def get_spendable_output(self):
self.log.debug("getting spendable output {}".format(
self.spendable_outputs[0].vtx[0]))
return self.spendable_outputs.pop(0).vtx[0]
# move the tip back to a previous block
def move_tip(self, number):
self.tip = self.blocks[number]
# adds transactions to the block and updates state
def update_block(self, block_number, new_transactions, reorder=True):
block = self.blocks[block_number]
self.add_transactions_to_block(block, new_transactions)
old_sha256 = block.sha256
if reorder:
make_conform_to_ctor(block)
block.hashMerkleRoot = block.calc_merkle_root()
block.solve()
# Update the internal state just like in next_block
self.tip = block
if block.sha256 != old_sha256:
self.block_heights[block.sha256] = self.block_heights[old_sha256]
del self.block_heights[old_sha256]
self.blocks[block_number] = block
return block
def bootstrap_p2p(self, timeout=10):
"""Add a P2P connection to the node.
Helper to connect and wait for version handshake."""
self.helper_peer = self.nodes[0].add_p2p_connection(P2PDataStore())
# We need to wait for the initial getheaders from the peer before we
# start populating our blockstore. If we don't, then we may run ahead
# to the next subtest before we receive the getheaders. We'd then send
# an INV for the next block and receive two getheaders - one for the
# IBD and one for the INV. We'd respond to both and could get
# unexpectedly disconnected if the DoS score for that error is 50.
self.helper_peer.wait_for_getheaders(timeout=timeout)
def reconnect_p2p(self, timeout=60):
"""Tear down and bootstrap the P2P connection to the node.
The node gets disconnected several times in this test. This helper
method reconnects the p2p and restarts the network thread."""
self.nodes[0].disconnect_p2ps()
self.bootstrap_p2p(timeout=timeout)
def send_blocks(self, blocks, success=True, reject_reason=None,
force_send=False, reconnect=False, timeout=60):
"""Sends blocks to test node. Syncs and verifies that tip has advanced to most recent block.
Call with success = False if the tip shouldn't advance to the most recent block."""
self.helper_peer.send_blocks_and_test(blocks, self.nodes[0], success=success,
reject_reason=reject_reason, force_send=force_send, timeout=timeout, expect_disconnect=reconnect)
if reconnect:
self.reconnect_p2p(timeout=timeout)
if __name__ == '__main__':
FullBlockTest().main()
| Bitcoin-ABC/bitcoin-abc | test/functional/feature_block.py | Python | mit | 54,112 |
r"""
Harvey is a command line legal expert who manages licenses for you.
Usage:
harvey (ls | list)
harvey <NAME> --tldr
harvey <NAME>
harvey <NAME> --export
harvey (-h | --help)
harvey --version
Options:
-h --help Show this screen.
--version Show version.
"""
import json
import os
import re
import subprocess
import sys
from datetime import date
import requests
from colorama import Fore, Back, Style
from docopt import docopt
requests.packages.urllib3.disable_warnings()
__version__ = '0.0.5'
BASE_URL = "https://api.github.com"
_HEADERS = {'Accept': 'application/vnd.github.drax-preview+json'}
_LICENSES = {}
_ROOT = os.path.abspath(os.path.dirname(__file__))
filename = "licenses.json"
abs_file = os.path.join(_ROOT, filename)
with open(abs_file, 'r') as f:
_LICENSES = json.loads(f.read())
def _stripslashes(s):
    '''Convert escaped newline sequences to real newlines and strip remaining backslashes'''
r = re.sub(r"\\(n|r)", "\n", s)
r = re.sub(r"\\", "", r)
return r
def _get_config_name():
'''Get git config user name'''
p = subprocess.Popen('git config --get user.name', shell=True,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = p.stdout.readlines()
return _stripslashes(output[0])
def _get_licences():
""" Lists all the licenses on command line """
licenses = _LICENSES
for license in licenses:
print("{license_name} [{license_code}]".format(
license_name=licenses[license], license_code=license))
def _get_license_description(license_code):
""" Gets the body for a license based on a license code """
req = requests.get("{base_url}/licenses/{license_code}".format(
base_url=BASE_URL, license_code=license_code), headers=_HEADERS)
if req.status_code == requests.codes.ok:
s = req.json()["body"]
search_curly = re.search(r'\{(.*)\}', s)
search_square = re.search(r'\[(.*)\]', s)
license = ""
replace_string = '{year} {name}'.format(year=date.today().year,
name=_get_config_name())
if search_curly:
license = re.sub(r'\{(.+)\}', replace_string, s)
elif search_square:
license = re.sub(r'\[(.+)\]', replace_string, s)
else:
license = s
return license
else:
print(Fore.RED + 'No such license. Please check again.'),
print(Style.RESET_ALL),
sys.exit()
def get_license_summary(license_code):
""" Gets the license summary and permitted, forbidden and required
behaviour """
try:
abs_file = os.path.join(_ROOT, "summary.json")
with open(abs_file, 'r') as f:
summary_license = json.loads(f.read())[license_code]
# prints summary
print(Fore.YELLOW + 'SUMMARY')
print(Style.RESET_ALL),
print(summary_license['summary'])
# prints source for summary
print(Style.BRIGHT + 'Source:'),
print(Style.RESET_ALL),
print(Fore.BLUE + summary_license['source'])
print(Style.RESET_ALL)
# prints cans
print(Fore.GREEN + 'CAN')
print(Style.RESET_ALL),
for rule in summary_license['can']:
print(rule)
print('')
# prints cannot
print(Fore.RED + 'CANNOT')
print(Style.RESET_ALL),
for rule in summary_license['cannot']:
print(rule)
print('')
# prints musts
print(Fore.BLUE + 'MUST')
print(Style.RESET_ALL),
for rule in summary_license['must']:
print(rule)
print('')
except KeyError:
print(Fore.RED + 'No such license. Please check again.'),
print(Style.RESET_ALL),
def save_license(license_code):
""" Grab license, save to LICENSE/LICENSE.txt file """
desc = _get_license_description(license_code)
fname = "LICENSE"
if sys.platform == "win32":
fname += ".txt" # Windows and file exts
with open(os.path.join(os.getcwd(), fname), "w") as afile:
afile.write(desc)
def main():
    ''' harvey helps you manage and add licenses from the command line '''
arguments = docopt(__doc__, version=__version__)
if arguments['ls'] or arguments['list']:
_get_licences()
elif arguments['--tldr'] and arguments['<NAME>']:
get_license_summary(arguments['<NAME>'].lower())
elif arguments['--export'] and arguments['<NAME>']:
save_license(arguments['<NAME>'].lower())
elif arguments['<NAME>']:
print(_get_license_description(arguments['<NAME>'].lower()))
else:
print(__doc__)
if __name__ == '__main__':
main()
| architv/harvey | harvey/harvey.py | Python | mit | 4,390 |
# Copyright 2000 by Katharine Lindner. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""
This module provides code to work with files from Rebase.
http://rebase.neb.com/rebase/rebase.html
Classes:
Record Holds rebase sequence data.
Iterator Iterates over sequence data in a rebase file.
Dictionary Accesses a rebase file using a dictionary interface.
RecordParser Parses rebase sequence data into a Record object.
_Scanner Scans a rebase-format stream.
_RecordConsumer Consumes rebase data to a Record object.
Functions:
index_file Index a rebase file for a Dictionary.
"""
from types import *
import os
import string
from Bio import File
from Bio import Index
from Bio.ParserSupport import *
class Record:
"""Holds information from a FASTA record.
Members:
seq_5_to_3 The sequence.
seq_3_to_5
enzyme_num The enzyme number
pos Position of cleavage
prototype Prototype
source
microorganism
temperature Growth temperature
misc Miscellaneous information
date_entered
date_modified
num_Adeno2
num_Lambda
num_pBR322
num_PhiX174
num_SV40
"""
def __init__(self, colwidth=60):
"""__init__(self, colwidth=60)
Create a new Record. colwidth specifies the number of residues
to put on each line.
"""
self.seq_5_to_3 = ''
self.seq_3_to_5 = ''
self.methylation = ''
self.enzyme_num = None
self.prototype = ''
self.source = ''
self.microorganism = ''
self.temperature = None
self.misc = ''
self.date_entered = ''
self.date_modified = ''
self._colwidth = colwidth
self.num_Adeno2 = 0
self.num_Lambda = 0
self.num_pBR322 = 0
self.num_PhiX174 = 0
self.num_SV40 = 0
class Iterator:
"""Returns one record at a time from a Rebase file.
Methods:
next Return the next record from the stream, or None.
"""
def __init__(self, handle, parser=None):
"""__init__(self, handle, parser=None)
Create a new iterator. handle is a file-like object. parser
is an optional Parser object to change the results into another form.
If set to None, then the raw contents of the file will be returned.
"""
if type(handle) is not FileType and type(handle) is not InstanceType:
raise ValueError, "I expected a file handle or file-like object"
self._uhandle = SGMLHandle( File.UndoHandle( handle ) )
self._parser = parser
def next(self):
"""next(self) -> object
Return the next rebase record from the file. If no more records,
return None.
"""
        lines = []
        first_tag = 'Recognition Sequence'
        while 1:
            line = self._uhandle.readline()
            if not line:
                break
            # a new 'Recognition Sequence' line marks the start of the next
            # record; push it back for the next call and stop reading
            if line[:len( first_tag )] == first_tag and lines:
                self._uhandle.saveline(line)
                break
            lines.append(line)
        if not lines:
            return None
        data = string.join( lines, '' )
        if self._parser is not None:
            return self._parser.parse(File.StringHandle(data))
        return data
def __iter__(self):
return iter(self.next, None)
class Dictionary:
"""Accesses a rebase file using a dictionary interface.
"""
__filename_key = '__filename'
def __init__(self, indexname, parser=None):
"""__init__(self, indexname, parser=None)
        Open a Rebase Dictionary. indexname is the name of the
index for the dictionary. The index should have been created
using the index_file function. parser is an optional Parser
object to change the results into another form. If set to None,
then the raw contents of the file will be returned.
"""
self._index = Index.Index(indexname)
self._handle = open(self._index[Dictionary.__filename_key])
self._parser = parser
def __len__(self):
return len(self._index)
def __getitem__(self, key):
        start, length = self._index[key]
        self._handle.seek(start)
        data = self._handle.read(length)
if self._parser is not None:
return self._parser.parse(File.StringHandle(data))
return data
def __getattr__(self, name):
return getattr(self._index, name)
class RecordParser:
"""Parses FASTA sequence data into a Record object.
"""
def __init__(self):
self._scanner = _Scanner()
self._consumer = _RecordConsumer()
def parse(self, handle):
self._scanner.feed(handle, self._consumer)
return self._consumer.data
class _Scanner:
"""Scans a rebase file.
Methods:
feed Feed in one rebase record.
"""
def feed(self, handle, consumer):
"""feed(self, handle, consumer)
Feed in rebase data for scanning. handle is a file-like object
containing rebase data. consumer is a Consumer object that will
receive events as the rebase data is scanned.
"""
if isinstance(handle, File.UndoHandle):
uhandle = handle
else:
uhandle = File.UndoHandle(handle)
uhandle = File.SGMLHandle( uhandle )
if uhandle.peekline():
self._scan_record(uhandle, consumer)
def _scan_line(self, uhandle ):
line = safe_readline( uhandle )
line = string.join( string.split( line ), ' ' ) + ' '
return line
def _text_in( self, uhandle, text, count ):
for j in range( count ):
line = self._scan_line( uhandle )
text = text + line
return text
def _scan_record(self, uhandle, consumer):
consumer.start_sequence()
text = ''
text = self._text_in( uhandle, text, 100 )
self._scan_sequence( text, consumer)
self._scan_methylation( text, consumer)
self._scan_enzyme_num( text, consumer )
self._scan_prototype( text, consumer )
self._scan_source( text, consumer )
self._scan_microorganism( text, consumer )
self._scan_temperature( text, consumer)
self._scan_date_entered( text, consumer)
self._scan_date_modified( text, consumer)
self._scan_Adeno2( text, consumer)
self._scan_Lambda( text, consumer)
self._scan_pBR322( text, consumer)
self._scan_PhiX174( text, consumer)
self._scan_SV40( text, consumer)
# consumer.end_sequence()
def _scan_sequence(self, text, consumer ):
start = string.find( text, 'Recognition Sequence:' )
end = string.find( text, 'Base (Type of methylation):' )
if( end == -1 ):
end = string.find( text, 'REBASE enzyme #:' )
next_item = text[ start:end ]
consumer.sequence( next_item )
def _scan_methylation(self, text, consumer ):
start = string.find( text, 'Base (Type of methylation):' )
if( start != -1 ):
end = string.find( text, 'REBASE enzyme #:' )
next_item = text[ start:end ]
consumer.methylation( next_item )
def _scan_enzyme_num(self, text, consumer ):
start = string.find( text, 'REBASE enzyme #:' )
end = string.find( text, 'Prototype:' )
next_item = text[ start:end ]
consumer.enzyme_num( next_item )
def _scan_prototype(self, text, consumer ):
start = string.find( text, 'Prototype:' )
end = string.find( text, 'Source:' )
next_item = text[ start:end ]
consumer.prototype( next_item )
def _scan_source(self, text, consumer ):
start = string.find( text, 'Source:' )
end = string.find( text, 'Microorganism:' )
next_item = text[ start:end ]
consumer.source( next_item )
def _scan_microorganism(self, text, consumer ):
start = string.find( text, 'Microorganism:' )
end = string.find( text, 'Growth Temperature:' )
next_item = text[ start:end ]
consumer.microorganism( next_item )
def _scan_temperature(self, text, consumer):
start = string.find( text, 'Growth Temperature:' )
end = start + 30
next_item = text[ start:end ]
consumer.temperature( next_item )
def _scan_date_entered(self, text, consumer):
start = string.find( text, 'Entered:' )
end = start + 30
next_item = text[ start:end ]
consumer.data_entered( next_item )
def _scan_date_modified(self, text, consumer):
start = string.find( text, 'Modified:' )
if( start != -1 ):
end = start + 30
next_item = text[ start:end ]
consumer.data_modified( next_item )
def _scan_Adeno2( self, text, consumer ):
start = string.find( text, 'Adeno2:' )
end = string.find( text, 'Lambda:' )
next_item = text[ start:end ]
consumer.num_Adeno2( next_item )
def _scan_Lambda( self, text, consumer ):
start = string.find( text, 'Lambda:' )
end = string.find( text, 'pBR322:' )
next_item = text[ start:end ]
consumer.num_Lambda( next_item )
def _scan_pBR322(self, text, consumer ):
start = string.find( text, 'pBR322:' )
end = string.find( text, 'PhiX174:' )
next_item = text[ start:end ]
consumer.num_pBR322( next_item )
def _scan_PhiX174(self, text, consumer ):
start = string.find( text, 'PhiX174:' )
end = string.find( text, 'SV40:' )
next_item = text[ start:end ]
consumer.num_PhiX174( next_item )
def _scan_SV40(self, text, consumer ):
start = string.find( text, 'SV40:' )
end = start + 30
next_item = text[ start:end ]
consumer.num_SV40( next_item )
class _RecordConsumer(AbstractConsumer):
"""Consumer that converts a rebase record to a Record object.
Members:
data Record with rebase data.
"""
def __init__(self):
self.data = None
def start_sequence(self):
self.data = Record()
def end_sequence(self):
pass
def sequence( self, line ):
cols = string.split( line, ': ' )
sequence = cols[ 1 ]
sequence = string.strip( sequence )
if( string.find( sequence, ' ...' ) != -1 ):
cols = string.split( sequence, '...' )
self.data.seq_5_to_3 = cols[ 1 ]
elif( string.lower( sequence ) != 'unknown' ):
seq_len = len( sequence ) / 2
self.data.seq_5_to_3 = string.strip( sequence[ :seq_len ] )
self.data.seq_3_to_5 = string.strip( sequence[ seq_len: ] )
def methylation( self, line ):
cols = string.split( line, ': ' )
self.data.methylation = cols[ 1 ]
def enzyme_num( self, line ):
cols = string.split( line, ': ' )
self.data.enzyme_num = int( cols[ 1 ] )
def prototype( self, line ):
cols = string.split( line, ': ' )
self.data.prototype = cols[ 1 ]
def source( self, line ):
cols = string.split( line, ': ' )
self.data.source = cols[ 1 ]
def microorganism( self, line ):
cols = string.split( line, ': ' )
self.data.microorganism = cols[ 1 ]
def temperature( self, line ):
cols = string.split( line, ':' )
cols = string.split( cols[ 1 ], ' ' )
self.data.temperature = cols[ 1 ]
def data_entered( self, line ):
cols = string.split( line, ':' )
cols = string.split( cols[ 1 ] )
self.data.date_entered = string.join( cols[ :3 ] )
def data_modified( self, line ):
cols = string.split( line, ':' )
cols = string.split( cols[ 1 ] )
self.data.date_modified = string.join( cols[ :3 ] )
def num_Adeno2( self, line ):
cols = string.split( line, ': ' )
self.data.num_Adeno2 = int( cols[ 1 ] )
def num_Lambda( self, line ):
cols = string.split( line, ': ' )
self.data.num_Lambda = int( cols[ 1 ] )
def num_pBR322( self, line ):
cols = string.split( line, ': ' )
self.data.num_pBR322 = int( cols[ 1 ] )
def num_PhiX174( self, line ):
cols = string.split( line, ': ' )
self.data.num_PhiX174 = int( cols[ 1 ] )
def num_SV40( self, line ):
cols = string.split( line, ':' )
cols = string.split( cols[ 1 ], ' ' )
self.data.num_SV40 = cols[ 1 ]
def index_file(filename, indexname, rec2key=None):
"""index_file(filename, ind/exname, rec2key=None)
Index a rebase file. filename is the name of the file.
indexname is the name of the dictionary. rec2key is an
optional callback that takes a Record and generates a unique key
(e.g. the accession number) for the record. If not specified,
the sequence title will be used.
"""
if not os.path.exists(filename):
raise ValueError, "%s does not exist" % filename
index = Index.Index(indexname, truncate=1)
index[Dictionary._Dictionary__filename_key] = filename
iter = Iterator(open(filename), parser=RecordParser())
while 1:
start = iter._uhandle.tell()
rec = iter.next()
length = iter._uhandle.tell() - start
if rec is None:
break
if rec2key is not None:
key = rec2key(rec)
else:
key = rec.title
if not key:
raise KeyError, "empty sequence key was produced"
elif index.has_key(key):
raise KeyError, "duplicate key %s found" % key
index[key] = start, length
| dbmi-pitt/DIKB-Micropublication | scripts/mp-scripts/Bio/Rebase/__init__.py | Python | apache-2.0 | 13,770 |
# -*- coding: utf-8 -*-
"""
Display NVIDIA properties currently exhibiting in the NVIDIA GPUs.
nvidia-smi, short for NVIDIA System Management Interface program, is a cross
platform tool that supports all standard NVIDIA driver-supported Linux distros.
Configuration parameters:
cache_timeout: refresh interval for this module (default 10)
format: display format for this module (default '{format_gpu}')
format_gpu: display format for NVIDIA GPUs
*(default '{gpu_name} [\?color=temperature.gpu {temperature.gpu}°] '
'[\?color=memory.used_percent {memory.used} MiB]')*
format_gpu_separator: show separator if more than one (default ' ')
thresholds: specify color thresholds to use
(default [(0, 'good'), (65, 'degraded'), (75, 'orange'), (85, 'bad')])
Format placeholders:
{format_gpu} format for NVIDIA GPUs
format_gpu placeholders:
{index} Zero based index of the GPU.
{count} The number of NVIDIA GPUs in the system
{driver_version} The version of the installed NVIDIA display driver
{gpu_name} The official product name of the GPU
{gpu_uuid} Globally unique immutable identifier of the GPU
{memory.free} Total free memory
{memory.total} Total installed GPU memory
{memory.used} Total memory allocated by active contexts
{memory.used_percent} Total memory allocated by active contexts percentage
{temperature.gpu} Core GPU temperature in degrees C
Use `python /path/to/nvidia_smi.py --list-properties` for a full list of
supported NVIDIA properties to use. Not all of supported NVIDIA properties
will be usable. See `nvidia-smi --help-query-gpu` for more information.
Color thresholds:
format_gpu:
`xxx`: print a color based on the value of NVIDIA `xxx` property
Requires:
nvidia-smi: command line interface to query NVIDIA devices
Examples:
```
# add {memory.used_percent}
nvidia_smi {
format_gpu = '{gpu_name} [\?color=temperature.gpu {temperature.gpu}°] '
format_gpu += '[\?color=memory.used_percent {memory.used} MiB'
format_gpu += '[\?color=darkgray&show \|]{memory.used_percent:.1f}%]'
}
```
@author lasers
SAMPLE OUTPUT
[
{'full_text': 'Quadro NVS 295 '},
{'color': '#00ff00', 'full_text': '51°C'},
]
memory
[
{'full_text': 'Quadro NVS 295 '},
{'color': '#ffff00', 'full_text': '74°C '},
{'color': '#00ff00', 'full_text': '155 MiB'},
{'color': '#a9a9a9', 'full_text': '|'},
{'color': '#00ff00', 'full_text': '60.8%'},
]
percent
[
{'full_text': 'Quadro NVS 295 '},
{'full_text': '73° ', 'color': '#fce94f'},
{'full_text': '192 MiB', 'color': '#ffa500'},
{'full_text': '|', 'color': '#a9a9a9'},
{'full_text': '75.3%', 'color': '#ffa500'}
]
"""
STRING_NOT_INSTALLED = 'not installed'
class Py3status:
"""
"""
# available configuration parameters
cache_timeout = 10
format = '{format_gpu}'
format_gpu = (u'{gpu_name} [\?color=temperature.gpu {temperature.gpu}°] '
'[\?color=memory.used_percent {memory.used} MiB]')
format_gpu_separator = ' '
thresholds = [(0, 'good'), (65, 'degraded'), (75, 'orange'), (85, 'bad')]
def post_config_hook(self):
command = 'nvidia-smi --format=csv,noheader,nounits --query-gpu='
if not self.py3.check_commands(command.split()[0]):
raise Exception(STRING_NOT_INSTALLED)
self.properties = self.py3.get_placeholders_list(self.format_gpu)
format_gpu = {x: ':.1f' for x in self.properties if 'used_percent' in x}
self.format_gpu = self.py3.update_placeholder_formats(
self.format_gpu, format_gpu
)
for name in ['memory.used_percent']:
if name in self.properties:
self.properties.remove(name)
for name in ['memory.used', 'memory.total']:
if name not in self.properties:
self.properties.append(name)
self.nvidia_command = command + ','.join(self.properties)
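        # With the default format_gpu this builds a command along the lines of
        # (placeholder order is an assumption, it follows the format string):
        # nvidia-smi --format=csv,noheader,nounits \
        #     --query-gpu=gpu_name,temperature.gpu,memory.used,memory.total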
self.thresholds_init = self.py3.get_color_names_list(self.format_gpu)
def _get_nvidia_data(self):
data = self.py3.command_output(self.nvidia_command)
if '[Not Supported]' in data:
data = data.replace('[Not Supported]', 'None')
return data
def nvidia_smi(self):
nvidia_data = self._get_nvidia_data()
new_gpu = []
for line in nvidia_data.splitlines():
gpu = dict(zip(self.properties, line.split(', ')))
gpu['memory.used_percent'] = (
float(gpu['memory.used']) / float(gpu['memory.total']) * 100.0
)
for x in self.thresholds_init:
if x in gpu:
self.py3.threshold_get_color(gpu[x], x)
new_gpu.append(self.py3.safe_format(self.format_gpu, gpu))
format_gpu_separator = self.py3.safe_format(self.format_gpu_separator)
format_gpu = self.py3.composite_join(format_gpu_separator, new_gpu)
return {
'cached_until': self.py3.time_in(self.cache_timeout),
'full_text': self.py3.safe_format(
self.format, {'format_gpu': format_gpu}
)
}
if __name__ == "__main__":
from sys import argv
if '--list-properties' in argv:
from sys import exit
from json import dumps
from subprocess import check_output
help_cmd = 'nvidia-smi --help-query-gpu'
help_data = check_output(help_cmd.split()).decode()
new_properties = []
e = ['Default', 'Exclusive_Thread', 'Exclusive_Process', 'Prohibited']
for line in help_data.splitlines():
if line.startswith('"'):
properties = line.split('"')[1::2]
for name in properties:
if name not in e:
new_properties.append(name)
properties = ','.join(new_properties)
gpu_cmd = 'nvidia-smi --format=csv,noheader,nounits --query-gpu='
gpu_data = check_output((gpu_cmd + properties).split()).decode()
new_gpus = []
msg = 'This GPU contains {} supported properties.'
for line in gpu_data.splitlines():
gpu = dict(zip(new_properties, line.split(', ')))
gpu = {k: v for k, v in gpu.items() if '[Not Supported]' not in v}
gpu['= ' + msg.format(len(gpu))] = ''
gpu['=' * (len(msg) + 2)] = ''
new_gpus.append(gpu)
print(dumps(new_gpus, sort_keys=True, indent=4))
exit()
"""
Run module in test mode.
"""
from py3status.module_test import module_test
module_test(Py3status)
| tobes/py3status | py3status/modules/nvidia_smi.py | Python | bsd-3-clause | 6,729 |
#!/usr/bin/env python
##############################################################################
#
# diffpy.pyfullprof by DANSE Diffraction group
# Simon J. L. Billinge
# (c) 2010 Trustees of the Columbia University
# in the City of New York. All rights reserved.
#
# File coded by: Wenduo Zhou, Jiwu Liu and Peng Tian
#
# See AUTHORS.txt for a list of people who contributed.
# See LICENSE.txt for license information.
#
##############################################################################
"""class Fit - a class containing information to perform a multi-phase,
multi-pattern Rietveld refinement
"""
__id__ = "$Id: fit.py 6843 2013-01-09 22:14:20Z juhas $"
import os
import glob
import shutil
from diffpy.pyfullprof.refine import Refine
from diffpy.pyfullprof.rietveldclass import RietveldClass
from diffpy.pyfullprof.utilfunction import verifyType
from diffpy.pyfullprof.infoclass import ParameterInfo
from diffpy.pyfullprof.infoclass import BoolInfo
from diffpy.pyfullprof.infoclass import EnumInfo
from diffpy.pyfullprof.infoclass import FloatInfo
from diffpy.pyfullprof.infoclass import IntInfo
from diffpy.pyfullprof.infoclass import RefineInfo
from diffpy.pyfullprof.infoclass import StringInfo
from diffpy.pyfullprof.infoclass import ObjectInfo
from diffpy.pyfullprof.utilfunction import checkFileExistence
from diffpy.pyfullprof.exception import RietError
class Fit(RietveldClass):
"""
Fit contains information for a single Rietveld refinement configuration
attributes
banklist -- list of integers, index of banks used
"""
ParamDict = {
# general information
"Name": StringInfo("Name", "Fit Name", "new fit"), \
"Information": StringInfo("Information", "Fit Information", ""), \
"physparam": FloatInfo("physparam", "External Parameter", 0.0), \
# refinement solution information
"Chi2": FloatInfo("Chi2", "Chi^2", 0.0, '', 0.0, None),
# refinement setup information
"Dum": EnumInfo("Dum", "Divergence Control", 0, \
{0: "Regular", \
1: "criterion of convergence is not applied when shifts are lower than a fraction of standard deviation", \
2: "stopped in case of local divergence", \
3: "reflection near excluded regions are not taken into account for Bragg R-factor"}, \
[0, 1, 2, 3]),
"Ias": EnumInfo("Ias", "Reordering of Reflections", 0,
{0: "At First Cycle",
1: "At Each Cycle"},
[0, 1]),
"Cry": EnumInfo("Cry", "Job and Refinement Algorithm", 0, \
{0: "Rietveld refinement", \
1: "Refinement of single crystal data or integrated intensity of powder data", \
2: "No least-square method is applied (Monte Carlo)", \
3: "Simulated Annearing"}, \
[0, 1, 2, 3]),
"Opt": BoolInfo("Opt", "Calculation Optimization", False),
"Aut": BoolInfo("Aut", "Automatic Mode for Refinement Codes Numbering", False),
"NCY": IntInfo("NCY", "Refinement Cycle Number", 1, 1, None),
"Eps": FloatInfo("Eps", "Convergence Precision", 0.1, '', 0.0, None),
"R_at": FloatInfo("R_at", "Atomic Relaxation Factor", 1.0),
"R_an": FloatInfo("R_an", "Anisotropic Relaxation Factor", 1.0),
"R_pr": FloatInfo("R_pr", "Profile Relaxation Factor", 1.0),
"R_gl": FloatInfo("R_gl", "Global Pamameter Relaxation Factor", 1.0),
# output options
"Mat": EnumInfo("Mat", "Correlation Matrix Output", 1,
{0: "no action", \
1: "written in CODFIL.dat", \
2: "diagonal of least square matrix is printed at every cycle"},
[0,1,2]),
"Pcr": EnumInfo("Pcr", "Upate .pcr file after refinement", 1,
{1: "CODFIL.pcr is re-written with updated parameters",
2: "A new input file is generated named CODFIL.new"},
[2,1]),
"Syo": EnumInfo("Syo", "Output of the symmetry operator", 0,
{0: "no action",
1: "symmetry operators are written in CODFIL.out"},
[0,1]),
"Rpa": EnumInfo("Rpa", "Output .rpa file", 1,
{-1:".cif file",
0: "no action",
1: ".rpa file",
2: ".sav file"},
[0,1,2]),
"Sym": EnumInfo("Sym", "Output .sym file", 1,
{0: "no cation",
1: "prepare CODFIL.sym"},
[0,1]),
"Sho": BoolInfo("Sho", "Reduced output", False),
"Nre": IntInfo("Nre", "Number of restrainted parameters", 0),
}
ObjectDict = {
"Refine": ObjectInfo("Refine", "Refine"),
}
ObjectListDict = {
#"MonteCarlo": ObjectInfo("MonteCarlo", "MonteCarlo", 0, 1),
#"SimulatedAnnealing": ObjectInfo("SimulatedAnneaing", "SimulatedAnnealing", 0, 1),
"Pattern": ObjectInfo("PatternList", "Pattern", 1, None),
"Phase": ObjectInfo("PhaseList", "Phase", 1, None),
"Contribution": ObjectInfo("ContributionList", "Contribution", 0, None),
}
def __init__(self, parent):
"""
initialization: add a new Fit, and create the refine object belonged to this fit object
"""
RietveldClass.__init__(self, parent)
# init refine
refineobj = Refine(None)
self.set("Refine", refineobj)
# bank information: This is for some specific pattern data such as ".gss" with different bank
self.banklist = []
# internal data
self.key = ''
self._param_indices = {}
self.updateParamIndices(self._param_indices)
return
def __str__(self):
"""
customerized output
"""
rstring = ""
rstring += RietveldClass.__str__(self)
return rstring
def getContribution(self, p1, p2):
"""
get the contribution by pattern and phase
p1: pattern/phase
p2: phase/pattern
return -- (1) Contribution
(2) None if not exist
"""
from diffpy.pyfullprof.pattern import Pattern
from diffpy.pyfullprof.phase import Phase
from diffpy.pyfullprof.contribution import Contribution
phase = None
pattern = None
if isinstance(p1, Pattern) and isinstance(p2, Phase):
pattern = p1
phase = p2
elif isinstance(p1, Phase) and isinstance(p2, Pattern):
pattern = p2
phase = p1
else:
raise NotImplementedError, "fit.getContribution: p1 and p2 must be phase and pattern or pattern and phase"
contributionlist = self.get("Contribution")
for contribution in contributionlist:
if contribution._ParentPhase == phase and contribution._ParentPattern == pattern:
return contribution
# if return will be None, then print out some debugging information
if 0:
contributionlist = self.get("Contribution")
dbmsg = "pyfullprof.core.Fit.getContribution(): Cannot Find a Matching Contribution!\n"
dbmsg += "%-20s: Phase -- %-30s Pattern -- %-30s\n"%("Input", repr(phase), repr(pattern))
counts = 0
for contribution in contributionlist:
addrphase = repr(contribution._ParentPhase)
addrpattern = repr(contribution._ParentPattern)
dbmsg += "%-20s: Phase -- %-30s Pattern -- %-30s\n"%("Contribution "+str(counts), addrphase, addrpattern)
counts += 1
print dbmsg
return None
def getParamList(self):
return self.Refine.constraints
def updateFit(self, newfit):
"""Update self with the new fit, which is the resultant Fit instance.
newfit -- an instance of Fit
"""
# update Chi^2
for paramname in self.ParamDict:
self.set(paramname, newfit.get(paramname))
for constraint in self.get("Refine").constraints[:]:
if constraint.on:
path = constraint.path
val = newfit.getByPath(path)
self.setByPath(path, val)
#FIXME: it is a get-around of the engine bug
newconstraint = newfit.getConstraintByPath(path)
if newconstraint is not None:
constraint.sigma = newconstraint.sigma
for constraint in self.get("Refine").constraints[:]:
print " %-10s %-15s %-15.6f+/- %-11.6f" \
% ( constraint.name, constraint.varName, constraint.getValue(),
constraint.sigma)
print "\n"
return
def validate(self, mode="Refine"):
"""
validate the parameters, subclass and container to meet the refinement requirement
Arguments
- mode : string, validate mode, (Refine, Calculate)
Return : Boolean
"""
rvalue = RietveldClass.validate(self)
errmsg = ""
# I. Synchronization of FulProf Parameter & Check Data File Existence
for pattern in self.get("Pattern"):
# 2.1 data file existence
if mode == "Refine":
exist = checkFileExistence(pattern.get("Datafile"))
if not exist:
rvalue = False
errmsg += "Data File %-10s Cannot Be Found\n"% (pattern.get("Datafile"))
# 2. Nre
nre = 0
for variable in self.get("Refine").get("Variable"):
if variable.get("usemin") is True:
nre += 1
# End -- for variable in ...
self.set("Nre", nre)
# Check Validity
# 1. parameter
if self.get("NCY") < 1:
rvalue = False
# 2. .hkl file
# the reason why not put the checking .hkl file in Contribution is that
# there is a sequence number related between pattern sequence
# within contribution, it is hard to get this number
for pattern in self.get("Pattern"):
# scan all Phases
pindex = 1
fnamer = pattern.get("Datafile").split(".")[0]
usehkl = False
for phase in self.get("Phase"):
# get contribution
contribution = self.getContribution(pattern, phase)
if contribution is not None and contribution.get("Irf") == 2:
# using reflection
usehkl = True
# check single phase/contribution hkl file
hklfname = fnamer+str(pindex)+".hkl"
try:
hklfile = open(hklfname, "r")
hklfile.close()
except IOError, err:
# if no such file exits, update Irf to 0
contribution.set("Irf", 0)
# error message output
errmsg += "Fit.validate(): Reflection File %-10s Cannot Be Found: "% (hklfname)
errmsg += "Chaning Contribution.Irf to 0 ... Related to Phase[%-5s]"% (pindex)
print errmsg
# End -- if contribution is not None and Irf == 2:
pindex += 1
# End -- for phase in self.get("Phase"):
if usehkl is True:
# check overall hkl file
hklfname = fnamer+".hkl"
try:
hklfile = open(hklfname, "r")
hklfile.close()
except IOError, err:
# if no such file exists, update all Irf to 0
for contribution in self.get("Contribution"):
contribution.set("Irf", 0)
# END -- if usehkl is True:
# End -- for pattern in self.get("Pattern"):
if rvalue is not True:
# Error output
errmsg = "=== Fit.validate() ===\n" + "Invalidity Deteced\n"
print errmsg
return rvalue
def refine(self, cycle, srtype="l", mode="refine", signature=["dry","/tmp/"]):
""" Unify the interface to connect higher level """
import diffpy.pyfullprof.runfullprof as FpEngine
subid = signature[0]
processdir = signature[1]
# 1. set cycle number
self.set("NCY", cycle)
self.adaptPyFullProftoSrRietveld(srtype, processdir)
# 2. init file name for refinement
pcrfullname = os.path.join(processdir, "temp.pcr")
# 3. call runfullprof
fitsummary = FpEngine.runFullProf(self, pcrfullname, mode, srtype, processdir)
# 4. updateFit and get data/reflectionlist files
self.getRefineResult(fitsummary, srtype, pcrfullname)
filedir = processdir + "/" + "temp*.*"
des = processdir + "/" + subid
os.mkdir(des)
for fname in glob.glob(filedir):
shutil.move(fname, des)
return fitsummary
def adaptPyFullProftoSrRietveld(self, srtype, processdir, weightscheme="standard"):
""" Change some setting to FullProf from SrRietveld """
if srtype == "l":
# Lebail: set Jbt -> Profile Matching Mode, and Auto-calculate
# FIXME In advanced mode, if Fit can capture mult-step refine, Irf=2 in step>1 refine
for phase in self.get("Phase"):
phase.set("Jbt", 2)
self.Jbt=2
for contr in self.get("Contribution"):
contr.set("Irf", 0)
elif srtype == "r":
# Rietveld
for phase in self.get("Phase"):
phase.set("Jbt", 0)
self.Jbt=0
# END-IF: if srtype
for pattern in self.get("Pattern"):
if weightscheme[0] == "s":
pattern.set("Iwg", 0)
elif weightscheme[0] == "u":
pattern.set("Iwg", 2)
else:
errmsg = "Weighting Scheme %-10s Is Not Supported"% (weightscheme)
raise RietError(errmsg)
return
def getRefineResult(self, fitsummary, srtype, pcrfullname):
""" get out Refinement results
"""
import diffpy.pyfullprof.fpoutputfileparsers as FPP
newfit = fitsummary.getFit()
if isinstance(newfit, Fit):
self.updateFit(newfit)
else:
#FIXME: should change the refine status and let srr to handle the error
raise RietError(self.__class__.__name__+".refine(): Fit Error!")
self._readCalculatePattern(pcrfullname)
# if mode=lebail, import reflections from RFL file
if srtype == "l":
patterns = self.get("Pattern")
numpat = len(patterns)
if numpat == 1:
singlepattern = True
else:
singlepattern = False
# b) Phase
numpha = len(self.get("Phase"))
# c) Read all reflections
hklrange = {}
for patid in range(numpat):
tempreflects = []
hklrange["min"] = patterns[patid].get("Thmin")
hklrange["max"] = patterns[patid].get("Thmax")
for phaseid in range(numpha):
# Only work on phase-pattern related case
corecontrib = self.getContribution(self.get("Phase")[phaseid],
self.get("Pattern")[patid])
if corecontrib is not None:
reflectdict = FPP.parseHKLn(pcrfullname, phaseno=phaseid+1,
singlepattern=singlepattern, patno=patid+1, hklrange=hklrange)
tempreflects.append(reflectdict)
# END-IF
# LOOP-OVER
patterns[patid]._reflections[phaseid] = tempreflects
# LOOP-OVER
# turn all constraint refinement off
#self.fixAll()
return
def fixAll(self):
"""Fix all the refinable parameters at their current values."""
srrefine = self.get("Refine")
for constraint in srrefine.constraints[:]:
# there is no removeConstraintByPath, so a constraint can only be removed
# by its parname + index
constraint.owner.removeConstraint(constraint.parname, constraint.index)
srrefine.constraints = []
return
def _readCalculatePattern(self, pcrfullname):
"""read the calculated pattern from refinement solution
put the data as list of 2-tuples to each component/data
"""
pcrbasename = os.path.basename(pcrfullname)
pcrrootname = os.path.splitext(pcrbasename)[0]
processdir = pcrfullname.split(pcrbasename)[0][:-1]
# get pattern list
patternlist = self.get("Pattern")
# read each model file
index = 0
for pattern in patternlist:
if pattern.get("Prf") == 3 or pattern.get("Prf") == -3:
# output solution in prf format
if len(patternlist) > 1:
prffname = processdir+"/"+pcrrootname+"_"+str(index+1)+".prf"
else:
prffname = processdir+"/"+pcrrootname+".prf"
pattern.importPrfFile(prffname)
else:
errmsg = "Prf = %-5s will be implemented soon in Fit._readCalculatePattern()"%(pattern.get("Prf"))
raise NotImplementedError, errmsg
index += 1
# End of: for pattern in patternlist:
return
def calculate(self, processdir):
"""calculate all the patterns according to the given model, \
peakprofile and sample corrections
this method is closely related to Gsas engine
Arguments:
        - processdir : str, directory in which the calculation is run
Return -- None
"""
import diffpy.pyfullprof.runfullprof as FpEngine
# 1. set cycle
self.set("NCY", 1)
for pattern in self.get("Pattern"):
# 2. set to the correct choice
pattern.setCalculation()
# 3. init file names for refinement
pcrfullname = processdir + "/TEMP.EXP"
# 5. call GSAS to refine
FpEngine.runfullprof(self, pcrfullname, mode="Calculate")
self._readCalculatePattern(pcrfullname)
return
def setDataProperty(self, datadict):
""" Set Datafile for FullProf """
id = -1
for pattern in self.get("Pattern"):
id += 1
radiationtype = pattern.get("Job")
pattern.setDataProperty([datadict.keys()[id], datadict.values()[id]["Name"]],
radiationtype)
return
""" End of class Fit """
class MonteCarlo(RietveldClass):
# Optional (Cry=2): Monte Carlo search parameters
ParamDict = {
"NCONF": IntInfo("NCONF", "Number of Configuration", 1, 1, None),
"NSOLU": IntInfo("NSOLU", "Number of Solution", 1, 1, None),
"NREFLEXF": IntInfo("NREFLEXF", "Number of Reflection", 0, 0, None),
"NSCALE": IntInfo("NSCALEF", "Scale Factor", 0)
}
ObjectDict = {}
ObjectListDict = {}
def __init__(self, Parent):
RietveldClass.__init__(self, Parent)
return
class CPL(RietveldClass):
ParamDict = {
"flag": EnumInfo("flag", "flags to indicate if the coefficient will be switched", 0,
{0: "remain fixed",
1: "switched"},
[1,0]),
}
def __init__(self, Parent):
RietveldClass.__init__(self, Parent)
return
class SimulatedAnnealing(RietveldClass):
# Simulated (Cry=3): Simulated annealing parameters
ParamDict = {
"T_INI": FloatInfo("T_INI", "Initial Temperature", 0.0, '', 0.0, None),
"ANNEAL": FloatInfo("ANNEAL", "Reduction Factor of the temperature between the MC Cycles", 0.9, '', 0.0, None),
"ACCEPT": FloatInfo("ACCEPT", "Lowest percentage of accepted configurations", 0.5,'', 0.0, None),
"NUMTEMPS": IntInfo("NUMTEMPS", "Maximum number of temperature", 1, 1, None),
"NUMTHCYC": IntInfo("NUMTHCYC", "Number of Monte Carlo Cycles", 1, 1, None),
"INITCONF": EnumInfo("INITCONF", "Initial Configuration", 0,
{0: "random",
1: "given"},
[0, 1]),
"SEED_Random": StringInfo("SEED_Random", "Randomized Seed", ""),
"NCONF": IntInfo("NCONF", "Number of Configuration", 1, 1, None),
"NSOLU": IntInfo("NSOLU", "Number of Solution", 1, 1, None),
"NREFLEXF": IntInfo("NREFLEXF", "Number of Reflection", 0, 0, None),
"NSCALE": IntInfo("NSCALEF", "Scale Factor", 0),
"NALGOR": EnumInfo("NALGOR", "Algorithm", 0,
{0: "Corana algorithm",
1: "Corana algorithm is selected using as initial steps",
2: "Conventional algorithm using fixed steps"},
[2, 0, 1]),
"ISWAP": IntInfo("ISWAP", "Interchange of Atoms", 0, 0, None),
}
ObjectDict = {}
ObjectListDict = {
"CPL": ObjectInfo("CPLList", "CPL", 0, None),
}
def __init__(self, Parent):
RietveldClass.__init__(self, Parent)
return
| xpclove/autofp | diffpy/pyfullprof/fit.py | Python | gpl-3.0 | 22,179 |
# -*- coding: utf-8 -*-
from __future__ import division, print_function, absolute_import
from numpy import abs, cos, exp, log, arange, pi, roll, sin, sqrt, sum
from .go_benchmark import Benchmark
class BartelsConn(Benchmark):
r"""
Bartels-Conn objective function.
The BartelsConn [1]_ global optimization problem is a multimodal
minimization problem defined as follows:
.. math::
f_{\text{BartelsConn}}(x) = \lvert {x_1^2 + x_2^2 + x_1x_2} \rvert +
\lvert {\sin(x_1)} \rvert + \lvert {\cos(x_2)} \rvert
with :math:`x_i \in [-500, 500]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 1` for :math:`x = [0, 0]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-500.] * self.N, [500.] * self.N)
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 1.0
def fun(self, x, *args):
self.nfev += 1
return (abs(x[0] ** 2.0 + x[1] ** 2.0 + x[0] * x[1]) + abs(sin(x[0]))
+ abs(cos(x[1])))
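    # Sanity check of the stated optimum: at x = [0, 0] the three terms are
    # |0 + 0 + 0| + |sin(0)| + |cos(0)| = 0 + 0 + 1, giving f = 1 as documented.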
class Beale(Benchmark):
r"""
Beale objective function.
The Beale [1]_ global optimization problem is a multimodal
minimization problem defined as follows:
.. math::
f_{\text{Beale}}(x) = \left(x_1 x_2 - x_1 + 1.5\right)^{2} +
\left(x_1 x_2^{2} - x_1 + 2.25\right)^{2} + \left(x_1 x_2^{3} - x_1 +
2.625\right)^{2}
with :math:`x_i \in [-4.5, 4.5]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x=[3, 0.5]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-4.5] * self.N, [4.5] * self.N)
self.global_optimum = [[3.0, 0.5]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return ((1.5 - x[0] + x[0] * x[1]) ** 2
+ (2.25 - x[0] + x[0] * x[1] ** 2) ** 2
+ (2.625 - x[0] + x[0] * x[1] ** 3) ** 2)
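    # Sanity check: at the documented optimum x = [3, 0.5] every residual
    # vanishes, e.g. 1.5 - 3 + 3 * 0.5 = 0, 2.25 - 3 + 3 * 0.25 = 0 and
    # 2.625 - 3 + 3 * 0.125 = 0, so f = 0.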
class BiggsExp02(Benchmark):
r"""
BiggsExp02 objective function.
The BiggsExp02 [1]_ global optimization problem is a multimodal minimization
problem defined as follows
.. math::
\begin{matrix}
f_{\text{BiggsExp02}}(x) = \sum_{i=1}^{10} (e^{-t_i x_1}
- 5 e^{-t_i x_2} - y_i)^2 \\
t_i = 0.1 i\\
y_i = e^{-t_i} - 5 e^{-10t_i}\\
\end{matrix}
with :math:`x_i \in [0, 20]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([0] * 2,
[20] * 2)
self.global_optimum = [[1., 10.]]
self.fglob = 0
def fun(self, x, *args):
self.nfev += 1
t = arange(1, 11.) * 0.1
y = exp(-t) - 5 * exp(-10 * t)
vec = (exp(-t * x[0]) - 5 * exp(-t * x[1]) - y) ** 2
return sum(vec)
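    # At the optimum x = [1, 10] the model exp(-t) - 5 * exp(-10 * t) reproduces
    # y exactly by construction, so every squared residual, and hence f, is 0.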
class BiggsExp03(Benchmark):
r"""
BiggsExp03 objective function.
The BiggsExp03 [1]_ global optimization problem is a multimodal minimization
problem defined as follows
.. math::
\begin{matrix}\ f_{\text{BiggsExp03}}(x) = \sum_{i=1}^{10}
(e^{-t_i x_1} - x_3e^{-t_i x_2} - y_i)^2\\
t_i = 0.1i\\
y_i = e^{-t_i} - 5e^{-10 t_i}\\
\end{matrix}
with :math:`x_i \in [0, 20]` for :math:`i = 1, 2, 3`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10, 5]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=3):
Benchmark.__init__(self, dimensions)
self._bounds = zip([0] * 3,
[20] * 3)
self.global_optimum = [[1., 10., 5.]]
self.fglob = 0
def fun(self, x, *args):
self.nfev += 1
t = arange(1., 11.) * 0.1
y = exp(-t) - 5 * exp(-10 * t)
vec = (exp(-t * x[0]) - x[2] * exp(-t * x[1]) - y) ** 2
return sum(vec)
class BiggsExp04(Benchmark):
r"""
BiggsExp04 objective function.
The BiggsExp04 [1]_ global optimization problem is a multimodal
minimization problem defined as follows
.. math::
\begin{matrix}\ f_{\text{BiggsExp04}}(x) = \sum_{i=1}^{10}
(x_3 e^{-t_i x_1} - x_4 e^{-t_i x_2} - y_i)^2\\
t_i = 0.1i\\
y_i = e^{-t_i} - 5 e^{-10 t_i}\\
\end{matrix}
with :math:`x_i \in [0, 20]` for :math:`i = 1, ..., 4`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10, 1, 5]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=4):
Benchmark.__init__(self, dimensions)
self._bounds = zip([0.] * 4,
[20.] * 4)
self.global_optimum = [[1., 10., 1., 5.]]
self.fglob = 0
def fun(self, x, *args):
self.nfev += 1
t = arange(1, 11.) * 0.1
y = exp(-t) - 5 * exp(-10 * t)
vec = (x[2] * exp(-t * x[0]) - x[3] * exp(-t * x[1]) - y) ** 2
return sum(vec)
class BiggsExp05(Benchmark):
r"""
BiggsExp05 objective function.
The BiggsExp05 [1]_ global optimization problem is a multimodal minimization
problem defined as follows
.. math::
\begin{matrix}\ f_{\text{BiggsExp05}}(x) = \sum_{i=1}^{11}
(x_3 e^{-t_i x_1} - x_4 e^{-t_i x_2} + 3 e^{-t_i x_5} - y_i)^2\\
t_i = 0.1i\\
y_i = e^{-t_i} - 5e^{-10 t_i} + 3e^{-4 t_i}\\
\end{matrix}
with :math:`x_i \in [0, 20]` for :math:`i=1, ..., 5`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10, 1, 5, 4]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=5):
Benchmark.__init__(self, dimensions)
self._bounds = zip([0.] * 5,
[20.] * 5)
self.global_optimum = [[1., 10., 1., 5., 4.]]
self.fglob = 0
def fun(self, x, *args):
self.nfev += 1
t = arange(1, 12.) * 0.1
y = exp(-t) - 5 * exp(-10 * t) + 3 * exp(-4 * t)
vec = (x[2] * exp(-t * x[0]) - x[3] * exp(-t * x[1])
+ 3 * exp(-t * x[4]) - y) ** 2
return sum(vec)
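# Sketch for the BiggsExp family above: `_check_biggs_exp` is a hypothetical
# helper that evaluates each member at its documented optimum, where the residual
# vector cancels and fun should return the stored fglob of 0.
def _check_biggs_exp():
    for cls, xopt in [(BiggsExp02, [1., 10.]),
                      (BiggsExp03, [1., 10., 5.]),
                      (BiggsExp04, [1., 10., 1., 5.]),
                      (BiggsExp05, [1., 10., 1., 5., 4.])]:
        bench = cls()
        assert abs(bench.fun(xopt) - bench.fglob) < 1e-12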
class Bird(Benchmark):
r"""
Bird objective function.
The Bird global optimization problem is a multimodal minimization
problem defined as follows
.. math::
f_{\text{Bird}}(x) = \left(x_1 - x_2\right)^{2} + e^{\left[1 -
\sin\left(x_1\right) \right]^{2}} \cos\left(x_2\right) + e^{\left[1 -
\cos\left(x_2\right)\right]^{2}} \sin\left(x_1\right)
with :math:`x_i \in [-2\pi, 2\pi]`
*Global optimum*: :math:`f(x) = -106.7645367198034` for :math:`x
= [4.701055751981055, 3.152946019601391]` or :math:`x =
[-1.582142172055011, -3.130246799635430]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-2.0 * pi] * self.N,
[2.0 * pi] * self.N)
self.global_optimum = [[4.701055751981055, 3.152946019601391],
[-1.582142172055011, -3.130246799635430]]
self.fglob = -106.7645367198034
def fun(self, x, *args):
self.nfev += 1
return (sin(x[0]) * exp((1 - cos(x[1])) ** 2)
+ cos(x[1]) * exp((1 - sin(x[0])) ** 2) + (x[0] - x[1]) ** 2)
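# Usage sketch: `_check_bird` is a hypothetical helper showing a benchmark with
# two minimisers and a non-zero optimum; the tolerance is loose because the
# optima and fglob are only quoted to finite precision.
def _check_bird():
    bench = Bird()
    for xopt in bench.global_optimum:
        assert abs(bench.fun(xopt) - bench.fglob) < 1e-7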
class Bohachevsky1(Benchmark):
r"""
Bohachevsky 1 objective function.
The Bohachevsky 1 [1]_ global optimization problem is a multimodal
minimization problem defined as follows
.. math::
f_{\text{Bohachevsky}}(x) = \sum_{i=1}^{n-1}\left[x_i^2 + 2 x_{i+1}^2 -
0.3 \cos(3 \pi x_i) - 0.4 \cos(4 \pi x_{i + 1}) + 0.7 \right]
Here, :math:`n` represents the number of dimensions and :math:`x_i \in
[-15, 15]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = 0` for :math:`i = 1,
..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: equation needs to be fixed up in the docstring. see Jamil#17
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-100.0] * self.N, [100.0] * self.N)
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return (x[0] ** 2 + 2 * x[1] ** 2 - 0.3 * cos(3 * pi * x[0])
- 0.4 * cos(4 * pi * x[1]) + 0.7)
class Bohachevsky2(Benchmark):
r"""
Bohachevsky 2 objective function.
The Bohachevsky 2 [1]_ global optimization problem is a multimodal
minimization problem defined as follows
.. math::
f_{\text{Bohachevsky}}(x) = \sum_{i=1}^{n-1}\left[x_i^2 + 2 x_{i+1}^2 -
0.3 \cos(3 \pi x_i) - 0.4 \cos(4 \pi x_{i + 1}) + 0.7 \right]
Here, :math:`n` represents the number of dimensions and :math:`x_i \in
[-15, 15]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = 0` for :math:`i = 1,
..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: equation needs to be fixed up in the docstring. Jamil is also wrong.
There should be no 0.4 factor in front of the cos term
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-100.0] * self.N, [100.0] * self.N)
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return (x[0] ** 2 + 2 * x[1] ** 2 - 0.3 * cos(3 * pi * x[0])
* cos(4 * pi * x[1]) + 0.3)
class Bohachevsky3(Benchmark):
r"""
Bohachevsky 3 objective function.
The Bohachevsky 3 [1]_ global optimization problem is a multimodal
minimization problem defined as follows
.. math::
f_{\text{Bohachevsky}}(x) = \sum_{i=1}^{n-1}\left[x_i^2 + 2 x_{i+1}^2 -
0.3 \cos(3 \pi x_i) - 0.4 \cos(4 \pi x_{i + 1}) + 0.7 \right]
Here, :math:`n` represents the number of dimensions and :math:`x_i \in
[-15, 15]` for :math:`i = 1, ..., n`.
*Global optimum*: :math:`f(x) = 0` for :math:`x_i = 0` for :math:`i = 1,
..., n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: equation needs to be fixed up in the docstring. Jamil#19
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-100.0] * self.N, [100.0] * self.N)
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return (x[0] ** 2 + 2 * x[1] ** 2
- 0.3 * cos(3 * pi * x[0] + 4 * pi * x[1]) + 0.3)
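# Sketch for the three Bohachevsky variants above: `_check_bohachevsky` is a
# hypothetical helper; all variants have their minimum of 0 at the origin, where
# the cosine terms cancel the constant offset.
def _check_bohachevsky():
    for cls in (Bohachevsky1, Bohachevsky2, Bohachevsky3):
        bench = cls()
        assert abs(bench.fun([0.0, 0.0]) - bench.fglob) < 1e-12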
class BoxBetts(Benchmark):
r"""
BoxBetts objective function.
The BoxBetts global optimization problem is a multimodal
minimization problem defined as follows
.. math::
        f_{\text{BoxBetts}}(x) = \sum_{i=1}^k g_i(x)^2
    where
    .. math::
        g_i(x) = e^{-0.1 i x_1} - e^{-0.1 i x_2} - x_3\left[e^{-0.1 i}
        - e^{-i}\right]
    and :math:`k = 10`.
Here, :math:`x_1 \in [0.9, 1.2], x_2 \in [9, 11.2], x_3 \in [0.9, 1.2]`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [1, 10, 1]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=3):
Benchmark.__init__(self, dimensions)
self._bounds = ([0.9, 1.2], [9.0, 11.2], [0.9, 1.2])
self.global_optimum = [[1.0, 10.0, 1.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
i = arange(1, 11)
g = (exp(-0.1 * i * x[0]) - exp(-0.1 * i * x[1])
- (exp(-0.1 * i) - exp(-i)) * x[2])
return sum(g**2)
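# Usage sketch (hypothetical helper): at x = [1, 10, 1] every summand g_i(x) in
# BoxBetts cancels exactly, so fun should return the stored fglob of 0.
def _check_box_betts():
    bench = BoxBetts()
    assert abs(bench.fun([1.0, 10.0, 1.0]) - bench.fglob) < 1e-12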
class Branin01(Benchmark):
r"""
Branin01 objective function.
The Branin01 global optimization problem is a multimodal minimization
problem defined as follows
.. math::
f_{\text{Branin01}}(x) = \left(- 1.275 \frac{x_1^{2}}{\pi^{2}} + 5
\frac{x_1}{\pi} + x_2 -6\right)^{2} + \left(10 -\frac{5}{4 \pi} \right)
\cos\left(x_1\right) + 10
with :math:`x_1 \in [-5, 10], x_2 \in [0, 15]`
*Global optimum*: :math:`f(x) = 0.39788735772973816` for :math:`x =
[-\pi, 12.275]` or :math:`x = [\pi, 2.275]` or :math:`x = [3\pi, 2.475]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: Jamil#22, one of the solutions is different
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-5., 10.), (0., 15.)]
self.global_optimum = [[-pi, 12.275], [pi, 2.275], [3 * pi, 2.475]]
self.fglob = 0.39788735772973816
def fun(self, x, *args):
self.nfev += 1
return ((x[1] - (5.1 / (4 * pi ** 2)) * x[0] ** 2
+ 5 * x[0] / pi - 6) ** 2
+ 10 * (1 - 1 / (8 * pi)) * cos(x[0]) + 10)
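# Worked-value sketch (hypothetical helper): at x = [pi, 2.275] the squared term
# vanishes and the remainder reduces to 5/(4*pi), matching the documented fglob
# of 0.39788735772973816 to floating-point accuracy.
def _check_branin01():
    bench = Branin01()
    assert abs(bench.fun([pi, 2.275]) - bench.fglob) < 1e-12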
class Branin02(Benchmark):
r"""
Branin02 objective function.
The Branin02 global optimization problem is a multimodal minimization
problem defined as follows
.. math::
f_{\text{Branin02}}(x) = \left(- 1.275 \frac{x_1^{2}}{\pi^{2}}
+ 5 \frac{x_1}{\pi} + x_2 - 6 \right)^{2} + \left(10 - \frac{5}{4 \pi}
\right) \cos\left(x_1\right) \cos\left(x_2\right)
+ \log(x_1^2+x_2^2 + 1) + 10
with :math:`x_i \in [-5, 15]` for :math:`i = 1, 2`.
    *Global optimum*: :math:`f(x) = 5.5589144038938247` for :math:`x = [-3.1969884, 12.52625787]`
.. [1] Gavana, A. Global Optimization Benchmarks and AMPGO retrieved 2015
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-5.0, 15.0), (-5.0, 15.0)]
self.global_optimum = [[-3.1969884, 12.52625787]]
self.fglob = 5.5589144038938247
def fun(self, x, *args):
self.nfev += 1
return ((x[1] - (5.1 / (4 * pi ** 2)) * x[0] ** 2
+ 5 * x[0] / pi - 6) ** 2
+ 10 * (1 - 1 / (8 * pi)) * cos(x[0]) * cos(x[1])
+ log(x[0] ** 2.0 + x[1] ** 2.0 + 1.0) + 10)
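# Usage sketch (hypothetical helper): Branin02's optimum is only known
# numerically, so the check relies on the coordinates stored in global_optimum
# and a modest tolerance rather than an exact cancellation.
def _check_branin02():
    bench = Branin02()
    assert abs(bench.fun(bench.global_optimum[0]) - bench.fglob) < 1e-7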
class Brent(Benchmark):
r"""
Brent objective function.
The Brent [1]_ global optimization problem is a multimodal minimization
problem defined as follows:
.. math::
f_{\text{Brent}}(x) = (x_1 + 10)^2 + (x_2 + 10)^2 + e^{(-x_1^2 -x_2^2)}
with :math:`x_i \in [-10, 10]` for :math:`i = 1, 2`.
*Global optimum*: :math:`f(x) = 0` for :math:`x = [-10, -10]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
    TODO: solution differs from Jamil#24
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-10.0] * self.N, [10.0] * self.N)
self.custom_bounds = ([-10, 2], [-10, 2])
self.global_optimum = [[-10.0, -10.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return ((x[0] + 10.0) ** 2.0 + (x[1] + 10.0) ** 2.0
+ exp(-x[0] ** 2.0 - x[1] ** 2.0))
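# Usage sketch (hypothetical helper): at x = [-10, -10] the quadratic terms
# vanish and exp(-200) is far below double precision, so fun is numerically
# indistinguishable from the stored fglob of 0.
def _check_brent():
    bench = Brent()
    assert abs(bench.fun([-10.0, -10.0]) - bench.fglob) < 1e-12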
class Brown(Benchmark):
r"""
Brown objective function.
The Brown [1]_ global optimization problem is a multimodal minimization
problem defined as follows:
.. math::
f_{\text{Brown}}(x) = \sum_{i=1}^{n-1}\left[
\left(x_i^2\right)^{x_{i + 1}^2 + 1}
+ \left(x_{i + 1}^2\right)^{x_i^2 + 1}\right]
with :math:`x_i \in [-1, 4]` for :math:`i=1,...,n`.
*Global optimum*: :math:`f(x_i) = 0` for :math:`x_i = 0` for
:math:`i=1,...,n`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = zip([-1.0] * self.N, [4.0] * self.N)
self.custom_bounds = ([-1.0, 1.0], [-1.0, 1.0])
self.global_optimum = [[0 for _ in range(self.N)]]
self.fglob = 0.0
self.change_dimensionality = True
def fun(self, x, *args):
self.nfev += 1
x0 = x[:-1]
x1 = x[1:]
return sum((x0 ** 2.0) ** (x1 ** 2.0 + 1.0)
+ (x1 ** 2.0) ** (x0 ** 2.0 + 1.0))
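# Usage sketch (hypothetical helper): Brown scales to any dimension
# (change_dimensionality is True) and its fun slices the input, so a numpy array
# is passed rather than a plain list; at the origin every summand is zero.
def _check_brown():
    import numpy as np
    bench = Brown()
    assert abs(bench.fun(np.zeros(2)) - bench.fglob) < 1e-12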
class Bukin02(Benchmark):
r"""
Bukin02 objective function.
The Bukin02 [1]_ global optimization problem is a multimodal minimization
problem defined as follows:
.. math::
f_{\text{Bukin02}}(x) = 100 (x_2^2 - 0.01x_1^2 + 1)
+ 0.01(x_1 + 10)^2
with :math:`x_1 \in [-15, -5], x_2 \in [-3, 3]`
*Global optimum*: :math:`f(x) = -124.75` for :math:`x = [-15, 0]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
TODO: I think that Gavana and Jamil are wrong on this function. In both
sources the x[1] term is not squared. As such there will be a minimum at
the smallest value of x[1].
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-15.0, -5.0), (-3.0, 3.0)]
self.global_optimum = [[-15.0, 0.0]]
self.fglob = -124.75
def fun(self, x, *args):
self.nfev += 1
return (100 * (x[1] ** 2 - 0.01 * x[0] ** 2 + 1.0)
+ 0.01 * (x[0] + 10.0) ** 2.0)
class Bukin04(Benchmark):
r"""
Bukin04 objective function.
The Bukin04 [1]_ global optimization problem is a multimodal minimization
problem defined as follows:
.. math::
f_{\text{Bukin04}}(x) = 100 x_2^{2} + 0.01 \lvert{x_1 + 10}
\rvert
with :math:`x_1 \in [-15, -5], x_2 \in [-3, 3]`
*Global optimum*: :math:`f(x) = 0` for :math:`x = [-10, 0]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-15.0, -5.0), (-3.0, 3.0)]
self.global_optimum = [[-10.0, 0.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return 100 * x[1] ** 2 + 0.01 * abs(x[0] + 10)
class Bukin06(Benchmark):
r"""
Bukin06 objective function.
The Bukin06 [1]_ global optimization problem is a multimodal minimization
problem defined as follows:
.. math::
f_{\text{Bukin06}}(x) = 100 \sqrt{ \lvert{x_2 - 0.01 x_1^{2}}
\rvert} + 0.01 \lvert{x_1 + 10} \rvert
with :math:`x_1 \in [-15, -5], x_2 \in [-3, 3]`
*Global optimum*: :math:`f(x) = 0` for :math:`x = [-10, 1]`
.. [1] Jamil, M. & Yang, X.-S. A Literature Survey of Benchmark Functions
For Global Optimization Problems Int. Journal of Mathematical Modelling
and Numerical Optimisation, 2013, 4, 150-194.
"""
def __init__(self, dimensions=2):
Benchmark.__init__(self, dimensions)
self._bounds = [(-15.0, -5.0), (-3.0, 3.0)]
self.global_optimum = [[-10.0, 1.0]]
self.fglob = 0.0
def fun(self, x, *args):
self.nfev += 1
return 100 * sqrt(abs(x[1] - 0.01 * x[0] ** 2)) + 0.01 * abs(x[0] + 10)
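# Sketch for the Bukin variants above: `_check_bukin` is a hypothetical helper;
# each variant takes its documented optimum value exactly at the listed point.
def _check_bukin():
    for cls, xopt in [(Bukin02, [-15.0, 0.0]),
                      (Bukin04, [-10.0, 0.0]),
                      (Bukin06, [-10.0, 1.0])]:
        bench = cls()
        assert abs(bench.fun(xopt) - bench.fglob) < 1e-12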
| chatcannon/scipy | benchmarks/benchmarks/go_benchmark_functions/go_funcs_B.py | Python | bsd-3-clause | 21,639 |
# -*- coding: utf-8 -*-
# Copyright (C) 2009, 2013 Rocky Bernstein
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from import_relative import import_relative
Mbase_cmd = import_relative('base_cmd', top_name='trepan')
Mcmdfns = import_relative('cmdfns', '..', 'trepan')
Mfile = import_relative('file', '...lib', 'trepan')
Mmisc = import_relative('misc', '...', 'trepan')
Mbreak = import_relative('break', '.', 'trepan')
class DisableCommand(Mbase_cmd.DebuggerCommand):
"""**disable** *bpnumber* [*bpnumber* ...]
Disables the breakpoints given as a space separated list of breakpoint
numbers. See also `info break` to get a list.
"""
category = 'breakpoints'
min_args = 0
max_args = None
name = os.path.basename(__file__).split('.')[0]
need_stack = False
short_help = 'Disable some breakpoints'
def run(self, args):
if len(args) == 1:
self.errmsg('No breakpoint number given.')
return
# if args[1] == 'display':
# self.display_enable(args[2:], 0)
# return
        for i in args[1:]:
            try:
                bpnum = int(i)
            except ValueError:
                self.errmsg('Breakpoint number %r is not a number.' % i)
                continue
            success, msg = self.core.bpmgr.en_disable_breakpoint_by_number(bpnum, False)
            if not success:
                self.errmsg(msg)
            else:
                self.msg('Breakpoint %s disabled.' % i)
pass
pass
return
if __name__ == '__main__':
Mdebugger = import_relative('debugger', '...')
d = Mdebugger.Debugger()
command = DisableCommand(d.core.processor)
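    # Smoke-test sketch: with no breakpoint numbers supplied, run() should only
    # report "No breakpoint number given." and return; this assumes the
    # freshly-built debugger above wires up errmsg()/msg() as usual.
    command.run(['disable'])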
pass
| kamawanu/pydbgr | trepan/processor/command/disable.py | Python | gpl-3.0 | 2,184 |
# --------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# --------------------------------------------------------------------------------------------
# pylint: disable=line-too-long
# pylint: disable=too-many-lines
# pylint: disable=inconsistent-return-statements
# pylint: disable=unused-variable
# Namespace Region
def cli_namespace_create(client, resource_group_name, namespace_name, location=None, tags=None):
from azure.mgmt.relay.models import RelayNamespace
return client.create_or_update(
resource_group_name=resource_group_name,
namespace_name=namespace_name,
parameters=RelayNamespace(
location,
tags)
)
def cli_namespace_update(instance, tags=None):
if tags is not None:
instance.tags = tags
return instance
def cli_namespace_list(client, resource_group_name=None):
if resource_group_name:
return client.list_by_resource_group(resource_group_name=resource_group_name)
return client.list()
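# Usage sketch: these customs are normally dispatched by the CLI framework,
# which injects the SDK operations group as `client`; a direct call looks
# roughly like the following (`namespaces_client` and the resource names are
# illustrative placeholders, not values defined in this module).
def _example_create_namespace(namespaces_client):
    return cli_namespace_create(namespaces_client,
                                resource_group_name='my-rg',
                                namespace_name='my-relay-ns',
                                location='westus',
                                tags={'env': 'dev'})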
# Namespace Authorization rule:
def cli_namespaceautho_create(client, resource_group_name, namespace_name, name, access_rights=None):
from azure.cli.command_modules.relay._utils import accessrights_converter
return client.create_or_update_authorization_rule(
resource_group_name=resource_group_name,
namespace_name=namespace_name,
authorization_rule_name=name,
rights=accessrights_converter(access_rights)
)
# Namespace Authorization rule:
def cli_namespaceautho_update(instance, rights):
from azure.cli.command_modules.relay._utils import accessrights_converter
instance.rights = accessrights_converter(rights)
return instance
# WCF Relay Region
def cli_wcfrelay_create(client, resource_group_name, namespace_name, relay_name, relay_type,
requires_client_authorization=None, requires_transport_security=None, user_metadata=None):
from azure.mgmt.relay.models import WcfRelay, Relaytype
    # Default to NetTcp unless "Http" is requested explicitly
    if relay_type == "Http":
        set_relay_type = Relaytype.http
    else:
        set_relay_type = Relaytype.net_tcp
wcfrelay_params = WcfRelay(
relay_type=set_relay_type,
requires_client_authorization=requires_client_authorization,
requires_transport_security=requires_transport_security,
user_metadata=user_metadata
)
return client.create_or_update(
resource_group_name=resource_group_name,
namespace_name=namespace_name,
relay_name=relay_name,
parameters=wcfrelay_params)
def cli_wcfrelay_update(instance, relay_type=None, user_metadata=None, status=None):
from azure.mgmt.relay.models import WcfRelay
returnobj = WcfRelay(relay_type=instance.relay_type,
requires_client_authorization=instance.requires_client_authorization,
requires_transport_security=instance.requires_transport_security,
user_metadata=instance.user_metadata)
if relay_type:
returnobj.relay_type = relay_type
if user_metadata:
returnobj.user_metadata = user_metadata
if status:
returnobj.status = status
return returnobj
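# Usage sketch for the generic-update pattern: the framework fetches the current
# resource, passes it to the updater as `instance`, and sends back whatever the
# updater returns. `existing_relay` below is an illustrative placeholder for
# such a fetched WcfRelay object.
def _example_update_wcfrelay(existing_relay):
    return cli_wcfrelay_update(existing_relay, user_metadata='owned-by-team-x')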
# Hybrid Connection Region
def cli_hyco_create(client, resource_group_name, namespace_name, hybrid_connection_name,
requires_client_authorization=None, user_metadata=None):
return client.create_or_update(
resource_group_name=resource_group_name,
namespace_name=namespace_name,
hybrid_connection_name=hybrid_connection_name,
requires_client_authorization=requires_client_authorization, user_metadata=user_metadata)
def cli_hyco_update(instance, requires_client_authorization=None, status=None, user_metadata=None):
from azure.mgmt.relay.models import HybridConnection
hyco_params = HybridConnection(requires_client_authorization=instance.requires_client_authorization,
user_metadata=instance.user_metadata)
if requires_client_authorization:
hyco_params.requires_client_authorization = requires_client_authorization
if status:
hyco_params.status = status
if user_metadata:
hyco_params.user_metadata = user_metadata
return hyco_params
def empty_on_404(ex):
from azure.mgmt.relay.models import ErrorResponseException
if isinstance(ex, ErrorResponseException) and ex.response.status_code == 404:
return None
raise ex
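# Wiring sketch: empty_on_404 is intended as an exception handler for show/get
# style commands, so a missing resource yields an empty result rather than a
# stack trace. `command_loader` and `namespace_util` are illustrative
# placeholders for the objects used in the accompanying commands.py.
def _example_register_show(command_loader, namespace_util):
    with command_loader.command_group('relay namespace', namespace_util) as g:
        g.command('show', 'get', exception_handler=empty_on_404)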
| yugangw-msft/azure-cli | src/azure-cli/azure/cli/command_modules/relay/custom.py | Python | mit | 4,737 |