#! /usr/bin/python3.2
"""
Module difflib -- helpers for computing deltas between objects.
Function get_close_matches(word, possibilities, n=3, cutoff=0.6):
Use SequenceMatcher to return list of the best "good enough" matches.
Function context_diff(a, b):
For two lists of strings, return a delta in context diff format.
Function ndiff(a, b):
Return a delta: the difference between `a` and `b` (lists of strings).
Function restore(delta, which):
Return one of the two sequences that generated an ndiff delta.
Function unified_diff(a, b):
For two lists of strings, return a delta in unified diff format.
Class SequenceMatcher:
A flexible class for comparing pairs of sequences of any type.
Class Differ:
For producing human-readable deltas from sequences of lines of text.
Class HtmlDiff:
For producing HTML side by side comparison with change highlights.
"""
__all__ = ['get_close_matches', 'ndiff', 'restore', 'SequenceMatcher',
'Differ','IS_CHARACTER_JUNK', 'IS_LINE_JUNK', 'context_diff',
'unified_diff', 'HtmlDiff', 'Match']
import warnings
import heapq
from collections import namedtuple as _namedtuple
Match = _namedtuple('Match', 'a b size')
def _calculate_ratio(matches, length):
if length:
return 2.0 * matches / length
return 1.0
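# Illustrative worked example of the ratio formula above (not part of the
# original module): with 3 matching elements across sequences of total
# length 8, the score is 2.0 * 3 / 8 == 0.75; empty input counts as a
# perfect match.
# >>> _calculate_ratio(3, 8)
# 0.75
# >>> _calculate_ratio(0, 0)
# 1.0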
class SequenceMatcher:
"""
SequenceMatcher is a flexible class for comparing pairs of sequences of
any type, so long as the sequence elements are hashable. The basic
algorithm predates, and is a little fancier than, an algorithm
published in the late 1980's by Ratcliff and Obershelp under the
hyperbolic name "gestalt pattern matching". The basic idea is to find
the longest contiguous matching subsequence that contains no "junk"
elements (R-O doesn't address junk). The same idea is then applied
recursively to the pieces of the sequences to the left and to the right
of the matching subsequence. This does not yield minimal edit
sequences, but does tend to yield matches that "look right" to people.
SequenceMatcher tries to compute a "human-friendly diff" between two
sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the
longest *contiguous* & junk-free matching subsequence. That's what
catches people's eyes. The Windows(tm) windiff has another interesting
notion, pairing up elements that appear uniquely in each sequence.
That, and the method here, appear to yield more intuitive difference
reports than does diff. This method appears to be the least vulnerable
to synching up on blocks of "junk lines", though (like blank lines in
ordinary text files, or maybe "<P>" lines in HTML files). That may be
because this is the only method of the 3 that has a *concept* of
"junk" <wink>.
Example, comparing two strings, and considering blanks to be "junk":
>>> s = SequenceMatcher(lambda x: x == " ",
... "private Thread currentThread;",
... "private volatile Thread currentThread;")
>>>
.ratio() returns a float in [0, 1], measuring the "similarity" of the
sequences. As a rule of thumb, a .ratio() value over 0.6 means the
sequences are close matches:
>>> print(round(s.ratio(), 3))
0.866
>>>
If you're only interested in where the sequences match,
.get_matching_blocks() is handy:
>>> for block in s.get_matching_blocks():
... print("a[%d] and b[%d] match for %d elements" % block)
a[0] and b[0] match for 8 elements
a[8] and b[17] match for 21 elements
a[29] and b[38] match for 0 elements
Note that the last tuple returned by .get_matching_blocks() is always a
dummy, (len(a), len(b), 0), and this is the only case in which the last
tuple element (number of elements matched) is 0.
If you want to know how to change the first sequence into the second,
use .get_opcodes():
>>> for opcode in s.get_opcodes():
... print("%6s a[%d:%d] b[%d:%d]" % opcode)
equal a[0:8] b[0:8]
insert a[8:8] b[8:17]
equal a[8:29] b[17:38]
See the Differ class for a fancy human-friendly file differencer, which
uses SequenceMatcher both to compare sequences of lines, and to compare
sequences of characters within similar (near-matching) lines.
See also function get_close_matches() in this module, which shows how
simple code building on SequenceMatcher can be used to do useful work.
Timing: Basic R-O is cubic time worst case and quadratic time expected
case. SequenceMatcher is quadratic time for the worst case and has
expected-case behavior dependent in a complicated way on how many
elements the sequences have in common; best case time is linear.
Methods:
__init__(isjunk=None, a='', b='')
Construct a SequenceMatcher.
set_seqs(a, b)
Set the two sequences to be compared.
set_seq1(a)
Set the first sequence to be compared.
set_seq2(b)
Set the second sequence to be compared.
find_longest_match(alo, ahi, blo, bhi)
Find longest matching block in a[alo:ahi] and b[blo:bhi].
get_matching_blocks()
Return list of triples describing matching subsequences.
get_opcodes()
Return list of 5-tuples describing how to turn a into b.
ratio()
Return a measure of the sequences' similarity (float in [0,1]).
quick_ratio()
Return an upper bound on .ratio() relatively quickly.
real_quick_ratio()
Return an upper bound on ratio() very quickly.
"""
def __init__(self, isjunk=None, a='', b='', autojunk=True):
"""Construct a SequenceMatcher.
Optional arg isjunk is None (the default), or a one-argument
function that takes a sequence element and returns true iff the
element is junk. None is equivalent to passing "lambda x: 0", i.e.
no elements are considered to be junk. For example, pass
lambda x: x in " \\t"
if you're comparing lines as sequences of characters, and don't
want to synch up on blanks or hard tabs.
Optional arg a is the first of two sequences to be compared. By
default, an empty string. The elements of a must be hashable. See
also .set_seqs() and .set_seq1().
Optional arg b is the second of two sequences to be compared. By
default, an empty string. The elements of b must be hashable. See
also .set_seqs() and .set_seq2().
Optional arg autojunk should be set to False to disable the
"automatic junk heuristic" that treats popular elements as junk
(see module documentation for more information).
"""
# Members:
# a
# first sequence
# b
# second sequence; differences are computed as "what do
# we need to do to 'a' to change it into 'b'?"
# b2j
# for x in b, b2j[x] is a list of the indices (into b)
# at which x appears; junk and popular elements do not appear
# fullbcount
# for x in b, fullbcount[x] == the number of times x
# appears in b; only materialized if really needed (used
# only for computing quick_ratio())
# matching_blocks
# a list of (i, j, k) triples, where a[i:i+k] == b[j:j+k];
# ascending & non-overlapping in i and in j; terminated by
# a dummy (len(a), len(b), 0) sentinel
# opcodes
# a list of (tag, i1, i2, j1, j2) tuples, where tag is
# one of
# 'replace' a[i1:i2] should be replaced by b[j1:j2]
# 'delete' a[i1:i2] should be deleted
# 'insert' b[j1:j2] should be inserted
# 'equal' a[i1:i2] == b[j1:j2]
# isjunk
# a user-supplied function taking a sequence element and
# returning true iff the element is "junk" -- this has
# subtle but helpful effects on the algorithm, which I'll
# get around to writing up someday <0.9 wink>.
# DON'T USE! Only __chain_b uses this. Use isbjunk.
# bjunk
# the items in b for which isjunk is True.
# bpopular
# nonjunk items in b treated as junk by the heuristic (if used).
self.isjunk = isjunk
self.a = self.b = None
self.autojunk = autojunk
self.set_seqs(a, b)
def set_seqs(self, a, b):
"""Set the two sequences to be compared.
>>> s = SequenceMatcher()
>>> s.set_seqs("abcd", "bcde")
>>> s.ratio()
0.75
"""
self.set_seq1(a)
self.set_seq2(b)
def set_seq1(self, a):
"""Set the first sequence to be compared.
The second sequence to be compared is not changed.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.set_seq1("bcde")
>>> s.ratio()
1.0
>>>
SequenceMatcher computes and caches detailed information about the
second sequence, so if you want to compare one sequence S against
many sequences, use .set_seq2(S) once and call .set_seq1(x)
repeatedly for each of the other sequences.
See also set_seqs() and set_seq2().
"""
if a is self.a:
return
self.a = a
self.matching_blocks = self.opcodes = None
def set_seq2(self, b):
"""Set the second sequence to be compared.
The first sequence to be compared is not changed.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.set_seq2("abcd")
>>> s.ratio()
1.0
>>>
SequenceMatcher computes and caches detailed information about the
second sequence, so if you want to compare one sequence S against
many sequences, use .set_seq2(S) once and call .set_seq1(x)
repeatedly for each of the other sequences.
See also set_seqs() and set_seq1().
"""
if b is self.b:
return
self.b = b
self.matching_blocks = self.opcodes = None
self.fullbcount = None
self.__chain_b()
# For each element x in b, set b2j[x] to a list of the indices in
# b where x appears; the indices are in increasing order; note that
# the number of times x appears in b is len(b2j[x]) ...
# when self.isjunk is defined, junk elements don't show up in this
# map at all, which stops the central find_longest_match method
# from starting any matching block at a junk element ...
# also creates the fast isbjunk function ...
# b2j also does not contain entries for "popular" elements, meaning
# elements that account for more than 1 + 1% of the total elements, and
# when the sequence is reasonably large (>= 200 elements); this can
# be viewed as an adaptive notion of semi-junk, and yields an enormous
# speedup when, e.g., comparing program files with hundreds of
# instances of "return NULL;" ...
# note that this is only called when b changes; so for cross-product
# kinds of matches, it's best to call set_seq2 once, then set_seq1
# repeatedly
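# Illustrative sketch (not part of the original module) of the internal b2j
# mapping that __chain_b builds for a small input, assuming no junk function
# and autojunk not triggered (len(b) < 200):
# >>> sm = SequenceMatcher(None, "x", "abracadabra")
# >>> sorted(sm.b2j.items())
# [('a', [0, 3, 5, 7, 10]), ('b', [1, 8]), ('c', [4]), ('d', [6]), ('r', [2, 9])]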
def __chain_b(self):
# Because isjunk is a user-defined (not C) function, and we test
# for junk a LOT, it's important to minimize the number of calls.
# Before the tricks described here, __chain_b was by far the most
# time-consuming routine in the whole module! If anyone sees
# Jim Roskind, thank him again for profile.py -- I never would
# have guessed that.
# The first trick is to build b2j ignoring the possibility
# of junk. I.e., we don't call isjunk at all yet. Throwing
# out the junk later is much cheaper than building b2j "right"
# from the start.
b = self.b
self.b2j = b2j = {}
for i, elt in enumerate(b):
indices = b2j.setdefault(elt, [])
indices.append(i)
# Purge junk elements
self.bjunk = junk = set()
isjunk = self.isjunk
if isjunk:
for elt in b2j.keys():
if isjunk(elt):
junk.add(elt)
for elt in junk: # separate loop avoids separate list of keys
del b2j[elt]
# Purge popular elements that are not junk
self.bpopular = popular = set()
n = len(b)
if self.autojunk and n >= 200:
ntest = n // 100 + 1
for elt, idxs in b2j.items():
if len(idxs) > ntest:
popular.add(elt)
for elt in popular: # ditto; as fast for 1% deletion
del b2j[elt]
def isbjunk(self, item):
"Deprecated; use 'item in SequenceMatcher().bjunk'."
warnings.warn("'SequenceMatcher().isbjunk(item)' is deprecated;\n"
"use 'item in SMinstance.bjunk' instead.",
DeprecationWarning, 2)
return item in self.bjunk
def isbpopular(self, item):
"Deprecated; use 'item in SequenceMatcher().bpopular'."
warnings.warn("'SequenceMatcher().isbpopular(item)' is deprecated;\n"
"use 'item in SMinstance.bpopular' instead.",
DeprecationWarning, 2)
return item in self.bpopular
def find_longest_match(self, alo, ahi, blo, bhi):
"""Find longest matching block in a[alo:ahi] and b[blo:bhi].
If isjunk is not defined:
Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
alo <= i <= i+k <= ahi
blo <= j <= j+k <= bhi
and for all (i',j',k') meeting those conditions,
k >= k'
i <= i'
and if i == i', j <= j'
In other words, of all maximal matching blocks, return one that
starts earliest in a, and of all those maximal matching blocks that
start earliest in a, return the one that starts earliest in b.
>>> s = SequenceMatcher(None, " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
Match(a=0, b=4, size=5)
If isjunk is defined, first the longest matching block is
determined as above, but with the additional restriction that no
junk element appears in the block. Then that block is extended as
far as possible by matching (only) junk elements on both sides. So
the resulting block never matches on junk except as identical junk
happens to be adjacent to an "interesting" match.
Here's the same example as before, but considering blanks to be
junk. That prevents " abcd" from matching the " abcd" at the tail
end of the second sequence directly. Instead only the "abcd" can
match, and matches the leftmost "abcd" in the second sequence:
>>> s = SequenceMatcher(lambda x: x==" ", " abcd", "abcd abcd")
>>> s.find_longest_match(0, 5, 0, 9)
Match(a=1, b=0, size=4)
If no blocks match, return (alo, blo, 0).
>>> s = SequenceMatcher(None, "ab", "c")
>>> s.find_longest_match(0, 2, 0, 1)
Match(a=0, b=0, size=0)
"""
# CAUTION: stripping common prefix or suffix would be incorrect.
# E.g.,
# ab
# acab
# Longest matching block is "ab", but if common prefix is
# stripped, it's "a" (tied with "b"). UNIX(tm) diff does so
# strip, so ends up claiming that ab is changed to acab by
# inserting "ca" in the middle. That's minimal but unintuitive:
# "it's obvious" that someone inserted "ac" at the front.
# Windiff ends up at the same place as diff, but by pairing up
# the unique 'b's and then matching the first two 'a's.
a, b, b2j, isbjunk = self.a, self.b, self.b2j, self.bjunk.__contains__
besti, bestj, bestsize = alo, blo, 0
# find longest junk-free match
# during an iteration of the loop, j2len[j] = length of longest
# junk-free match ending with a[i-1] and b[j]
j2len = {}
nothing = []
for i in range(alo, ahi):
# look at all instances of a[i] in b; note that because
# b2j has no junk keys, the loop is skipped if a[i] is junk
j2lenget = j2len.get
newj2len = {}
for j in b2j.get(a[i], nothing):
# a[i] matches b[j]
if j < blo:
continue
if j >= bhi:
break
k = newj2len[j] = j2lenget(j-1, 0) + 1
if k > bestsize:
besti, bestj, bestsize = i-k+1, j-k+1, k
j2len = newj2len
# Extend the best by non-junk elements on each end. In particular,
# "popular" non-junk elements aren't in b2j, which greatly speeds
# the inner loop above, but also means "the best" match so far
# doesn't contain any junk *or* popular non-junk elements.
while besti > alo and bestj > blo and \
not isbjunk(b[bestj-1]) and \
a[besti-1] == b[bestj-1]:
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
while besti+bestsize < ahi and bestj+bestsize < bhi and \
not isbjunk(b[bestj+bestsize]) and \
a[besti+bestsize] == b[bestj+bestsize]:
bestsize += 1
# Now that we have a wholly interesting match (albeit possibly
# empty!), we may as well suck up the matching junk on each
# side of it too. Can't think of a good reason not to, and it
# saves post-processing the (possibly considerable) expense of
# figuring out what to do with it. In the case of an empty
# interesting match, this is clearly the right thing to do,
# because no other kind of match is possible in the regions.
while besti > alo and bestj > blo and \
isbjunk(b[bestj-1]) and \
a[besti-1] == b[bestj-1]:
besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
while besti+bestsize < ahi and bestj+bestsize < bhi and \
isbjunk(b[bestj+bestsize]) and \
a[besti+bestsize] == b[bestj+bestsize]:
bestsize = bestsize + 1
return Match(besti, bestj, bestsize)
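# Illustrative check of the CAUTION comment above (not part of the original
# module): the longest block for "ab" vs "acab" is the whole of "ab", found
# at the tail of the second sequence rather than a stripped-prefix "a".
# >>> SequenceMatcher(None, "ab", "acab").find_longest_match(0, 2, 0, 4)
# Match(a=0, b=2, size=2)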
def get_matching_blocks(self):
"""Return list of triples describing matching subsequences.
Each triple is of the form (i, j, n), and means that
a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
i and in j. New in Python 2.5, it's also guaranteed that if
(i, j, n) and (i', j', n') are adjacent triples in the list, and
the second is not the last triple in the list, then i+n != i' or
j+n != j'. IOW, adjacent triples never describe adjacent equal
blocks.
The last triple is a dummy, (len(a), len(b), 0), and is the only
triple with n==0.
>>> s = SequenceMatcher(None, "abxcd", "abcd")
>>> list(s.get_matching_blocks())
[Match(a=0, b=0, size=2), Match(a=3, b=2, size=2), Match(a=5, b=4, size=0)]
"""
if self.matching_blocks is not None:
return self.matching_blocks
la, lb = len(self.a), len(self.b)
# This is most naturally expressed as a recursive algorithm, but
# at least one user bumped into extreme use cases that exceeded
# the recursion limit on their box. So, now we maintain a list
# (`queue`) of blocks we still need to look at, and append partial
# results to `matching_blocks` in a loop; the matches are sorted
# at the end.
queue = [(0, la, 0, lb)]
matching_blocks = []
while queue:
alo, ahi, blo, bhi = queue.pop()
i, j, k = x = self.find_longest_match(alo, ahi, blo, bhi)
# a[alo:i] vs b[blo:j] unknown
# a[i:i+k] same as b[j:j+k]
# a[i+k:ahi] vs b[j+k:bhi] unknown
if k: # if k is 0, there was no matching block
matching_blocks.append(x)
if alo < i and blo < j:
queue.append((alo, i, blo, j))
if i+k < ahi and j+k < bhi:
queue.append((i+k, ahi, j+k, bhi))
matching_blocks.sort()
# It's possible that we have adjacent equal blocks in the
# matching_blocks list now. Starting with 2.5, this code was added
# to collapse them.
i1 = j1 = k1 = 0
non_adjacent = []
for i2, j2, k2 in matching_blocks:
# Is this block adjacent to i1, j1, k1?
if i1 + k1 == i2 and j1 + k1 == j2:
# Yes, so collapse them -- this just increases the length of
# the first block by the length of the second, and the first
# block so lengthened remains the block to compare against.
k1 += k2
else:
# Not adjacent. Remember the first block (k1==0 means it's
# the dummy we started with), and make the second block the
# new block to compare against.
if k1:
non_adjacent.append((i1, j1, k1))
i1, j1, k1 = i2, j2, k2
if k1:
non_adjacent.append((i1, j1, k1))
non_adjacent.append( (la, lb, 0) )
self.matching_blocks = non_adjacent
return map(Match._make, self.matching_blocks)
def get_opcodes(self):
"""Return list of 5-tuples describing how to turn a into b.
Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple
has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
tuple preceding it, and likewise for j1 == the previous j2.
The tags are strings, with these meanings:
'replace': a[i1:i2] should be replaced by b[j1:j2]
'delete': a[i1:i2] should be deleted.
Note that j1==j2 in this case.
'insert': b[j1:j2] should be inserted at a[i1:i1].
Note that i1==i2 in this case.
'equal': a[i1:i2] == b[j1:j2]
>>> a = "qabxcd"
>>> b = "abycdf"
>>> s = SequenceMatcher(None, a, b)
>>> for tag, i1, i2, j1, j2 in s.get_opcodes():
... print(("%7s a[%d:%d] (%s) b[%d:%d] (%s)" %
... (tag, i1, i2, a[i1:i2], j1, j2, b[j1:j2])))
delete a[0:1] (q) b[0:0] ()
equal a[1:3] (ab) b[0:2] (ab)
replace a[3:4] (x) b[2:3] (y)
equal a[4:6] (cd) b[3:5] (cd)
insert a[6:6] () b[5:6] (f)
"""
if self.opcodes is not None:
return self.opcodes
i = j = 0
self.opcodes = answer = []
for ai, bj, size in self.get_matching_blocks():
# invariant: we've pumped out correct diffs to change
# a[:i] into b[:j], and the next matching block is
# a[ai:ai+size] == b[bj:bj+size]. So we need to pump
# out a diff to change a[i:ai] into b[j:bj], pump out
# the matching block, and move (i,j) beyond the match
tag = ''
if i < ai and j < bj:
tag = 'replace'
elif i < ai:
tag = 'delete'
elif j < bj:
tag = 'insert'
if tag:
answer.append( (tag, i, ai, j, bj) )
i, j = ai+size, bj+size
# the list of matching blocks is terminated by a
# sentinel with size 0
if size:
answer.append( ('equal', ai, i, bj, j) )
return answer
def get_grouped_opcodes(self, n=3):
""" Isolate change clusters by eliminating ranges with no changes.
Return a generator of groups with up to n lines of context.
Each group is in the same format as returned by get_opcodes().
>>> from pprint import pprint
>>> a = list(map(str, range(1,40)))
>>> b = a[:]
>>> b[8:8] = ['i'] # Make an insertion
>>> b[20] += 'x' # Make a replacement
>>> b[23:28] = [] # Make a deletion
>>> b[30] += 'y' # Make another replacement
>>> pprint(list(SequenceMatcher(None,a,b).get_grouped_opcodes()))
[[('equal', 5, 8, 5, 8), ('insert', 8, 8, 8, 9), ('equal', 8, 11, 9, 12)],
[('equal', 16, 19, 17, 20),
('replace', 19, 20, 20, 21),
('equal', 20, 22, 21, 23),
('delete', 22, 27, 23, 23),
('equal', 27, 30, 23, 26)],
[('equal', 31, 34, 27, 30),
('replace', 34, 35, 30, 31),
('equal', 35, 38, 31, 34)]]
"""
codes = self.get_opcodes()
if not codes:
codes = [("equal", 0, 1, 0, 1)]
# Fixup leading and trailing groups if they show no changes.
if codes[0][0] == 'equal':
tag, i1, i2, j1, j2 = codes[0]
codes[0] = tag, max(i1, i2-n), i2, max(j1, j2-n), j2
if codes[-1][0] == 'equal':
tag, i1, i2, j1, j2 = codes[-1]
codes[-1] = tag, i1, min(i2, i1+n), j1, min(j2, j1+n)
nn = n + n
group = []
for tag, i1, i2, j1, j2 in codes:
# End the current group and start a new one whenever
# there is a large range with no changes.
if tag == 'equal' and i2-i1 > nn:
group.append((tag, i1, min(i2, i1+n), j1, min(j2, j1+n)))
yield group
group = []
i1, j1 = max(i1, i2-n), max(j1, j2-n)
group.append((tag, i1, i2, j1, j2))
if group and not (len(group)==1 and group[0][0] == 'equal'):
yield group
def ratio(self):
"""Return a measure of the sequences' similarity (float in [0,1]).
Where T is the total number of elements in both sequences, and
M is the number of matches, this is 2.0*M / T.
Note that this is 1 if the sequences are identical, and 0 if
they have nothing in common.
.ratio() is expensive to compute if you haven't already computed
.get_matching_blocks() or .get_opcodes(), in which case you may
want to try .quick_ratio() or .real_quick_ratio() first to get an
upper bound.
>>> s = SequenceMatcher(None, "abcd", "bcde")
>>> s.ratio()
0.75
>>> s.quick_ratio()
0.75
>>> s.real_quick_ratio()
1.0
"""
matches = sum(triple[-1] for triple in self.get_matching_blocks())
return _calculate_ratio(matches, len(self.a) + len(self.b))
def quick_ratio(self):
"""Return an upper bound on ratio() relatively quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute.
"""
# viewing a and b as multisets, set matches to the cardinality
# of their intersection; this counts the number of matches
# without regard to order, so is clearly an upper bound
if self.fullbcount is None:
self.fullbcount = fullbcount = {}
for elt in self.b:
fullbcount[elt] = fullbcount.get(elt, 0) + 1
fullbcount = self.fullbcount
# avail[x] is the number of times x appears in 'b' less the
# number of times we've seen it in 'a' so far ... kinda
avail = {}
availhas, matches = avail.__contains__, 0
for elt in self.a:
if availhas(elt):
numb = avail[elt]
else:
numb = fullbcount.get(elt, 0)
avail[elt] = numb - 1
if numb > 0:
matches = matches + 1
return _calculate_ratio(matches, len(self.a) + len(self.b))
def real_quick_ratio(self):
"""Return an upper bound on ratio() very quickly.
This isn't defined beyond that it is an upper bound on .ratio(), and
is faster to compute than either .ratio() or .quick_ratio().
"""
la, lb = len(self.a), len(self.b)
# can't have more matches than the number of elements in the
# shorter sequence
return _calculate_ratio(min(la, lb), la + lb)
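# Illustrative note (not part of the original module): the three ratios form
# a chain of upper bounds, real_quick_ratio() >= quick_ratio() >= ratio(),
# which is why callers such as get_close_matches() below test the cheap ones
# first. Values taken from the ratio() doctest above:
# >>> s = SequenceMatcher(None, "abcd", "bcde")
# >>> s.real_quick_ratio(), s.quick_ratio(), s.ratio()
# (1.0, 0.75, 0.75)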
def get_close_matches(word, possibilities, n=3, cutoff=0.6):
"""Use SequenceMatcher to return list of the best "good enough" matches.
word is a sequence for which close matches are desired (typically a
string).
possibilities is a list of sequences against which to match word
(typically a list of strings).
Optional arg n (default 3) is the maximum number of close matches to
return. n must be > 0.
Optional arg cutoff (default 0.6) is a float in [0, 1]. Possibilities
that don't score at least that similar to word are ignored.
The best (no more than n) matches among the possibilities are returned
in a list, sorted by similarity score, most similar first.
>>> get_close_matches("appel", ["ape", "apple", "peach", "puppy"])
['apple', 'ape']
>>> import keyword as _keyword
>>> get_close_matches("wheel", _keyword.kwlist)
['while']
>>> get_close_matches("Apple", _keyword.kwlist)
[]
>>> get_close_matches("accept", _keyword.kwlist)
['except']
"""
if not n > 0:
raise ValueError("n must be > 0: %r" % (n,))
if not 0.0 <= cutoff <= 1.0:
raise ValueError("cutoff must be in [0.0, 1.0]: %r" % (cutoff,))
result = []
s = SequenceMatcher()
s.set_seq2(word)
for x in possibilities:
s.set_seq1(x)
if s.real_quick_ratio() >= cutoff and \
s.quick_ratio() >= cutoff and \
s.ratio() >= cutoff:
result.append((s.ratio(), x))
# Move the best scorers to head of list
result = heapq.nlargest(n, result)
# Strip scores for the best n matches
return [x for score, x in result]
def _count_leading(line, ch):
"""
Return number of `ch` characters at the start of `line`.
Example:
>>> _count_leading(' abc', ' ')
3
"""
i, n = 0, len(line)
while i < n and line[i] == ch:
i += 1
return i
class Differ:
r"""
Differ is a class for comparing sequences of lines of text, and
producing human-readable differences or deltas. Differ uses
SequenceMatcher both to compare sequences of lines, and to compare
sequences of characters within similar (near-matching) lines.
Each line of a Differ delta begins with a two-letter code:
'- ' line unique to sequence 1
'+ ' line unique to sequence 2
' ' line common to both sequences
'? ' line not present in either input sequence
Lines beginning with '? ' attempt to guide the eye to intraline
differences, and were not present in either input sequence. These lines
can be confusing if the sequences contain tab characters.
Note that Differ makes no claim to produce a *minimal* diff. To the
contrary, minimal diffs are often counter-intuitive, because they synch
up anywhere possible, sometimes on accidental matches 100 pages apart.
Restricting synch points to contiguous matches preserves some notion of
locality, at the occasional cost of producing a longer diff.
Example: Comparing two texts.
First we set up the texts, sequences of individual single-line strings
ending with newlines (such sequences can also be obtained from the
`readlines()` method of file-like objects):
>>> text1 = ''' 1. Beautiful is better than ugly.
... 2. Explicit is better than implicit.
... 3. Simple is better than complex.
... 4. Complex is better than complicated.
... '''.splitlines(1)
>>> len(text1)
4
>>> text1[0][-1]
'\n'
>>> text2 = ''' 1. Beautiful is better than ugly.
... 3. Simple is better than complex.
... 4. Complicated is better than complex.
... 5. Flat is better than nested.
... '''.splitlines(1)
Next we instantiate a Differ object:
>>> d = Differ()
Note that when instantiating a Differ object we may pass functions to
filter out line and character 'junk'. See Differ.__init__ for details.
Finally, we compare the two:
>>> result = list(d.compare(text1, text2))
'result' is a list of strings, so let's pretty-print it:
>>> from pprint import pprint as _pprint
>>> _pprint(result)
[' 1. Beautiful is better than ugly.\n',
'- 2. Explicit is better than implicit.\n',
'- 3. Simple is better than complex.\n',
'+ 3. Simple is better than complex.\n',
'? ++\n',
'- 4. Complex is better than complicated.\n',
'? ^ ---- ^\n',
'+ 4. Complicated is better than complex.\n',
'? ++++ ^ ^\n',
'+ 5. Flat is better than nested.\n']
As a single multi-line string it looks like this:
>>> print(''.join(result), end="")
1. Beautiful is better than ugly.
- 2. Explicit is better than implicit.
- 3. Simple is better than complex.
+ 3. Simple is better than complex.
? ++
- 4. Complex is better than complicated.
? ^ ---- ^
+ 4. Complicated is better than complex.
? ++++ ^ ^
+ 5. Flat is better than nested.
Methods:
__init__(linejunk=None, charjunk=None)
Construct a text differencer, with optional filters.
compare(a, b)
Compare two sequences of lines; generate the resulting delta.
"""
def __init__(self, linejunk=None, charjunk=None):
"""
Construct a text differencer, with optional filters.
The two optional keyword parameters are for filter functions:
- `linejunk`: A function that should accept a single string argument,
and return true iff the string is junk. The module-level function
`IS_LINE_JUNK` may be used to filter out lines without visible
characters, except for at most one splat ('#'). It is recommended
to leave linejunk None; as of Python 2.3, the underlying
SequenceMatcher class has grown an adaptive notion of "noise" lines
that's better than any static definition the author has ever been
able to craft.
- `charjunk`: A function that should accept a string of length 1. The
module-level function `IS_CHARACTER_JUNK` may be used to filter out
whitespace characters (a blank or tab; **note**: bad idea to include
newline in this!). Use of IS_CHARACTER_JUNK is recommended.
"""
self.linejunk = linejunk
self.charjunk = charjunk
def compare(self, a, b):
r"""
Compare two sequences of lines; generate the resulting delta.
Each sequence must contain individual single-line strings ending with
newlines. Such sequences can be obtained from the `readlines()` method
of file-like objects. The delta generated also consists of newline-
terminated strings, ready to be printed as-is via the writelines()
method of a file-like object.
Example:
>>> print(''.join(Differ().compare('one\ntwo\nthree\n'.splitlines(1),
... 'ore\ntree\nemu\n'.splitlines(1))),
... end="")
- one
? ^
+ ore
? ^
- two
- three
? -
+ tree
+ emu
"""
cruncher = SequenceMatcher(self.linejunk, a, b)
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
if tag == 'replace':
g = self._fancy_replace(a, alo, ahi, b, blo, bhi)
elif tag == 'delete':
g = self._dump('-', a, alo, ahi)
elif tag == 'insert':
g = self._dump('+', b, blo, bhi)
elif tag == 'equal':
g = self._dump(' ', a, alo, ahi)
else:
raise ValueError('unknown tag %r' % (tag,))
for line in g:
yield line
def _dump(self, tag, x, lo, hi):
"""Generate comparison results for a same-tagged range."""
for i in range(lo, hi):
yield '%s %s' % (tag, x[i])
def _plain_replace(self, a, alo, ahi, b, blo, bhi):
assert alo < ahi and blo < bhi
# dump the shorter block first -- reduces the burden on short-term
# memory if the blocks are of very different sizes
if bhi - blo < ahi - alo:
first = self._dump('+', b, blo, bhi)
second = self._dump('-', a, alo, ahi)
else:
first = self._dump('-', a, alo, ahi)
second = self._dump('+', b, blo, bhi)
for g in first, second:
for line in g:
yield line
def _fancy_replace(self, a, alo, ahi, b, blo, bhi):
r"""
When replacing one block of lines with another, search the blocks
for *similar* lines; the best-matching pair (if any) is used as a
synch point, and intraline difference marking is done on the
similar pair. Lots of work, but often worth it.
Example:
>>> d = Differ()
>>> results = d._fancy_replace(['abcDefghiJkl\n'], 0, 1,
... ['abcdefGhijkl\n'], 0, 1)
>>> print(''.join(results), end="")
- abcDefghiJkl
? ^ ^ ^
+ abcdefGhijkl
? ^ ^ ^
"""
# don't synch up unless the lines have a similarity score of at
# least cutoff; best_ratio tracks the best score seen so far
best_ratio, cutoff = 0.74, 0.75
cruncher = SequenceMatcher(self.charjunk)
eqi, eqj = None, None # 1st indices of equal lines (if any)
# search for the pair that matches best without being identical
# (identical lines must be junk lines, & we don't want to synch up
# on junk -- unless we have to)
for j in range(blo, bhi):
bj = b[j]
cruncher.set_seq2(bj)
for i in range(alo, ahi):
ai = a[i]
if ai == bj:
if eqi is None:
eqi, eqj = i, j
continue
cruncher.set_seq1(ai)
# computing similarity is expensive, so use the quick
# upper bounds first -- have seen this speed up messy
# compares by a factor of 3.
# note that ratio() is only expensive to compute the first
# time it's called on a sequence pair; the expensive part
# of the computation is cached by cruncher
if cruncher.real_quick_ratio() > best_ratio and \
cruncher.quick_ratio() > best_ratio and \
cruncher.ratio() > best_ratio:
best_ratio, best_i, best_j = cruncher.ratio(), i, j
if best_ratio < cutoff:
# no non-identical "pretty close" pair
if eqi is None:
# no identical pair either -- treat it as a straight replace
for line in self._plain_replace(a, alo, ahi, b, blo, bhi):
yield line
return
# no close pair, but an identical pair -- synch up on that
best_i, best_j, best_ratio = eqi, eqj, 1.0
else:
# there's a close pair, so forget the identical pair (if any)
eqi = None
# a[best_i] very similar to b[best_j]; eqi is None iff they're not
# identical
# pump out diffs from before the synch point
for line in self._fancy_helper(a, alo, best_i, b, blo, best_j):
yield line
# do intraline marking on the synch pair
aelt, belt = a[best_i], b[best_j]
if eqi is None:
# pump out a '-', '?', '+', '?' quad for the synched lines
atags = btags = ""
cruncher.set_seqs(aelt, belt)
for tag, ai1, ai2, bj1, bj2 in cruncher.get_opcodes():
la, lb = ai2 - ai1, bj2 - bj1
if tag == 'replace':
atags += '^' * la
btags += '^' * lb
elif tag == 'delete':
atags += '-' * la
elif tag == 'insert':
btags += '+' * lb
elif tag == 'equal':
atags += ' ' * la
btags += ' ' * lb
else:
raise ValueError('unknown tag %r' % (tag,))
for line in self._qformat(aelt, belt, atags, btags):
yield line
else:
# the synch pair is identical
yield ' ' + aelt
# pump out diffs from after the synch point
for line in self._fancy_helper(a, best_i+1, ahi, b, best_j+1, bhi):
yield line
def _fancy_helper(self, a, alo, ahi, b, blo, bhi):
g = []
if alo < ahi:
if blo < bhi:
g = self._fancy_replace(a, alo, ahi, b, blo, bhi)
else:
g = self._dump('-', a, alo, ahi)
elif blo < bhi:
g = self._dump('+', b, blo, bhi)
for line in g:
yield line
def _qformat(self, aline, bline, atags, btags):
r"""
Format "?" output and deal with leading tabs.
Example:
>>> d = Differ()
>>> results = d._qformat('\tabcDefghiJkl\n', '\tabcdefGhijkl\n',
... ' ^ ^ ^ ', ' ^ ^ ^ ')
>>> for line in results: print(repr(line))
...
'- \tabcDefghiJkl\n'
'? \t ^ ^ ^\n'
'+ \tabcdefGhijkl\n'
'? \t ^ ^ ^\n'
"""
# Can hurt, but will probably help most of the time.
common = min(_count_leading(aline, "\t"),
_count_leading(bline, "\t"))
common = min(common, _count_leading(atags[:common], " "))
common = min(common, _count_leading(btags[:common], " "))
atags = atags[common:].rstrip()
btags = btags[common:].rstrip()
yield "- " + aline
if atags:
yield "? %s%s\n" % ("\t" * common, atags)
yield "+ " + bline
if btags:
yield "? %s%s\n" % ("\t" * common, btags)
# With respect to junk, an earlier version of ndiff simply refused to
# *start* a match with a junk element. The result was cases like this:
# before: private Thread currentThread;
# after: private volatile Thread currentThread;
# If you consider whitespace to be junk, the longest contiguous match
# not starting with junk is "e Thread currentThread". So ndiff reported
# that "e volatil" was inserted between the 't' and the 'e' in "private".
# While an accurate view, to people that's absurd. The current version
# looks for matching blocks that are entirely junk-free, then extends the
# longest one of those as far as possible but only with matching junk.
# So now "currentThread" is matched, then extended to suck up the
# preceding blank; then "private" is matched, and extended to suck up the
# following blank; then "Thread" is matched; and finally ndiff reports
# that "volatile " was inserted before "Thread". The only quibble
# remaining is that perhaps it was really the case that " volatile"
# was inserted after "private". I can live with that <wink>.
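# For illustration (not part of the original module), the behaviour described
# above looks roughly like this; the exact spacing of the '?' guide line was
# worked out by hand and may differ in detail:
# >>> print(''.join(ndiff(['private Thread currentThread;\n'],
# ...                     ['private volatile Thread currentThread;\n'])), end="")
# - private Thread currentThread;
# + private volatile Thread currentThread;
# ?         +++++++++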
import re
def IS_LINE_JUNK(line, pat=re.compile(r"\s*#?\s*$").match):
r"""
Return 1 for ignorable line: iff `line` is blank or contains a single '#'.
Examples:
>>> IS_LINE_JUNK('\n')
True
>>> IS_LINE_JUNK(' # \n')
True
>>> IS_LINE_JUNK('hello\n')
False
"""
return pat(line) is not None
def IS_CHARACTER_JUNK(ch, ws=" \t"):
r"""
Return 1 for ignorable character: iff `ch` is a space or tab.
Examples:
>>> IS_CHARACTER_JUNK(' ')
True
>>> IS_CHARACTER_JUNK('\t')
True
>>> IS_CHARACTER_JUNK('\n')
False
>>> IS_CHARACTER_JUNK('x')
False
"""
return ch in ws
########################################################################
### Unified Diff
########################################################################
def _format_range_unified(start, stop):
'Convert range to the "ed" format'
# Per the diff spec at http://www.unix.org/single_unix_specification/
beginning = start + 1 # lines start numbering with one
length = stop - start
if length == 1:
return '{}'.format(beginning)
if not length:
beginning -= 1 # empty ranges begin at line just before the range
return '{},{}'.format(beginning, length)
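# A few illustrative values (not part of the original module): single-line
# ranges drop the length, and empty ranges report the line just before the
# range with length 0.
# >>> _format_range_unified(0, 1)
# '1'
# >>> _format_range_unified(2, 5)
# '3,3'
# >>> _format_range_unified(3, 3)
# '3,0'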
def unified_diff(a, b, fromfile='', tofile='', fromfiledate='',
tofiledate='', n=3, lineterm='\n'):
r"""
Compare two sequences of lines; generate the delta as a unified diff.
Unified diffs are a compact way of showing line changes and a few
lines of context. The number of context lines is set by 'n' which
defaults to three.
By default, the diff control lines (those with ---, +++, or @@) are
created with a trailing newline. This is helpful so that inputs
created from file.readlines() result in diffs that are suitable for
file.writelines() since both the inputs and outputs have trailing
newlines.
For inputs that do not have trailing newlines, set the lineterm
argument to "" so that the output will be uniformly newline free.
The unidiff format normally has a header for filenames and modification
times. Any or all of these may be specified using strings for
'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
The modification times are normally expressed in the ISO 8601 format.
Example:
>>> for line in unified_diff('one two three four'.split(),
... 'zero one tree four'.split(), 'Original', 'Current',
... '2005-01-26 23:30:50', '2010-04-02 10:20:52',
... lineterm=''):
... print(line) # doctest: +NORMALIZE_WHITESPACE
--- Original 2005-01-26 23:30:50
+++ Current 2010-04-02 10:20:52
@@ -1,4 +1,4 @@
+zero
one
-two
-three
+tree
four
"""
started = False
for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
if not started:
started = True
fromdate = '\t{}'.format(fromfiledate) if fromfiledate else ''
todate = '\t{}'.format(tofiledate) if tofiledate else ''
yield '--- {}{}{}'.format(fromfile, fromdate, lineterm)
yield '+++ {}{}{}'.format(tofile, todate, lineterm)
first, last = group[0], group[-1]
file1_range = _format_range_unified(first[1], last[2])
file2_range = _format_range_unified(first[3], last[4])
yield '@@ -{} +{} @@{}'.format(file1_range, file2_range, lineterm)
for tag, i1, i2, j1, j2 in group:
if tag == 'equal':
for line in a[i1:i2]:
yield ' ' + line
continue
if tag in {'replace', 'delete'}:
for line in a[i1:i2]:
yield '-' + line
if tag in {'replace', 'insert'}:
for line in b[j1:j2]:
yield '+' + line
########################################################################
### Context Diff
########################################################################
def _format_range_context(start, stop):
'Convert range to the "ed" format'
# Per the diff spec at http://www.unix.org/single_unix_specification/
beginning = start + 1 # lines start numbering with one
length = stop - start
if not length:
beginning -= 1 # empty ranges begin at line just before the range
if length <= 1:
return '{}'.format(beginning)
return '{},{}'.format(beginning, beginning + length - 1)
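# Illustrative values (not part of the original module): unlike the unified
# form above, the context form reports start,end line numbers rather than
# start,length.
# >>> _format_range_context(0, 1)
# '1'
# >>> _format_range_context(2, 5)
# '3,5'
# >>> _format_range_context(3, 3)
# '3'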
# See http://www.unix.org/single_unix_specification/
def context_diff(a, b, fromfile='', tofile='',
fromfiledate='', tofiledate='', n=3, lineterm='\n'):
r"""
Compare two sequences of lines; generate the delta as a context diff.
Context diffs are a compact way of showing line changes and a few
lines of context. The number of context lines is set by 'n' which
defaults to three.
By default, the diff control lines (those with *** or ---) are
created with a trailing newline. This is helpful so that inputs
created from file.readlines() result in diffs that are suitable for
file.writelines() since both the inputs and outputs have trailing
newlines.
For inputs that do not have trailing newlines, set the lineterm
argument to "" so that the output will be uniformly newline free.
The context diff format normally has a header for filenames and
modification times. Any or all of these may be specified using
strings for 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
The modification times are normally expressed in the ISO 8601 format.
If not specified, the strings default to blanks.
Example:
>>> print(''.join(context_diff('one\ntwo\nthree\nfour\n'.splitlines(1),
... 'zero\none\ntree\nfour\n'.splitlines(1), 'Original', 'Current')),
... end="")
*** Original
--- Current
***************
*** 1,4 ****
one
! two
! three
four
--- 1,4 ----
+ zero
one
! tree
four
"""
prefix = dict(insert='+ ', delete='- ', replace='! ', equal=' ')
started = False
for group in SequenceMatcher(None,a,b).get_grouped_opcodes(n):
if not started:
started = True
fromdate = '\t{}'.format(fromfiledate) if fromfiledate else ''
todate = '\t{}'.format(tofiledate) if tofiledate else ''
yield '*** {}{}{}'.format(fromfile, fromdate, lineterm)
yield '--- {}{}{}'.format(tofile, todate, lineterm)
first, last = group[0], group[-1]
yield '***************' + lineterm
file1_range = _format_range_context(first[1], last[2])
yield '*** {} ****{}'.format(file1_range, lineterm)
if any(tag in {'replace', 'delete'} for tag, _, _, _, _ in group):
for tag, i1, i2, _, _ in group:
if tag != 'insert':
for line in a[i1:i2]:
yield prefix[tag] + line
file2_range = _format_range_context(first[3], last[4])
yield '--- {} ----{}'.format(file2_range, lineterm)
if any(tag in {'replace', 'insert'} for tag, _, _, _, _ in group):
for tag, _, _, j1, j2 in group:
if tag != 'delete':
for line in b[j1:j2]:
yield prefix[tag] + line
def ndiff(a, b, linejunk=None, charjunk=IS_CHARACTER_JUNK):
r"""
Compare `a` and `b` (lists of strings); return a `Differ`-style delta.
Optional keyword parameters `linejunk` and `charjunk` are for filter
functions (or None):
- linejunk: A function that should accept a single string argument, and
return true iff the string is junk. The default is None, and is
recommended; as of Python 2.3, an adaptive notion of "noise" lines is
used that does a good job on its own.
- charjunk: A function that should accept a string of length 1. The
default is module-level function IS_CHARACTER_JUNK, which filters out
whitespace characters (a blank or tab; note: bad idea to include newline
in this!).
Tools/scripts/ndiff.py is a command-line front-end to this function.
Example:
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(1),
... 'ore\ntree\nemu\n'.splitlines(1))
>>> print(''.join(diff), end="")
- one
? ^
+ ore
? ^
- two
- three
? -
+ tree
+ emu
"""
return Differ(linejunk, charjunk).compare(a, b)
def _mdiff(fromlines, tolines, context=None, linejunk=None,
charjunk=IS_CHARACTER_JUNK):
r"""Returns generator yielding marked up from/to side by side differences.
Arguments:
fromlines -- list of text lines to be compared to tolines
tolines -- list of text lines to be compared to fromlines
context -- number of context lines to display on each side of difference,
if None, all from/to text lines will be generated.
linejunk -- passed on to ndiff (see ndiff documentation)
charjunk -- passed on to ndiff (see ndiff documentation)
This function returns an iterator which yields a tuple:
(from line tuple, to line tuple, boolean flag)
from/to line tuple -- (line num, line text)
line num -- integer or None (to indicate a context separation)
line text -- original line text with following markers inserted:
'\0+' -- marks start of added text
'\0-' -- marks start of deleted text
'\0^' -- marks start of changed text
'\1' -- marks end of added/deleted/changed text
boolean flag -- None indicates context separation, True indicates
either "from" or "to" line contains a change, otherwise False.
This function/iterator was originally developed to generate side by side
file difference for making HTML pages (see HtmlDiff class for example
usage).
Note, this function utilizes the ndiff function to generate the side by
side difference markup. Optional ndiff arguments may be passed to this
function and they in turn will be passed to ndiff.
"""
import re
# regular expression for finding intraline change indices
change_re = re.compile(r'(\++|\-+|\^+)')
# create the difference iterator to generate the differences
diff_lines_iterator = ndiff(fromlines,tolines,linejunk,charjunk)
def _make_line(lines, format_key, side, num_lines=[0,0]):
"""Returns line of text with user's change markup and line formatting.
lines -- list of lines from the ndiff generator to produce a line of
text from. When producing the line of text to return, the
lines used are removed from this list.
format_key -- '+' return first line in list with "add" markup around
the entire line.
'-' return first line in list with "delete" markup around
the entire line.
'?' return first line in list with add/delete/change
intraline markup (indices obtained from second line)
None return first line in list with no markup
side -- index into the num_lines list (0=from,1=to)
num_lines -- from/to current line number. This is NOT intended to be a
passed parameter. It is present as a keyword argument to
maintain memory of the current line numbers between calls
of this function.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
num_lines[side] += 1
# Handle case where no user markup is to be added, just return line of
# text with user's line format to allow for usage of the line number.
if format_key is None:
return (num_lines[side],lines.pop(0)[2:])
# Handle case of intraline changes
if format_key == '?':
text, markers = lines.pop(0), lines.pop(0)
# find intraline changes (store change type and indices in tuples)
sub_info = []
def record_sub_info(match_object,sub_info=sub_info):
sub_info.append([match_object.group(1)[0],match_object.span()])
return match_object.group(1)
change_re.sub(record_sub_info,markers)
# process each tuple inserting our special marks that won't be
# noticed by an xml/html escaper.
for key,(begin,end) in sub_info[::-1]:
text = text[0:begin]+'\0'+key+text[begin:end]+'\1'+text[end:]
text = text[2:]
# Handle case of add/delete entire line
else:
text = lines.pop(0)[2:]
# if line of text is just a newline, insert a space so there is
# something for the user to highlight and see.
if not text:
text = ' '
# insert marks that won't be noticed by an xml/html escaper.
text = '\0' + format_key + text + '\1'
# Return line of text, first allow user's line formatter to do its
# thing (such as adding the line number) then replace the special
# marks with the user's change markup.
return (num_lines[side],text)
def _line_iterator():
"""Yields from/to lines of text with a change indication.
This function is an iterator. It itself pulls lines from a
differencing iterator, processes them and yields them. When it can
it yields both a "from" and a "to" line, otherwise it will yield one
or the other. In addition to yielding the lines of from/to text, a
boolean flag is yielded to indicate if the text line(s) have
differences in them.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
lines = []
num_blanks_pending, num_blanks_to_yield = 0, 0
while True:
# Load up next 4 lines so we can look ahead, create strings which
# are a concatenation of the first character of each of the 4 lines
# so we can do some very readable comparisons.
while len(lines) < 4:
try:
lines.append(next(diff_lines_iterator))
except StopIteration:
lines.append('X')
s = ''.join([line[0] for line in lines])
if s.startswith('X'):
# When no more lines, pump out any remaining blank lines so the
# corresponding add/delete lines get a matching blank line so
# all line pairs get yielded at the next level.
num_blanks_to_yield = num_blanks_pending
elif s.startswith('-?+?'):
# simple intraline change
yield _make_line(lines,'?',0), _make_line(lines,'?',1), True
continue
elif s.startswith('--++'):
# in delete block, add block coming: we do NOT want to get
# caught up on blank lines yet, just process the delete line
num_blanks_pending -= 1
yield _make_line(lines,'-',0), None, True
continue
elif s.startswith(('--?+', '--+', '- ')):
# in delete block and see an intraline change or unchanged line
# coming: yield the delete line and then blanks
from_line,to_line = _make_line(lines,'-',0), None
num_blanks_to_yield,num_blanks_pending = num_blanks_pending-1,0
elif s.startswith('-+?'):
# intraline change
yield _make_line(lines,None,0), _make_line(lines,'?',1), True
continue
elif s.startswith('-?+'):
# intraline change
yield _make_line(lines,'?',0), _make_line(lines,None,1), True
continue
elif s.startswith('-'):
# delete FROM line
num_blanks_pending -= 1
yield _make_line(lines,'-',0), None, True
continue
elif s.startswith('+--'):
# in add block, delete block coming: we do NOT want to get
# caught up on blank lines yet, just process the add line
num_blanks_pending += 1
yield None, _make_line(lines,'+',1), True
continue
elif s.startswith(('+ ', '+-')):
# will be leaving an add block: yield blanks then add line
from_line, to_line = None, _make_line(lines,'+',1)
num_blanks_to_yield,num_blanks_pending = num_blanks_pending+1,0
elif s.startswith('+'):
# inside an add block, yield the add line
num_blanks_pending += 1
yield None, _make_line(lines,'+',1), True
continue
elif s.startswith(' '):
# unchanged text, yield it to both sides
yield _make_line(lines[:],None,0),_make_line(lines,None,1),False
continue
# Catch up on the blank lines so when we yield the next from/to
# pair, they are lined up.
while(num_blanks_to_yield < 0):
num_blanks_to_yield += 1
yield None,('','\n'),True
while(num_blanks_to_yield > 0):
num_blanks_to_yield -= 1
yield ('','\n'),None,True
if s.startswith('X'):
raise StopIteration
else:
yield from_line,to_line,True
def _line_pair_iterator():
"""Yields from/to lines of text with a change indication.
This function is an iterator. It itself pulls lines from the line
iterator. Its difference from that iterator is that this function
always yields a pair of from/to text lines (with the change
indication). If necessary it will collect single from/to lines
until it has a matching from/to pair to yield.
Note, this function is purposefully not defined at the module scope so
that data it needs from its parent function (within whose context it
is defined) does not need to be of module scope.
"""
line_iterator = _line_iterator()
fromlines,tolines=[],[]
while True:
# Collecting lines of text until we have a from/to pair
while (len(fromlines)==0 or len(tolines)==0):
from_line, to_line, found_diff = next(line_iterator)
if from_line is not None:
fromlines.append((from_line,found_diff))
if to_line is not None:
tolines.append((to_line,found_diff))
# Once we have a pair, remove them from the collection and yield it
from_line, fromDiff = fromlines.pop(0)
to_line, to_diff = tolines.pop(0)
yield (from_line,to_line,fromDiff or to_diff)
# Handle case where user does not want context differencing, just yield
# them up without doing anything else with them.
line_pair_iterator = _line_pair_iterator()
if context is None:
while True:
yield next(line_pair_iterator)
# Handle case where user wants context differencing. We must do some
# storage of lines until we know for sure that they are to be yielded.
else:
context += 1
lines_to_write = 0
while True:
# Store lines up until we find a difference, note use of a
# circular queue because we only need to keep around what
# we need for context.
index, contextLines = 0, [None]*(context)
found_diff = False
while(found_diff is False):
from_line, to_line, found_diff = next(line_pair_iterator)
i = index % context
contextLines[i] = (from_line, to_line, found_diff)
index += 1
# Yield lines that we have collected so far, but first yield
# the user's separator.
if index > context:
yield None, None, None
lines_to_write = context
else:
lines_to_write = index
index = 0
while(lines_to_write):
i = index % context
index += 1
yield contextLines[i]
lines_to_write -= 1
# Now yield the context lines after the change
lines_to_write = context-1
while(lines_to_write):
from_line, to_line, found_diff = next(line_pair_iterator)
# If another change within the context, extend the context
if found_diff:
lines_to_write = context-1
else:
lines_to_write -= 1
yield from_line, to_line, found_diff
_file_template = """
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head>
<meta http-equiv="Content-Type"
content="text/html; charset=ISO-8859-1" />
<title></title>
<style type="text/css">%(styles)s
</style>
</head>
<body>
%(table)s%(legend)s
</body>
</html>"""
_styles = """
table.diff {font-family:Courier; border:medium;}
.diff_header {background-color:#e0e0e0}
td.diff_header {text-align:right}
.diff_next {background-color:#c0c0c0}
.diff_add {background-color:#aaffaa}
.diff_chg {background-color:#ffff77}
.diff_sub {background-color:#ffaaaa}"""
_table_template = """
<table class="diff" id="difflib_chg_%(prefix)s_top"
cellspacing="0" cellpadding="0" rules="groups" >
<colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
<colgroup></colgroup> <colgroup></colgroup> <colgroup></colgroup>
%(header_row)s
<tbody>
%(data_rows)s </tbody>
</table>"""
_legend = """
<table class="diff" summary="Legends">
<tr> <th colspan="2"> Legends </th> </tr>
<tr> <td> <table border="" summary="Colors">
<tr><th> Colors </th> </tr>
<tr><td class="diff_add"> Added </td></tr>
<tr><td class="diff_chg">Changed</td> </tr>
<tr><td class="diff_sub">Deleted</td> </tr>
</table></td>
<td> <table border="" summary="Links">
<tr><th colspan="2"> Links </th> </tr>
<tr><td>(f)irst change</td> </tr>
<tr><td>(n)ext change</td> </tr>
<tr><td>(t)op</td> </tr>
</table></td> </tr>
</table>"""
class HtmlDiff(object):
"""For producing HTML side by side comparison with change highlights.
This class can be used to create an HTML table (or a complete HTML file
containing the table) showing a side by side, line by line comparison
of text with inter-line and intra-line change highlights. The table can
be generated in either full or contextual difference mode.
The following methods are provided for HTML generation:
make_table -- generates HTML for a single side by side table
make_file -- generates complete HTML file with a single side by side table
See tools/scripts/diff.py for an example usage of this class.
"""
_file_template = _file_template
_styles = _styles
_table_template = _table_template
_legend = _legend
_default_prefix = 0
def __init__(self,tabsize=8,wrapcolumn=None,linejunk=None,
charjunk=IS_CHARACTER_JUNK):
"""HtmlDiff instance initializer
Arguments:
tabsize -- tab stop spacing, defaults to 8.
wrapcolumn -- column number where lines are broken and wrapped,
defaults to None where lines are not wrapped.
linejunk,charjunk -- keyword arguments passed into ndiff() (used by
HtmlDiff() to generate the side by side HTML differences). See
ndiff() documentation for argument default values and descriptions.
"""
self._tabsize = tabsize
self._wrapcolumn = wrapcolumn
self._linejunk = linejunk
self._charjunk = charjunk
def make_file(self,fromlines,tolines,fromdesc='',todesc='',context=False,
numlines=5):
"""Returns HTML file of side by side comparison with change highlights
Arguments:
fromlines -- list of "from" lines
tolines -- list of "to" lines
fromdesc -- "from" file column header string
todesc -- "to" file column header string
context -- set to True for contextual differences (defaults to False
which shows full differences).
numlines -- number of context lines. When context is set True,
controls number of lines displayed before and after the change.
When context is False, controls the number of lines to place
the "next" link anchors before the next change (so click of
"next" link jumps to just before the change).
"""
return self._file_template % dict(
styles = self._styles,
legend = self._legend,
table = self.make_table(fromlines,tolines,fromdesc,todesc,
context=context,numlines=numlines))
def _tab_newline_replace(self,fromlines,tolines):
"""Returns from/to line lists with tabs expanded and newlines removed.
Instead of tab characters being replaced by the number of spaces
needed to fill in to the next tab stop, this function will fill
the space with tab characters. This is done so that the difference
algorithms can identify changes in a file when tabs are replaced by
spaces and vice versa. At the end of the HTML generation, the tab
characters will be replaced with a nonbreakable space.
"""
def expand_tabs(line):
# hide real spaces
line = line.replace(' ','\0')
# expand tabs into spaces
line = line.expandtabs(self._tabsize)
# replace spaces from expanded tabs back into tab characters
# (we'll replace them with markup after we do differencing)
line = line.replace(' ','\t')
return line.replace('\0',' ').rstrip('\n')
fromlines = [expand_tabs(line) for line in fromlines]
tolines = [expand_tabs(line) for line in tolines]
return fromlines,tolines
def _split_line(self,data_list,line_num,text):
"""Builds list of text lines by splitting text lines at wrap point
This function will determine if the input text line needs to be
wrapped (split) into separate lines. If so, the first wrap point
will be determined and the first line appended to the output
        text line list. This function then calls itself recursively on the
        remaining part of the line to split it further.
"""
# if blank line or context separator, just add it to the output list
if not line_num:
data_list.append((line_num,text))
return
# if line text doesn't need wrapping, just add it to the output list
size = len(text)
max = self._wrapcolumn
if (size <= max) or ((size -(text.count('\0')*3)) <= max):
data_list.append((line_num,text))
return
# scan text looking for the wrap point, keeping track if the wrap
# point is inside markers
i = 0
n = 0
mark = ''
while n < max and i < size:
if text[i] == '\0':
i += 1
mark = text[i]
i += 1
elif text[i] == '\1':
i += 1
mark = ''
else:
i += 1
n += 1
# wrap point is inside text, break it up into separate lines
line1 = text[:i]
line2 = text[i:]
# if wrap point is inside markers, place end marker at end of first
# line and start marker at beginning of second line because each
# line will have its own table tag markup around it.
if mark:
line1 = line1 + '\1'
line2 = '\0' + mark + line2
# tack on first line onto the output list
data_list.append((line_num,line1))
# use this routine again to wrap the remaining text
self._split_line(data_list,'>',line2)
def _line_wrapper(self,diffs):
"""Returns iterator that splits (wraps) mdiff text lines"""
# pull from/to data and flags from mdiff iterator
for fromdata,todata,flag in diffs:
# check for context separators and pass them through
if flag is None:
yield fromdata,todata,flag
continue
(fromline,fromtext),(toline,totext) = fromdata,todata
# for each from/to line split it at the wrap column to form
# list of text lines.
fromlist,tolist = [],[]
self._split_line(fromlist,fromline,fromtext)
self._split_line(tolist,toline,totext)
# yield from/to line in pairs inserting blank lines as
# necessary when one side has more wrapped lines
while fromlist or tolist:
if fromlist:
fromdata = fromlist.pop(0)
else:
fromdata = ('',' ')
if tolist:
todata = tolist.pop(0)
else:
todata = ('',' ')
yield fromdata,todata,flag
def _collect_lines(self,diffs):
"""Collects mdiff output into separate lists
Before storing the mdiff from/to data into a list, it is converted
into a single line of text with HTML markup.
"""
fromlist,tolist,flaglist = [],[],[]
# pull from/to data and flags from mdiff style iterator
for fromdata,todata,flag in diffs:
try:
# store HTML markup of the lines into the lists
fromlist.append(self._format_line(0,flag,*fromdata))
tolist.append(self._format_line(1,flag,*todata))
except TypeError:
# exceptions occur for lines where context separators go
fromlist.append(None)
tolist.append(None)
flaglist.append(flag)
return fromlist,tolist,flaglist
def _format_line(self,side,flag,linenum,text):
"""Returns HTML markup of "from" / "to" text lines
side -- 0 or 1 indicating "from" or "to" text
flag -- indicates if difference on line
linenum -- line number (used for line number column)
text -- line text to be marked up
"""
try:
linenum = '%d' % linenum
id = ' id="%s%s"' % (self._prefix[side],linenum)
except TypeError:
# handle blank lines where linenum is '>' or ''
id = ''
# replace those things that would get confused with HTML symbols
text=text.replace("&","&").replace(">",">").replace("<","<")
        # make spaces non-breakable so they don't get compressed or line wrapped
text = text.replace(' ',' ').rstrip()
return '<td class="diff_header"%s>%s</td><td nowrap="nowrap">%s</td>' \
% (id,linenum,text)
def _make_prefix(self):
"""Create unique anchor prefixes"""
# Generate a unique anchor prefix so multiple tables
# can exist on the same HTML page without conflicts.
fromprefix = "from%d_" % HtmlDiff._default_prefix
toprefix = "to%d_" % HtmlDiff._default_prefix
HtmlDiff._default_prefix += 1
# store prefixes so line format method has access
self._prefix = [fromprefix,toprefix]
def _convert_flags(self,fromlist,tolist,flaglist,context,numlines):
"""Makes list of "next" links"""
# all anchor names will be generated using the unique "to" prefix
toprefix = self._prefix[1]
# process change flags, generating middle column of next anchors/links
next_id = ['']*len(flaglist)
next_href = ['']*len(flaglist)
num_chg, in_change = 0, False
last = 0
for i,flag in enumerate(flaglist):
if flag:
if not in_change:
in_change = True
last = i
# at the beginning of a change, drop an anchor a few lines
# (the context lines) before the change for the previous
# link
i = max([0,i-numlines])
next_id[i] = ' id="difflib_chg_%s_%d"' % (toprefix,num_chg)
# at the beginning of a change, drop a link to the next
# change
num_chg += 1
next_href[last] = '<a href="#difflib_chg_%s_%d">n</a>' % (
toprefix,num_chg)
else:
in_change = False
# check for cases where there is no content to avoid exceptions
if not flaglist:
flaglist = [False]
next_id = ['']
next_href = ['']
last = 0
if context:
fromlist = ['<td></td><td> No Differences Found </td>']
tolist = fromlist
else:
fromlist = tolist = ['<td></td><td> Empty File </td>']
# if not a change on first line, drop a link
if not flaglist[0]:
next_href[0] = '<a href="#difflib_chg_%s_0">f</a>' % toprefix
# redo the last link to link to the top
next_href[last] = '<a href="#difflib_chg_%s_top">t</a>' % (toprefix)
return fromlist,tolist,flaglist,next_href,next_id
def make_table(self,fromlines,tolines,fromdesc='',todesc='',context=False,
numlines=5):
"""Returns HTML table of side by side comparison with change highlights
Arguments:
fromlines -- list of "from" lines
tolines -- list of "to" lines
fromdesc -- "from" file column header string
todesc -- "to" file column header string
context -- set to True for contextual differences (defaults to False
which shows full differences).
numlines -- number of context lines. When context is set True,
controls number of lines displayed before and after the change.
When context is False, controls the number of lines to place
the "next" link anchors before the next change (so click of
"next" link jumps to just before the change).
"""
# make unique anchor prefixes so that multiple tables may exist
# on the same page without conflict.
self._make_prefix()
# change tabs to spaces before it gets more difficult after we insert
        # markup
fromlines,tolines = self._tab_newline_replace(fromlines,tolines)
# create diffs iterator which generates side by side from/to data
if context:
context_lines = numlines
else:
context_lines = None
diffs = _mdiff(fromlines,tolines,context_lines,linejunk=self._linejunk,
charjunk=self._charjunk)
# set up iterator to wrap lines that exceed desired width
if self._wrapcolumn:
diffs = self._line_wrapper(diffs)
# collect up from/to lines and flags into lists (also format the lines)
fromlist,tolist,flaglist = self._collect_lines(diffs)
# process change flags, generating middle column of next anchors/links
fromlist,tolist,flaglist,next_href,next_id = self._convert_flags(
fromlist,tolist,flaglist,context,numlines)
s = []
fmt = ' <tr><td class="diff_next"%s>%s</td>%s' + \
'<td class="diff_next">%s</td>%s</tr>\n'
for i in range(len(flaglist)):
if flaglist[i] is None:
                # mdiff yields None on separator lines; skip the bogus ones
# generated for the first line
if i > 0:
s.append(' </tbody> \n <tbody>\n')
else:
s.append( fmt % (next_id[i],next_href[i],fromlist[i],
next_href[i],tolist[i]))
if fromdesc or todesc:
header_row = '<thead><tr>%s%s%s%s</tr></thead>' % (
'<th class="diff_next"><br /></th>',
'<th colspan="2" class="diff_header">%s</th>' % fromdesc,
'<th class="diff_next"><br /></th>',
'<th colspan="2" class="diff_header">%s</th>' % todesc)
else:
header_row = ''
table = self._table_template % dict(
data_rows=''.join(s),
header_row=header_row,
prefix=self._prefix[1])
return table.replace('\0+','<span class="diff_add">'). \
replace('\0-','<span class="diff_sub">'). \
replace('\0^','<span class="diff_chg">'). \
replace('\1','</span>'). \
replace('\t',' ')
del re
def restore(delta, which):
r"""
Generate one of the two sequences that generated a delta.
Given a `delta` produced by `Differ.compare()` or `ndiff()`, extract
lines originating from file 1 or 2 (parameter `which`), stripping off line
prefixes.
Examples:
>>> diff = ndiff('one\ntwo\nthree\n'.splitlines(1),
... 'ore\ntree\nemu\n'.splitlines(1))
>>> diff = list(diff)
>>> print(''.join(restore(diff, 1)), end="")
one
two
three
>>> print(''.join(restore(diff, 2)), end="")
ore
tree
emu
"""
try:
tag = {1: "- ", 2: "+ "}[int(which)]
except KeyError:
raise ValueError('unknown delta choice (must be 1 or 2): %r'
% which)
prefixes = (" ", tag)
for line in delta:
if line[:2] in prefixes:
yield line[2:]
def _test():
import doctest, difflib
return doctest.testmod(difflib)
if __name__ == "__main__":
_test()
| 39.951574 | 83 | 0.573721 |
7943a95a823275ae303fce38cbc724040630f5a7 | 2,622 | py | Python | virtual/lib/python3.6/site-packages/dns/rdataclass.py | amoskipz/pitch | 477599a56958bc677e22764d7e0cc14d34510e8c | [
"Unlicense",
"MIT"
] | 2 | 2021-07-26T15:04:07.000Z | 2021-07-26T17:23:08.000Z | virtual/lib/python3.6/site-packages/dns/rdataclass.py | amoskipz/pitch | 477599a56958bc677e22764d7e0cc14d34510e8c | [
"Unlicense",
"MIT"
] | 30 | 2020-07-31T05:23:33.000Z | 2022-03-25T11:04:00.000Z | virtual/lib/python3.6/site-packages/dns/rdataclass.py | amoskipz/pitch | 477599a56958bc677e22764d7e0cc14d34510e8c | [
"Unlicense",
"MIT"
] | 2 | 2021-02-26T16:25:00.000Z | 2021-03-06T15:45:56.000Z | # Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
# Copyright (C) 2001-2017 Nominum, Inc.
#
# Permission to use, copy, modify, and distribute this software and its
# documentation for any purpose with or without fee is hereby granted,
# provided that the above copyright notice and this permission notice
# appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
"""DNS Rdata Classes."""
import dns.enum
import dns.exception
class RdataClass(dns.enum.IntEnum):
"""DNS Rdata Class"""
RESERVED0 = 0
IN = 1
INTERNET = IN
CH = 3
CHAOS = CH
HS = 4
HESIOD = HS
NONE = 254
ANY = 255
@classmethod
def _maximum(cls):
return 65535
@classmethod
def _short_name(cls):
return "class"
@classmethod
def _prefix(cls):
return "CLASS"
@classmethod
def _unknown_exception_class(cls):
return UnknownRdataclass
globals().update(RdataClass.__members__)
_metaclasses = {RdataClass.NONE, RdataClass.ANY}
class UnknownRdataclass(dns.exception.DNSException):
"""A DNS class is unknown."""
def from_text(text):
"""Convert text into a DNS rdata class value.
The input text can be a defined DNS RR class mnemonic or
instance of the DNS generic class syntax.
For example, "IN" and "CLASS1" will both result in a value of 1.
Raises ``dns.rdatatype.UnknownRdataclass`` if the class is unknown.
Raises ``ValueError`` if the rdata class value is not >= 0 and <= 65535.
Returns an ``int``.
"""
return RdataClass.from_text(text)
def to_text(value):
"""Convert a DNS rdata class value to text.
If the value has a known mnemonic, it will be used, otherwise the
DNS generic class syntax will be used.
Raises ``ValueError`` if the rdata class value is not >= 0 and <= 65535.
Returns a ``str``.
"""
return RdataClass.to_text(value)
def is_metaclass(rdclass):
"""True if the specified class is a metaclass.
The currently defined metaclasses are ANY and NONE.
*rdclass* is an ``int``.
"""
if rdclass in _metaclasses:
return True
return False
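# Example round-trips (sketch): from_text("IN") and from_text("CLASS1") both yield
# RdataClass.IN (value 1), to_text(RdataClass.NONE) returns "NONE", and
# is_metaclass(RdataClass.ANY) is True.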
| 25.456311 | 76 | 0.694508 |
7943ac674aea293c6ed0ebfff43d53877a99ba13 | 774 | py | Python | Aula 02/[Exercicio 05] Testing information.py | IsaacPSilva/LetsCode | 64396ee9fd0ad395598c74c3727a614261e5dd50 | [
"MIT"
] | null | null | null | Aula 02/[Exercicio 05] Testing information.py | IsaacPSilva/LetsCode | 64396ee9fd0ad395598c74c3727a614261e5dd50 | [
"MIT"
] | null | null | null | Aula 02/[Exercicio 05] Testing information.py | IsaacPSilva/LetsCode | 64396ee9fd0ad395598c74c3727a614261e5dd50 | [
"MIT"
] | null | null | null | '''5. Write a program that validates the following information:
a. Age: between 0 and 150;
b. Salary: greater than 0;
c. Sex: M, F or Other;
The program must print an error message for each
piece of invalid information.'''
print('TESTING INFORMATION')
print('-'*30)
#Input of data
age = int(input('Insert the age: '))
if age>=0 and age<=150:
print('Valid age')
print('-'*30)
else:
print('Invalid value')
print('-'*30)
salary = float(input('Insert the salary: R$ '))
if salary>0:
print('Valid salary')
print('-'*30)
else:
print('Invalid value')
print('-'*30)
sex = input('Insert the sex [M,F, Others] : ').lower()
if sex=='m' or sex=='f' or sex=='others':
print('Valid sex')
print('-'*30)
else:
print('Invalid value')
print('-'*30)
| 22.114286 | 59 | 0.618863 |
7943ac76a4c7c6ee26fbc1631d05309a6b7fa719 | 685 | py | Python | venv/Lib/site-packages/torch/_VF.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | 1 | 2022-01-08T12:30:44.000Z | 2022-01-08T12:30:44.000Z | venv/Lib/site-packages/torch/_VF.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | null | null | null | venv/Lib/site-packages/torch/_VF.py | Westlanderz/AI-Plat1 | 1187c22819e5135e8e8189c99b86a93a0d66b8d8 | [
"MIT"
] | null | null | null | """
This makes the functions in torch._C._VariableFunctions available as
torch._VF.<funcname>
without mypy being able to find them.
A subset of those functions are mapped to ATen functions in
torch/jit/_builtins.py
See https://github.com/pytorch/pytorch/issues/21478 for the reason for
introducing torch._VF
"""
import torch
import sys
import types
class VFModule(types.ModuleType):
vf: types.ModuleType
def __init__(self, name):
super(VFModule, self).__init__(name)
self.vf = torch._C._VariableFunctions
def __getattr__(self, attr):
return getattr(self.vf, attr)
sys.modules[__name__] = VFModule(__name__)
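# Once the module object is swapped into sys.modules, a lookup such as
# torch._VF.<funcname> is routed through VFModule.__getattr__ to the attribute of
# the same name on torch._C._VariableFunctions.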
| 22.833333 | 71 | 0.705109 |
7943ad89bc9c903ed683fa5d41eebdd28fd3c0de | 5,546 | py | Python | molanet/run/base_architecture.py | exaV/molanet | e12e24d8e4788f839a74a2e7310c42180b2419dd | [
"Apache-2.0"
] | null | null | null | molanet/run/base_architecture.py | exaV/molanet | e12e24d8e4788f839a74a2e7310c42180b2419dd | [
"Apache-2.0"
] | 3 | 2017-12-08T18:12:37.000Z | 2018-02-25T18:39:41.000Z | molanet/run/base_architecture.py | michaelaerni/ip6-molanet | cb226d3866f86b030fa3951eefdba0da85f0dd92 | [
"Apache-2.0"
] | null | null | null | import argparse
import logging
import os
import shutil
from datetime import datetime
import tensorflow as tf
from molanet.base import NetworkTrainer, TrainingOptions
from molanet.input import TrainingPipeline, \
EvaluationPipeline, random_rotate_flip_rgb, random_contrast_rgb, random_brightness_rgb
from molanet.models.pix2pix import Pix2PixFactory
from molanet.models.wgan_gp import WassersteinGradientPenaltyFactory
def create_arg_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser("Molanet PoC script")
parser.add_argument("--sampledir", type=str,
help="Root sample directory, containing set directories and meta files")
parser.add_argument("--test-set", type=str, help="Name of the test set")
parser.add_argument("--cv-set", type=str, help="Name of the cv set")
parser.add_argument("--train-set", type=str, help="Name of the training set")
parser.add_argument("--logdir", type=str, help="Directory into which summaries and checkpoints are written")
parser.add_argument("--restore", type=int, help="If set, restores the model from logdir with the given iteration")
parser.add_argument("--debug-placement", action="store_true", help="Output device placement")
parser.add_argument("--no-gpu", action="store_true", help="Run everything on CPU")
parser.add_argument("--logsubdir", action="store_true", help="Create a subdirectory in logdir for each new run")
parser.add_argument("--nchw", action="store_true", help="Uses NCHW format for training and inference")
parser.add_argument("--cv-interval", type=int, default=200, help="Cross-validation interval")
parser.add_argument("--max-iterations", type=int,
help="Maximum number of iterations before training stops")
parser.add_argument("--xla", action="store_true",
help="Enable XLA JIT compilation (GPU only)")
return parser
if __name__ == "__main__":
parser = create_arg_parser()
args = parser.parse_args()
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s [%(name)s]: %(message)s")
log = logging.getLogger(__name__)
if args.logsubdir and args.restore is None:
now = datetime.now()
subdirname = f"run_{now.month:02}{now.day:02}_{now.hour:02}{now.minute:02}_base_architecture"
logdir = os.path.join(args.logdir, subdirname)
else:
logdir = args.logdir
if args.restore is None:
shutil.rmtree(logdir, ignore_errors=True)
os.makedirs(logdir)
data_format = "NCHW" if args.nchw else "NHWC"
tf.reset_default_graph()
AUGMENTATION_FUNCTIONS = [
lambda image, segmentation: random_rotate_flip_rgb(image, segmentation),
lambda image, segmentation: (random_contrast_rgb(image, 0.8, 1.2), segmentation),
lambda image, segmentation: (random_brightness_rgb(image, -0.3, 0.3), segmentation)
]
# No color conversion
color_converter = None
# Create input pipelines
training_pipeline = TrainingPipeline(args.sampledir, args.train_set, image_size=512,
color_converter=color_converter,
data_format=data_format,
batch_size=1, read_thread_count=4, batch_thread_count=1,
augmentation_functions=AUGMENTATION_FUNCTIONS, name="training")
cv_pipeline = EvaluationPipeline(args.sampledir, args.cv_set, image_size=512,
color_converter=color_converter,
data_format=data_format,
batch_size=1, batch_thread_count=1, name="cv")
log.info("Input pipelines created")
log.info(f"Training set size: {training_pipeline.sample_count}")
log.info(f"CV set size: {cv_pipeline.sample_count}")
if args.debug_placement:
log.info("Enabled device placement logging")
config = tf.ConfigProto(log_device_placement=args.debug_placement)
if args.xla:
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
log.info("Enabled JIT XLA compilation")
network_factory = Pix2PixFactory(
spatial_extent=512,
min_generator_features=64,
max_generator_features=1024,
min_discriminator_features=64,
max_discriminator_features=1024,
dropout_keep_probability=0.5,
dropout_layer_count=2,
use_batchnorm=True,
weight_initializer=tf.truncated_normal_initializer(stddev=0.02)
)
trainer = NetworkTrainer(
training_pipeline,
cv_pipeline,
network_factory,
WassersteinGradientPenaltyFactory(gradient_lambda=10, network_factory=network_factory, l1_lambda=0.0),
training_options=TrainingOptions(
cv_summary_interval=args.cv_interval,
summary_directory=logdir,
discriminator_iterations=5,
max_iterations=args.max_iterations,
session_configuration=config,
use_gpu=not args.no_gpu,
data_format=data_format),
learning_rate=0.0001, beta1=0.5, beta2=0.9)
log.info("Trainer created")
with trainer:
log.info("Session started")
if args.restore is not None:
trainer.restore(args.restore)
log.info(f"Iteration {args.restore} restored")
log.info("Starting training")
trainer.train()
log.info("Shutting down...")
| 42.335878 | 118 | 0.674721 |
7943af335298b418a57d74be68e9f74197fc1b47 | 6,086 | py | Python | tf_agents/bandits/agents/examples/v2/train_eval_stationary_linear.py | Veluga/agents | e436726df11fa5c1c01637b730c7fa6a8fdda1c5 | [
"Apache-2.0"
] | 2 | 2021-07-25T11:06:56.000Z | 2021-07-25T11:07:02.000Z | tf_agents/bandits/agents/examples/v2/train_eval_stationary_linear.py | shuvoxcd01/agents | c9c690841cd188a2d2d10a4e586a990c075e887d | [
"Apache-2.0"
] | null | null | null | tf_agents/bandits/agents/examples/v2/train_eval_stationary_linear.py | shuvoxcd01/agents | c9c690841cd188a2d2d10a4e586a990c075e887d | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# Copyright 2020 The TF-Agents Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""End-to-end test for bandit training under stationary linear environments."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import os
from absl import app
from absl import flags
import tensorflow as tf # pylint: disable=g-explicit-tensorflow-version-import
from tf_agents.bandits.agents import exp3_mixture_agent
from tf_agents.bandits.agents import lin_ucb_agent
from tf_agents.bandits.agents import linear_thompson_sampling_agent as lin_ts_agent
from tf_agents.bandits.agents import neural_epsilon_greedy_agent
from tf_agents.bandits.agents.examples.v2 import trainer
from tf_agents.bandits.environments import environment_utilities
from tf_agents.bandits.environments import stationary_stochastic_py_environment as sspe
from tf_agents.bandits.metrics import tf_metrics as tf_bandit_metrics
from tf_agents.environments import tf_py_environment
from tf_agents.networks import q_network
from tf_agents.policies import utils as policy_utilities
flags.DEFINE_string('root_dir', os.getenv('TEST_UNDECLARED_OUTPUTS_DIR'),
'Root directory for writing logs/summaries/checkpoints.')
flags.DEFINE_enum(
'agent', 'LinUCB', ['LinUCB', 'LinTS', 'epsGreedy', 'Mix'],
    'Which agent to use. Possible values are `LinUCB`, `LinTS`, `epsGreedy`,'
' and `Mix`.'
)
flags.DEFINE_bool('normalize_reward_fns', False, 'Whether to normalize the '
'reward functions so that rewards are close to being in '
'[0, 1].')
FLAGS = flags.FLAGS
BATCH_SIZE = 8
CONTEXT_DIM = 15
NUM_ACTIONS = 5
REWARD_NOISE_VARIANCE = 0.01
TRAINING_LOOPS = 2000
STEPS_PER_LOOP = 2
AGENT_ALPHA = 0.1
EPSILON = 0.05
LAYERS = (50, 50, 50)
LR = 0.001
def main(unused_argv):
tf.compat.v1.enable_v2_behavior() # The trainer only runs with V2 enabled.
with tf.device('/CPU:0'): # due to b/128333994
if FLAGS.normalize_reward_fns:
action_reward_fns = (
environment_utilities.normalized_sliding_linear_reward_fn_generator(
CONTEXT_DIM, NUM_ACTIONS, REWARD_NOISE_VARIANCE))
else:
action_reward_fns = (
environment_utilities.sliding_linear_reward_fn_generator(
CONTEXT_DIM, NUM_ACTIONS, REWARD_NOISE_VARIANCE))
env = sspe.StationaryStochasticPyEnvironment(
functools.partial(
environment_utilities.context_sampling_fn,
batch_size=BATCH_SIZE,
context_dim=CONTEXT_DIM),
action_reward_fns,
batch_size=BATCH_SIZE)
environment = tf_py_environment.TFPyEnvironment(env)
optimal_reward_fn = functools.partial(
environment_utilities.tf_compute_optimal_reward,
per_action_reward_fns=action_reward_fns)
optimal_action_fn = functools.partial(
environment_utilities.tf_compute_optimal_action,
per_action_reward_fns=action_reward_fns)
network = q_network.QNetwork(
input_tensor_spec=environment.time_step_spec().observation,
action_spec=environment.action_spec(),
fc_layer_params=LAYERS)
if FLAGS.agent == 'LinUCB':
agent = lin_ucb_agent.LinearUCBAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
alpha=AGENT_ALPHA,
dtype=tf.float32)
elif FLAGS.agent == 'LinTS':
agent = lin_ts_agent.LinearThompsonSamplingAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
alpha=AGENT_ALPHA,
dtype=tf.float32)
elif FLAGS.agent == 'epsGreedy':
agent = neural_epsilon_greedy_agent.NeuralEpsilonGreedyAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
reward_network=network,
optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=LR),
epsilon=EPSILON)
elif FLAGS.agent == 'Mix':
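      # Mixture setup: an EXP3 meta-agent picks between LinUCB, LinTS and neural
      # epsilon-greedy sub-agents; each sub-agent exposes its predicted mean
      # rewards through the policy info.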
emit_policy_info = policy_utilities.InfoFields.PREDICTED_REWARDS_MEAN
agent_linucb = lin_ucb_agent.LinearUCBAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
emit_policy_info=emit_policy_info,
alpha=AGENT_ALPHA,
dtype=tf.float32)
agent_lints = lin_ts_agent.LinearThompsonSamplingAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
emit_policy_info=emit_policy_info,
alpha=AGENT_ALPHA,
dtype=tf.float32)
agent_epsgreedy = neural_epsilon_greedy_agent.NeuralEpsilonGreedyAgent(
time_step_spec=environment.time_step_spec(),
action_spec=environment.action_spec(),
reward_network=network,
optimizer=tf.compat.v1.train.AdamOptimizer(learning_rate=LR),
emit_policy_info=emit_policy_info,
epsilon=EPSILON)
agent = exp3_mixture_agent.Exp3MixtureAgent(
(agent_linucb, agent_lints, agent_epsgreedy))
regret_metric = tf_bandit_metrics.RegretMetric(optimal_reward_fn)
suboptimal_arms_metric = tf_bandit_metrics.SuboptimalArmsMetric(
optimal_action_fn)
trainer.train(
root_dir=FLAGS.root_dir,
agent=agent,
environment=environment,
training_loops=TRAINING_LOOPS,
steps_per_loop=STEPS_PER_LOOP,
additional_metrics=[regret_metric, suboptimal_arms_metric])
if __name__ == '__main__':
app.run(main)
| 38.27673 | 87 | 0.733158 |
7943af4e7b8de138dce98ba3461ae11796d489bb | 771 | py | Python | src/mission_report/helpers.py | Flyingfox646/flyingfox | fc936ca1c2631a5f9d842d0deae1e197d50e7a96 | [
"MIT"
] | null | null | null | src/mission_report/helpers.py | Flyingfox646/flyingfox | fc936ca1c2631a5f9d842d0deae1e197d50e7a96 | [
"MIT"
] | null | null | null | src/mission_report/helpers.py | Flyingfox646/flyingfox | fc936ca1c2631a5f9d842d0deae1e197d50e7a96 | [
"MIT"
] | null | null | null | import math
# http://www.ariel.com.au/a/python-point-int-poly.html
def point_in_polygon(point, polygon):
x, y = point['x'], point['z']
n = len(polygon)
inside = False
p1x, _, p1y = polygon[0]
for i in range(n + 1):
p2x, _, p2y = polygon[i % n]
if min(p1y, p2y) < y <= max(p1y, p2y) and x <= max(p1x, p2x):
if p1y != p2y:
xinters = (y - p1y) * (p2x - p1x) / (p2y - p1y) + p1x
if p1x == p2x or x <= xinters:
inside = not inside
p1x, p1y = p2x, p2y
return inside
def distance(p1, p2):
return math.hypot(p2['x'] - p1['x'], p2['z'] - p1['z'])
def is_pos_correct(pos):
if not pos or pos == {'x': 0.0, 'y': 0.0, 'z': 0.0}:
return False
return True
| 26.586207 | 69 | 0.507134 |
7943af648e5beebdc3766920a5d9f634375e724f | 2,700 | py | Python | abusehelper/bots/cleanmx/cleanmxbot.py | heikipikker/abusehelper | c451e36e9927b95ba82253accf170121fce61f4f | [
"MIT"
] | 1 | 2020-04-07T07:01:12.000Z | 2020-04-07T07:01:12.000Z | abusehelper/bots/cleanmx/cleanmxbot.py | heikipikker/abusehelper | c451e36e9927b95ba82253accf170121fce61f4f | [
"MIT"
] | null | null | null | abusehelper/bots/cleanmx/cleanmxbot.py | heikipikker/abusehelper | c451e36e9927b95ba82253accf170121fce61f4f | [
"MIT"
] | 1 | 2020-04-07T07:01:13.000Z | 2020-04-07T07:01:13.000Z | # -*- coding: utf-8 -*-
# In the runtime config:
# yield Source("cleanmxbot", csv_url="http://support.clean-mx.de/clean-mx/xmlphishing?response=alive&format=csv&domain=")
# yield Source("cleanmxbot", csv_url="http://support.clean-mx.de/clean-mx/xmlviruses?response=alive&format=csv&domain=", csv_name="xmlvirii")
"""
CleanMX bot
Maintainer: Codenomicon <[email protected]>
"""
import re
import idiokit
import urlparse
from xml.sax.saxutils import unescape as _unescape
from abusehelper.core import bot, events, utils
cdata = re.compile("(.*?)\<\!\[CDATA\[(.*?)\]\]\>")
def unescape(string):
"""
>>> unescape("one <![CDATA[two ]]>three")
'one two three'
"""
result = list()
for index, data in enumerate(cdata.split(string)):
if index % 3 != 2:
data = _unescape(data, {" ": " "})
result.append(data)
return "".join(result)
class CleanMXBot(bot.PollingBot):
def feed_keys(self, csv_url, csv_name=None, **keys):
if csv_name is None:
csv_name = urlparse.urlparse(csv_url)[2].split("/")[-1]
yield (csv_url, csv_name)
@idiokit.stream
def poll(self, url, name):
try:
self.log.info("Downloading page from: %r", url)
info, fileobj = yield utils.fetch_url(url)
except utils.FetchUrlFailed, e:
self.log.error("Failed to download page %r: %r", url, e)
return
charset = info.get_param("charset", None)
lines = (line.strip() for line in fileobj if line.strip())
yield utils.csv_to_events(lines, charset=charset) | self.normalize(name)
@idiokit.stream
def normalize(self, name):
while True:
event = yield idiokit.next()
# A dict telling how to rename raw event keys.
# A key is not renamed by default.
# Mapping a key to None removes the key.
key_mappings = {
"time": "source time",
"id": "cleanmx id",
"phishtank": "phishtank id",
"line": None,
"firsttime": "first seen",
"lasttime": "last seen"
}
new = events.Event()
for key, value in event.items():
key = key_mappings.get(key, key)
if key is None:
continue
value = unescape(value).strip()
if not value:
continue
new.add(key, value)
if name:
new.add("feed", name)
yield idiokit.send(new)
if __name__ == "__main__":
CleanMXBot.from_command_line().execute()
| 28.723404 | 141 | 0.561852 |
7943afbd4ce30a106a343db08e25fe296363d9a7 | 144 | py | Python | bot_api/__init__.py | wqzai/QQ-official-guild-bot | 19f7ad42cb0e6c3c9541ac6a9ee73116727f7cb2 | [
"MIT"
] | null | null | null | bot_api/__init__.py | wqzai/QQ-official-guild-bot | 19f7ad42cb0e6c3c9541ac6a9ee73116727f7cb2 | [
"MIT"
] | null | null | null | bot_api/__init__.py | wqzai/QQ-official-guild-bot | 19f7ad42cb0e6c3c9541ac6a9ee73116727f7cb2 | [
"MIT"
] | null | null | null | from .sdk_main import *
from . import api
from . import models
from . import structs
from . import utils
from .models import BotCallingAPIError
| 20.571429 | 38 | 0.784722 |
7943afd78364951e40c67dec58adf129e2fd997f | 1,493 | py | Python | tests/MedianBlurTest.py | spongezhang/maskgen | 7284e300d1cb326a5349879de0bace9cfa8788a8 | [
"BSD-3-Clause"
] | null | null | null | tests/MedianBlurTest.py | spongezhang/maskgen | 7284e300d1cb326a5349879de0bace9cfa8788a8 | [
"BSD-3-Clause"
] | null | null | null | tests/MedianBlurTest.py | spongezhang/maskgen | 7284e300d1cb326a5349879de0bace9cfa8788a8 | [
"BSD-3-Clause"
] | null | null | null | import unittest
import os
from maskgen import plugins, image_wrap
import numpy
import tempfile
class MedianBlurTestCase(unittest.TestCase):
filesToKill = []
def setUp(self):
plugins.loadPlugins()
def test_something(self):
img = numpy.random.randint(0, 255, (500, 500, 3), dtype='uint8')
wrapper = image_wrap.ImageWrapper(img)
filename = tempfile.mktemp(prefix='mstc',suffix='.png',dir='.')
filename_output = tempfile.mktemp(prefix='mstcr', suffix='.png', dir='.')
self.filesToKill.append(filename)
wrapper.save(filename)
self.filesToKill.append(filename_output)
image_wrap.ImageWrapper(img).save(filename_output)
args,error = plugins.callPlugin('MedianBlur',
wrapper,
filename,
filename_output,
percentageChange = 0.5)
wrapper = image_wrap.openImageFile(filename_output)
output = wrapper.to_array()
self.assertEqual(output.shape, img.shape)
diff = abs(output - img)
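        # Collapse the per-channel differences into one changed/unchanged flag per
        # pixel, then check that roughly 62500 of the 500x500 pixels were altered.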
finaldiff = numpy.zeros((500,500))
for i in range(3):
finaldiff = finaldiff + diff[:,:,i]
finaldiff[finaldiff > 0] = 1
self.assertTrue(abs(sum(sum(finaldiff))-62500) < 100)
def tearDown(self):
for f in self.filesToKill:
if os.path.exists(f):
os.remove(f)
if __name__ == '__main__':
unittest.main()
| 33.177778 | 81 | 0.596115 |
7943b0277758cdcbd0714ce3020cb7a32f47fe34 | 623 | py | Python | nand2vm/seq/bit.py | mlouielu/nand2vm | 0c24f49efde8303a157ab658e75f0c081e442ae2 | [
"BSD-3-Clause"
] | 13 | 2017-07-04T10:52:12.000Z | 2021-07-19T16:13:32.000Z | nand2vm/seq/bit.py | mlouielu/nand2vm | 0c24f49efde8303a157ab658e75f0c081e442ae2 | [
"BSD-3-Clause"
] | 2 | 2017-07-03T15:53:52.000Z | 2017-07-04T03:48:24.000Z | nand2vm/seq/bit.py | mlouielu/nand2vm | 0c24f49efde8303a157ab658e75f0c081e442ae2 | [
"BSD-3-Clause"
] | 1 | 2021-06-25T17:15:35.000Z | 2021-06-25T17:15:35.000Z | #
# Copyright (c) 2017 Louie Lu. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
#
from .clock import ClockPhase
from .dff import DFF
from .. import gate
class Bit(object):
def __init__(self):
self.d = DFF()
self.state = self.d.state
self.clock = ClockPhase.POSITIVE_EDGE
def update(self, source: bool, load: bool, clock: ClockPhase=None) -> bool:
if clock:
self.clock = clock
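        # The mux decides what the DFF stores next: the new `source` value when
        # `load` is asserted, otherwise the bit's current state (feedback path).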
mux = gate.Mux(self.d.state, source, load)
out = self.d.update(mux, self.clock)
return out
| 25.958333 | 79 | 0.638844 |
7943b0a7a2b9fb76d3754eba48e5dfb549293033 | 177 | py | Python | routelift_api/roles_and_permissions/urls.py | BitMask-Technologies/route-lift-api | 7ac78c6cce523fc5a3852dca7b289fe3a5f3afa8 | [
"MIT"
] | null | null | null | routelift_api/roles_and_permissions/urls.py | BitMask-Technologies/route-lift-api | 7ac78c6cce523fc5a3852dca7b289fe3a5f3afa8 | [
"MIT"
] | 7 | 2021-06-24T16:12:09.000Z | 2021-08-05T16:09:22.000Z | routelift_api/roles_and_permissions/urls.py | BitMask-Technologies/route-lift-api | 7ac78c6cce523fc5a3852dca7b289fe3a5f3afa8 | [
"MIT"
] | null | null | null | from django.urls import path
from . import routers
urlpatterns = [
path('', routers.staff_role_router),
path('/<int:staffRoleId>', routers.single_staff_role_router)
]
| 19.666667 | 64 | 0.728814 |
7943b1cc78ebeda86c2c12b072674488383c0f15 | 2,517 | py | Python | neo4j/_conf.py | matilda-me/neo4j-python-driver | 4fb25a266841bf2a861f00d5dcf257bd5ae5c686 | [
"Apache-2.0"
] | null | null | null | neo4j/_conf.py | matilda-me/neo4j-python-driver | 4fb25a266841bf2a861f00d5dcf257bd5ae5c686 | [
"Apache-2.0"
] | null | null | null | neo4j/_conf.py | matilda-me/neo4j-python-driver | 4fb25a266841bf2a861f00d5dcf257bd5ae5c686 | [
"Apache-2.0"
] | null | null | null | # Copyright (c) "Neo4j"
# Neo4j Sweden AB [http://neo4j.com]
#
# This file is part of Neo4j.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class TrustStore:
# Base class for trust stores. For internal type-checking only.
pass
class TrustSystemCAs(TrustStore):
"""Used to configure the driver to trust system CAs (default).
Trust server certificates that can be verified against the system
certificate authority. This option is primarily intended for use with
full certificates.
For example::
driver = neo4j.GraphDatabase.driver(
url, auth=auth, trusted_certificates=neo4j.TrustSystemCAs()
)
"""
pass
class TrustAll(TrustStore):
"""Used to configure the driver to trust all certificates.
Trust any server certificate. This ensures that communication
is encrypted but does not verify the server certificate against a
certificate authority. This option is primarily intended for use with
the default auto-generated server certificate.
For example::
driver = neo4j.GraphDatabase.driver(
url, auth=auth, trusted_certificates=neo4j.TrustAll()
)
"""
pass
class TrustCustomCAs(TrustStore):
"""Used to configure the driver to trust custom CAs.
Trust server certificates that can be verified against the certificate
authority at the specified paths. This option is primarily intended for
self-signed and custom certificates.
:param certificates (str): paths to the certificates to trust.
Those are not the certificates you expect to see from the server but
the CA certificates you expect to be used to sign the server's
certificate.
For example::
driver = neo4j.GraphDatabase.driver(
url, auth=auth,
trusted_certificates=neo4j.TrustCustomCAs(
"/path/to/ca1.crt", "/path/to/ca2.crt",
)
)
"""
def __init__(self, *certificates):
self.certs = certificates
| 31.074074 | 76 | 0.699245 |
7943b2997f7dc993f8481d53e35128582a4d28c6 | 3,739 | py | Python | salt/runners/jobs.py | mitsuhiko/salt | 3b211f87e307936f43c937edc62207dd1b887e19 | [
"Apache-2.0"
] | 4 | 2015-10-06T22:20:27.000Z | 2017-09-04T08:03:44.000Z | salt/runners/jobs.py | mitsuhiko/salt | 3b211f87e307936f43c937edc62207dd1b887e19 | [
"Apache-2.0"
] | null | null | null | salt/runners/jobs.py | mitsuhiko/salt | 3b211f87e307936f43c937edc62207dd1b887e19 | [
"Apache-2.0"
] | null | null | null | '''
A convenience system to manage jobs, both active and already run
'''
# Import Python Modules
import os
# Import Salt Modules
import salt.client
import salt.payload
import salt.utils
from salt._compat import string_types
from salt.exceptions import SaltException
# Import Third party libs
import yaml
def active():
'''
Return a report on all actively running jobs from a job id centric
perspective
'''
ret = {}
job_dir = os.path.join(__opts__['cachedir'], 'jobs')
client = salt.client.LocalClient(__opts__['conf_file'])
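    # Ask every minion which jobs it is currently running, then group the
    # replies below by jid.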
active_ = client.cmd('*', 'saltutil.running', timeout=__opts__['timeout'])
for minion, data in active_.items():
if not isinstance(data, list):
continue
for job in data:
if not job['jid'] in ret:
ret[job['jid']] = {'Running': [],
'Returned': [],
'Function': job['fun'],
'Arguments': list(job['arg']),
'Target': job['tgt'],
'Target-type': job['tgt_type']}
else:
ret[job['jid']]['Running'].append({minion: job['pid']})
for jid in ret:
jid_dir = salt.utils.jid_dir(
jid,
__opts__['cachedir'],
__opts__['hash_type']
)
if not os.path.isdir(jid_dir):
continue
for minion in os.listdir(jid_dir):
if minion.startswith('.'):
continue
if os.path.exists(os.path.join(jid_dir, minion)):
ret[jid]['Returned'].append(minion)
print(yaml.dump(ret))
return ret
def lookup_jid(jid):
'''
    Return the printout from a previously executed job
'''
def _format_ret(full_ret):
'''
Take the full return data and format it to simple output
'''
out = None
ret = {}
for key, data in full_ret.items():
ret[key] = data['ret']
if 'out' in data:
out = data['out']
return ret, out
client = salt.client.LocalClient(__opts__['conf_file'])
full_ret = client.get_full_returns(jid, [], 0)
formatted = _format_ret(full_ret)
if formatted:
ret = formatted[0]
out = formatted[1]
else:
ret = SaltException('Job {0} hasn\'t finished. No data yet :('.format(jid))
out = ''
# Determine the proper output method and run it
get_outputter = salt.output.get_outputter
if isinstance(ret, (list, dict, string_types)) and out:
printout = get_outputter(out)
# Pretty print any salt exceptions
elif isinstance(ret, SaltException):
printout = get_outputter("txt")
else:
printout = get_outputter(None)
printout(ret)
return ret
def list_jobs():
'''
List all detectable jobs and associated functions
'''
serial = salt.payload.Serial(__opts__)
ret = {}
job_dir = os.path.join(__opts__['cachedir'], 'jobs')
for top in os.listdir(job_dir):
t_path = os.path.join(job_dir, top)
for final in os.listdir(t_path):
loadpath = os.path.join(t_path, final, '.load.p')
if not os.path.isfile(loadpath):
continue
load = serial.load(open(loadpath, 'rb'))
jid = load['jid']
ret[jid] = {'Start Time': salt.utils.jid_to_time(jid),
'Function': load['fun'],
'Arguments': list(load['arg']),
'Target': load['tgt'],
'Target-type': load['tgt_type']}
print(yaml.dump(ret))
return ret
| 31.158333 | 83 | 0.543193 |
7943b33eb4380ae0f7977a68eaef8facfa66e0c0 | 7,223 | py | Python | rtg/decode_pro.py | XuezheMax/rtg | a4bfc81dc1874c6f43765eb588d1026a2296aa2f | [
"Apache-2.0"
] | 15 | 2019-06-28T21:22:46.000Z | 2022-02-03T06:36:43.000Z | rtg/decode_pro.py | XuezheMax/rtg | a4bfc81dc1874c6f43765eb588d1026a2296aa2f | [
"Apache-2.0"
] | 23 | 2019-06-14T19:12:26.000Z | 2022-03-15T23:22:14.000Z | rtg/decode_pro.py | XuezheMax/rtg | a4bfc81dc1874c6f43765eb588d1026a2296aa2f | [
"Apache-2.0"
] | 8 | 2019-06-11T19:03:39.000Z | 2022-01-09T06:58:23.000Z | # CLI interface to decode task
import argparse
import sys
from argparse import ArgumentDefaultsHelpFormatter as ArgFormatter
import torch
from pathlib import Path
from rtg import TranslationExperiment as Experiment, log, yaml
from rtg.module.decoder import Decoder, ReloadEvent
from rtg.utils import IO
def parse_args():
parser = argparse.ArgumentParser(prog="rtg.decode", description="Decode using NMT model",
formatter_class=ArgFormatter)
parser.add_argument("work_dir", help="Working directory", type=str)
parser.add_argument("model_path", type=str, nargs='*',
help="Path to model's checkpoint. "
"If not specified, a best model (based on the score on validation set)"
" from the experiment directory will be used."
" If multiple paths are specified, then an ensembling is performed by"
" averaging the param weights")
parser.add_argument("-if", '--input', default=sys.stdin,
type=argparse.FileType('r', encoding='utf-8', errors='ignore'),
help='Input file path. default is STDIN')
parser.add_argument("-of", '--output', default=sys.stdout,
type=argparse.FileType('w', encoding='utf-8', errors='ignore'),
help='Output File path. default is STDOUT')
parser.add_argument("-bs", '--beam-size', type=int, default=5,
help='Beam size. beam_size=1 is greedy, '
'In theory: higher beam is better approximation but expensive. '
                             'But in practice, a higher beam does not always help.')
parser.add_argument("-bc", '--batch-size', type=int, default=1,
help='Number of source tokens in a batch, approximately. '
                             'tries to fit in at least one sentence => so even if you set 0 or 1, '
                             'there will be at least one sentence in the batch. '
                             'One sentence seems better on CPU but a larger number is better on GPUs')
parser.add_argument("-lp", '--lp-alpha', type=float, default=0.6,
help='Length penalty alpha. to disable set <= 0.0 '
'Ideally in the range [0.0, 1.0] but you are allowed to '
'experiment beyond > 1.0 but not less than 0.0')
parser.add_argument("-ml", '--max-len', type=int, default=60,
help='Maximum output sequence length. '
'Example: if max_len=10 and if source_len is 50, '
'then decoder goes up to 50+10 time steps in search of EOS token.')
parser.add_argument("-msl", '--max-src-len', type=int,
help='max source len; longer seqs will be truncated')
parser.add_argument("-nh", '--num-hyp', type=int, default=1,
help='Number of hypothesis to output. This should be smaller than beam_size')
parser.add_argument("--prepared", dest="prepared", action='store_true', default=None,
help='Each token is a valid integer which is an index to embedding,'
' so skip indexifying again')
parser.add_argument("-bp", '--binmt-path', type=str, default=None,
choices=['E1D1', 'E2D2', 'E1D2E2D1', 'E2D2E1D2', 'E1D2', 'E2D1'],
help='Sub module path inside BiNMT. applicable only when model is BiNMT')
parser.add_argument("-it", '--interactive', action='store_true',
help='Open interactive shell with decoder')
parser.add_argument("-sc", '--skip-check', action='store_true',
help='Skip Checking whether the experiment dir is prepared and trained')
parser.add_argument("-en", '--ensemble', type=int, default=1,
help='Ensemble best --ensemble models by averaging them')
parser.add_argument("-cb", '--sys-comb', type=Path,
help='System combine models at the softmax layer using the weights'
' specified in this file. When this argument is supplied, model_path '
'argument is ignored.')
args = vars(parser.parse_args())
return args
def validate_args(args, exp: Experiment):
if not args.pop('skip_check'): # if --skip-check is not requested
assert exp.has_prepared(), \
f'Experiment dir {exp.work_dir} is not ready to train. Please run "prep" sub task'
assert exp.has_trained(), \
f'Experiment dir {exp.work_dir} is not ready to decode.' \
f' Please run "train" sub task or --skip-check to ignore this'
weights_file = exp.work_dir / 'combo-weights.yml'
if not args.get('sys_comb') and weights_file.exists():
log.warning("Found default combo weights, switching to combo mode")
args['sys_comb'] = weights_file
if args.get("sys_comb"):
with IO.reader(args['sys_comb']) as fh:
weights = yaml.load(fh)['weights']
args['model_path'], args['weights'] = zip(*weights.items())
for model in args['model_path']:
assert Path(model).exists(), model
assert abs(sum(args['weights']) - 1) < 1e-3, \
f'Weights from --sys-comb file should sum to 1.0, given={args["weights"]}'
def main():
# No grads required
torch.set_grad_enabled(False)
args = parse_args()
gen_args = {}
exp = Experiment(args.pop('work_dir'), read_only=True)
validate_args(args, exp)
if exp.model_type == 'binmt':
        if not args.get('binmt_path'):
            raise Exception('--binmt-path argument is needed for BiNMT model.')
gen_args['path'] = args.pop('binmt_path')
weights = args.get('weights')
if weights:
decoder = Decoder.combo_new(exp, model_paths=args.pop('model_path'),
weights=weights)
else:
decoder = Decoder.new(exp, gen_args=gen_args, model_paths=args.pop('model_path', None),
ensemble=args.pop('ensemble', 1))
if args.pop('interactive'):
if weights:
log.warning("Interactive shell not reloadable for combo mode. FIXME: TODO:")
if args['input'] != sys.stdin or args['output'] != sys.stdout:
log.warning('--input and --output args are not applicable in --interactive mode')
args.pop('input')
args.pop('output')
while True:
try:
                # a hacky way to unload and reload the model when the user tries to switch models
decoder.decode_interactive(**args)
break # exit loop if there is no request for reload
except ReloadEvent as re:
decoder = Decoder.new(exp, gen_args=gen_args, model_paths=re.model_paths)
args = re.state
# go back to loop and redo interactive shell
else:
return decoder.decode_file(args.pop('input'), args.pop('output'), **args)
if __name__ == '__main__':
main()
| 52.34058 | 101 | 0.581476 |
7943b39fea167258135ce86450f8c8157613d9ff | 1,832 | py | Python | scripts/05_modules/colorchooser/colorswatch_creatematerials_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 85 | 2019-09-06T22:53:15.000Z | 2022-03-27T01:33:09.000Z | scripts/05_modules/colorchooser/colorswatch_creatematerials_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 11 | 2019-09-03T22:59:19.000Z | 2022-02-27T03:42:52.000Z | scripts/05_modules/colorchooser/colorswatch_creatematerials_r19.py | PluginCafe/cinema4d_py_sdk_extended | aea195b47c15e1c94443292e489afe6779b68550 | [
"Apache-2.0"
] | 31 | 2019-09-09T09:35:35.000Z | 2022-03-28T09:08:47.000Z | """
Copyright: MAXON Computer GmbH
Author: Maxime Adam
Description:
    - Reads all the colors from the first swatch group in the active document and creates a material for each one.
Class/method highlighted:
- c4d.modules.colorchooser.SwatchData
- c4d.modules.colorchooser.SwatchGroup
"""
import c4d
def main():
# Creates a swatch data
swatchData = c4d.modules.colorchooser.ColorSwatchData()
if swatchData is None:
raise MemoryError("Failed to create a ColorSwatchData.")
# Loads the swatches data from the active document
if not swatchData.Load(doc):
raise RuntimeError("Failed to load the ColorSwatchData.")
# Makes sure the document contains at least a swatch group
if swatchData.GetGroupCount(c4d.SWATCH_CATEGORY_DOCUMENT) == 0:
raise RuntimeError("There is no color swatch stored in the document.")
# Retrieves the first swatch group
group = swatchData.GetGroupAtIndex(0, c4d.SWATCH_CATEGORY_DOCUMENT)
if group is None:
raise RuntimeError("Failed to retrieve the first Group of the color swatch.")
groupName = group.GetName()
colorCount = group.GetColorCount()
for colorIndex in range(colorCount):
# Gets the current color
color = group.GetColor(colorIndex)[0]
# Creates a material for the current color
mat = c4d.BaseMaterial(c4d.Mmaterial)
# Sets the name with the group name and color index
mat.SetName(groupName + str(colorIndex))
# Converts maxon.ColorA to c4d.Vector to set the material color
mat[c4d.MATERIAL_COLOR_COLOR] = c4d.Vector(color.r, color.g, color.b)
# Inserts the material into the active document
doc.InsertMaterial(mat)
# Pushes an update event to Cinema 4D
c4d.EventAdd()
if __name__ == '__main__':
main()
| 31.050847 | 112 | 0.702511 |
7943b55ebcd4a7581b260d2ae0be079fff46cbb2 | 1,006 | py | Python | models/bare/conv4.py | sdamadi/image-classification | 2f38dc3c64c733f1ce820d09fe9d5f6dbf988f97 | [
"MIT"
] | 3 | 2021-12-13T13:05:58.000Z | 2022-03-23T09:14:13.000Z | models/bare/conv4.py | sdamadi/image-classification | 2f38dc3c64c733f1ce820d09fe9d5f6dbf988f97 | [
"MIT"
] | null | null | null | models/bare/conv4.py | sdamadi/image-classification | 2f38dc3c64c733f1ce820d09fe9d5f6dbf988f97 | [
"MIT"
] | 1 | 2021-12-31T04:00:45.000Z | 2021-12-31T04:00:45.000Z | import torch
import torch.nn as nn
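# Conv4: a small VGG-style network with two pairs of 3x3 convolutions (64 and then
# 128 output channels), each pair followed by 2x2 max pooling, and three fully
# connected layers (256 -> 256 -> num_classes) on the flattened features.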
class Conv4(nn.Module):
def __init__(self, in_ch, imgsz, num_classes=10):
super(Conv4, self).__init__()
self.conv1 = nn.Conv2d(in_ch, 64, kernel_size=(3, 3), stride=1, padding=1)
self.conv2 = nn.Conv2d(64, 64, kernel_size=(3, 3), stride=1, padding=1)
self.conv3 = nn.Conv2d(64, 128, kernel_size=(3, 3), stride=1, padding=1)
self.conv4 = nn.Conv2d(128, 128, kernel_size=(3, 3), stride=1, padding=1)
self.maxpool = nn.MaxPool2d(kernel_size=2)
self.relu = nn.ReLU(inplace=True)
self.fc1 = nn.Linear(128*(imgsz//4)*(imgsz//4), 256)
self.fc2 = nn.Linear(256, 256)
self.fc3 = nn.Linear(256, num_classes)
def forward(self, x):
x = self.relu(self.conv1(x))
x = self.relu(self.conv2(x))
x = self.maxpool(x)
x = self.relu(self.conv3(x))
x = self.relu(self.conv4(x))
x = self.maxpool(x)
x = x.view( x.size(0), -1)
x = self.relu(self.fc1(x))
x = self.relu(self.fc2(x))
x = self.fc3(x)
return x | 35.928571 | 78 | 0.626243 |
7943b59a44a70785844a266fa6cdc7373bff767e | 117,318 | py | Python | tests/migrations/test_operations.py | bak1an/django | 98bcc5d81bca578f3a5b4d47907ba4ac40446887 | [
"PSF-2.0",
"BSD-3-Clause"
] | 1 | 2018-09-22T20:35:14.000Z | 2018-09-22T20:35:14.000Z | tests/migrations/test_operations.py | seanfagan/django | 66bbde6819586cc3a75630e12e569dc8ae72f211 | [
"PSF-2.0",
"BSD-3-Clause"
] | null | null | null | tests/migrations/test_operations.py | seanfagan/django | 66bbde6819586cc3a75630e12e569dc8ae72f211 | [
"PSF-2.0",
"BSD-3-Clause"
] | 1 | 2017-02-28T17:05:19.000Z | 2017-02-28T17:05:19.000Z | import unittest
from django.db import connection, migrations, models, transaction
from django.db.migrations.migration import Migration
from django.db.migrations.operations import CreateModel
from django.db.migrations.state import ModelState, ProjectState
from django.db.models.fields import NOT_PROVIDED
from django.db.transaction import atomic
from django.db.utils import IntegrityError
from django.test import SimpleTestCase, override_settings, skipUnlessDBFeature
from .models import FoodManager, FoodQuerySet, UnicodeModel
from .test_base import MigrationTestBase
try:
import sqlparse
except ImportError:
sqlparse = None
class Mixin:
pass
class OperationTestBase(MigrationTestBase):
"""
Common functions to help test operations.
"""
def apply_operations(self, app_label, project_state, operations):
migration = Migration('name', app_label)
migration.operations = operations
with connection.schema_editor() as editor:
return migration.apply(project_state, editor)
def unapply_operations(self, app_label, project_state, operations):
migration = Migration('name', app_label)
migration.operations = operations
with connection.schema_editor() as editor:
return migration.unapply(project_state, editor)
def make_test_state(self, app_label, operation, **kwargs):
"""
Makes a test state using set_up_test_model and returns the
original state and the state after the migration is applied.
"""
project_state = self.set_up_test_model(app_label, **kwargs)
new_state = project_state.clone()
operation.state_forwards(app_label, new_state)
return project_state, new_state
def set_up_test_model(
self, app_label, second_model=False, third_model=False, index=False, multicol_index=False,
related_model=False, mti_model=False, proxy_model=False, manager_model=False,
unique_together=False, options=False, db_table=None, index_together=False):
"""
Creates a test model state and database table.
"""
# Delete the tables if they already exist
table_names = [
# Start with ManyToMany tables
'_pony_stables', '_pony_vans',
# Then standard model tables
'_pony', '_stable', '_van',
]
tables = [(app_label + table_name) for table_name in table_names]
with connection.cursor() as cursor:
table_names = connection.introspection.table_names(cursor)
connection.disable_constraint_checking()
sql_delete_table = connection.schema_editor().sql_delete_table
with transaction.atomic():
for table in tables:
if table in table_names:
cursor.execute(sql_delete_table % {
"table": connection.ops.quote_name(table),
})
connection.enable_constraint_checking()
# Make the "current" state
model_options = {
"swappable": "TEST_SWAP_MODEL",
"index_together": [["weight", "pink"]] if index_together else [],
"unique_together": [["pink", "weight"]] if unique_together else [],
}
if options:
model_options["permissions"] = [("can_groom", "Can groom")]
if db_table:
model_options["db_table"] = db_table
operations = [migrations.CreateModel(
"Pony",
[
("id", models.AutoField(primary_key=True)),
("pink", models.IntegerField(default=3)),
("weight", models.FloatField()),
],
options=model_options,
)]
if index:
operations.append(migrations.AddIndex(
"Pony",
models.Index(fields=["pink"], name="pony_pink_idx")
))
if multicol_index:
operations.append(migrations.AddIndex(
"Pony",
models.Index(fields=["pink", "weight"], name="pony_test_idx")
))
if second_model:
operations.append(migrations.CreateModel(
"Stable",
[
("id", models.AutoField(primary_key=True)),
]
))
if third_model:
operations.append(migrations.CreateModel(
"Van",
[
("id", models.AutoField(primary_key=True)),
]
))
if related_model:
operations.append(migrations.CreateModel(
"Rider",
[
("id", models.AutoField(primary_key=True)),
("pony", models.ForeignKey("Pony", models.CASCADE)),
("friend", models.ForeignKey("self", models.CASCADE))
],
))
if mti_model:
operations.append(migrations.CreateModel(
"ShetlandPony",
fields=[
('pony_ptr', models.OneToOneField(
'Pony',
models.CASCADE,
auto_created=True,
parent_link=True,
primary_key=True,
to_field='id',
serialize=False,
)),
("cuteness", models.IntegerField(default=1)),
],
bases=['%s.Pony' % app_label],
))
if proxy_model:
operations.append(migrations.CreateModel(
"ProxyPony",
fields=[],
options={"proxy": True},
bases=['%s.Pony' % app_label],
))
if manager_model:
operations.append(migrations.CreateModel(
"Food",
fields=[
("id", models.AutoField(primary_key=True)),
],
managers=[
("food_qs", FoodQuerySet.as_manager()),
("food_mgr", FoodManager("a", "b")),
("food_mgr_kwargs", FoodManager("x", "y", 3, 4)),
]
))
return self.apply_operations(app_label, ProjectState(), operations)
class OperationTests(OperationTestBase):
"""
Tests running the operations and making sure they do what they say they do.
    Each test looks at its state changes, and then its database operation -
both forwards and backwards.
"""
def test_create_model(self):
"""
Tests the CreateModel operation.
Most other tests use this operation as part of setup, so check failures here first.
"""
operation = migrations.CreateModel(
"Pony",
[
("id", models.AutoField(primary_key=True)),
("pink", models.IntegerField(default=1)),
],
)
self.assertEqual(operation.describe(), "Create model Pony")
# Test the state alteration
project_state = ProjectState()
new_state = project_state.clone()
operation.state_forwards("test_crmo", new_state)
self.assertEqual(new_state.models["test_crmo", "pony"].name, "Pony")
self.assertEqual(len(new_state.models["test_crmo", "pony"].fields), 2)
# Test the database alteration
self.assertTableNotExists("test_crmo_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_crmo", editor, project_state, new_state)
self.assertTableExists("test_crmo_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crmo", editor, new_state, project_state)
self.assertTableNotExists("test_crmo_pony")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "CreateModel")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2].keys()), ["fields", "name"])
# And default manager not in set
operation = migrations.CreateModel("Foo", fields=[], managers=[("objects", models.Manager())])
definition = operation.deconstruct()
self.assertNotIn('managers', definition[2])
def test_create_model_with_duplicate_field_name(self):
with self.assertRaisesMessage(ValueError, 'Found duplicate value pink in CreateModel fields argument.'):
migrations.CreateModel(
"Pony",
[
("id", models.AutoField(primary_key=True)),
("pink", models.TextField()),
("pink", models.IntegerField(default=1)),
],
)
def test_create_model_with_duplicate_base(self):
message = 'Found duplicate value test_crmo.pony in CreateModel bases argument.'
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=("test_crmo.Pony", "test_crmo.Pony",),
)
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=("test_crmo.Pony", "test_crmo.pony",),
)
message = 'Found duplicate value migrations.unicodemodel in CreateModel bases argument.'
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=(UnicodeModel, UnicodeModel,),
)
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=(UnicodeModel, 'migrations.unicodemodel',),
)
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=(UnicodeModel, 'migrations.UnicodeModel',),
)
message = "Found duplicate value <class 'django.db.models.base.Model'> in CreateModel bases argument."
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=(models.Model, models.Model,),
)
message = "Found duplicate value <class 'migrations.test_operations.Mixin'> in CreateModel bases argument."
with self.assertRaisesMessage(ValueError, message):
migrations.CreateModel(
"Pony",
fields=[],
bases=(Mixin, Mixin,),
)
def test_create_model_with_duplicate_manager_name(self):
with self.assertRaisesMessage(ValueError, 'Found duplicate value objects in CreateModel managers argument.'):
migrations.CreateModel(
"Pony",
fields=[],
managers=[
("objects", models.Manager()),
("objects", models.Manager()),
],
)
def test_create_model_with_unique_after(self):
"""
Tests the CreateModel operation directly followed by an
AlterUniqueTogether (bug #22844 - sqlite remake issues)
"""
operation1 = migrations.CreateModel(
"Pony",
[
("id", models.AutoField(primary_key=True)),
("pink", models.IntegerField(default=1)),
],
)
operation2 = migrations.CreateModel(
"Rider",
[
("id", models.AutoField(primary_key=True)),
("number", models.IntegerField(default=1)),
("pony", models.ForeignKey("test_crmoua.Pony", models.CASCADE)),
],
)
operation3 = migrations.AlterUniqueTogether(
"Rider",
[
("number", "pony"),
],
)
# Test the database alteration
project_state = ProjectState()
self.assertTableNotExists("test_crmoua_pony")
self.assertTableNotExists("test_crmoua_rider")
with connection.schema_editor() as editor:
new_state = project_state.clone()
operation1.state_forwards("test_crmoua", new_state)
operation1.database_forwards("test_crmoua", editor, project_state, new_state)
project_state, new_state = new_state, new_state.clone()
operation2.state_forwards("test_crmoua", new_state)
operation2.database_forwards("test_crmoua", editor, project_state, new_state)
project_state, new_state = new_state, new_state.clone()
operation3.state_forwards("test_crmoua", new_state)
operation3.database_forwards("test_crmoua", editor, project_state, new_state)
self.assertTableExists("test_crmoua_pony")
self.assertTableExists("test_crmoua_rider")
def test_create_model_m2m(self):
"""
Test the creation of a model with a ManyToMany field and the
auto-created "through" model.
"""
project_state = self.set_up_test_model("test_crmomm")
operation = migrations.CreateModel(
"Stable",
[
("id", models.AutoField(primary_key=True)),
("ponies", models.ManyToManyField("Pony", related_name="stables"))
]
)
# Test the state alteration
new_state = project_state.clone()
operation.state_forwards("test_crmomm", new_state)
# Test the database alteration
self.assertTableNotExists("test_crmomm_stable_ponies")
with connection.schema_editor() as editor:
operation.database_forwards("test_crmomm", editor, project_state, new_state)
self.assertTableExists("test_crmomm_stable")
self.assertTableExists("test_crmomm_stable_ponies")
self.assertColumnNotExists("test_crmomm_stable", "ponies")
# Make sure the M2M field actually works
with atomic():
Pony = new_state.apps.get_model("test_crmomm", "Pony")
Stable = new_state.apps.get_model("test_crmomm", "Stable")
stable = Stable.objects.create()
p1 = Pony.objects.create(pink=False, weight=4.55)
p2 = Pony.objects.create(pink=True, weight=5.43)
stable.ponies.add(p1, p2)
self.assertEqual(stable.ponies.count(), 2)
stable.ponies.all().delete()
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crmomm", editor, new_state, project_state)
self.assertTableNotExists("test_crmomm_stable")
self.assertTableNotExists("test_crmomm_stable_ponies")
def test_create_model_inheritance(self):
"""
Tests the CreateModel operation on a multi-table inheritance setup.
"""
project_state = self.set_up_test_model("test_crmoih")
# Test the state alteration
operation = migrations.CreateModel(
"ShetlandPony",
[
('pony_ptr', models.OneToOneField(
'test_crmoih.Pony',
models.CASCADE,
auto_created=True,
primary_key=True,
to_field='id',
serialize=False,
)),
("cuteness", models.IntegerField(default=1)),
],
)
new_state = project_state.clone()
operation.state_forwards("test_crmoih", new_state)
self.assertIn(("test_crmoih", "shetlandpony"), new_state.models)
# Test the database alteration
self.assertTableNotExists("test_crmoih_shetlandpony")
with connection.schema_editor() as editor:
operation.database_forwards("test_crmoih", editor, project_state, new_state)
self.assertTableExists("test_crmoih_shetlandpony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crmoih", editor, new_state, project_state)
self.assertTableNotExists("test_crmoih_shetlandpony")
def test_create_proxy_model(self):
"""
CreateModel ignores proxy models.
"""
project_state = self.set_up_test_model("test_crprmo")
# Test the state alteration
operation = migrations.CreateModel(
"ProxyPony",
[],
options={"proxy": True},
bases=("test_crprmo.Pony", ),
)
self.assertEqual(operation.describe(), "Create proxy model ProxyPony")
new_state = project_state.clone()
operation.state_forwards("test_crprmo", new_state)
self.assertIn(("test_crprmo", "proxypony"), new_state.models)
# Test the database alteration
self.assertTableNotExists("test_crprmo_proxypony")
self.assertTableExists("test_crprmo_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_crprmo", editor, project_state, new_state)
self.assertTableNotExists("test_crprmo_proxypony")
self.assertTableExists("test_crprmo_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crprmo", editor, new_state, project_state)
self.assertTableNotExists("test_crprmo_proxypony")
self.assertTableExists("test_crprmo_pony")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "CreateModel")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2].keys()), ["bases", "fields", "name", "options"])
def test_create_unmanaged_model(self):
"""
CreateModel ignores unmanaged models.
"""
project_state = self.set_up_test_model("test_crummo")
# Test the state alteration
operation = migrations.CreateModel(
"UnmanagedPony",
[],
options={"proxy": True},
bases=("test_crummo.Pony", ),
)
self.assertEqual(operation.describe(), "Create proxy model UnmanagedPony")
new_state = project_state.clone()
operation.state_forwards("test_crummo", new_state)
self.assertIn(("test_crummo", "unmanagedpony"), new_state.models)
# Test the database alteration
self.assertTableNotExists("test_crummo_unmanagedpony")
self.assertTableExists("test_crummo_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_crummo", editor, project_state, new_state)
self.assertTableNotExists("test_crummo_unmanagedpony")
self.assertTableExists("test_crummo_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crummo", editor, new_state, project_state)
self.assertTableNotExists("test_crummo_unmanagedpony")
self.assertTableExists("test_crummo_pony")
def test_create_model_managers(self):
"""
        The managers on a model are set by CreateModel.
"""
project_state = self.set_up_test_model("test_cmoma")
# Test the state alteration
operation = migrations.CreateModel(
"Food",
fields=[
("id", models.AutoField(primary_key=True)),
],
managers=[
("food_qs", FoodQuerySet.as_manager()),
("food_mgr", FoodManager("a", "b")),
("food_mgr_kwargs", FoodManager("x", "y", 3, 4)),
]
)
self.assertEqual(operation.describe(), "Create model Food")
new_state = project_state.clone()
operation.state_forwards("test_cmoma", new_state)
self.assertIn(("test_cmoma", "food"), new_state.models)
managers = new_state.models["test_cmoma", "food"].managers
self.assertEqual(managers[0][0], "food_qs")
self.assertIsInstance(managers[0][1], models.Manager)
self.assertEqual(managers[1][0], "food_mgr")
self.assertIsInstance(managers[1][1], FoodManager)
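        # FoodManager("a", "b") records its args; the trailing 1, 2 presumably
        # come from the manager's default arguments.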
self.assertEqual(managers[1][1].args, ("a", "b", 1, 2))
self.assertEqual(managers[2][0], "food_mgr_kwargs")
self.assertIsInstance(managers[2][1], FoodManager)
self.assertEqual(managers[2][1].args, ("x", "y", 3, 4))
def test_delete_model(self):
"""
Tests the DeleteModel operation.
"""
project_state = self.set_up_test_model("test_dlmo")
# Test the state alteration
operation = migrations.DeleteModel("Pony")
self.assertEqual(operation.describe(), "Delete model Pony")
new_state = project_state.clone()
operation.state_forwards("test_dlmo", new_state)
self.assertNotIn(("test_dlmo", "pony"), new_state.models)
# Test the database alteration
self.assertTableExists("test_dlmo_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_dlmo", editor, project_state, new_state)
self.assertTableNotExists("test_dlmo_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_dlmo", editor, new_state, project_state)
self.assertTableExists("test_dlmo_pony")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "DeleteModel")
self.assertEqual(definition[1], [])
self.assertEqual(list(definition[2]), ["name"])
def test_delete_proxy_model(self):
"""
        Tests that the DeleteModel operation ignores proxy models.
"""
project_state = self.set_up_test_model("test_dlprmo", proxy_model=True)
# Test the state alteration
operation = migrations.DeleteModel("ProxyPony")
new_state = project_state.clone()
operation.state_forwards("test_dlprmo", new_state)
self.assertIn(("test_dlprmo", "proxypony"), project_state.models)
self.assertNotIn(("test_dlprmo", "proxypony"), new_state.models)
# Test the database alteration
self.assertTableExists("test_dlprmo_pony")
self.assertTableNotExists("test_dlprmo_proxypony")
with connection.schema_editor() as editor:
operation.database_forwards("test_dlprmo", editor, project_state, new_state)
self.assertTableExists("test_dlprmo_pony")
self.assertTableNotExists("test_dlprmo_proxypony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_dlprmo", editor, new_state, project_state)
self.assertTableExists("test_dlprmo_pony")
self.assertTableNotExists("test_dlprmo_proxypony")
def test_rename_model(self):
"""
Tests the RenameModel operation.
"""
project_state = self.set_up_test_model("test_rnmo", related_model=True)
# Test the state alteration
operation = migrations.RenameModel("Pony", "Horse")
self.assertEqual(operation.describe(), "Rename model Pony to Horse")
# Test initial state and database
self.assertIn(("test_rnmo", "pony"), project_state.models)
self.assertNotIn(("test_rnmo", "horse"), project_state.models)
self.assertTableExists("test_rnmo_pony")
self.assertTableNotExists("test_rnmo_horse")
if connection.features.supports_foreign_keys:
self.assertFKExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_pony", "id"))
self.assertFKNotExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_horse", "id"))
# Migrate forwards
new_state = project_state.clone()
new_state = self.apply_operations("test_rnmo", new_state, [operation])
# Test new state and database
self.assertNotIn(("test_rnmo", "pony"), new_state.models)
self.assertIn(("test_rnmo", "horse"), new_state.models)
# RenameModel also repoints all incoming FKs and M2Ms
self.assertEqual("test_rnmo.Horse", new_state.models["test_rnmo", "rider"].fields[1][1].remote_field.model)
self.assertTableNotExists("test_rnmo_pony")
self.assertTableExists("test_rnmo_horse")
if connection.features.supports_foreign_keys:
self.assertFKNotExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_pony", "id"))
self.assertFKExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_horse", "id"))
# Migrate backwards
original_state = self.unapply_operations("test_rnmo", project_state, [operation])
# Test original state and database
self.assertIn(("test_rnmo", "pony"), original_state.models)
self.assertNotIn(("test_rnmo", "horse"), original_state.models)
self.assertEqual("Pony", original_state.models["test_rnmo", "rider"].fields[1][1].remote_field.model)
self.assertTableExists("test_rnmo_pony")
self.assertTableNotExists("test_rnmo_horse")
if connection.features.supports_foreign_keys:
self.assertFKExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_pony", "id"))
self.assertFKNotExists("test_rnmo_rider", ["pony_id"], ("test_rnmo_horse", "id"))
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RenameModel")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'old_name': "Pony", 'new_name': "Horse"})
def test_rename_model_state_forwards(self):
"""
        RenameModel operations shouldn't trigger the caching of rendered apps
        on a state that hasn't already rendered them.
"""
state = ProjectState()
state.add_model(ModelState('migrations', 'Foo', []))
operation = migrations.RenameModel('Foo', 'Bar')
operation.state_forwards('migrations', state)
self.assertNotIn('apps', state.__dict__)
self.assertNotIn(('migrations', 'foo'), state.models)
self.assertIn(('migrations', 'bar'), state.models)
# Now with apps cached.
apps = state.apps
operation = migrations.RenameModel('Bar', 'Foo')
operation.state_forwards('migrations', state)
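        # With apps already rendered, the cached registry object is reused
        # rather than rebuilt from scratch.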
self.assertIs(state.apps, apps)
self.assertNotIn(('migrations', 'bar'), state.models)
self.assertIn(('migrations', 'foo'), state.models)
def test_rename_model_with_self_referential_fk(self):
"""
        Tests the RenameModel operation on a model with a self-referential FK.
"""
project_state = self.set_up_test_model("test_rmwsrf", related_model=True)
# Test the state alteration
operation = migrations.RenameModel("Rider", "HorseRider")
self.assertEqual(operation.describe(), "Rename model Rider to HorseRider")
new_state = project_state.clone()
operation.state_forwards("test_rmwsrf", new_state)
self.assertNotIn(("test_rmwsrf", "rider"), new_state.models)
self.assertIn(("test_rmwsrf", "horserider"), new_state.models)
# Remember, RenameModel also repoints all incoming FKs and M2Ms
self.assertEqual(
'self',
new_state.models["test_rmwsrf", "horserider"].fields[2][1].remote_field.model
)
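        # On the rendered model, the self-referential relation now resolves to the renamed class.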
HorseRider = new_state.apps.get_model('test_rmwsrf', 'horserider')
self.assertIs(HorseRider._meta.get_field('horserider').remote_field.model, HorseRider)
# Test the database alteration
self.assertTableExists("test_rmwsrf_rider")
self.assertTableNotExists("test_rmwsrf_horserider")
if connection.features.supports_foreign_keys:
self.assertFKExists("test_rmwsrf_rider", ["friend_id"], ("test_rmwsrf_rider", "id"))
self.assertFKNotExists("test_rmwsrf_rider", ["friend_id"], ("test_rmwsrf_horserider", "id"))
with connection.schema_editor() as editor:
operation.database_forwards("test_rmwsrf", editor, project_state, new_state)
self.assertTableNotExists("test_rmwsrf_rider")
self.assertTableExists("test_rmwsrf_horserider")
if connection.features.supports_foreign_keys:
self.assertFKNotExists("test_rmwsrf_horserider", ["friend_id"], ("test_rmwsrf_rider", "id"))
self.assertFKExists("test_rmwsrf_horserider", ["friend_id"], ("test_rmwsrf_horserider", "id"))
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_rmwsrf", editor, new_state, project_state)
self.assertTableExists("test_rmwsrf_rider")
self.assertTableNotExists("test_rmwsrf_horserider")
if connection.features.supports_foreign_keys:
self.assertFKExists("test_rmwsrf_rider", ["friend_id"], ("test_rmwsrf_rider", "id"))
self.assertFKNotExists("test_rmwsrf_rider", ["friend_id"], ("test_rmwsrf_horserider", "id"))
def test_rename_model_with_superclass_fk(self):
"""
Tests the RenameModel operation on a model which has a superclass that
has a foreign key.
"""
project_state = self.set_up_test_model("test_rmwsc", related_model=True, mti_model=True)
# Test the state alteration
operation = migrations.RenameModel("ShetlandPony", "LittleHorse")
self.assertEqual(operation.describe(), "Rename model ShetlandPony to LittleHorse")
new_state = project_state.clone()
operation.state_forwards("test_rmwsc", new_state)
self.assertNotIn(("test_rmwsc", "shetlandpony"), new_state.models)
self.assertIn(("test_rmwsc", "littlehorse"), new_state.models)
# RenameModel shouldn't repoint the superclass's relations, only local ones
self.assertEqual(
project_state.models["test_rmwsc", "rider"].fields[1][1].remote_field.model,
new_state.models["test_rmwsc", "rider"].fields[1][1].remote_field.model
)
# Before running the migration we have a table for Shetland Pony, not Little Horse
self.assertTableExists("test_rmwsc_shetlandpony")
self.assertTableNotExists("test_rmwsc_littlehorse")
if connection.features.supports_foreign_keys:
# and the foreign key on rider points to pony, not shetland pony
self.assertFKExists("test_rmwsc_rider", ["pony_id"], ("test_rmwsc_pony", "id"))
self.assertFKNotExists("test_rmwsc_rider", ["pony_id"], ("test_rmwsc_shetlandpony", "id"))
with connection.schema_editor() as editor:
operation.database_forwards("test_rmwsc", editor, project_state, new_state)
# Now we have a little horse table, not shetland pony
self.assertTableNotExists("test_rmwsc_shetlandpony")
self.assertTableExists("test_rmwsc_littlehorse")
if connection.features.supports_foreign_keys:
# but the Foreign keys still point at pony, not little horse
self.assertFKExists("test_rmwsc_rider", ["pony_id"], ("test_rmwsc_pony", "id"))
self.assertFKNotExists("test_rmwsc_rider", ["pony_id"], ("test_rmwsc_littlehorse", "id"))
def test_rename_model_with_self_referential_m2m(self):
app_label = "test_rename_model_with_self_referential_m2m"
project_state = self.apply_operations(app_label, ProjectState(), operations=[
migrations.CreateModel("ReflexivePony", fields=[
("id", models.AutoField(primary_key=True)),
("ponies", models.ManyToManyField("self")),
]),
])
project_state = self.apply_operations(app_label, project_state, operations=[
migrations.RenameModel("ReflexivePony", "ReflexivePony2"),
])
Pony = project_state.apps.get_model(app_label, "ReflexivePony2")
pony = Pony.objects.create()
pony.ponies.add(pony)
def test_rename_model_with_m2m(self):
app_label = "test_rename_model_with_m2m"
project_state = self.apply_operations(app_label, ProjectState(), operations=[
migrations.CreateModel("Rider", fields=[
("id", models.AutoField(primary_key=True)),
]),
migrations.CreateModel("Pony", fields=[
("id", models.AutoField(primary_key=True)),
("riders", models.ManyToManyField("Rider")),
]),
])
Pony = project_state.apps.get_model(app_label, "Pony")
Rider = project_state.apps.get_model(app_label, "Rider")
pony = Pony.objects.create()
rider = Rider.objects.create()
pony.riders.add(rider)
project_state = self.apply_operations(app_label, project_state, operations=[
migrations.RenameModel("Pony", "Pony2"),
])
Pony = project_state.apps.get_model(app_label, "Pony2")
Rider = project_state.apps.get_model(app_label, "Rider")
pony = Pony.objects.create()
rider = Rider.objects.create()
pony.riders.add(rider)
self.assertEqual(Pony.objects.count(), 2)
self.assertEqual(Rider.objects.count(), 2)
self.assertEqual(Pony._meta.get_field('riders').remote_field.through.objects.count(), 2)
def test_rename_m2m_target_model(self):
app_label = "test_rename_m2m_target_model"
project_state = self.apply_operations(app_label, ProjectState(), operations=[
migrations.CreateModel("Rider", fields=[
("id", models.AutoField(primary_key=True)),
]),
migrations.CreateModel("Pony", fields=[
("id", models.AutoField(primary_key=True)),
("riders", models.ManyToManyField("Rider")),
]),
])
Pony = project_state.apps.get_model(app_label, "Pony")
Rider = project_state.apps.get_model(app_label, "Rider")
pony = Pony.objects.create()
rider = Rider.objects.create()
pony.riders.add(rider)
project_state = self.apply_operations(app_label, project_state, operations=[
migrations.RenameModel("Rider", "Rider2"),
])
Pony = project_state.apps.get_model(app_label, "Pony")
Rider = project_state.apps.get_model(app_label, "Rider2")
pony = Pony.objects.create()
rider = Rider.objects.create()
pony.riders.add(rider)
self.assertEqual(Pony.objects.count(), 2)
self.assertEqual(Rider.objects.count(), 2)
self.assertEqual(Pony._meta.get_field('riders').remote_field.through.objects.count(), 2)
def test_rename_m2m_through_model(self):
app_label = "test_rename_through"
project_state = self.apply_operations(app_label, ProjectState(), operations=[
migrations.CreateModel("Rider", fields=[
("id", models.AutoField(primary_key=True)),
]),
migrations.CreateModel("Pony", fields=[
("id", models.AutoField(primary_key=True)),
]),
migrations.CreateModel("PonyRider", fields=[
("id", models.AutoField(primary_key=True)),
("rider", models.ForeignKey("test_rename_through.Rider", models.CASCADE)),
("pony", models.ForeignKey("test_rename_through.Pony", models.CASCADE)),
]),
migrations.AddField(
"Pony",
"riders",
models.ManyToManyField("test_rename_through.Rider", through="test_rename_through.PonyRider"),
),
])
Pony = project_state.apps.get_model(app_label, "Pony")
Rider = project_state.apps.get_model(app_label, "Rider")
PonyRider = project_state.apps.get_model(app_label, "PonyRider")
pony = Pony.objects.create()
rider = Rider.objects.create()
PonyRider.objects.create(pony=pony, rider=rider)
project_state = self.apply_operations(app_label, project_state, operations=[
migrations.RenameModel("PonyRider", "PonyRider2"),
])
Pony = project_state.apps.get_model(app_label, "Pony")
Rider = project_state.apps.get_model(app_label, "Rider")
PonyRider = project_state.apps.get_model(app_label, "PonyRider2")
pony = Pony.objects.first()
rider = Rider.objects.create()
PonyRider.objects.create(pony=pony, rider=rider)
self.assertEqual(Pony.objects.count(), 1)
self.assertEqual(Rider.objects.count(), 2)
self.assertEqual(PonyRider.objects.count(), 2)
self.assertEqual(pony.riders.count(), 2)
def test_add_field(self):
"""
Tests the AddField operation.
"""
# Test the state alteration
operation = migrations.AddField(
"Pony",
"height",
models.FloatField(null=True, default=5),
)
self.assertEqual(operation.describe(), "Add field height to Pony")
project_state, new_state = self.make_test_state("test_adfl", operation)
self.assertEqual(len(new_state.models["test_adfl", "pony"].fields), 4)
field = [
f for n, f in new_state.models["test_adfl", "pony"].fields
if n == "height"
][0]
self.assertEqual(field.default, 5)
# Test the database alteration
self.assertColumnNotExists("test_adfl_pony", "height")
with connection.schema_editor() as editor:
operation.database_forwards("test_adfl", editor, project_state, new_state)
self.assertColumnExists("test_adfl_pony", "height")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_adfl", editor, new_state, project_state)
self.assertColumnNotExists("test_adfl_pony", "height")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AddField")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["field", "model_name", "name"])
def test_add_charfield(self):
"""
        Tests the AddField operation on CharField.
"""
project_state = self.set_up_test_model("test_adchfl")
Pony = project_state.apps.get_model("test_adchfl", "Pony")
pony = Pony.objects.create(weight=42)
new_state = self.apply_operations("test_adchfl", project_state, [
migrations.AddField(
"Pony",
"text",
models.CharField(max_length=10, default="some text"),
),
migrations.AddField(
"Pony",
"empty",
models.CharField(max_length=10, default=""),
),
            # If not properly quoted, digits would be interpreted as an int.
migrations.AddField(
"Pony",
"digits",
models.CharField(max_length=10, default="42"),
),
# Manual quoting is fragile and could trip on quotes. Refs #xyz.
migrations.AddField(
"Pony",
"quotes",
models.CharField(max_length=10, default='"\'"'),
),
])
Pony = new_state.apps.get_model("test_adchfl", "Pony")
pony = Pony.objects.get(pk=pony.pk)
self.assertEqual(pony.text, "some text")
self.assertEqual(pony.empty, "")
self.assertEqual(pony.digits, "42")
self.assertEqual(pony.quotes, '"\'"')
def test_add_textfield(self):
"""
Tests the AddField operation on TextField.
"""
project_state = self.set_up_test_model("test_adtxtfl")
Pony = project_state.apps.get_model("test_adtxtfl", "Pony")
pony = Pony.objects.create(weight=42)
new_state = self.apply_operations("test_adtxtfl", project_state, [
migrations.AddField(
"Pony",
"text",
models.TextField(default="some text"),
),
migrations.AddField(
"Pony",
"empty",
models.TextField(default=""),
),
            # If not properly quoted, digits would be interpreted as an int.
migrations.AddField(
"Pony",
"digits",
models.TextField(default="42"),
),
# Manual quoting is fragile and could trip on quotes. Refs #xyz.
migrations.AddField(
"Pony",
"quotes",
models.TextField(default='"\'"'),
),
])
Pony = new_state.apps.get_model("test_adtxtfl", "Pony")
pony = Pony.objects.get(pk=pony.pk)
self.assertEqual(pony.text, "some text")
self.assertEqual(pony.empty, "")
self.assertEqual(pony.digits, "42")
self.assertEqual(pony.quotes, '"\'"')
def test_add_binaryfield(self):
"""
        Tests the AddField operation on BinaryField.
"""
project_state = self.set_up_test_model("test_adbinfl")
Pony = project_state.apps.get_model("test_adbinfl", "Pony")
pony = Pony.objects.create(weight=42)
new_state = self.apply_operations("test_adbinfl", project_state, [
migrations.AddField(
"Pony",
"blob",
models.BinaryField(default=b"some text"),
),
migrations.AddField(
"Pony",
"empty",
models.BinaryField(default=b""),
),
            # If not properly quoted, digits would be interpreted as an int.
migrations.AddField(
"Pony",
"digits",
models.BinaryField(default=b"42"),
),
# Manual quoting is fragile and could trip on quotes. Refs #xyz.
migrations.AddField(
"Pony",
"quotes",
models.BinaryField(default=b'"\'"'),
),
])
Pony = new_state.apps.get_model("test_adbinfl", "Pony")
pony = Pony.objects.get(pk=pony.pk)
# SQLite returns buffer/memoryview, cast to bytes for checking.
self.assertEqual(bytes(pony.blob), b"some text")
self.assertEqual(bytes(pony.empty), b"")
self.assertEqual(bytes(pony.digits), b"42")
self.assertEqual(bytes(pony.quotes), b'"\'"')
def test_column_name_quoting(self):
"""
Column names that are SQL keywords shouldn't cause problems when used
in migrations (#22168).
"""
project_state = self.set_up_test_model("test_regr22168")
operation = migrations.AddField(
"Pony",
"order",
models.IntegerField(default=0),
)
new_state = project_state.clone()
operation.state_forwards("test_regr22168", new_state)
with connection.schema_editor() as editor:
operation.database_forwards("test_regr22168", editor, project_state, new_state)
self.assertColumnExists("test_regr22168_pony", "order")
def test_add_field_preserve_default(self):
"""
Tests the AddField operation's state alteration
when preserve_default = False.
"""
project_state = self.set_up_test_model("test_adflpd")
# Test the state alteration
operation = migrations.AddField(
"Pony",
"height",
models.FloatField(null=True, default=4),
preserve_default=False,
)
new_state = project_state.clone()
operation.state_forwards("test_adflpd", new_state)
self.assertEqual(len(new_state.models["test_adflpd", "pony"].fields), 4)
field = [
f for n, f in new_state.models["test_adflpd", "pony"].fields
if n == "height"
][0]
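        # preserve_default=False keeps the default only for populating existing
        # rows; it isn't stored in the resulting state.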
self.assertEqual(field.default, NOT_PROVIDED)
# Test the database alteration
project_state.apps.get_model("test_adflpd", "pony").objects.create(
weight=4,
)
self.assertColumnNotExists("test_adflpd_pony", "height")
with connection.schema_editor() as editor:
operation.database_forwards("test_adflpd", editor, project_state, new_state)
self.assertColumnExists("test_adflpd_pony", "height")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AddField")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["field", "model_name", "name", "preserve_default"])
def test_add_field_m2m(self):
"""
Tests the AddField operation with a ManyToManyField.
"""
project_state = self.set_up_test_model("test_adflmm", second_model=True)
# Test the state alteration
operation = migrations.AddField("Pony", "stables", models.ManyToManyField("Stable", related_name="ponies"))
new_state = project_state.clone()
operation.state_forwards("test_adflmm", new_state)
self.assertEqual(len(new_state.models["test_adflmm", "pony"].fields), 4)
# Test the database alteration
self.assertTableNotExists("test_adflmm_pony_stables")
with connection.schema_editor() as editor:
operation.database_forwards("test_adflmm", editor, project_state, new_state)
self.assertTableExists("test_adflmm_pony_stables")
self.assertColumnNotExists("test_adflmm_pony", "stables")
# Make sure the M2M field actually works
with atomic():
Pony = new_state.apps.get_model("test_adflmm", "Pony")
p = Pony.objects.create(pink=False, weight=4.55)
p.stables.create()
self.assertEqual(p.stables.count(), 1)
p.stables.all().delete()
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_adflmm", editor, new_state, project_state)
self.assertTableNotExists("test_adflmm_pony_stables")
def test_alter_field_m2m(self):
project_state = self.set_up_test_model("test_alflmm", second_model=True)
project_state = self.apply_operations("test_alflmm", project_state, operations=[
migrations.AddField("Pony", "stables", models.ManyToManyField("Stable", related_name="ponies"))
])
Pony = project_state.apps.get_model("test_alflmm", "Pony")
self.assertFalse(Pony._meta.get_field('stables').blank)
project_state = self.apply_operations("test_alflmm", project_state, operations=[
migrations.AlterField(
"Pony", "stables", models.ManyToManyField(to="Stable", related_name="ponies", blank=True)
)
])
Pony = project_state.apps.get_model("test_alflmm", "Pony")
self.assertTrue(Pony._meta.get_field('stables').blank)
def test_repoint_field_m2m(self):
project_state = self.set_up_test_model("test_alflmm", second_model=True, third_model=True)
project_state = self.apply_operations("test_alflmm", project_state, operations=[
migrations.AddField("Pony", "places", models.ManyToManyField("Stable", related_name="ponies"))
])
Pony = project_state.apps.get_model("test_alflmm", "Pony")
project_state = self.apply_operations("test_alflmm", project_state, operations=[
migrations.AlterField("Pony", "places", models.ManyToManyField(to="Van", related_name="ponies"))
])
# Ensure the new field actually works
Pony = project_state.apps.get_model("test_alflmm", "Pony")
p = Pony.objects.create(pink=False, weight=4.55)
p.places.create()
self.assertEqual(p.places.count(), 1)
p.places.all().delete()
def test_remove_field_m2m(self):
project_state = self.set_up_test_model("test_rmflmm", second_model=True)
project_state = self.apply_operations("test_rmflmm", project_state, operations=[
migrations.AddField("Pony", "stables", models.ManyToManyField("Stable", related_name="ponies"))
])
self.assertTableExists("test_rmflmm_pony_stables")
with_field_state = project_state.clone()
operations = [migrations.RemoveField("Pony", "stables")]
project_state = self.apply_operations("test_rmflmm", project_state, operations=operations)
self.assertTableNotExists("test_rmflmm_pony_stables")
# And test reversal
self.unapply_operations("test_rmflmm", with_field_state, operations=operations)
self.assertTableExists("test_rmflmm_pony_stables")
def test_remove_field_m2m_with_through(self):
project_state = self.set_up_test_model("test_rmflmmwt", second_model=True)
self.assertTableNotExists("test_rmflmmwt_ponystables")
project_state = self.apply_operations("test_rmflmmwt", project_state, operations=[
migrations.CreateModel("PonyStables", fields=[
("pony", models.ForeignKey('test_rmflmmwt.Pony', models.CASCADE)),
("stable", models.ForeignKey('test_rmflmmwt.Stable', models.CASCADE)),
]),
migrations.AddField(
"Pony", "stables",
models.ManyToManyField("Stable", related_name="ponies", through='test_rmflmmwt.PonyStables')
)
])
self.assertTableExists("test_rmflmmwt_ponystables")
operations = [migrations.RemoveField("Pony", "stables")]
self.apply_operations("test_rmflmmwt", project_state, operations=operations)
def test_remove_field(self):
"""
Tests the RemoveField operation.
"""
project_state = self.set_up_test_model("test_rmfl")
# Test the state alteration
operation = migrations.RemoveField("Pony", "pink")
self.assertEqual(operation.describe(), "Remove field pink from Pony")
new_state = project_state.clone()
operation.state_forwards("test_rmfl", new_state)
self.assertEqual(len(new_state.models["test_rmfl", "pony"].fields), 2)
# Test the database alteration
self.assertColumnExists("test_rmfl_pony", "pink")
with connection.schema_editor() as editor:
operation.database_forwards("test_rmfl", editor, project_state, new_state)
self.assertColumnNotExists("test_rmfl_pony", "pink")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_rmfl", editor, new_state, project_state)
self.assertColumnExists("test_rmfl_pony", "pink")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RemoveField")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'model_name': "Pony", 'name': 'pink'})
def test_remove_fk(self):
"""
Tests the RemoveField operation on a foreign key.
"""
project_state = self.set_up_test_model("test_rfk", related_model=True)
self.assertColumnExists("test_rfk_rider", "pony_id")
operation = migrations.RemoveField("Rider", "pony")
new_state = project_state.clone()
operation.state_forwards("test_rfk", new_state)
with connection.schema_editor() as editor:
operation.database_forwards("test_rfk", editor, project_state, new_state)
self.assertColumnNotExists("test_rfk_rider", "pony_id")
with connection.schema_editor() as editor:
operation.database_backwards("test_rfk", editor, new_state, project_state)
self.assertColumnExists("test_rfk_rider", "pony_id")
def test_alter_model_table(self):
"""
Tests the AlterModelTable operation.
"""
project_state = self.set_up_test_model("test_almota")
# Test the state alteration
operation = migrations.AlterModelTable("Pony", "test_almota_pony_2")
self.assertEqual(operation.describe(), "Rename table for Pony to test_almota_pony_2")
new_state = project_state.clone()
operation.state_forwards("test_almota", new_state)
self.assertEqual(new_state.models["test_almota", "pony"].options["db_table"], "test_almota_pony_2")
# Test the database alteration
self.assertTableExists("test_almota_pony")
self.assertTableNotExists("test_almota_pony_2")
with connection.schema_editor() as editor:
operation.database_forwards("test_almota", editor, project_state, new_state)
self.assertTableNotExists("test_almota_pony")
self.assertTableExists("test_almota_pony_2")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_almota", editor, new_state, project_state)
self.assertTableExists("test_almota_pony")
self.assertTableNotExists("test_almota_pony_2")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterModelTable")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Pony", 'table': "test_almota_pony_2"})
def test_alter_model_table_none(self):
"""
Tests the AlterModelTable operation if the table name is set to None.
"""
operation = migrations.AlterModelTable("Pony", None)
self.assertEqual(operation.describe(), "Rename table for Pony to (default)")
def test_alter_model_table_noop(self):
"""
Tests the AlterModelTable operation if the table name is not changed.
"""
project_state = self.set_up_test_model("test_almota")
# Test the state alteration
operation = migrations.AlterModelTable("Pony", "test_almota_pony")
new_state = project_state.clone()
operation.state_forwards("test_almota", new_state)
self.assertEqual(new_state.models["test_almota", "pony"].options["db_table"], "test_almota_pony")
# Test the database alteration
self.assertTableExists("test_almota_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_almota", editor, project_state, new_state)
self.assertTableExists("test_almota_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_almota", editor, new_state, project_state)
self.assertTableExists("test_almota_pony")
def test_alter_model_table_m2m(self):
"""
AlterModelTable should rename auto-generated M2M tables.
"""
app_label = "test_talflmltlm2m"
pony_db_table = 'pony_foo'
project_state = self.set_up_test_model(app_label, second_model=True, db_table=pony_db_table)
# Add the M2M field
first_state = project_state.clone()
operation = migrations.AddField("Pony", "stables", models.ManyToManyField("Stable"))
operation.state_forwards(app_label, first_state)
with connection.schema_editor() as editor:
operation.database_forwards(app_label, editor, project_state, first_state)
original_m2m_table = "%s_%s" % (pony_db_table, "stables")
new_m2m_table = "%s_%s" % (app_label, "pony_stables")
self.assertTableExists(original_m2m_table)
self.assertTableNotExists(new_m2m_table)
# Rename the Pony db_table which should also rename the m2m table.
second_state = first_state.clone()
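        # Resetting db_table to None falls back to the default "<app_label>_pony"
        # name, which should move the auto-created M2M table as well.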
operation = migrations.AlterModelTable(name='pony', table=None)
operation.state_forwards(app_label, second_state)
with connection.schema_editor() as editor:
operation.database_forwards(app_label, editor, first_state, second_state)
self.assertTableExists(new_m2m_table)
self.assertTableNotExists(original_m2m_table)
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards(app_label, editor, second_state, first_state)
self.assertTableExists(original_m2m_table)
self.assertTableNotExists(new_m2m_table)
def test_alter_field(self):
"""
Tests the AlterField operation.
"""
project_state = self.set_up_test_model("test_alfl")
# Test the state alteration
operation = migrations.AlterField("Pony", "pink", models.IntegerField(null=True))
self.assertEqual(operation.describe(), "Alter field pink on Pony")
new_state = project_state.clone()
operation.state_forwards("test_alfl", new_state)
self.assertIs(project_state.models["test_alfl", "pony"].get_field_by_name("pink").null, False)
self.assertIs(new_state.models["test_alfl", "pony"].get_field_by_name("pink").null, True)
# Test the database alteration
self.assertColumnNotNull("test_alfl_pony", "pink")
with connection.schema_editor() as editor:
operation.database_forwards("test_alfl", editor, project_state, new_state)
self.assertColumnNull("test_alfl_pony", "pink")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alfl", editor, new_state, project_state)
self.assertColumnNotNull("test_alfl_pony", "pink")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterField")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["field", "model_name", "name"])
def test_alter_field_pk(self):
"""
        Tests the AlterField operation on primary keys (for things like PostgreSQL's SERIAL weirdness).
"""
project_state = self.set_up_test_model("test_alflpk")
# Test the state alteration
operation = migrations.AlterField("Pony", "id", models.IntegerField(primary_key=True))
new_state = project_state.clone()
operation.state_forwards("test_alflpk", new_state)
self.assertIsInstance(project_state.models["test_alflpk", "pony"].get_field_by_name("id"), models.AutoField)
self.assertIsInstance(new_state.models["test_alflpk", "pony"].get_field_by_name("id"), models.IntegerField)
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_alflpk", editor, project_state, new_state)
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alflpk", editor, new_state, project_state)
@skipUnlessDBFeature('supports_foreign_keys')
def test_alter_field_pk_fk(self):
"""
        Tests that the AlterField operation on a primary key changes any FKs pointing to it.
"""
project_state = self.set_up_test_model("test_alflpkfk", related_model=True)
# Test the state alteration
operation = migrations.AlterField("Pony", "id", models.FloatField(primary_key=True))
new_state = project_state.clone()
operation.state_forwards("test_alflpkfk", new_state)
self.assertIsInstance(project_state.models["test_alflpkfk", "pony"].get_field_by_name("id"), models.AutoField)
self.assertIsInstance(new_state.models["test_alflpkfk", "pony"].get_field_by_name("id"), models.FloatField)
def assertIdTypeEqualsFkType():
with connection.cursor() as cursor:
id_type, id_null = [
(c.type_code, c.null_ok)
for c in connection.introspection.get_table_description(cursor, "test_alflpkfk_pony")
if c.name == "id"
][0]
fk_type, fk_null = [
(c.type_code, c.null_ok)
for c in connection.introspection.get_table_description(cursor, "test_alflpkfk_rider")
if c.name == "pony_id"
][0]
self.assertEqual(id_type, fk_type)
self.assertEqual(id_null, fk_null)
assertIdTypeEqualsFkType()
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_alflpkfk", editor, project_state, new_state)
assertIdTypeEqualsFkType()
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alflpkfk", editor, new_state, project_state)
assertIdTypeEqualsFkType()
def test_rename_field(self):
"""
Tests the RenameField operation.
"""
project_state = self.set_up_test_model("test_rnfl", unique_together=True, index_together=True)
# Test the state alteration
operation = migrations.RenameField("Pony", "pink", "blue")
self.assertEqual(operation.describe(), "Rename field pink on Pony to blue")
new_state = project_state.clone()
operation.state_forwards("test_rnfl", new_state)
self.assertIn("blue", [n for n, f in new_state.models["test_rnfl", "pony"].fields])
self.assertNotIn("pink", [n for n, f in new_state.models["test_rnfl", "pony"].fields])
# Make sure the unique_together has the renamed column too
self.assertIn("blue", new_state.models["test_rnfl", "pony"].options['unique_together'][0])
self.assertNotIn("pink", new_state.models["test_rnfl", "pony"].options['unique_together'][0])
# Make sure the index_together has the renamed column too
self.assertIn("blue", new_state.models["test_rnfl", "pony"].options['index_together'][0])
self.assertNotIn("pink", new_state.models["test_rnfl", "pony"].options['index_together'][0])
# Test the database alteration
self.assertColumnExists("test_rnfl_pony", "pink")
self.assertColumnNotExists("test_rnfl_pony", "blue")
with connection.schema_editor() as editor:
operation.database_forwards("test_rnfl", editor, project_state, new_state)
self.assertColumnExists("test_rnfl_pony", "blue")
self.assertColumnNotExists("test_rnfl_pony", "pink")
# Ensure the unique constraint has been ported over
with connection.cursor() as cursor:
cursor.execute("INSERT INTO test_rnfl_pony (blue, weight) VALUES (1, 1)")
with self.assertRaises(IntegrityError):
with atomic():
cursor.execute("INSERT INTO test_rnfl_pony (blue, weight) VALUES (1, 1)")
cursor.execute("DELETE FROM test_rnfl_pony")
# Ensure the index constraint has been ported over
self.assertIndexExists("test_rnfl_pony", ["weight", "blue"])
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_rnfl", editor, new_state, project_state)
self.assertColumnExists("test_rnfl_pony", "pink")
self.assertColumnNotExists("test_rnfl_pony", "blue")
# Ensure the index constraint has been reset
self.assertIndexExists("test_rnfl_pony", ["weight", "pink"])
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RenameField")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'model_name': "Pony", 'old_name': "pink", 'new_name': "blue"})
def test_alter_unique_together(self):
"""
Tests the AlterUniqueTogether operation.
"""
project_state = self.set_up_test_model("test_alunto")
# Test the state alteration
operation = migrations.AlterUniqueTogether("Pony", [("pink", "weight")])
self.assertEqual(operation.describe(), "Alter unique_together for Pony (1 constraint(s))")
new_state = project_state.clone()
operation.state_forwards("test_alunto", new_state)
self.assertEqual(len(project_state.models["test_alunto", "pony"].options.get("unique_together", set())), 0)
self.assertEqual(len(new_state.models["test_alunto", "pony"].options.get("unique_together", set())), 1)
# Make sure we can insert duplicate rows
with connection.cursor() as cursor:
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
cursor.execute("DELETE FROM test_alunto_pony")
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_alunto", editor, project_state, new_state)
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
with self.assertRaises(IntegrityError):
with atomic():
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
cursor.execute("DELETE FROM test_alunto_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alunto", editor, new_state, project_state)
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
cursor.execute("INSERT INTO test_alunto_pony (pink, weight) VALUES (1, 1)")
cursor.execute("DELETE FROM test_alunto_pony")
# Test flat unique_together
operation = migrations.AlterUniqueTogether("Pony", ("pink", "weight"))
operation.state_forwards("test_alunto", new_state)
self.assertEqual(len(new_state.models["test_alunto", "pony"].options.get("unique_together", set())), 1)
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterUniqueTogether")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Pony", 'unique_together': {("pink", "weight")}})
def test_alter_unique_together_remove(self):
operation = migrations.AlterUniqueTogether("Pony", None)
self.assertEqual(operation.describe(), "Alter unique_together for Pony (0 constraint(s))")
def test_add_index(self):
"""
Test the AddIndex operation.
"""
project_state = self.set_up_test_model("test_adin")
msg = (
"Indexes passed to AddIndex operations require a name argument. "
"<Index: fields='pink'> doesn't have one."
)
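        # An index passed to AddIndex must be named up front.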
with self.assertRaisesMessage(ValueError, msg):
migrations.AddIndex("Pony", models.Index(fields=["pink"]))
index = models.Index(fields=["pink"], name="test_adin_pony_pink_idx")
operation = migrations.AddIndex("Pony", index)
self.assertEqual(operation.describe(), "Create index test_adin_pony_pink_idx on field(s) pink of model Pony")
new_state = project_state.clone()
operation.state_forwards("test_adin", new_state)
# Test the database alteration
self.assertEqual(len(new_state.models["test_adin", "pony"].options['indexes']), 1)
self.assertIndexNotExists("test_adin_pony", ["pink"])
with connection.schema_editor() as editor:
operation.database_forwards("test_adin", editor, project_state, new_state)
self.assertIndexExists("test_adin_pony", ["pink"])
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_adin", editor, new_state, project_state)
self.assertIndexNotExists("test_adin_pony", ["pink"])
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AddIndex")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'model_name': "Pony", 'index': index})
def test_remove_index(self):
"""
Test the RemoveIndex operation.
"""
project_state = self.set_up_test_model("test_rmin", multicol_index=True)
self.assertTableExists("test_rmin_pony")
self.assertIndexExists("test_rmin_pony", ["pink", "weight"])
operation = migrations.RemoveIndex("Pony", "pony_test_idx")
self.assertEqual(operation.describe(), "Remove index pony_test_idx from Pony")
new_state = project_state.clone()
operation.state_forwards("test_rmin", new_state)
# Test the state alteration
self.assertEqual(len(new_state.models["test_rmin", "pony"].options['indexes']), 0)
self.assertIndexExists("test_rmin_pony", ["pink", "weight"])
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_rmin", editor, project_state, new_state)
self.assertIndexNotExists("test_rmin_pony", ["pink", "weight"])
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_rmin", editor, new_state, project_state)
self.assertIndexExists("test_rmin_pony", ["pink", "weight"])
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RemoveIndex")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'model_name': "Pony", 'name': "pony_test_idx"})
# Also test a field dropped with index - sqlite remake issue
operations = [
migrations.RemoveIndex("Pony", "pony_test_idx"),
migrations.RemoveField("Pony", "pink"),
]
self.assertColumnExists("test_rmin_pony", "pink")
self.assertIndexExists("test_rmin_pony", ["pink", "weight"])
# Test database alteration
new_state = project_state.clone()
self.apply_operations('test_rmin', new_state, operations=operations)
self.assertColumnNotExists("test_rmin_pony", "pink")
self.assertIndexNotExists("test_rmin_pony", ["pink", "weight"])
# And test reversal
self.unapply_operations("test_rmin", project_state, operations=operations)
self.assertIndexExists("test_rmin_pony", ["pink", "weight"])
def test_alter_field_with_index(self):
"""
Test AlterField operation with an index to ensure indexes created via
Meta.indexes don't get dropped with sqlite3 remake.
"""
project_state = self.set_up_test_model("test_alflin", index=True)
operation = migrations.AlterField("Pony", "pink", models.IntegerField(null=True))
new_state = project_state.clone()
operation.state_forwards("test_alflin", new_state)
# Test the database alteration
self.assertColumnNotNull("test_alflin_pony", "pink")
with connection.schema_editor() as editor:
operation.database_forwards("test_alflin", editor, project_state, new_state)
# Index hasn't been dropped
self.assertIndexExists("test_alflin_pony", ["pink"])
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alflin", editor, new_state, project_state)
# Ensure the index is still there
self.assertIndexExists("test_alflin_pony", ["pink"])
def test_alter_index_together(self):
"""
Tests the AlterIndexTogether operation.
"""
project_state = self.set_up_test_model("test_alinto")
# Test the state alteration
operation = migrations.AlterIndexTogether("Pony", [("pink", "weight")])
self.assertEqual(operation.describe(), "Alter index_together for Pony (1 constraint(s))")
new_state = project_state.clone()
operation.state_forwards("test_alinto", new_state)
self.assertEqual(len(project_state.models["test_alinto", "pony"].options.get("index_together", set())), 0)
self.assertEqual(len(new_state.models["test_alinto", "pony"].options.get("index_together", set())), 1)
# Make sure there's no matching index
self.assertIndexNotExists("test_alinto_pony", ["pink", "weight"])
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_alinto", editor, project_state, new_state)
self.assertIndexExists("test_alinto_pony", ["pink", "weight"])
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alinto", editor, new_state, project_state)
self.assertIndexNotExists("test_alinto_pony", ["pink", "weight"])
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterIndexTogether")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Pony", 'index_together': {("pink", "weight")}})
def test_alter_index_together_remove(self):
operation = migrations.AlterIndexTogether("Pony", None)
self.assertEqual(operation.describe(), "Alter index_together for Pony (0 constraint(s))")
def test_alter_model_options(self):
"""
Tests the AlterModelOptions operation.
"""
project_state = self.set_up_test_model("test_almoop")
# Test the state alteration (no DB alteration to test)
operation = migrations.AlterModelOptions("Pony", {"permissions": [("can_groom", "Can groom")]})
self.assertEqual(operation.describe(), "Change Meta options on Pony")
new_state = project_state.clone()
operation.state_forwards("test_almoop", new_state)
self.assertEqual(len(project_state.models["test_almoop", "pony"].options.get("permissions", [])), 0)
self.assertEqual(len(new_state.models["test_almoop", "pony"].options.get("permissions", [])), 1)
self.assertEqual(new_state.models["test_almoop", "pony"].options["permissions"][0][0], "can_groom")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterModelOptions")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Pony", 'options': {"permissions": [("can_groom", "Can groom")]}})
def test_alter_model_options_emptying(self):
"""
The AlterModelOptions operation removes keys from the dict (#23121)
"""
project_state = self.set_up_test_model("test_almoop", options=True)
# Test the state alteration (no DB alteration to test)
operation = migrations.AlterModelOptions("Pony", {})
self.assertEqual(operation.describe(), "Change Meta options on Pony")
new_state = project_state.clone()
operation.state_forwards("test_almoop", new_state)
self.assertEqual(len(project_state.models["test_almoop", "pony"].options.get("permissions", [])), 1)
self.assertEqual(len(new_state.models["test_almoop", "pony"].options.get("permissions", [])), 0)
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterModelOptions")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Pony", 'options': {}})
def test_alter_order_with_respect_to(self):
"""
Tests the AlterOrderWithRespectTo operation.
"""
project_state = self.set_up_test_model("test_alorwrtto", related_model=True)
# Test the state alteration
operation = migrations.AlterOrderWithRespectTo("Rider", "pony")
self.assertEqual(operation.describe(), "Set order_with_respect_to on Rider to pony")
new_state = project_state.clone()
operation.state_forwards("test_alorwrtto", new_state)
self.assertIsNone(
project_state.models["test_alorwrtto", "rider"].options.get("order_with_respect_to", None)
)
self.assertEqual(
new_state.models["test_alorwrtto", "rider"].options.get("order_with_respect_to", None),
"pony"
)
# Make sure there's no matching index
self.assertColumnNotExists("test_alorwrtto_rider", "_order")
# Create some rows before alteration
rendered_state = project_state.apps
pony = rendered_state.get_model("test_alorwrtto", "Pony").objects.create(weight=50)
rendered_state.get_model("test_alorwrtto", "Rider").objects.create(pony=pony, friend_id=1)
rendered_state.get_model("test_alorwrtto", "Rider").objects.create(pony=pony, friend_id=2)
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_alorwrtto", editor, project_state, new_state)
self.assertColumnExists("test_alorwrtto_rider", "_order")
# Check for correct value in rows
updated_riders = new_state.apps.get_model("test_alorwrtto", "Rider").objects.all()
self.assertEqual(updated_riders[0]._order, 0)
self.assertEqual(updated_riders[1]._order, 0)
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_alorwrtto", editor, new_state, project_state)
self.assertColumnNotExists("test_alorwrtto_rider", "_order")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "AlterOrderWithRespectTo")
self.assertEqual(definition[1], [])
self.assertEqual(definition[2], {'name': "Rider", 'order_with_respect_to': "pony"})
def test_alter_model_managers(self):
"""
The managers on a model are set.
"""
project_state = self.set_up_test_model("test_almoma")
# Test the state alteration
operation = migrations.AlterModelManagers(
"Pony",
managers=[
("food_qs", FoodQuerySet.as_manager()),
("food_mgr", FoodManager("a", "b")),
("food_mgr_kwargs", FoodManager("x", "y", 3, 4)),
]
)
self.assertEqual(operation.describe(), "Change managers on Pony")
managers = project_state.models["test_almoma", "pony"].managers
self.assertEqual(managers, [])
new_state = project_state.clone()
operation.state_forwards("test_almoma", new_state)
self.assertIn(("test_almoma", "pony"), new_state.models)
managers = new_state.models["test_almoma", "pony"].managers
self.assertEqual(managers[0][0], "food_qs")
self.assertIsInstance(managers[0][1], models.Manager)
self.assertEqual(managers[1][0], "food_mgr")
self.assertIsInstance(managers[1][1], FoodManager)
self.assertEqual(managers[1][1].args, ("a", "b", 1, 2))
self.assertEqual(managers[2][0], "food_mgr_kwargs")
self.assertIsInstance(managers[2][1], FoodManager)
self.assertEqual(managers[2][1].args, ("x", "y", 3, 4))
rendered_state = new_state.apps
model = rendered_state.get_model('test_almoma', 'pony')
self.assertIsInstance(model.food_qs, models.Manager)
self.assertIsInstance(model.food_mgr, FoodManager)
self.assertIsInstance(model.food_mgr_kwargs, FoodManager)
def test_alter_model_managers_emptying(self):
"""
        AlterModelManagers with an empty list removes every manager from the model.
"""
project_state = self.set_up_test_model("test_almomae", manager_model=True)
# Test the state alteration
operation = migrations.AlterModelManagers("Food", managers=[])
self.assertEqual(operation.describe(), "Change managers on Food")
self.assertIn(("test_almomae", "food"), project_state.models)
managers = project_state.models["test_almomae", "food"].managers
self.assertEqual(managers[0][0], "food_qs")
self.assertIsInstance(managers[0][1], models.Manager)
self.assertEqual(managers[1][0], "food_mgr")
self.assertIsInstance(managers[1][1], FoodManager)
self.assertEqual(managers[1][1].args, ("a", "b", 1, 2))
self.assertEqual(managers[2][0], "food_mgr_kwargs")
self.assertIsInstance(managers[2][1], FoodManager)
self.assertEqual(managers[2][1].args, ("x", "y", 3, 4))
new_state = project_state.clone()
operation.state_forwards("test_almomae", new_state)
managers = new_state.models["test_almomae", "food"].managers
self.assertEqual(managers, [])
def test_alter_fk(self):
"""
Creating and then altering an FK works correctly
and deals with the pending SQL (#23091)
"""
project_state = self.set_up_test_model("test_alfk")
# Test adding and then altering the FK in one go
create_operation = migrations.CreateModel(
name="Rider",
fields=[
("id", models.AutoField(primary_key=True)),
("pony", models.ForeignKey("Pony", models.CASCADE)),
],
)
create_state = project_state.clone()
create_operation.state_forwards("test_alfk", create_state)
alter_operation = migrations.AlterField(
model_name='Rider',
name='pony',
field=models.ForeignKey("Pony", models.CASCADE, editable=False),
)
alter_state = create_state.clone()
alter_operation.state_forwards("test_alfk", alter_state)
with connection.schema_editor() as editor:
create_operation.database_forwards("test_alfk", editor, project_state, create_state)
alter_operation.database_forwards("test_alfk", editor, create_state, alter_state)
def test_alter_fk_non_fk(self):
"""
Altering an FK to a non-FK works (#23244)
"""
# Test the state alteration
operation = migrations.AlterField(
model_name="Rider",
name="pony",
field=models.FloatField(),
)
project_state, new_state = self.make_test_state("test_afknfk", operation, related_model=True)
# Test the database alteration
self.assertColumnExists("test_afknfk_rider", "pony_id")
self.assertColumnNotExists("test_afknfk_rider", "pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_afknfk", editor, project_state, new_state)
self.assertColumnExists("test_afknfk_rider", "pony")
self.assertColumnNotExists("test_afknfk_rider", "pony_id")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_afknfk", editor, new_state, project_state)
self.assertColumnExists("test_afknfk_rider", "pony_id")
self.assertColumnNotExists("test_afknfk_rider", "pony")
@unittest.skipIf(sqlparse is None and connection.features.requires_sqlparse_for_splitting, "Missing sqlparse")
def test_run_sql(self):
"""
Tests the RunSQL operation.
"""
project_state = self.set_up_test_model("test_runsql")
# Create the operation
operation = migrations.RunSQL(
# Use a multi-line string with a comment to test splitting on SQLite and MySQL respectively
"CREATE TABLE i_love_ponies (id int, special_thing varchar(15));\n"
"INSERT INTO i_love_ponies (id, special_thing) VALUES (1, 'i love ponies'); -- this is magic!\n"
"INSERT INTO i_love_ponies (id, special_thing) VALUES (2, 'i love django');\n"
"UPDATE i_love_ponies SET special_thing = 'Ponies' WHERE special_thing LIKE '%%ponies';"
"UPDATE i_love_ponies SET special_thing = 'Django' WHERE special_thing LIKE '%django';",
# Run delete queries to test for parameter substitution failure
# reported in #23426
"DELETE FROM i_love_ponies WHERE special_thing LIKE '%Django%';"
"DELETE FROM i_love_ponies WHERE special_thing LIKE '%%Ponies%%';"
"DROP TABLE i_love_ponies",
state_operations=[migrations.CreateModel("SomethingElse", [("id", models.AutoField(primary_key=True))])],
)
self.assertEqual(operation.describe(), "Raw SQL operation")
# Test the state alteration
new_state = project_state.clone()
operation.state_forwards("test_runsql", new_state)
self.assertEqual(len(new_state.models["test_runsql", "somethingelse"].fields), 1)
# Make sure there's no table
self.assertTableNotExists("i_love_ponies")
# Test SQL collection
with connection.schema_editor(collect_sql=True) as editor:
operation.database_forwards("test_runsql", editor, project_state, new_state)
self.assertIn("LIKE '%%ponies';", "\n".join(editor.collected_sql))
operation.database_backwards("test_runsql", editor, project_state, new_state)
self.assertIn("LIKE '%%Ponies%%';", "\n".join(editor.collected_sql))
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_runsql", editor, project_state, new_state)
self.assertTableExists("i_love_ponies")
# Make sure all the SQL was processed
with connection.cursor() as cursor:
cursor.execute("SELECT COUNT(*) FROM i_love_ponies")
self.assertEqual(cursor.fetchall()[0][0], 2)
cursor.execute("SELECT COUNT(*) FROM i_love_ponies WHERE special_thing = 'Django'")
self.assertEqual(cursor.fetchall()[0][0], 1)
cursor.execute("SELECT COUNT(*) FROM i_love_ponies WHERE special_thing = 'Ponies'")
self.assertEqual(cursor.fetchall()[0][0], 1)
# And test reversal
self.assertTrue(operation.reversible)
with connection.schema_editor() as editor:
operation.database_backwards("test_runsql", editor, new_state, project_state)
self.assertTableNotExists("i_love_ponies")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RunSQL")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["reverse_sql", "sql", "state_operations"])
# And elidable reduction
self.assertIs(False, operation.reduce(operation, []))
elidable_operation = migrations.RunSQL('SELECT 1 FROM void;', elidable=True)
self.assertEqual(elidable_operation.reduce(operation, []), [operation])
def test_run_sql_params(self):
"""
#23426 - RunSQL should accept parameters.
"""
project_state = self.set_up_test_model("test_runsql")
# Create the operation
operation = migrations.RunSQL(
["CREATE TABLE i_love_ponies (id int, special_thing varchar(15));"],
["DROP TABLE i_love_ponies"],
)
param_operation = migrations.RunSQL(
# forwards
(
"INSERT INTO i_love_ponies (id, special_thing) VALUES (1, 'Django');",
["INSERT INTO i_love_ponies (id, special_thing) VALUES (2, %s);", ['Ponies']],
("INSERT INTO i_love_ponies (id, special_thing) VALUES (%s, %s);", (3, 'Python',)),
),
# backwards
[
"DELETE FROM i_love_ponies WHERE special_thing = 'Django';",
["DELETE FROM i_love_ponies WHERE special_thing = 'Ponies';", None],
("DELETE FROM i_love_ponies WHERE id = %s OR special_thing = %s;", [3, 'Python']),
]
)
# Make sure there's no table
self.assertTableNotExists("i_love_ponies")
new_state = project_state.clone()
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_runsql", editor, project_state, new_state)
# Test parameter passing
with connection.schema_editor() as editor:
param_operation.database_forwards("test_runsql", editor, project_state, new_state)
# Make sure all the SQL was processed
with connection.cursor() as cursor:
cursor.execute("SELECT COUNT(*) FROM i_love_ponies")
self.assertEqual(cursor.fetchall()[0][0], 3)
with connection.schema_editor() as editor:
param_operation.database_backwards("test_runsql", editor, new_state, project_state)
with connection.cursor() as cursor:
cursor.execute("SELECT COUNT(*) FROM i_love_ponies")
self.assertEqual(cursor.fetchall()[0][0], 0)
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_runsql", editor, new_state, project_state)
self.assertTableNotExists("i_love_ponies")
def test_run_sql_params_invalid(self):
"""
        #23426 - RunSQL should fail when a sequence of statements contains an
        item that is not a 2-tuple of (sql, params).
"""
project_state = self.set_up_test_model("test_runsql")
new_state = project_state.clone()
operation = migrations.RunSQL(
# forwards
[
["INSERT INTO foo (bar) VALUES ('buz');"]
],
# backwards
(
("DELETE FROM foo WHERE bar = 'buz';", 'invalid', 'parameter count'),
),
)
with connection.schema_editor() as editor:
with self.assertRaisesMessage(ValueError, "Expected a 2-tuple but got 1"):
operation.database_forwards("test_runsql", editor, project_state, new_state)
with connection.schema_editor() as editor:
with self.assertRaisesMessage(ValueError, "Expected a 2-tuple but got 3"):
operation.database_backwards("test_runsql", editor, new_state, project_state)
def test_run_sql_noop(self):
"""
#24098 - Tests no-op RunSQL operations.
"""
operation = migrations.RunSQL(migrations.RunSQL.noop, migrations.RunSQL.noop)
with connection.schema_editor() as editor:
operation.database_forwards("test_runsql", editor, None, None)
operation.database_backwards("test_runsql", editor, None, None)
def test_run_python(self):
"""
Tests the RunPython operation
"""
project_state = self.set_up_test_model("test_runpython", mti_model=True)
# Create the operation
def inner_method(models, schema_editor):
Pony = models.get_model("test_runpython", "Pony")
Pony.objects.create(pink=1, weight=3.55)
Pony.objects.create(weight=5)
def inner_method_reverse(models, schema_editor):
Pony = models.get_model("test_runpython", "Pony")
Pony.objects.filter(pink=1, weight=3.55).delete()
Pony.objects.filter(weight=5).delete()
operation = migrations.RunPython(inner_method, reverse_code=inner_method_reverse)
self.assertEqual(operation.describe(), "Raw Python operation")
# Test the state alteration does nothing
new_state = project_state.clone()
operation.state_forwards("test_runpython", new_state)
self.assertEqual(new_state, project_state)
# Test the database alteration
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 0)
with connection.schema_editor() as editor:
operation.database_forwards("test_runpython", editor, project_state, new_state)
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 2)
# Now test reversal
self.assertTrue(operation.reversible)
with connection.schema_editor() as editor:
operation.database_backwards("test_runpython", editor, project_state, new_state)
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 0)
# Now test we can't use a string
with self.assertRaises(ValueError):
migrations.RunPython("print 'ahahaha'")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RunPython")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["code", "reverse_code"])
# Also test reversal fails, with an operation identical to above but without reverse_code set
no_reverse_operation = migrations.RunPython(inner_method)
self.assertFalse(no_reverse_operation.reversible)
with connection.schema_editor() as editor:
no_reverse_operation.database_forwards("test_runpython", editor, project_state, new_state)
with self.assertRaises(NotImplementedError):
no_reverse_operation.database_backwards("test_runpython", editor, new_state, project_state)
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 2)
def create_ponies(models, schema_editor):
Pony = models.get_model("test_runpython", "Pony")
pony1 = Pony.objects.create(pink=1, weight=3.55)
self.assertIsNot(pony1.pk, None)
pony2 = Pony.objects.create(weight=5)
self.assertIsNot(pony2.pk, None)
self.assertNotEqual(pony1.pk, pony2.pk)
operation = migrations.RunPython(create_ponies)
with connection.schema_editor() as editor:
operation.database_forwards("test_runpython", editor, project_state, new_state)
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 4)
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "RunPython")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["code"])
def create_shetlandponies(models, schema_editor):
ShetlandPony = models.get_model("test_runpython", "ShetlandPony")
pony1 = ShetlandPony.objects.create(weight=4.0)
self.assertIsNot(pony1.pk, None)
pony2 = ShetlandPony.objects.create(weight=5.0)
self.assertIsNot(pony2.pk, None)
self.assertNotEqual(pony1.pk, pony2.pk)
operation = migrations.RunPython(create_shetlandponies)
with connection.schema_editor() as editor:
operation.database_forwards("test_runpython", editor, project_state, new_state)
self.assertEqual(project_state.apps.get_model("test_runpython", "Pony").objects.count(), 6)
self.assertEqual(project_state.apps.get_model("test_runpython", "ShetlandPony").objects.count(), 2)
# And elidable reduction
self.assertIs(False, operation.reduce(operation, []))
elidable_operation = migrations.RunPython(inner_method, elidable=True)
self.assertEqual(elidable_operation.reduce(operation, []), [operation])
def test_run_python_atomic(self):
"""
        Tests that the RunPython operation correctly handles the "atomic" keyword.
"""
project_state = self.set_up_test_model("test_runpythonatomic", mti_model=True)
def inner_method(models, schema_editor):
Pony = models.get_model("test_runpythonatomic", "Pony")
Pony.objects.create(pink=1, weight=3.55)
raise ValueError("Adrian hates ponies.")
atomic_migration = Migration("test", "test_runpythonatomic")
atomic_migration.operations = [migrations.RunPython(inner_method)]
non_atomic_migration = Migration("test", "test_runpythonatomic")
non_atomic_migration.operations = [migrations.RunPython(inner_method, atomic=False)]
# If we're a fully-transactional database, both versions should rollback
if connection.features.can_rollback_ddl:
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 0)
with self.assertRaises(ValueError):
with connection.schema_editor() as editor:
atomic_migration.apply(project_state, editor)
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 0)
with self.assertRaises(ValueError):
with connection.schema_editor() as editor:
non_atomic_migration.apply(project_state, editor)
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 0)
# Otherwise, the non-atomic operation should leave a row there
else:
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 0)
with self.assertRaises(ValueError):
with connection.schema_editor() as editor:
atomic_migration.apply(project_state, editor)
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 0)
with self.assertRaises(ValueError):
with connection.schema_editor() as editor:
non_atomic_migration.apply(project_state, editor)
self.assertEqual(project_state.apps.get_model("test_runpythonatomic", "Pony").objects.count(), 1)
# And deconstruction
definition = non_atomic_migration.operations[0].deconstruct()
self.assertEqual(definition[0], "RunPython")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["atomic", "code"])
def test_run_python_related_assignment(self):
"""
#24282 - Model changes to a FK reverse side update the model
on the FK side as well.
"""
def inner_method(models, schema_editor):
Author = models.get_model("test_authors", "Author")
Book = models.get_model("test_books", "Book")
author = Author.objects.create(name="Hemingway")
Book.objects.create(title="Old Man and The Sea", author=author)
create_author = migrations.CreateModel(
"Author",
[
("id", models.AutoField(primary_key=True)),
("name", models.CharField(max_length=100)),
],
options={},
)
create_book = migrations.CreateModel(
"Book",
[
("id", models.AutoField(primary_key=True)),
("title", models.CharField(max_length=100)),
("author", models.ForeignKey("test_authors.Author", models.CASCADE))
],
options={},
)
add_hometown = migrations.AddField(
"Author",
"hometown",
models.CharField(max_length=100),
)
create_old_man = migrations.RunPython(inner_method, inner_method)
project_state = ProjectState()
new_state = project_state.clone()
with connection.schema_editor() as editor:
create_author.state_forwards("test_authors", new_state)
create_author.database_forwards("test_authors", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
create_book.state_forwards("test_books", new_state)
create_book.database_forwards("test_books", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
add_hometown.state_forwards("test_authors", new_state)
add_hometown.database_forwards("test_authors", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
create_old_man.state_forwards("test_books", new_state)
create_old_man.database_forwards("test_books", editor, project_state, new_state)
def test_model_with_bigautofield(self):
"""
A model with BigAutoField can be created.
"""
def create_data(models, schema_editor):
Author = models.get_model("test_author", "Author")
Book = models.get_model("test_book", "Book")
author1 = Author.objects.create(name="Hemingway")
Book.objects.create(title="Old Man and The Sea", author=author1)
Book.objects.create(id=2 ** 33, title="A farewell to arms", author=author1)
author2 = Author.objects.create(id=2 ** 33, name="Remarque")
Book.objects.create(title="All quiet on the western front", author=author2)
Book.objects.create(title="Arc de Triomphe", author=author2)
create_author = migrations.CreateModel(
"Author",
[
("id", models.BigAutoField(primary_key=True)),
("name", models.CharField(max_length=100)),
],
options={},
)
create_book = migrations.CreateModel(
"Book",
[
("id", models.BigAutoField(primary_key=True)),
("title", models.CharField(max_length=100)),
("author", models.ForeignKey(to="test_author.Author", on_delete=models.CASCADE))
],
options={},
)
fill_data = migrations.RunPython(create_data)
project_state = ProjectState()
new_state = project_state.clone()
with connection.schema_editor() as editor:
create_author.state_forwards("test_author", new_state)
create_author.database_forwards("test_author", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
create_book.state_forwards("test_book", new_state)
create_book.database_forwards("test_book", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
fill_data.state_forwards("fill_data", new_state)
fill_data.database_forwards("fill_data", editor, project_state, new_state)
def test_autofield_foreignfield_growth(self):
"""
A field may be migrated from AutoField to BigAutoField.
"""
def create_initial_data(models, schema_editor):
Article = models.get_model("test_article", "Article")
Blog = models.get_model("test_blog", "Blog")
blog = Blog.objects.create(name="web development done right")
Article.objects.create(name="Frameworks", blog=blog)
Article.objects.create(name="Programming Languages", blog=blog)
def create_big_data(models, schema_editor):
Article = models.get_model("test_article", "Article")
Blog = models.get_model("test_blog", "Blog")
blog2 = Blog.objects.create(name="Frameworks", id=2 ** 33)
Article.objects.create(name="Django", blog=blog2)
Article.objects.create(id=2 ** 33, name="Django2", blog=blog2)
create_blog = migrations.CreateModel(
"Blog",
[
("id", models.AutoField(primary_key=True)),
("name", models.CharField(max_length=100)),
],
options={},
)
create_article = migrations.CreateModel(
"Article",
[
("id", models.AutoField(primary_key=True)),
("blog", models.ForeignKey(to="test_blog.Blog", on_delete=models.CASCADE)),
("name", models.CharField(max_length=100)),
("data", models.TextField(default="")),
],
options={},
)
fill_initial_data = migrations.RunPython(create_initial_data, create_initial_data)
fill_big_data = migrations.RunPython(create_big_data, create_big_data)
grow_article_id = migrations.AlterField("Article", "id", models.BigAutoField(primary_key=True))
grow_blog_id = migrations.AlterField("Blog", "id", models.BigAutoField(primary_key=True))
project_state = ProjectState()
new_state = project_state.clone()
with connection.schema_editor() as editor:
create_blog.state_forwards("test_blog", new_state)
create_blog.database_forwards("test_blog", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
create_article.state_forwards("test_article", new_state)
create_article.database_forwards("test_article", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
fill_initial_data.state_forwards("fill_initial_data", new_state)
fill_initial_data.database_forwards("fill_initial_data", editor, project_state, new_state)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
grow_article_id.state_forwards("test_article", new_state)
grow_article_id.database_forwards("test_article", editor, project_state, new_state)
state = new_state.clone()
article = state.apps.get_model("test_article.Article")
self.assertIsInstance(article._meta.pk, models.BigAutoField)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
grow_blog_id.state_forwards("test_blog", new_state)
grow_blog_id.database_forwards("test_blog", editor, project_state, new_state)
state = new_state.clone()
blog = state.apps.get_model("test_blog.Blog")
self.assertIsInstance(blog._meta.pk, models.BigAutoField)
project_state = new_state
new_state = new_state.clone()
with connection.schema_editor() as editor:
fill_big_data.state_forwards("fill_big_data", new_state)
fill_big_data.database_forwards("fill_big_data", editor, project_state, new_state)
def test_run_python_noop(self):
"""
#24098 - Tests no-op RunPython operations.
"""
project_state = ProjectState()
new_state = project_state.clone()
operation = migrations.RunPython(migrations.RunPython.noop, migrations.RunPython.noop)
with connection.schema_editor() as editor:
operation.database_forwards("test_runpython", editor, project_state, new_state)
operation.database_backwards("test_runpython", editor, new_state, project_state)
@unittest.skipIf(sqlparse is None and connection.features.requires_sqlparse_for_splitting, "Missing sqlparse")
def test_separate_database_and_state(self):
"""
Tests the SeparateDatabaseAndState operation.
"""
project_state = self.set_up_test_model("test_separatedatabaseandstate")
# Create the operation
database_operation = migrations.RunSQL(
"CREATE TABLE i_love_ponies (id int, special_thing int);",
"DROP TABLE i_love_ponies;"
)
state_operation = migrations.CreateModel("SomethingElse", [("id", models.AutoField(primary_key=True))])
operation = migrations.SeparateDatabaseAndState(
state_operations=[state_operation],
database_operations=[database_operation]
)
self.assertEqual(operation.describe(), "Custom state/database change combination")
# Test the state alteration
new_state = project_state.clone()
operation.state_forwards("test_separatedatabaseandstate", new_state)
self.assertEqual(len(new_state.models["test_separatedatabaseandstate", "somethingelse"].fields), 1)
# Make sure there's no table
self.assertTableNotExists("i_love_ponies")
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards("test_separatedatabaseandstate", editor, project_state, new_state)
self.assertTableExists("i_love_ponies")
# And test reversal
self.assertTrue(operation.reversible)
with connection.schema_editor() as editor:
operation.database_backwards("test_separatedatabaseandstate", editor, new_state, project_state)
self.assertTableNotExists("i_love_ponies")
# And deconstruction
definition = operation.deconstruct()
self.assertEqual(definition[0], "SeparateDatabaseAndState")
self.assertEqual(definition[1], [])
self.assertEqual(sorted(definition[2]), ["database_operations", "state_operations"])
def test_separate_database_and_state2(self):
"""
A complex SeparateDatabaseAndState operation: Multiple operations both
for state and database. Verify the state dependencies within each list
and that state ops don't affect the database.
"""
app_label = "test_separatedatabaseandstate2"
project_state = self.set_up_test_model(app_label)
# Create the operation
database_operations = [
migrations.CreateModel(
"ILovePonies",
[("id", models.AutoField(primary_key=True))],
options={"db_table": "iloveponies"},
),
migrations.CreateModel(
"ILoveMorePonies",
# We use IntegerField and not AutoField because
# the model is going to be deleted immediately
# and with an AutoField this fails on Oracle
[("id", models.IntegerField(primary_key=True))],
options={"db_table": "ilovemoreponies"},
),
migrations.DeleteModel("ILoveMorePonies"),
migrations.CreateModel(
"ILoveEvenMorePonies",
[("id", models.AutoField(primary_key=True))],
options={"db_table": "iloveevenmoreponies"},
),
]
state_operations = [
migrations.CreateModel(
"SomethingElse",
[("id", models.AutoField(primary_key=True))],
options={"db_table": "somethingelse"},
),
migrations.DeleteModel("SomethingElse"),
migrations.CreateModel(
"SomethingCompletelyDifferent",
[("id", models.AutoField(primary_key=True))],
options={"db_table": "somethingcompletelydifferent"},
),
]
operation = migrations.SeparateDatabaseAndState(
state_operations=state_operations,
database_operations=database_operations,
)
# Test the state alteration
new_state = project_state.clone()
operation.state_forwards(app_label, new_state)
def assertModelsAndTables(after_db):
# Tables and models exist, or don't, as they should:
self.assertNotIn((app_label, "somethingelse"), new_state.models)
self.assertEqual(len(new_state.models[app_label, "somethingcompletelydifferent"].fields), 1)
self.assertNotIn((app_label, "iloveponiesonies"), new_state.models)
self.assertNotIn((app_label, "ilovemoreponies"), new_state.models)
self.assertNotIn((app_label, "iloveevenmoreponies"), new_state.models)
self.assertTableNotExists("somethingelse")
self.assertTableNotExists("somethingcompletelydifferent")
self.assertTableNotExists("ilovemoreponies")
if after_db:
self.assertTableExists("iloveponies")
self.assertTableExists("iloveevenmoreponies")
else:
self.assertTableNotExists("iloveponies")
self.assertTableNotExists("iloveevenmoreponies")
assertModelsAndTables(after_db=False)
# Test the database alteration
with connection.schema_editor() as editor:
operation.database_forwards(app_label, editor, project_state, new_state)
assertModelsAndTables(after_db=True)
# And test reversal
self.assertTrue(operation.reversible)
with connection.schema_editor() as editor:
operation.database_backwards(app_label, editor, new_state, project_state)
assertModelsAndTables(after_db=False)
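# Editor's note (hedged example, not part of the original suite): outside of
# tests, SeparateDatabaseAndState is typically used when the state change and
# the database change must diverge, e.g. adopting an already-existing table for
# a new model without touching the database. The model below is hypothetical:
#
#     migrations.SeparateDatabaseAndState(
#         state_operations=[
#             migrations.CreateModel(
#                 "LegacyPony",
#                 [("id", models.AutoField(primary_key=True))],
#             ),
#         ],
#         database_operations=[],  # the table already exists
#     )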
class SwappableOperationTests(OperationTestBase):
"""
Key operations ignore swappable models
(we don't want to replicate all of them here, as the functionality
is in a common base class anyway)
"""
available_apps = ['migrations']
@override_settings(TEST_SWAP_MODEL="migrations.SomeFakeModel")
def test_create_ignore_swapped(self):
"""
        The CreateModel operation ignores swapped models.
"""
operation = migrations.CreateModel(
"Pony",
[
("id", models.AutoField(primary_key=True)),
("pink", models.IntegerField(default=1)),
],
options={
"swappable": "TEST_SWAP_MODEL",
},
)
# Test the state alteration (it should still be there!)
project_state = ProjectState()
new_state = project_state.clone()
operation.state_forwards("test_crigsw", new_state)
self.assertEqual(new_state.models["test_crigsw", "pony"].name, "Pony")
self.assertEqual(len(new_state.models["test_crigsw", "pony"].fields), 2)
# Test the database alteration
self.assertTableNotExists("test_crigsw_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_crigsw", editor, project_state, new_state)
self.assertTableNotExists("test_crigsw_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_crigsw", editor, new_state, project_state)
self.assertTableNotExists("test_crigsw_pony")
@override_settings(TEST_SWAP_MODEL="migrations.SomeFakeModel")
def test_delete_ignore_swapped(self):
"""
        Tests that the DeleteModel operation ignores swapped models.
"""
operation = migrations.DeleteModel("Pony")
project_state, new_state = self.make_test_state("test_dligsw", operation)
# Test the database alteration
self.assertTableNotExists("test_dligsw_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_dligsw", editor, project_state, new_state)
self.assertTableNotExists("test_dligsw_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_dligsw", editor, new_state, project_state)
self.assertTableNotExists("test_dligsw_pony")
@override_settings(TEST_SWAP_MODEL="migrations.SomeFakeModel")
def test_add_field_ignore_swapped(self):
"""
        Tests that the AddField operation ignores swapped models.
"""
# Test the state alteration
operation = migrations.AddField(
"Pony",
"height",
models.FloatField(null=True, default=5),
)
project_state, new_state = self.make_test_state("test_adfligsw", operation)
# Test the database alteration
self.assertTableNotExists("test_adfligsw_pony")
with connection.schema_editor() as editor:
operation.database_forwards("test_adfligsw", editor, project_state, new_state)
self.assertTableNotExists("test_adfligsw_pony")
# And test reversal
with connection.schema_editor() as editor:
operation.database_backwards("test_adfligsw", editor, new_state, project_state)
self.assertTableNotExists("test_adfligsw_pony")
@override_settings(TEST_SWAP_MODEL='migrations.SomeFakeModel')
def test_indexes_ignore_swapped(self):
"""
Add/RemoveIndex operations ignore swapped models.
"""
operation = migrations.AddIndex('Pony', models.Index(fields=['pink'], name='my_name_idx'))
project_state, new_state = self.make_test_state('test_adinigsw', operation)
with connection.schema_editor() as editor:
# No database queries should be run for swapped models
operation.database_forwards('test_adinigsw', editor, project_state, new_state)
operation.database_backwards('test_adinigsw', editor, new_state, project_state)
operation = migrations.RemoveIndex('Pony', models.Index(fields=['pink'], name='my_name_idx'))
project_state, new_state = self.make_test_state("test_rminigsw", operation)
with connection.schema_editor() as editor:
operation.database_forwards('test_rminigsw', editor, project_state, new_state)
operation.database_backwards('test_rminigsw', editor, new_state, project_state)
class TestCreateModel(SimpleTestCase):
def test_references_model_mixin(self):
CreateModel('name', [], bases=(Mixin, models.Model)).references_model('other_model')
| 48.081148 | 118 | 0.640814 |
7943b59eb1c4ae9effe4dc644bf84349fad04ce8 | 4,268 | py | Python | keystone/policy/core.py | jamielennox/keystone | b33cbef7841f00aa6bd103f2a76409bda6f86169 | ["Apache-2.0"] | null | null | null | keystone/policy/core.py | jamielennox/keystone | b33cbef7841f00aa6bd103f2a76409bda6f86169 | ["Apache-2.0"] | null | null | null | keystone/policy/core.py | jamielennox/keystone | b33cbef7841f00aa6bd103f2a76409bda6f86169 | ["Apache-2.0"] | null | null | null |
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Main entry point into the Policy service."""
import abc
from oslo_config import cfg
import six
from keystone.common import dependency
from keystone.common import manager
from keystone import exception
from keystone import notifications
CONF = cfg.CONF
@dependency.provider('policy_api')
class Manager(manager.Manager):
"""Default pivot point for the Policy backend.
See :mod:`keystone.common.manager.Manager` for more details on how this
dynamically calls the backend.
"""
driver_namespace = 'keystone.policy'
_POLICY = 'policy'
def __init__(self):
super(Manager, self).__init__(CONF.policy.driver)
def create_policy(self, policy_id, policy, initiator=None):
ref = self.driver.create_policy(policy_id, policy)
notifications.Audit.created(self._POLICY, policy_id, initiator)
return ref
def get_policy(self, policy_id):
try:
return self.driver.get_policy(policy_id)
except exception.NotFound:
raise exception.PolicyNotFound(policy_id=policy_id)
def update_policy(self, policy_id, policy, initiator=None):
if 'id' in policy and policy_id != policy['id']:
raise exception.ValidationError('Cannot change policy ID')
try:
ref = self.driver.update_policy(policy_id, policy)
except exception.NotFound:
raise exception.PolicyNotFound(policy_id=policy_id)
notifications.Audit.updated(self._POLICY, policy_id, initiator)
return ref
@manager.response_truncated
def list_policies(self, hints=None):
# NOTE(henry-nash): Since the advantage of filtering or list limiting
# of policies at the driver level is minimal, we leave this to the
# caller.
return self.driver.list_policies()
def delete_policy(self, policy_id, initiator=None):
try:
ret = self.driver.delete_policy(policy_id)
except exception.NotFound:
raise exception.PolicyNotFound(policy_id=policy_id)
notifications.Audit.deleted(self._POLICY, policy_id, initiator)
return ret
@six.add_metaclass(abc.ABCMeta)
class Driver(object):
def _get_list_limit(self):
return CONF.policy.list_limit or CONF.list_limit
@abc.abstractmethod
def enforce(self, context, credentials, action, target):
"""Verify that a user is authorized to perform action.
For more information on a full implementation of this see:
`keystone.policy.backends.rules.Policy.enforce`
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def create_policy(self, policy_id, policy):
"""Store a policy blob.
:raises: keystone.exception.Conflict
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def list_policies(self):
"""List all policies."""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def get_policy(self, policy_id):
"""Retrieve a specific policy blob.
:raises: keystone.exception.PolicyNotFound
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def update_policy(self, policy_id, policy):
"""Update a policy blob.
:raises: keystone.exception.PolicyNotFound
"""
raise exception.NotImplemented() # pragma: no cover
@abc.abstractmethod
def delete_policy(self, policy_id):
"""Remove a policy blob.
:raises: keystone.exception.PolicyNotFound
"""
raise exception.NotImplemented() # pragma: no cover
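# Editor's note: the class below is an illustrative sketch only and is not part
# of the original keystone module. It shows the minimal surface a concrete
# policy driver must provide to satisfy the abstract Driver interface above;
# the in-memory storage and the permissive enforce() are assumptions made for
# demonstration purposes.
class _ExampleInMemoryPolicyDriver(Driver):
    def __init__(self):
        # Maps policy_id -> policy blob (a dict).
        self._policies = {}

    def enforce(self, context, credentials, action, target):
        # A real driver would evaluate policy rules; this sketch allows all.
        return True

    def create_policy(self, policy_id, policy):
        self._policies[policy_id] = policy
        return policy

    def list_policies(self):
        return list(self._policies.values())

    def get_policy(self, policy_id):
        try:
            return self._policies[policy_id]
        except KeyError:
            raise exception.PolicyNotFound(policy_id=policy_id)

    def update_policy(self, policy_id, policy):
        if policy_id not in self._policies:
            raise exception.PolicyNotFound(policy_id=policy_id)
        self._policies[policy_id].update(policy)
        return self._policies[policy_id]

    def delete_policy(self, policy_id):
        try:
            del self._policies[policy_id]
        except KeyError:
            raise exception.PolicyNotFound(policy_id=policy_id)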
| 30.705036 | 77 | 0.687441 |
7943b778654752a944d718966f86db64ecbf4134 | 575 | py | Python | bench-scripts/bench_histogram_nominal.py | holmes-anonymous-submission/holmes-library | 35d01400c6694681ed61b2b3e1f6c7eb6b660e90 | ["MIT"] | null | null | null | bench-scripts/bench_histogram_nominal.py | holmes-anonymous-submission/holmes-library | 35d01400c6694681ed61b2b3e1f6c7eb6b660e90 | ["MIT"] | null | null | null | bench-scripts/bench_histogram_nominal.py | holmes-anonymous-submission/holmes-library | 35d01400c6694681ed61b2b3e1f6c7eb6b660e90 | ["MIT"] | null | null | null |
#!/usr/bin/env python
from common import *
configure_network()
subprocess.run(["cmake", "."])
subprocess.run(["make"])
print("running the bench_histogram_nominal")
num_records = 100000
print("running the case for " + str(num_records) + " entries")
time.sleep(5)
for i in range(5):
num_groups = 10 + 10 * i
write_configure_info(str(num_records) + " " + str(num_groups))
subprocess.run(["bin/bench_histogram_nominal", os.getenv("EMP_MY_PARTY_ID"), "5000"])
copy_benchmark_result_to_log("bench_histogram_nominal " + str(num_records) + " " + str(num_groups))
| 30.263158 | 103 | 0.711304 |
7943b86116fd32742e340520d94c0306403b6cdd | 2,470 | pyw | Python | venv/Lib/site-packages/PyQt4/examples/script/helloscript.pyw | prateekfxtd/ns_Startup | 095a62b3a8c7bf0ff7b767355d57d993bbd2423d | ["MIT"] | 1 | 2022-03-16T02:10:30.000Z | 2022-03-16T02:10:30.000Z | venv/Lib/site-packages/PyQt4/examples/script/helloscript.pyw | prateekfxtd/ns_Startup | 095a62b3a8c7bf0ff7b767355d57d993bbd2423d | ["MIT"] | null | null | null | venv/Lib/site-packages/PyQt4/examples/script/helloscript.pyw | prateekfxtd/ns_Startup | 095a62b3a8c7bf0ff7b767355d57d993bbd2423d | ["MIT"] | 2 | 2019-05-28T11:58:59.000Z | 2020-09-23T17:21:19.000Z |
#!/usr/bin/env python
#############################################################################
##
## Copyright (C) 2010 Riverbank Computing Limited.
## Copyright (C) 2010 Nokia Corporation and/or its subsidiary(-ies).
## All rights reserved.
##
## This file is part of the examples of PyQt.
##
## $QT_BEGIN_LICENSE:BSD$
## You may use this file under the terms of the BSD license as follows:
##
## "Redistribution and use in source and binary forms, with or without
## modification, are permitted provided that the following conditions are
## met:
## * Redistributions of source code must retain the above copyright
## notice, this list of conditions and the following disclaimer.
## * Redistributions in binary form must reproduce the above copyright
## notice, this list of conditions and the following disclaimer in
## the documentation and/or other materials provided with the
## distribution.
## * Neither the name of Nokia Corporation and its Subsidiary(-ies) nor
## the names of its contributors may be used to endorse or promote
## products derived from this software without specific prior written
## permission.
##
## THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
## "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
## LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
## A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
## OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
## SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
## LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
## DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
## THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
## (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
## OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE."
## $QT_END_LICENSE$
##
#############################################################################
import sys
from PyQt4 import QtGui, QtScript
app = QtGui.QApplication(sys.argv)
engine = QtScript.QScriptEngine()
button = QtGui.QPushButton()
scriptButton = engine.newQObject(button)
engine.globalObject().setProperty('button', scriptButton)
engine.evaluate("button.text = 'Hello World!'")
engine.evaluate("button.styleSheet = 'font-style: italic'")
engine.evaluate("button.show()")
sys.exit(app.exec_())
| 39.206349 | 77 | 0.703239 |
7943b96017a0e082994b3d97986a96569ac29b43 | 1,248 | py | Python | restexpress/setup.py | oberhamsi/FrameworkBenchmarks | 660a66d51a9aad10b43c0660208fb13c098121af | [
"BSD-3-Clause"
] | 4 | 2015-01-22T02:13:03.000Z | 2018-06-13T12:02:46.000Z | frameworks/Java/restexpress/setup.py | ratpack/FrameworkBenchmarks | 81604309e46e382fe2ffb7970a87d728f20c8be6 | [
"BSD-3-Clause"
] | null | null | null | frameworks/Java/restexpress/setup.py | ratpack/FrameworkBenchmarks | 81604309e46e382fe2ffb7970a87d728f20c8be6 | [
"BSD-3-Clause"
] | null | null | null |
import subprocess
import sys
import setup_util
import os
def start(args, logfile, errfile):
setup_util.replace_text("restexpress/config/dev/environment.properties", "mongodb:\/\/.*\/hello_world", "mongodb://" + args.database_host + "/hello_world")
setup_util.replace_text("restexpress/config/dev/environment.properties", "mysql:\/\/.*:3306", "mysql://" + args.database_host + ":3306")
try:
subprocess.check_call("mvn clean package", shell=True, cwd="restexpress", stderr=errfile, stdout=logfile)
subprocess.check_call("mvn assembly:single", shell=True, cwd="restexpress", stderr=errfile, stdout=logfile)
subprocess.check_call("unzip world-1.0-SNAPSHOT-zip-with-dependencies.zip", shell=True, cwd="restexpress/target", stderr=errfile, stdout=logfile)
subprocess.Popen("java -jar world-1.0-SNAPSHOT.jar".rsplit(" "), cwd="restexpress/target/world-1.0-SNAPSHOT", stderr=errfile, stdout=logfile)
return 0
except subprocess.CalledProcessError:
return 1
def stop(logfile, errfile):
p = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE)
out, err = p.communicate()
for line in out.splitlines():
if 'world-1.0-SNAPSHOT.jar' in line:
pid = int(line.split(None, 2)[1])
os.kill(pid, 15)
  return 0
| 48 | 157 | 0.721955 |
7943ba4a92e516a09a5d8885736604a7fcda8127 | 4,640 | py | Python | formulaic/rules.py | Govexec/django-formulaic | b9516a963673cb21279a2ecdf62a199b9203d538 | ["MIT"] | 1 | 2021-08-20T04:21:20.000Z | 2021-08-20T04:21:20.000Z | formulaic/rules.py | Govexec/django-formulaic | b9516a963673cb21279a2ecdf62a199b9203d538 | ["MIT"] | 4 | 2020-12-05T00:31:40.000Z | 2021-09-22T20:06:53.000Z | formulaic/rules.py | Govexec/django-formulaic | b9516a963673cb21279a2ecdf62a199b9203d538 | ["MIT"] | 1 | 2020-12-04T19:16:36.000Z | 2020-12-04T19:16:36.000Z |
from six import iteritems
from formulaic import exceptions
class RuleAssessor(object):
"""
Mimics the Rule implementation found in JavaScript to toggle
validation on and off for fields depending on if they're
supposed to be visible to the user.
"""
def __init__(self, rules_data, form_fields, submission_data={}):
self.fields = {}
for slug, form_field in form_fields.items():
value = submission_data.get(slug, None)
self.fields[slug] = Field(slug, form_field, value)
self.rules = []
for rule_data in rules_data:
self.rules.append(Rule(self, rule_data))
self.visible_fields = []
self.invisible_fields = []
for slug, field in iteritems(self.fields):
field.evaluate_observed_rules()
if field.visible:
self.visible_fields.append(slug)
else:
self.invisible_fields.append(slug)
def is_field_visible(self, slug):
return slug in self.visible_fields
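# Editor's note: the constant below is an illustrative sketch and is not part
# of the original module. It shows the rules_data shape that RuleAssessor
# consumes, inferred from the Rule, RuleCondition and RuleResult classes in
# this file; the field slugs and values are hypothetical.
#
# Typical use would be something like:
#     assessor = RuleAssessor(EXAMPLE_RULES_DATA, form.fields, form.cleaned_data)
#     assessor.is_field_visible("phone_number")
EXAMPLE_RULES_DATA = [
    {
        "operator": "and",
        "conditions": [
            {"field_slug": "contact_me", "operator": "is", "value": "yes"},
        ],
        "results": [
            {"action": "show", "field_slug": "phone_number"},
        ],
    },
]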
class FieldStatus(object):
def __init__(self):
self.visible = True
class Field(object):
def __init__(self, slug, form_field, value):
self.slug = slug
self.form_field = form_field
self.value = value
self.observed_rules = []
self._visible = None
def add_observed_rule(self, rule):
self.observed_rules.append(rule)
def evaluate_observed_rules(self):
new_field_status = FieldStatus()
for rule in self.observed_rules:
rule.evaluate(self, new_field_status)
# apply field status
self._visible = new_field_status.visible
def has_value(self, value):
from django.forms import CheckboxInput
if type(self.form_field) is CheckboxInput:
return self.value
elif type(self.value) is list:
return value in self.value
else:
return self.value == value
@property
def visible(self):
if self._visible is None:
raise exceptions.NotSetException("Field hasn't been set active/inactive, yet")
else:
return self._visible
class Rule(object):
OPERATOR_AND = "and"
OPERATOR_OR = "or"
def __init__(self, assessor, rule_data):
self.assessor = assessor
self.operator = rule_data["operator"]
self.conditions = []
for condition_data in rule_data["conditions"]:
self.conditions.append(RuleCondition(self, condition_data))
self.results = []
for result_data in rule_data["results"]:
self.results.append(RuleResult(self, result_data))
def conditions_met(self):
require_all = self.operator == Rule.OPERATOR_AND
all_true = True
any_true = False
for condition in self.conditions:
if condition.is_met():
any_true = True
else:
all_true = False
if all_true:
return True
elif not require_all and any_true:
return True
else:
return False
def evaluate(self, field, display_status):
if self.conditions_met():
for result in self.results:
result.update_result(field, display_status)
class RuleCondition(object):
def __init__(self, rule, condition_data):
self.rule = rule
self.field = self.rule.assessor.fields[condition_data["field_slug"]]
self.operator = condition_data["operator"]
self.value = condition_data["value"]
def is_met(self):
if self.operator == "is":
return self.field.has_value(self.value)
else:
return not self.field.has_value(self.value)
class RuleResult(object):
ACTION_SHOW = "show"
ACTION_HIDE = "hide"
ACTION_CHANGE_OPTION_GROUP = "change-option-group"
def __init__(self, rule, result_data):
self.rule = rule
self.action = result_data["action"]
self.field = self.rule.assessor.fields[result_data["field_slug"]]
self.field.add_observed_rule(self.rule)
def update_result(self, field, display_status):
if self.field == field:
if self.action == RuleResult.ACTION_SHOW:
display_status.visible = True
elif self.action == RuleResult.ACTION_HIDE:
display_status.visible = False
elif self.action == RuleResult.ACTION_CHANGE_OPTION_GROUP:
pass
else:
raise exceptions.InvalidParamException("Invalid RuleResult action: {}".format(self.action))
| 29.18239 | 107 | 0.619828 |
7943bb5ed91f6a8c196c8a93ab0867332e3ee6c2 | 5,608 | py | Python | tests/test_4_check_word.py | Duckbilled/ebook-reader-dict | 0a9131db89d353b686bb25459a62cb6c3c4ffb2c | ["MIT"] | null | null | null | tests/test_4_check_word.py | Duckbilled/ebook-reader-dict | 0a9131db89d353b686bb25459a62cb6c3c4ffb2c | ["MIT"] | null | null | null | tests/test_4_check_word.py | Duckbilled/ebook-reader-dict | 0a9131db89d353b686bb25459a62cb6c3c4ffb2c | ["MIT"] | null | null | null |
from unittest.mock import patch
from wikidict import check_word
def test_simple():
assert check_word.main("fr", "vendre") == 0
def test_word_of_the_day():
assert check_word.main("fr", "") == 0
def test_etymology_list():
assert check_word.main("fr", "bath") == 0
def test_sublist():
assert check_word.main("fr", "éperon") == 0
def test_subsublist():
assert check_word.main("fr", "vache") == 0
def test_error():
with patch.object(check_word, "contains", return_value=False):
assert check_word.main("fr", "42") > 0
def test_no_definition_nor_etymology():
assert check_word.main("es", "42") == 0
def test_filter_obsolete_tpl():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += "<span id='FormattingError'>bouh !</span>"
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_math_chem():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<span class="mwe-math-element"></span>'
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_anchors():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<a href="#cite">[1]</a>'
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_es():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += "<dl><dt>1 Finanzas.</dt></dl>"
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("es", "cartel") == 0
def test_filter_es_2():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += "<dl><dt>2 Coloquial</dt></dl>"
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("es", "buena") == 0
def test_filter_fr_refnec():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<span><sup><i><b>Référence nécessaire</b></i></sup></span><span id="refnec"></span>'
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_sources():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += (
'<span class="sources"><span class="tiret">— </span>(<i>Ordonnance de Louis XI pour la formation d'
"un port et château fort à la Hague</i>)</span>"
)
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_external_autonumber():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<a rel="nofollow" class="external autonumber" href="http://www.iupac.org/publications/pac/1994/pdf/6612x2419.pdf">[2]</a>' # noqa
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_attention():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<a href="/wiki/Fichier:Twemoji12_26a0.svg" class="image" title="alt = attention"><img alt="alt = attention" src="//26a0.svg.png"></a>' # noqa
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_wikispecies():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<i><a href="https://species.wikimedia.org/wiki/Panthera_leo" class="extiw" title="wikispecies:Panthera leo">Panthera leo</a></i> sur Wikispecies' # noqa
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_lien_rouge_trad():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<a href="https://en.wiktionary.org/wiki/Reconstruction:Proto-Indo-European/wemh%E2%82%81-" class="extiw" title="en:Reconstruction:Proto-Indo-European/wemh₁-"><span style="font-family:monospace;font-weight:bold;font-size:small;font-style:normal;" title="Équivalent de l’article « Reconstruction:indo-européen commun/*wem- » dans une autre langue">(en)</span></a>' # noqa
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
def test_filter_fr_a_preciser():
orig = check_word.filter_html
def new_filter_html(html: str, locale: str) -> str:
html += '<span title="Cette information a besoin d’être précisée"><small> <span style="color:red">(information <i>à préciser ou à vérifier</i>)</span></small></span>' # noqa
return orig(html, locale)
with patch.object(check_word, "filter_html", new=new_filter_html):
assert check_word.main("fr", "42") == 0
| 33.380952 | 387 | 0.660128 |
7943bbc2b516fe0274409cfa4ba3e5bd4c33cdac | 6,495 | py | Python | admin/deadeye/models.py | strykeforce/deadeye | 09d791e5fe0677f4d1890c7fc86ab2075725cca7 | ["MIT"] | 1 | 2020-01-30T05:21:37.000Z | 2020-01-30T05:21:37.000Z | admin/deadeye/models.py | strykeforce/deadeye | 09d791e5fe0677f4d1890c7fc86ab2075725cca7 | ["MIT"] | 31 | 2019-10-14T10:11:02.000Z | 2022-03-27T20:05:45.000Z | admin/deadeye/models.py | strykeforce/deadeye | 09d791e5fe0677f4d1890c7fc86ab2075725cca7 | ["MIT"] | null | null | null |
# pylint: disable=no-member
import json
import time
from urllib.parse import urlparse
from flask import current_app
from networktables import NetworkTables, NetworkTablesInstance
class Unit:
units = {}
api = None
def __init__(self, unit_id, camera_inums):
self.cameras = {}
self.id = unit_id
for inum in camera_inums:
try:
cam = Camera(unit_id, inum)
self.cameras[inum] = cam
except Exception as e:
current_app.logger.error(
"error loading Camera %s%s: %s", unit_id, inum, e
)
@classmethod
def init(cls, api):
Unit.api = api
deadeye_table = NetworkTables.getTable("/Deadeye")
unit_ids = deadeye_table.getSubTables()
for unit_id in unit_ids:
unit_table = NetworkTables.getTable(f"/Deadeye/{unit_id}")
camera_inums = unit_table.getSubTables()
cls.units[unit_id] = Unit(unit_id, camera_inums)
def __repr__(self):
return f"Unit({self.id})"
class Camera:
def __init__(self, unit_id, inum):
self.unit = unit_id
self.inum = int(inum)
self.id = f"{unit_id}{inum}"
self.light = Light(self)
self.on = self.table().getBoolean("On", False)
self.error = self.table().getBoolean("Error", False)
self.capture = self.load_json("Capture")
self.pipeline = self.load_json("Pipeline")
self.stream = self.load_json("Stream")
self.info = self.load_json("Info")
self.table().addEntryListenerEx(
self.entry_listener, NetworkTablesInstance.NotifyFlags.UPDATE
)
def load_json(self, key):
try:
return json.loads(self.table().getString(key, ""))
except json.JSONDecodeError as err:
raise ValueError(f"Camera {key}: {err}")
def enable(self, enabled):
if self.on == enabled:
return
control_table = self.table()
control_table.putBoolean("On", enabled)
control_table.putBoolean("Off", not enabled)
self.on = enabled
Unit.api.refresh_units = True
def set_capture(self, capture):
capture_entry = self.table().getEntry("Capture")
capture_entry.setString(json.dumps(capture))
self.capture = capture
Unit.api.refresh_units = True
def set_pipeline(self, pipeline):
pipeline_entry = self.table().getEntry("Pipeline")
pipeline_entry.setString(json.dumps(pipeline))
self.pipeline = pipeline
Unit.api.refresh_units = True
def set_stream(self, stream):
r = urlparse(stream["url"])
r = r._replace(query=f"s={int(time.monotonic() * 1000)}")
stream["url"] = r.geturl()
stream_entry = self.table().getEntry("Stream")
stream_entry.setString(json.dumps(stream))
self.stream = stream
Unit.api.refresh_units = True
def table(self):
return NetworkTables.getTable(f"/Deadeye/{self.unit}/{self.inum}")
def entry_listener(self, table, key, value, is_new):
del is_new, table # unused
if not value:
return
if key == "On":
self.on = value
self.error = False
elif key == "Off":
self.on = not value
self.error = False
elif key == "Error":
self.error = value
elif key == "Stream":
self.stream = json.loads(value)
elif key == "Pipeline":
self.pipeline = json.loads(value)
elif key == "Capture":
self.capture = json.loads(value)
elif key == "Info":
self.info = json.loads(value)
else:
current_app.logger.error("unrecognized key: %s", key)
Unit.api.refresh_units = True
def __repr__(self):
return f"Camera({self.unit}, {self.inum}"
def __str__(self):
on = self.on
error = self.error
return f"Camera {self.unit}{self.inum}: on={on} error={error}"
class Light:
# properties other than "camera" will be stored in "__dict__" and therefore serialized
__slots__ = ["camera", "__dict__"]
def __init__(self, camera):
control_table = camera.table().getSubTable("Light")
self.camera = camera
self.on = control_table.getBoolean("On", False)
self.blink = control_table.getBoolean("Blink", False)
control_table.addEntryListenerEx(
self.entry_listener, NetworkTablesInstance.NotifyFlags.UPDATE
)
def enable(self, enabled):
if self.on == enabled:
return
control_table = self.camera.table().getSubTable("Light")
control_table.putBoolean("On", enabled)
control_table.putBoolean("Off", not enabled)
self.on = enabled
Unit.api.refresh_units = True
def entry_listener(self, table, key, value, is_new):
del is_new, table # unused
if not value:
return
if key == "On":
self.on = value
self.blink = False
elif key == "Off":
self.on = not value
self.blink = False
elif key == "Blink":
self.blink = value
else:
current_app.logger.error("unrecognized key: %s", key)
Unit.api.refresh_units = True
def __repr__(self):
return f"Light({self.on}, {self.blink}"
def __str__(self):
return f"Light: on={self.on} blink={self.blink}"
class Link:
def __init__(self, api):
self.api = api
deadeye_table = NetworkTables.getTable("/Deadeye")
entry = deadeye_table.getString("Link", "[]")
print(f"Link entry = {entry}")
self.entries = json.loads(entry)
deadeye_table.addEntryListenerEx(
self.entry_listener, NetworkTablesInstance.NotifyFlags.UPDATE
)
def entry_listener(self, table, key, value, is_new):
del is_new, table # unused
if not value:
return
if key == "Link":
self.entries = json.loads(value)
else:
current_app.logger.error("unrecognized key: %s", key)
self.api.refresh_link = True
def set_entries(self, message):
deadeye_table = NetworkTables.getTable("/Deadeye")
entry = deadeye_table.getEntry("Link")
entry.setString(json.dumps(message))
self.entries = message
self.api.refresh_link = True
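# ---------------------------------------------------------------------------
# Illustrative usage sketch (added for clarity; not part of the original
# module). It assumes a reachable NetworkTables server; `_StubApi` and the
# server address below are assumptions, not names from this project.
def _example_enable_all_cameras(server="10.27.67.2"):
    """Minimal sketch: discover Deadeye units and switch every camera on."""

    class _StubApi:
        refresh_units = False
        refresh_link = False

    NetworkTables.initialize(server=server)  # connect to the NT server
    Unit.init(_StubApi())                    # builds Unit.units from /Deadeye tables
    for unit in Unit.units.values():
        for camera in unit.cameras.values():
            camera.enable(True)              # writes the On/Off booleans back to NT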
| 31.225962 | 90 | 0.589376 |
7943bc3f0544b57846d37b79f75e0994cdd5ca5f | 9,175 | py | Python | spinupImplementation/utils/plot.py | PeterChauYEG/spinningupImplementations | d521d27df17bf77b087da09adff2a49fc1588a9f | [
"MIT"
] | null | null | null | spinupImplementation/utils/plot.py | PeterChauYEG/spinningupImplementations | d521d27df17bf77b087da09adff2a49fc1588a9f | [
"MIT"
] | null | null | null | spinupImplementation/utils/plot.py | PeterChauYEG/spinningupImplementations | d521d27df17bf77b087da09adff2a49fc1588a9f | [
"MIT"
] | null | null | null | import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import json
import os
import os.path as osp
import numpy as np
DIV_LINE_WIDTH = 50
# Global vars for tracking and labeling data at load time.
exp_idx = 0
units = dict()
def plot_data(data, xaxis='Epoch', value="AverageEpRet", condition="Condition1", smooth=1, **kwargs):
if smooth > 1:
"""
smooth data with moving window average.
that is,
smoothed_y[t] = average(y[t-k], y[t-k+1], ..., y[t+k-1], y[t+k])
where the "smooth" param is width of that window (2k+1)
"""
y = np.ones(smooth)
for datum in data:
x = np.asarray(datum[value])
z = np.ones(len(x))
smoothed_x = np.convolve(x,y,'same') / np.convolve(z,y,'same')
datum[value] = smoothed_x
if isinstance(data, list):
data = pd.concat(data, ignore_index=True)
sns.set(style="darkgrid", font_scale=1.5)
sns.tsplot(data=data, time=xaxis, value=value, unit="Unit", condition=condition, ci='sd', **kwargs)
"""
If you upgrade to any version of Seaborn greater than 0.8.1, switch from
tsplot to lineplot replacing L29 with:
sns.lineplot(data=data, x=xaxis, y=value, hue=condition, ci='sd', **kwargs)
Changes the colorscheme and the default legend style, though.
"""
plt.legend(loc='best').draggable()
"""
For the version of the legend used in the Spinning Up benchmarking page,
swap L38 with:
plt.legend(loc='upper center', ncol=6, handlelength=1,
mode="expand", borderaxespad=0., prop={'size': 13})
"""
xscale = np.max(np.asarray(data[xaxis])) > 5e3
if xscale:
# Just some formatting niceness: x-axis scale in scientific notation if max x is large
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.tight_layout(pad=0.5)
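def _example_smoothing():
    """Illustrative sketch (added; not part of the original file): what the
    moving-window average in plot_data() does for smooth=3. An isolated spike
    is spread over its neighbours:

        [0, 0, 3, 0, 0]  ->  [0.0, 1.0, 1.0, 1.0, 0.0]
    """
    x = np.asarray([0., 0., 3., 0., 0.])
    y = np.ones(3)                       # the "smooth" window
    z = np.ones(len(x))
    return np.convolve(x, y, 'same') / np.convolve(z, y, 'same')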
def get_datasets(logdir, condition=None):
"""
Recursively look through logdir for output files produced by
spinupImplementation.logx.Logger.
Assumes that any file "progress.txt" is a valid hit.
"""
global exp_idx
global units
datasets = []
for root, _, files in os.walk(logdir):
if 'progress.txt' in files:
exp_name = None
try:
config_path = open(os.path.join(root,'config.json'))
config = json.load(config_path)
if 'exp_name' in config:
exp_name = config['exp_name']
except:
print('No file named config.json')
condition1 = condition or exp_name or 'exp'
condition2 = condition1 + '-' + str(exp_idx)
exp_idx += 1
if condition1 not in units:
units[condition1] = 0
unit = units[condition1]
units[condition1] += 1
exp_data = pd.read_table(os.path.join(root,'progress.txt'))
performance = 'AverageTestEpRet' if 'AverageTestEpRet' in exp_data else 'AverageEpRet'
exp_data.insert(len(exp_data.columns),'Unit',unit)
exp_data.insert(len(exp_data.columns),'Condition1',condition1)
exp_data.insert(len(exp_data.columns),'Condition2',condition2)
exp_data.insert(len(exp_data.columns),'Performance',exp_data[performance])
datasets.append(exp_data)
return datasets
def get_all_datasets(all_logdirs, legend=None, select=None, exclude=None):
"""
For every entry in all_logdirs,
1) check if the entry is a real directory and if it is,
pull data from it;
2) if not, check to see if the entry is a prefix for a
real directory, and pull data from that.
"""
logdirs = []
for logdir in all_logdirs:
if osp.isdir(logdir) and logdir[-1]=='/':
logdirs += [logdir]
else:
basedir = osp.dirname(logdir)
fulldir = lambda x : osp.join(basedir, x)
prefix = logdir.split('/')[-1]
listdir= os.listdir(basedir)
logdirs += sorted([fulldir(x) for x in listdir if prefix in x])
"""
Enforce selection rules, which check logdirs for certain substrings.
Makes it easier to look at graphs from particular ablations, if you
launch many jobs at once with similar names.
"""
if select is not None:
logdirs = [log for log in logdirs if all(x in log for x in select)]
if exclude is not None:
logdirs = [log for log in logdirs if all(not(x in log) for x in exclude)]
# Verify logdirs
print('Plotting from...\n' + '='*DIV_LINE_WIDTH + '\n')
for logdir in logdirs:
print(logdir)
print('\n' + '='*DIV_LINE_WIDTH)
# Make sure the legend is compatible with the logdirs
assert not(legend) or (len(legend) == len(logdirs)), \
"Must give a legend title for each set of experiments."
# Load data from logdirs
data = []
if legend:
for log, leg in zip(logdirs, legend):
data += get_datasets(log, leg)
else:
for log in logdirs:
data += get_datasets(log)
return data
def make_plots(all_logdirs, legend=None, xaxis=None, values=None, count=False,
font_scale=1.5, smooth=1, select=None, exclude=None, estimator='mean'):
data = get_all_datasets(all_logdirs, legend, select, exclude)
values = values if isinstance(values, list) else [values]
condition = 'Condition2' if count else 'Condition1'
estimator = getattr(np, estimator) # choose what to show on main curve: mean? max? min?
for value in values:
plt.figure()
plot_data(data, xaxis=xaxis, value=value, condition=condition, smooth=smooth, estimator=estimator)
plt.show()
def main():
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('logdir', nargs='*')
parser.add_argument('--legend', '-l', nargs='*')
parser.add_argument('--xaxis', '-x', default='TotalEnvInteracts')
parser.add_argument('--value', '-y', default='Performance', nargs='*')
parser.add_argument('--count', action='store_true')
parser.add_argument('--smooth', '-s', type=int, default=1)
parser.add_argument('--select', nargs='*')
parser.add_argument('--exclude', nargs='*')
parser.add_argument('--est', default='mean')
args = parser.parse_args()
"""
Args:
logdir (strings): As many log directories (or prefixes to log
directories, which the plotter will autocomplete internally) as
you'd like to plot from.
legend (strings): Optional way to specify legend for the plot. The
plotter legend will automatically use the ``exp_name`` from the
config.json file, unless you tell it otherwise through this flag.
This only works if you provide a name for each directory that
will get plotted. (Note: this may not be the same as the number
of logdir args you provide! Recall that the plotter looks for
autocompletes of the logdir args: there may be more than one
match for a given logdir prefix, and you will need to provide a
legend string for each one of those matches---unless you have
removed some of them as candidates via selection or exclusion
rules (below).)
xaxis (string): Pick what column from data is used for the x-axis.
Defaults to ``TotalEnvInteracts``.
value (strings): Pick what columns from data to graph on the y-axis.
Submitting multiple values will produce multiple graphs. Defaults
to ``Performance``, which is not an actual output of any algorithm.
Instead, ``Performance`` refers to either ``AverageEpRet``, the
correct performance measure for the on-policy algorithms, or
``AverageTestEpRet``, the correct performance measure for the
off-policy algorithms. The plotter will automatically figure out
which of ``AverageEpRet`` or ``AverageTestEpRet`` to report for
each separate logdir.
count: Optional flag. By default, the plotter shows y-values which
are averaged across all results that share an ``exp_name``,
which is typically a set of identical experiments that only vary
in random seed. But if you'd like to see all of those curves
separately, use the ``--count`` flag.
smooth (int): Smooth data by averaging it over a fixed window. This
parameter says how wide the averaging window will be.
select (strings): Optional selection rule: the plotter will only show
curves from logdirs that contain all of these substrings.
exclude (strings): Optional exclusion rule: plotter will only show
curves from logdirs that do not contain these substrings.
"""
make_plots(args.logdir, args.legend, args.xaxis, args.value, args.count,
smooth=args.smooth, select=args.select, exclude=args.exclude,
estimator=args.est)
if __name__ == "__main__":
    main()
| 40.418502 | 106 | 0.629101 |
7943bcf77f15ae76020dff23302530d5c3775880 | 407 | py | Python | step9/src/data/version1/BeaconTypeV1.py | pip-services-samples/sample-beacons-python | a117054b1029a6284b9eb304f232088bb997456c | [
"MIT"
] | null | null | null | step9/src/data/version1/BeaconTypeV1.py | pip-services-samples/sample-beacons-python | a117054b1029a6284b9eb304f232088bb997456c | [
"MIT"
] | null | null | null | step9/src/data/version1/BeaconTypeV1.py | pip-services-samples/sample-beacons-python | a117054b1029a6284b9eb304f232088bb997456c | [
"MIT"
] | 1 | 2020-08-03T13:12:26.000Z | 2020-08-03T13:12:26.000Z | # -*- coding: utf-8 -*-
"""
step9.data.version1.BeaconTypeV1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Beacon Type class
:copyright: Conceptual Vision Consulting LLC 2018-2019, see AUTHORS for more details.
:license: MIT, see LICENSE for more details.
"""
class BeaconTypeV1:
Unknown = "unknown"
AltBeacon = "altbeacon"
iBeacon = "ibeacons"
EddyStoneUdi = "eddistoneudi"
| 22.611111 | 89 | 0.604423 |
7943bd4dd996d3876298eb810f9261fe24193ac8 | 1,780 | py | Python | test_service.py | cmput401-fall2018/web-app-ci-cd-with-travis-ci-dcam4 | 10e19f7cdf87fcd799c48a53eea1c988574f57ce | [
"MIT"
] | null | null | null | test_service.py | cmput401-fall2018/web-app-ci-cd-with-travis-ci-dcam4 | 10e19f7cdf87fcd799c48a53eea1c988574f57ce | [
"MIT"
] | 1 | 2018-10-20T05:13:48.000Z | 2018-10-20T05:13:48.000Z | test_service.py | cmput401-fall2018/web-app-ci-cd-with-travis-ci-dcam4 | 10e19f7cdf87fcd799c48a53eea1c988574f57ce | [
"MIT"
] | null | null | null | import unittest
from service import Service
from mock import patch
class TestService(unittest.TestCase):
def setUp(self):
self.service = Service()
@patch('service.Service.bad_random')
def test_bad_random(self, mock_bad_random):
mock_bad_random.return_value = 20
#check to see if mock method works
self.assertEqual(self.service.bad_random(), 20)
@patch('service.Service.bad_random')
def test_divide(self, mock_bad_random):
mock_bad_random.return_value = 20
#positive
self.assertEqual(self.service.divide(5), 4)
#zero
with self.assertRaises(ZeroDivisionError):
self.service.divide(0)
#negative
self.assertEqual(self.service.divide(-2), -10)
def test_abs_plus(self):
#negative
self.assertEqual(self.service.abs_plus(-5), 6)
#zero
self.assertEqual(self.service.abs_plus(0), 1)
#positive
self.assertEqual(self.service.abs_plus(5), 6)
@patch('service.Service.bad_random')
def test_complicated_function(self, mock_bad_random):
mock_bad_random.return_value = 20
#test mod with a 0 result
self.assertEqual(self.service.complicated_function(5), (4,0))
mock_bad_random.return_value = 9
#test mod with a non zero result
self.assertEqual(self.service.complicated_function(3), (3,1))
if __name__ == '__main__':
    unittest.main()
| 22.820513 | 69 | 0.543258 |
7943bdb77bb69352f6c8b0760c6f14d11eb7c0b9 | 19,763 | py | Python | g2p_seq2seq/g2p_problem.py | b00f/g2p-seq2seq | 96c3e36754b9a4571d6416446a24c5eef2171128 | [
"Apache-2.0"
] | 661 | 2016-04-27T05:47:54.000Z | 2022-03-20T08:42:43.000Z | g2p_seq2seq/g2p_problem.py | b00f/g2p-seq2seq | 96c3e36754b9a4571d6416446a24c5eef2171128 | [
"Apache-2.0"
] | 189 | 2016-04-28T20:10:18.000Z | 2021-08-19T02:04:34.000Z | g2p_seq2seq/g2p_problem.py | b00f/g2p-seq2seq | 96c3e36754b9a4571d6416446a24c5eef2171128 | [
"Apache-2.0"
] | 219 | 2016-04-30T23:58:45.000Z | 2022-03-09T16:05:37.000Z | # Copyright 2016 AC Technologies LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Module for registering custom Grapheme-to-Phoneme problem in tensor2tensor.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import random
import re
from collections import OrderedDict
import tensorflow as tf
from tensorflow.python.data.ops import dataset_ops as dataset_ops
from g2p_seq2seq import g2p_encoder
from tensor2tensor.data_generators import problem
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.utils import registry
from tensor2tensor.data_generators import generator_utils
from tensor2tensor.data_generators import text_problems
EOS = text_encoder.EOS_ID
@registry.register_problem
class GraphemeToPhonemeProblem(text_problems.Text2TextProblem):
"""Problem spec for cmudict PRONALSYL Grapheme-to-Phoneme translation."""
def __init__(self, model_dir, train_path=None, dev_path=None, test_path=None,
cleanup=False, p2g_mode=False):
"""Create a Problem.
    Args:
      model_dir: directory with the model checkpoint and vocabulary file.
      train_path: path to the train data file, if training data is provided.
      dev_path: path to the development data file, if provided.
      test_path: path to the test data file, if provided.
      cleanup: whether to clean the data from stress marks and comments.
      p2g_mode: whether to run in the phoneme-to-grapheme direction.
"""
super(GraphemeToPhonemeProblem, self).__init__()
self._encoders = None
self._hparams = None
self._feature_info = None
self._model_dir = model_dir
self.train_path, self.dev_path, self.test_path = train_path, dev_path,\
test_path
self.cleanup = cleanup
self.p2g_mode = p2g_mode
vocab_filename = os.path.join(self._model_dir, "vocab.g2p")
if train_path:
self.train_path, self.dev_path, self.test_path = create_data_files(
init_train_path=train_path, init_dev_path=dev_path,
init_test_path=test_path, cleanup=self.cleanup,
p2g_mode=self.p2g_mode)
self.source_vocab, self.target_vocab = g2p_encoder.load_create_vocabs(
vocab_filename, train_path=self.train_path, dev_path=self.dev_path,
test_path=self.test_path, p2g_mode=self.p2g_mode)
elif not os.path.exists(os.path.join(self._model_dir, "checkpoint")):
raise Exception("Model not found in {}".format(self._model_dir))
else:
self.source_vocab, self.target_vocab = g2p_encoder.load_create_vocabs(
vocab_filename, p2g_mode=self.p2g_mode)
def generator(self, data_path, source_vocab, target_vocab):
"""Generator for the training and evaluation data.
Generate source and target data from a single file.
Args:
data_path: The path to data file.
source_vocab: the object of GraphemePhonemeEncoder class with encode and
decode functions for symbols from source file.
target_vocab: the object of GraphemePhonemeEncoder class with encode and
decode functions for symbols from target file.
Yields:
dicts with keys "inputs" and "targets", with values being lists of token
ids.
"""
return self.tabbed_generator(data_path, source_vocab, target_vocab, EOS)
def filepattern(self, data_dir, dataset_split, shard=None):
if not (".preprocessed" in dataset_split):
return os.path.join(self._model_dir, dataset_split + ".preprocessed")
return os.path.join(data_dir, dataset_split)
@property
def input_space_id(self):
return 0
@property
def target_space_id(self):
return 0
@property
def num_shards(self):
return 1
@property
def use_subword_tokenizer(self):
return False
@property
def is_character_level(self):
return False
@property
def targeted_vocab_size(self):
return None
@property
def vocab_name(self):
return None
def generate_preprocess_data(self):
"""Generate and save preprocessed data as TFRecord files.
    Uses self.train_path and self.dev_path as the input data files.
Returns:
train_preprocess_path: the path where the preprocessed train data
was saved.
eval_preprocess_path: the path where the preprocessed evaluation data
was saved.
"""
train_preprocess_path = os.path.join(self._model_dir, "train.preprocessed")
eval_preprocess_path = os.path.join(self._model_dir, "eval.preprocessed")
train_gen = self.generator(self.train_path, self.source_vocab,
self.target_vocab)
eval_gen = self.generator(self.dev_path, self.source_vocab,
self.target_vocab)
generate_preprocess_files(train_gen, eval_gen, train_preprocess_path,
eval_preprocess_path)
return train_preprocess_path, eval_preprocess_path
def get_feature_encoders(self, data_dir=None):
if self._encoders is None:
self._encoders = self.feature_encoders()
return self._encoders
def feature_encoders(self):
targets_encoder = self.target_vocab
if self.has_inputs:
inputs_encoder = self.source_vocab
return {"inputs": inputs_encoder, "targets": targets_encoder}
return {"targets": targets_encoder}
def tabbed_generator(self, source_path, source_vocab, target_vocab, eos=None):
r"""Generator for sequence-to-sequence tasks using tabbed files.
Tokens are derived from text files where each line contains both
a source and a target string. The two strings are separated by a tab
character ('\t'). It yields dictionaries of "inputs" and "targets" where
inputs are characters from the source lines converted to integers, and
targets are characters from the target lines, also converted to integers.
Args:
source_path: path to the file with source and target sentences.
source_vocab: a SubwordTextEncoder to encode the source string.
target_vocab: a SubwordTextEncoder to encode the target string.
eos: integer to append at the end of each sequence (default: None).
Yields:
A dictionary {"inputs": source-line, "targets": target-line} where
the lines are integer lists converted from characters in the file lines.
"""
eos_list = [] if eos is None else [eos]
with tf.gfile.GFile(source_path, mode="r") as source_file:
for line_idx, line in enumerate(source_file):
if line:
source, target = split_graphemes_phonemes(line,
cleanup=self.cleanup,
p2g_mode=self.p2g_mode)
if not (source and target):
tf.logging.warning("Invalid data format in line {} in {}:\n"
"{}\nGraphemes and phonemes should be separated by white space."
.format(line_idx, source_path, line))
continue
source_ints = source_vocab.encode(source) + eos_list
target_ints = target_vocab.encode(target) + eos_list
yield {"inputs": source_ints, "targets": target_ints}
def dataset(self,
mode,
data_dir=None,
num_threads=None,
output_buffer_size=None,
shuffle_files=None,
hparams=None,
preprocess=True,
dataset_split=None,
shard=None,
partition_id=0,
num_partitions=1):
"""Build a Dataset for this problem.
Args:
mode: tf.estimator.ModeKeys; determines which files to read from.
data_dir: directory that contains data files.
num_threads: int, number of threads to use for decode and preprocess
Dataset.map calls.
output_buffer_size: int, how many elements to prefetch in Dataset.map
calls.
shuffle_files: whether to shuffle input files. Default behavior (i.e. when
shuffle_files=None) is to shuffle if mode == TRAIN.
hparams: tf.contrib.training.HParams; hparams to be passed to
Problem.preprocess_example and Problem.hparams. If None, will use a
default set that is a no-op.
preprocess: bool, whether to map the Dataset through
Problem.preprocess_example.
dataset_split: tf.estimator.ModeKeys + ["test"], which split to read data
from (TRAIN:"-train", EVAL:"-dev", "test":"-test"). Defaults to mode.
shard: int, if provided, will only read data from the specified shard.
Returns:
Dataset containing dict<feature name, Tensor>.
"""
if dataset_split or (mode in ["train", "eval"]):
      # In case paths to preprocessed files are pointed out, or if train mode
      # is launched, we save the preprocessed data first and then create the
      # dataset from those files.
dataset_split = dataset_split or mode
assert data_dir
if not hasattr(hparams, "data_dir"):
hparams.add_hparam("data_dir", data_dir)
if not hparams.data_dir:
hparams.data_dir = data_dir
# Construct the Problem's hparams so that items within it are accessible
_ = self.get_hparams(hparams)
data_fields, data_items_to_decoders = self.example_reading_spec()
if data_items_to_decoders is None:
data_items_to_decoders = {
field: tf.contrib.slim.tfexample_decoder.Tensor(field)
for field in data_fields}
is_training = mode == tf.estimator.ModeKeys.TRAIN
data_filepattern = self.filepattern(data_dir, dataset_split, shard=shard)
tf.logging.info("Reading data files from %s", data_filepattern)
data_files = tf.contrib.slim.parallel_reader.get_data_files(
data_filepattern)
if shuffle_files or shuffle_files is None and is_training:
random.shuffle(data_files)
else:
      # In case paths to preprocessed files are not pointed out, we create the
      # dataset from a generator object.
eos_list = [] if EOS is None else [EOS]
data_list = []
with tf.gfile.GFile(self.test_path, mode="r") as source_file:
for line in source_file:
if line:
if "\t" in line:
parts = line.split("\t", 1)
source, target = parts[0].strip(), parts[1].strip()
source_ints = self.source_vocab.encode(source) + eos_list
target_ints = self.target_vocab.encode(target) + eos_list
data_list.append({"inputs":source_ints, "targets":target_ints})
else:
source_ints = self.source_vocab.encode(line) + eos_list
data_list.append(generator_utils.to_example(
{"inputs":source_ints}))
gen = Gen(self.generator(self.test_path, self.source_vocab,
self.target_vocab))
dataset = dataset_ops.Dataset.from_generator(gen, tf.string)
preprocess = False
def decode_record(record):
"""Serialized Example to dict of <feature name, Tensor>."""
decoder = tf.contrib.slim.tfexample_decoder.TFExampleDecoder(
data_fields, data_items_to_decoders)
decode_items = list(data_items_to_decoders)
decoded = decoder.decode(record, items=decode_items)
return dict(zip(decode_items, decoded))
def _preprocess(example):
"""Whether preprocess data into required format."""
example = self.preprocess_example(example, mode, hparams)
self.maybe_reverse_features(example)
self.maybe_copy_features(example)
return example
dataset = (tf.data.Dataset.from_tensor_slices(data_files)
.interleave(lambda x:
tf.data.TFRecordDataset(x).map(decode_record,
num_parallel_calls=4),
cycle_length=4, block_length=16))
if preprocess:
dataset = dataset.map(_preprocess, num_parallel_calls=4)
return dataset
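def _example_preprocess(model_dir="/tmp/g2p_model", train_path="cmudict.dic"):
  """Illustrative sketch (added; not part of the original module).

  The two paths above are placeholders. It shows the flow described in the
  docstrings: build the problem from a pronunciation dictionary file and
  write the TFRecord files used for training and evaluation.
  """
  g2p = GraphemeToPhonemeProblem(model_dir, train_path=train_path)
  train_preprocessed, eval_preprocessed = g2p.generate_preprocess_data()
  return train_preprocessed, eval_preprocessed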
class Gen:
"""Generator class for dataset creation.
  Function dataset_ops.Dataset.from_generator() requires a callable generator
  object."""
def __init__(self, gen):
""" Initialize generator."""
self._gen = gen
def __call__(self):
for case in self._gen:
source_ints = case["inputs"]
target_ints = case["targets"]
yield generator_utils.to_example({"inputs":source_ints,
"targets":target_ints})
def generate_preprocess_files(train_gen, dev_gen, train_preprocess_path,
dev_preprocess_path):
"""Generate cases from a generators and save as TFRecord files.
Generated cases are transformed to tf.Example protos and saved as TFRecords
in sharded files named output_dir/output_name-00..N-of-00..M=num_shards.
Args:
train_gen: a generator yielding (string -> int/float/str list) train data.
dev_gen: a generator yielding development data.
train_preprocess_path: path to the file where preprocessed train data
will be saved.
dev_preprocess_path: path to the file where preprocessed development data
will be saved.
"""
if dev_gen:
gen_file(train_gen, train_preprocess_path)
gen_file(dev_gen, dev_preprocess_path)
else:
# In case when development generator was not given, we create development
# preprocess file from train generator.
train_writer = tf.python_io.TFRecordWriter(train_preprocess_path)
dev_writer = tf.python_io.TFRecordWriter(dev_preprocess_path)
line_counter = 1
for case in train_gen:
sequence_example = generator_utils.to_example(case)
if line_counter % 20 == 0:
dev_writer.write(sequence_example.SerializeToString())
else:
train_writer.write(sequence_example.SerializeToString())
line_counter += 1
train_writer.close()
dev_writer.close()
def gen_file(generator, output_file_path):
"""Generate cases from generator and save as TFRecord file.
Args:
generator: a generator yielding (string -> int/float/str list) data.
output_file_path: path to the file where preprocessed data will be saved.
"""
writer = tf.python_io.TFRecordWriter(output_file_path)
for case in generator:
sequence_example = generator_utils.to_example(case)
writer.write(sequence_example.SerializeToString())
writer.close()
def create_data_files(init_train_path, init_dev_path, init_test_path,
cleanup=False, p2g_mode=False):
"""Create train, development and test data files from initial data files
in case when not provided development or test data files or active cleanup
flag.
Args:
init_train_path: path to the train data file.
init_dev_path: path to the development data file.
init_test_path: path to the test data file.
cleanup: flag indicating whether to cleanup datasets from stress and
comments.
Returns:
train_path: path to the new train data file generated from initially
provided data.
dev_path: path to the new development data file generated from initially
provided data.
test_path: path to the new test data file generated from initially
provided data.
"""
train_path, dev_path, test_path = init_train_path, init_dev_path,\
init_test_path
if (init_dev_path and init_test_path and os.path.exists(init_dev_path) and
os.path.exists(init_test_path)):
if not cleanup:
return init_train_path, init_dev_path, init_test_path
else:
train_path = init_train_path + ".part.train"
if init_dev_path:
if not os.path.exists(init_dev_path):
raise IOError("File {} not found.".format(init_dev_path))
else:
dev_path = init_train_path + ".part.dev"
if init_test_path:
if not os.path.exists(init_test_path):
raise IOError("File {} not found.".format(init_test_path))
else:
test_path = init_train_path + ".part.test"
if cleanup:
train_path += ".cleanup"
dev_path += ".cleanup"
test_path += ".cleanup"
train_dic, dev_dic, test_dic = OrderedDict(), OrderedDict(), OrderedDict()
source_dic = collect_pronunciations(source_path=init_train_path,
cleanup=cleanup, p2g_mode=p2g_mode)
if init_dev_path:
dev_dic = collect_pronunciations(source_path=init_dev_path,
cleanup=cleanup, p2g_mode=p2g_mode)
if init_test_path:
test_dic = collect_pronunciations(source_path=init_test_path,
cleanup=cleanup, p2g_mode=p2g_mode)
  # Split the dictionary into train, validation and test parts (if not assigned).
for word_counter, (word, pronunciations) in enumerate(source_dic.items()):
if word_counter % 20 == 19 and not init_dev_path:
dev_dic[word] = pronunciations
elif ((word_counter % 20 == 18 or word_counter % 20 == 17) and
not init_test_path):
test_dic[word] = pronunciations
else:
train_dic[word] = pronunciations
save_dic(train_dic, train_path)
if not init_dev_path or cleanup:
save_dic(dev_dic, dev_path)
if not init_test_path or cleanup:
save_dic(test_dic, test_path)
return train_path, dev_path, test_path
def collect_pronunciations(source_path, cleanup=False, p2g_mode=False):
"""Create dictionary mapping word to its different pronounciations.
Args:
source_path: path to the data file;
cleanup: flag indicating whether to cleanup datasets from stress and
comments.
Returns:
dic: dictionary mapping word to its pronunciations.
"""
dic = OrderedDict()
with tf.gfile.GFile(source_path, mode="r") as source_file:
    for line_idx, line in enumerate(source_file):
      if line:
        source, target = split_graphemes_phonemes(line, cleanup=cleanup,
                                                   p2g_mode=p2g_mode)
        if not (source and target):
          tf.logging.warning("Invalid data format in line {} in {}:\n"
              "{}\nGraphemes and phonemes should be separated by white space."
              .format(line_idx, source_path, line))
continue
if source in dic:
dic[source].append(target)
else:
dic[source] = [target]
return dic
def split_graphemes_phonemes(input_line, cleanup=False, p2g_mode=False):
"""Split line into graphemes and phonemes.
Args:
input_line: raw input line;
cleanup: flag indicating whether to cleanup datasets from stress and
comments.
Returns:
graphemes: graphemes string;
phonemes: phonemes string.
"""
line = input_line
if cleanup:
clean_pattern = re.compile(r"(\[.*\]|\{.*\}|\(.*\)|#.*)")
stress_pattern = re.compile(r"(?<=[a-zA-Z])\d+")
line = re.sub(clean_pattern, r"", line)
line = re.sub(stress_pattern, r"", line)
items = line.split()
source, target = None, None
if len(items) > 1:
if not p2g_mode:
source, target = items[0].strip(), " ".join(items[1:]).strip()
else:
source, target = " ".join(items[:-1]).strip(), items[-1].strip()
return source, target
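def _example_split_graphemes_phonemes():
  """Illustrative sketch (added; not part of the original module)."""
  # Grapheme-to-phoneme direction: the first token is the word.
  assert split_graphemes_phonemes("HELLO HH AH L OW") == ("HELLO", "HH AH L OW")
  # Phoneme-to-grapheme direction: the last token is the word.
  assert split_graphemes_phonemes("HH AH L OW HELLO", p2g_mode=True) == (
      "HH AH L OW", "HELLO")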
def save_dic(dic, save_path):
with tf.gfile.GFile(save_path, mode="w") as save_file:
for word, pronunciations in dic.items():
for pron in pronunciations:
save_file.write(word + " " + pron + "\n")
| 37.932821 | 80 | 0.681071 |
7943bdd0f4b5e55f8c3c1ee2f3e60e57f423024b | 57 | py | Python | kerascv/examples/voc_12_download.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | kerascv/examples/voc_12_download.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | kerascv/examples/voc_12_download.py | tanzhenyu/keras-cv | b7208ee25735c492ccc171874e34076111dcf637 | [
"Apache-2.0"
] | null | null | null | import tensorflow_datasets as tfds
tfds.load('voc/2012')
| 19 | 34 | 0.807018 |
7943c192fe8b15cc93391b3595562a1114a7117d | 2,697 | py | Python | src/signer/eth/signer.py | Leibniz137/EthereumBridge | 4b82a68cdc09e5ea79ec2fbf87aa065a2a3a5ffa | [
"MIT"
] | null | null | null | src/signer/eth/signer.py | Leibniz137/EthereumBridge | 4b82a68cdc09e5ea79ec2fbf87aa065a2a3a5ffa | [
"MIT"
] | null | null | null | src/signer/eth/signer.py | Leibniz137/EthereumBridge | 4b82a68cdc09e5ea79ec2fbf87aa065a2a3a5ffa | [
"MIT"
] | null | null | null | from threading import Thread, Event
from time import sleep
from src.contracts.ethereum.event_listener import EthEventListener
from src.contracts.ethereum.multisig_wallet import MultisigWallet
from src.signer.eth.impl import EthSignerImpl
from src.util.config import Config
from src.util.crypto_store.crypto_manager import CryptoManagerBase
from src.util.logger import get_logger
class EtherSigner(Thread):
"""
secretETH --> Swap TX --> ETH
On Ethereum the leader monitors the sETH Secret Contract. When it sees a new swap, it will
broadcast a submit transaction on-chain.
On detecting a submit transaction from the event listener, the signer signs and broadcasts
a confirm transaction on-chain. The multisig contract, after receiving a number of confirmations
greater than the set threshold will trigger the transfer of funds
Will first attempt to catch up with unsigned transactions by scanning past events,
and only then will start monitoring new transactions.
The account set here must have enough ETH for all the transactions you're planning on doing
"""
def __init__(
self,
contract: MultisigWallet,
signer: CryptoManagerBase,
dst_network: str,
config: Config,
**kwargs
):
self.account = signer.address
# self.private_key = private_key
self.event_listener = EthEventListener(contract, config)
self.stop_event = Event()
self.logger = get_logger(
db_name=config['db_name'],
logger_name=config.get('logger_name', f"{self.__class__.__name__}-{self.account[0:5]}")
)
self.config = config
self.signer = EthSignerImpl(contract, signer, dst_network, config)
super().__init__(group=None, name=f"{self.__class__.__name__}-{self.account[0:5]}", target=self.run, **kwargs)
self.setDaemon(True) # so tests don't hang
def run(self):
self.logger.info("Starting..")
from_block = self.choose_starting_block()
self.event_listener.register(self.signer.sign, ['Submission'], from_block=from_block)
self.event_listener.start()
while not self.stop_event.is_set():
if not self.event_listener.is_alive():
self.logger.critical("Event listener stopped - stopping signer")
self.stop()
sleep(10)
def stop(self):
self.logger.info("Stopping..")
self.event_listener.stop()
self.stop_event.set()
def choose_starting_block(self) -> int:
"""Returns the block from which we start scanning Ethereum for new tx"""
return int(self.config.get('eth_start_block', 0))
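# Illustrative wiring sketch (added; not part of the original module). The
# contract, signer and config objects come from elsewhere in this code base
# and their construction is only assumed here:
#
#   signer_thread = EtherSigner(contract=multisig_wallet,  # MultisigWallet
#                               signer=crypto_manager,     # a CryptoManagerBase
#                               dst_network="secret",      # assumed network name
#                               config=config)             # Config with db_name etc.
#   signer_thread.start()   # registers for 'Submission' events and signs them
#   ...
#   signer_thread.stop()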
| 38.528571 | 118 | 0.687801 |
7943c1ba5c70403f8e97dfc777520fd4b6abca25 | 201 | py | Python | Coursera/CICCP1/primalidade.py | marcelomiky/python_code | 4b843c78e16c37e981e4adfe47ae974ee0f2ad81 | [
"MIT"
] | 2 | 2020-10-19T13:53:59.000Z | 2021-08-05T19:48:07.000Z | Coursera/CICCP1/primalidade.py | marcelomiky/PythonCodes | 07f0db8019805b3b9567b7b57ddb49b4333a3aa2 | [
"MIT"
] | null | null | null | Coursera/CICCP1/primalidade.py | marcelomiky/PythonCodes | 07f0db8019805b3b9567b7b57ddb49b4333a3aa2 | [
"MIT"
] | null | null | null | x = int(input('Insira um número inteiro positivo:'))
divisores = 0
for i in range(1, x):
if x % i == 0:
divisores += 1
if divisores >= 2:
print('não primo')
else:
print('primo')
| 15.461538 | 52 | 0.572139 |
7943c306dcd128b0eb78bacb2a294f5f1a071b7f | 9,765 | py | Python | docs/conf.py | hthompson6/a10-neutron-lbaas | f1639758cd3abcc6c86c8e6b64dcb0397c359621 | [
"Apache-2.0"
] | 10 | 2015-09-15T05:16:15.000Z | 2020-03-18T02:34:39.000Z | docs/conf.py | hthompson6/a10-neutron-lbaas | f1639758cd3abcc6c86c8e6b64dcb0397c359621 | [
"Apache-2.0"
] | 334 | 2015-02-11T23:45:00.000Z | 2020-02-28T08:58:51.000Z | docs/conf.py | hthompson6/a10-neutron-lbaas | f1639758cd3abcc6c86c8e6b64dcb0397c359621 | [
"Apache-2.0"
] | 24 | 2015-01-13T21:14:45.000Z | 2021-06-02T17:22:14.000Z | # -*- coding: utf-8 -*-
#
# A10 Openstack documentation build configuration file, created by
# sphinx-quickstart on Wed Apr 20 19:30:55 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
import sphinx_rtd_theme
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.intersphinx',
'sphinxcontrib.httpdomain'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'A10 Openstack'
copyright = u'2016, A10 Networks'
author = u'A10 Networks'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u'git this from the version file'
# The full version, including alpha/beta/rc tags.
release = u'git this from the version file'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinx_rtd_theme'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#html_title = u'A10 Openstack vgit this from the version file'
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not None, a 'Last updated on:' timestamp is inserted at every page
# bottom, using the given strftime format.
# The empty string is equivalent to '%b %d, %Y'.
#html_last_updated_fmt = None
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' user can custom change `jieba` dictionary path.
#html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'A10Openstackdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# Latex figure (float) alignment
#'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'A10Openstack.tex', u'A10 Openstack Documentation',
u'A10 Networks', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'a10openstack', u'A10 Openstack Documentation',
[author], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'A10Openstack', u'A10 Openstack Documentation',
author, 'A10Openstack', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'https://docs.python.org/': None}
| 32.989865 | 80 | 0.722171 |
7943c34d5f4d8ec022c72eb4522fb59aef9e7d29 | 6,284 | py | Python | landmark/models/mobilefacenet.py | greedpejo/FER_SPRING | 622d093da579d17e167f2771da4fb3365824cdd3 | [
"MIT"
] | 433 | 2020-04-02T04:24:50.000Z | 2022-03-21T12:57:53.000Z | landmark/models/mobilefacenet.py | greedpejo/FER_SPRING | 622d093da579d17e167f2771da4fb3365824cdd3 | [
"MIT"
] | 35 | 2020-04-09T02:13:52.000Z | 2022-03-07T07:48:10.000Z | landmark/models/mobilefacenet.py | greedpejo/FER_SPRING | 622d093da579d17e167f2771da4fb3365824cdd3 | [
"MIT"
] | 72 | 2020-04-02T21:57:37.000Z | 2022-01-10T02:50:33.000Z | from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, ReLU, Sigmoid, Dropout2d, Dropout, AvgPool2d, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module, Parameter
import torch.nn.functional as F
import torch
import torch.nn as nn
from collections import namedtuple
import math
import pdb
################################## Original Arcface Model #############################################################
class Flatten(Module):
def forward(self, input):
return input.view(input.size(0), -1)
################################## MobileFaceNet #############################################################
class Conv_block(Module):
def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):
super(Conv_block, self).__init__()
self.conv = Conv2d(in_c, out_channels=out_c, kernel_size=kernel, groups=groups, stride=stride, padding=padding, bias=False)
self.bn = BatchNorm2d(out_c)
self.prelu = PReLU(out_c)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.prelu(x)
return x
class Linear_block(Module):
def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1):
super(Linear_block, self).__init__()
self.conv = Conv2d(in_c, out_channels=out_c, kernel_size=kernel, groups=groups, stride=stride, padding=padding, bias=False)
self.bn = BatchNorm2d(out_c)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return x
class Depth_Wise(Module):
def __init__(self, in_c, out_c, residual = False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1):
super(Depth_Wise, self).__init__()
self.conv = Conv_block(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1))
self.conv_dw = Conv_block(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride)
self.project = Linear_block(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1))
self.residual = residual
def forward(self, x):
if self.residual:
short_cut = x
x = self.conv(x)
x = self.conv_dw(x)
x = self.project(x)
if self.residual:
output = short_cut + x
else:
output = x
return output
class Residual(Module):
def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)):
super(Residual, self).__init__()
modules = []
for _ in range(num_block):
modules.append(Depth_Wise(c, c, residual=True, kernel=kernel, padding=padding, stride=stride, groups=groups))
self.model = Sequential(*modules)
def forward(self, x):
return self.model(x)
class GNAP(Module):
def __init__(self, embedding_size):
super(GNAP, self).__init__()
assert embedding_size == 512
self.bn1 = BatchNorm2d(512, affine=False)
self.pool = nn.AdaptiveAvgPool2d((1, 1))
self.bn2 = BatchNorm1d(512, affine=False)
def forward(self, x):
x = self.bn1(x)
x_norm = torch.norm(x, 2, 1, True)
x_norm_mean = torch.mean(x_norm)
weight = x_norm_mean / x_norm
x = x * weight
x = self.pool(x)
x = x.view(x.shape[0], -1)
feature = self.bn2(x)
return feature
class GDC(Module):
def __init__(self, embedding_size):
super(GDC, self).__init__()
self.conv_6_dw = Linear_block(512, 512, groups=512, kernel=(7,7), stride=(1, 1), padding=(0, 0))
self.conv_6_flatten = Flatten()
self.linear = Linear(512, embedding_size, bias=False)
#self.bn = BatchNorm1d(embedding_size, affine=False)
self.bn = BatchNorm1d(embedding_size)
def forward(self, x):
x = self.conv_6_dw(x)
x = self.conv_6_flatten(x)
x = self.linear(x)
x = self.bn(x)
return x
class MobileFaceNet(Module):
def __init__(self, input_size, embedding_size = 512, output_name = "GDC"):
super(MobileFaceNet, self).__init__()
assert output_name in ["GNAP", 'GDC']
assert input_size[0] in [112]
self.conv1 = Conv_block(3, 64, kernel=(3, 3), stride=(2, 2), padding=(1, 1))
self.conv2_dw = Conv_block(64, 64, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
self.conv_23 = Depth_Wise(64, 64, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128)
self.conv_3 = Residual(64, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1))
self.conv_34 = Depth_Wise(64, 128, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=256)
self.conv_4 = Residual(128, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1))
self.conv_45 = Depth_Wise(128, 128, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512)
self.conv_5 = Residual(128, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1))
self.conv_6_sep = Conv_block(128, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0))
if output_name == "GNAP":
self.output_layer = GNAP(512)
else:
self.output_layer = GDC(embedding_size)
self._initialize_weights()
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
if m.bias is not None:
m.bias.data.zero_()
def forward(self, x):
out = self.conv1(x)
out = self.conv2_dw(out)
out = self.conv_23(out)
out = self.conv_3(out)
out = self.conv_34(out)
out = self.conv_4(out)
out = self.conv_45(out)
out = self.conv_5(out)
conv_features = self.conv_6_sep(out)
out = self.output_layer(conv_features)
return out, conv_features
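def _example_forward_pass():
    """Illustrative sketch (added; not part of the original module)."""
    net = MobileFaceNet(input_size=(112, 112), embedding_size=512,
                        output_name="GDC")
    net.eval()                                   # BatchNorm with a batch of one
    dummy = torch.randn(1, 3, 112, 112)          # a single 112x112 RGB image
    embedding, conv_features = net(dummy)
    return embedding.shape, conv_features.shape  # (1, 512) and (1, 512, 7, 7)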
| 39.275 | 175 | 0.58275 |
7943c67c61599df8b5c24b600e482e73be5973fd | 274 | py | Python | users/permissions.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | 1 | 2022-01-30T10:50:14.000Z | 2022-01-30T10:50:14.000Z | users/permissions.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | 47 | 2022-02-02T22:07:28.000Z | 2022-03-30T13:53:37.000Z | users/permissions.py | tgamauf/spritstat | 849526ec8dec46c57194d50ff3b32c16d0cb684a | [
"MIT"
] | null | null | null | from rest_framework import permissions
class IsOwner(permissions.BasePermission):
"""
Custom permission to only allow owners of an object to view or edit it.
"""
def has_object_permission(self, request, view, obj):
return obj.user == request.user
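# Illustrative usage sketch (added; not part of the original module). The view,
# model and serializer names below are assumptions, not code from this project:
#
#   class LocationDetail(generics.RetrieveUpdateDestroyAPIView):
#       queryset = Location.objects.all()
#       serializer_class = LocationSerializer
#       permission_classes = [permissions.IsAuthenticated, IsOwner]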
| 24.909091 | 75 | 0.711679 |
7943c6ce3e9ed2545d9a0ab661efa5022c630c05 | 1,066 | py | Python | pushcast/pcfilter.py | anotherpyr/pushmac | 44e87cf6642c59465fc84c8e564829d33a2006ca | [
"Apache-2.0"
] | null | null | null | pushcast/pcfilter.py | anotherpyr/pushmac | 44e87cf6642c59465fc84c8e564829d33a2006ca | [
"Apache-2.0"
] | null | null | null | pushcast/pcfilter.py | anotherpyr/pushmac | 44e87cf6642c59465fc84c8e564829d33a2006ca | [
"Apache-2.0"
] | null | null | null | '''
Created on Aug 24, 2015
@author: anotherpyr
'''
class SimpleDescription():
def filter(self, lines):
output = u""
append = False
for k in range(0, len(lines)):
line = lines[k]
# Remove excess URLs from descriptions
if line.find("://") < 0:
if append:
output += u" "
else:
append = True
output += line
# if this removed all of the lines
if len(output) < 1:
            # Then fall back to keeping every line, URLs included (could be humorous)
for k in range(0, len(lines)):
if append:
output += u" "
else:
append = True
output += lines[k]
output = output.replace("<p>", "")
output = output.replace("</p>", "")
output = output.replace("<span>", "")
output = output.replace("</span>", "")
        return output
| 27.333333 | 53 | 0.407129 |
7943c7f8c30524dbca6c55e6c847544a19bd29ba | 6,803 | py | Python | lit_nlp/lib/wsgi_app.py | bigprince97/lit | cd221e2e6fa02518ccd2fda17d43074939d952a3 | [
"Apache-2.0"
] | 2 | 2020-08-27T06:12:30.000Z | 2020-08-27T06:13:08.000Z | lit_nlp/lib/wsgi_app.py | screwdriver66/lit | a40ef90b514383fb78b3f8742aea31135445693e | [
"Apache-2.0"
] | null | null | null | lit_nlp/lib/wsgi_app.py | screwdriver66/lit | a40ef90b514383fb78b3f8742aea31135445693e | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
# Lint as: python3
"""Simple WSGI app implementation.
This takes a list of handlers, and creates a WSGI application that can be served
through a variety of methods.
Why not use Flask or something? Historical reasons, and if it ain't broke, don't
fix it.
"""
import mimetypes
import os
import time
import traceback
import wsgiref.handlers
from absl import logging
import six
from six.moves.urllib.parse import urlparse
from werkzeug import wrappers
def _LoadResource(path):
"""Load the resource at given path.
Args:
path: a string resource path.
Returns:
The contents of that resource.
Raises:
ValueError: If the path is not set up correctly.
IOError: If the path is not found, or the resource can't be opened.
"""
try:
with open(path,"rb") as f:
return f.read()
except IOError as e:
logging.warning('IOError %s on path %s', e, path)
raise e
class App(object):
"""Standalone WSGI app that can serve files, etc."""
_TEXTUAL_MIMETYPES = set([
'application/javascript',
'application/json',
'application/json+protobuf',
'image/svg+xml',
'text/css',
'text/csv',
'text/html',
'text/json',
'text/plain',
'text/tab-separated-values',
'text/x-protobuf',
])
def __init__(self, handlers, project_root, index_file='index.html'):
self._handlers = handlers
self._project_root = project_root
self._index_file = index_file
def respond( # pylint: disable=invalid-name
self,
request,
content,
content_type,
code=200,
expires=0,
content_encoding=None):
"""Construct a werkzeug WSGI response object.
Args:
request: A werkzeug Request object. Used mostly to check the
Accept-Encoding header.
content: Payload data as bytes or unicode string (will be UTF-8 encoded).
content_type: Media type only - "charset=utf-8" will be added for text.
code: Numeric HTTP status code to use.
expires: Second duration for browser caching, default 0.
content_encoding: Encoding if content is already encoded, e.g. 'gzip'.
Returns:
A werkzeug Response object (a WSGI application).
"""
if isinstance(content, six.text_type):
content = content.encode('utf-8')
if content_type in self._TEXTUAL_MIMETYPES:
content_type += '; charset=utf-8'
headers = []
headers.append(('Content-Length', str(len(content))))
if content_encoding:
headers.append(('Content-Encoding', content_encoding))
if expires > 0:
e = wsgiref.handlers.format_date_time(time.time() + float(expires))
headers.append(('Expires', e))
headers.append(('Cache-Control', 'private, max-age=%d' % expires))
else:
headers.append(('Expires', '0'))
headers.append(('Cache-Control', 'no-cache, must-revalidate'))
if request.method == 'HEAD':
content = None
return wrappers.Response(
response=content,
status=code,
headers=headers,
content_type=content_type)
def _ServeStaticFile(self, request, path):
"""Serves the static file located at the given path.
Args:
request: A Werkzeug Request object.
path: The path of the static file, relative to the current directory.
Returns:
A Werkzeug Response object.
"""
if not self._PathIsSafe(path):
logging.info('path %s not safe, sending 400', path)
# Traversal attack, so 400.
return self.respond(request, 'Path not safe', 'text/plain', 400)
# Open the file and read it.
try:
contents = _LoadResource(path)
except IOError:
logging.info('path %s not found, sending 404', path)
return self.respond(request, 'Not found', 'text/plain', code=404)
mimetype, content_encoding = mimetypes.guess_type(path)
mimetype = mimetype or 'application/octet-stream'
return self.respond(
request,
contents,
mimetype,
expires=3600,
content_encoding=content_encoding)
def _PathIsSafe(self, path):
"""Check path is safe (stays within current directory).
This is for preventing directory-traversal attacks.
Args:
path: The path to check for safety.
Returns:
True if the given path stays within the current directory, and false
if it would escape to a higher directory. E.g. _path_is_safe('index.html')
returns true, but _path_is_safe('../../../etc/password') returns false.
"""
base = os.path.abspath(os.curdir)
absolute_path = os.path.abspath(path)
prefix = os.path.commonprefix([base, absolute_path])
return prefix == base
def _ServeCustomHandler(self, request, clean_path):
return self._handlers[clean_path](self, request)
def __call__(self, environ, start_response):
"""Implementation of the WSGI interface."""
request = wrappers.Request(environ)
try:
parsed_url = urlparse(request.path)
# Remove a trailing slash, if present.
clean_path = parsed_url.path
if clean_path.endswith('/'):
clean_path = clean_path[:-1]
if clean_path in self._handlers:
return self._ServeCustomHandler(request, clean_path)(environ,
start_response)
else:
is_index = not clean_path or clean_path == '/index.html'
if is_index:
clean_path = os.path.join(self._project_root, self._index_file)
else:
# Strip off the leading forward slash. Don't do it for index because
# in the vulcanized version we use an absolute path.
clean_path = os.path.join(self._project_root, clean_path.lstrip('/'))
response = self._ServeStaticFile(request, clean_path)
except Exception as e: # pylint: disable=broad-except
errors = (str(e), str(traceback.format_exc()))
html_response = (
'<code>Uncaught error: %s <br><br> <code>%s</code></code>' % errors)
logging.error('Uncaught error: %s \n\n %s', *errors)
response = self.respond(request, html_response, 'text/html', 500)
return response(environ, start_response)
| 32.089623 | 80 | 0.657504 |
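# --- Illustrative usage sketch (not part of the original lit_nlp source) ---
# A minimal example of how the App class above could be wired up and served with
# Python's built-in wsgiref server. The handler name, route and project root are
# hypothetical; the only assumptions are what the class itself shows: handlers map
# URL paths to callables taking (app, request) and returning a werkzeug Response,
# and the App class above is assumed to be in scope.
from wsgiref.simple_server import make_server

def _hello_handler(app, request):
    # Reuse App.respond() so Content-Length, charset and caching headers are set consistently.
    return app.respond(request, 'hello from a custom handler', 'text/plain')

demo_app = App(handlers={'/hello': _hello_handler}, project_root='./static')

if __name__ == '__main__':
    with make_server('localhost', 8080, demo_app) as httpd:
        httpd.serve_forever()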
7943c83a843e00548170e79563c3645c1a2669e9 | 2,140 | py | Python | binho/binhoHostAdapter.py | pixelfelon/binho-python-package | 92f4898722f0578973cb672d1387fbea6cb7bb3d | [
"BSD-3-Clause"
] | 4 | 2021-03-11T12:40:27.000Z | 2022-02-01T10:08:20.000Z | binho/binhoHostAdapter.py | pixelfelon/binho-python-package | 92f4898722f0578973cb672d1387fbea6cb7bb3d | [
"BSD-3-Clause"
] | 1 | 2021-11-26T10:20:18.000Z | 2021-11-30T14:25:56.000Z | binho/binhoHostAdapter.py | pixelfelon/binho-python-package | 92f4898722f0578973cb672d1387fbea6cb7bb3d | [
"BSD-3-Clause"
] | 2 | 2021-02-28T00:39:39.000Z | 2021-04-05T12:45:56.000Z | from .device import binhoDevice
from .devices import nova # pylint: disable=unused-import
# Ensure that we have access to all Binho host adapter devices. Normally, we'd avoid
# importing an entire namespace, but in this case, this allows us to ensure
# that all board modules are loaded for autoidentification.
from .errors import DeviceNotFoundError
active_connections = {}
def binhoHostAdapter(**board_identifiers):
"""
Attempts to create a new instance of binhoHostAdapter board (sub)class
most applicable to the given device. For example, if the attached
board is a Binho Nova, this will automatically create a
binhoNova object.
Accepts the same arguments as pyusb's usb.find() method, allowing narrowing
to a more specific binhoHostAdapter by e.g. serial number. Like usb.find(), providing
find_all will return a list of all found devices.
    Throws a DeviceNotFoundError if no device is available and find_all is not set.
"""
if not board_identifiers:
board_identifiers["index"] = 0
return binhoDevice.autodetect(board_identifiers)
if (
"port" in board_identifiers
and board_identifiers["port"]
or "deviceID" in board_identifiers
and board_identifiers["deviceID"]
or "index" in board_identifiers
):
return binhoDevice.autodetect(board_identifiers)
if "find_all" in board_identifiers and board_identifiers["find_all"]:
return binhoDevice.autodetect_all(board_identifiers)
raise DeviceNotFoundError
def binhoHostAdapterSingleton(serial=None):
""" Returns a binhoHostAdapter object, re-using an existing object if we already have a connection to the given
binhoHostAdapter. """
# If we already have a binhoHostAdapter with the given serial,
if serial in active_connections:
device = active_connections[serial]
if device.comms.still_connected():
return device
# Otherwise, try to create a new binhoHostAdapter instance.
hostAdapter = binhoHostAdapter(serial_number=serial)
active_connections[serial] = hostAdapter
return hostAdapter
| 37.54386 | 115 | 0.734579 |
7943c84e63e15045ad36ea2542a48f3fd21ce1d5 | 4,118 | py | Python | demo/demo/settings.py | mwj9446/MWJ_python | 48a152803249a62cbedff1a4e7f8367ea861ede5 | [
"MIT"
] | null | null | null | demo/demo/settings.py | mwj9446/MWJ_python | 48a152803249a62cbedff1a4e7f8367ea861ede5 | [
"MIT"
] | null | null | null | demo/demo/settings.py | mwj9446/MWJ_python | 48a152803249a62cbedff1a4e7f8367ea861ede5 | [
"MIT"
] | null | null | null | """
Django settings for demo project.
Generated by 'django-admin startproject' using Django 1.11.11.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
# 加密解密
SECRET_KEY = 'tw9tiw+0=wm=fhy#h5$*1^qz5a!8$oynz-@fql-^$mql4!mm%q'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['106.12.192.168', 'localhost', '0.0.0.0', '127.0.0.1']
# Application definition
# users子应用。apps。子应用的apps.py中的类名
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'users.apps.UsersConfig',
'login.apps.LoginConfig',
'index.apps.IndexConfig',
'booktest.apps.BooktestConfig'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
# 'django.middleware.csrf.CsrfViewMiddleware', # POST 请求
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'demo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'template')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'demo.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
# DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.sqlite3',
# 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
# }
# }
#mysql配置
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'HOST': '106.12.192.168', # 数据库主机
'PORT': 3306, # 数据库端口
'USER': 'root', # 数据库用户名
'PASSWORD': 'maiweijie', # 数据库用户密码
'NAME': 'demo' # 数据库名字
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'zh-hans'
TIME_ZONE = 'Asia/Shanghai'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static')]
# SESSION_ENGINE = 'django.contrib.sessions.backends.db'
# redis 数据库配置
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://:[email protected]:6379/1",
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
}
}
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
| 27.637584 | 91 | 0.673871 |
7943c929266ff7cf0baff3e98fb9bc8a61155826 | 987 | py | Python | hwtHls/platform/opRealizationMeta.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 8 | 2018-09-25T03:28:11.000Z | 2021-12-15T07:44:38.000Z | hwtHls/platform/opRealizationMeta.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 1 | 2020-12-21T10:56:44.000Z | 2020-12-21T10:56:44.000Z | hwtHls/platform/opRealizationMeta.py | Nic30/hwtHls | 1fac6ed128318e698d51e15e9871249ddf243e1c | [
"MIT"
] | 2 | 2018-09-25T03:28:18.000Z | 2021-12-15T10:28:35.000Z |
class OpRealizationMeta():
"""
:ivar in_cycles_offset: number of cycles from component first cycle when the input is accepted
:ivar latency_pre: minimal amount of time until next clock cycle
:ivar latency_post: time required to stabilize output value after clock cycle
:ivar cycles_latency: number of clock cycles required for data to reach output
:ivar cycles_delay: number of cycles required until input can process other data
"""
def __init__(self, cycles_in_offset=0, latency_pre=0.0, latency_post=0.0,
cycles_latency=0, cycles_delay=0):
self.in_cycles_offset = cycles_in_offset
self.latency_pre = latency_pre
self.latency_post = latency_post
self.cycles_latency = cycles_latency
self.cycles_delay = cycles_delay
EMPTY_OP_REALIZATION = OpRealizationMeta()
UNSPECIFIED_OP_REALIZATION = OpRealizationMeta(
latency_pre=None, latency_post=None,
cycles_latency=None, cycles_delay=None)
| 39.48 | 98 | 0.742655 |
7943cae5f0b20dd8f424d31b0b38bc6da0464e63 | 108 | py | Python | yt_dlp/version.py | git-anony-mouse/yt-dlp | 6da22e7d4f1ffcda3f802da3e56ac6e171095388 | [
"Unlicense"
] | 1 | 2021-12-25T10:22:59.000Z | 2021-12-25T10:22:59.000Z | yt_dlp/version.py | git-anony-mouse/yt-dlp | 6da22e7d4f1ffcda3f802da3e56ac6e171095388 | [
"Unlicense"
] | null | null | null | yt_dlp/version.py | git-anony-mouse/yt-dlp | 6da22e7d4f1ffcda3f802da3e56ac6e171095388 | [
"Unlicense"
] | null | null | null | # Autogenerated by devscripts/update-version.py
__version__ = '2021.12.25'
RELEASE_GIT_HEAD = '87e049962'
| 18 | 47 | 0.777778 |
7943cd0717675a774dbe1e28d49b4461b94902a5 | 332 | py | Python | AppPkg/Applications/Python/Python-2.7.2/Lib/test/infinite_reload.py | CEOALT1/RefindPlusUDK | 116b957ad735f96fbb6d80a0ba582046960ba164 | [
"BSD-2-Clause"
] | 2,757 | 2018-04-28T21:41:36.000Z | 2022-03-29T06:33:36.000Z | AppPkg/Applications/Python/Python-2.7.2/Lib/test/infinite_reload.py | CEOALT1/RefindPlusUDK | 116b957ad735f96fbb6d80a0ba582046960ba164 | [
"BSD-2-Clause"
] | 20 | 2019-07-23T15:29:32.000Z | 2022-01-21T12:53:04.000Z | AppPkg/Applications/Python/Python-2.7.2/Lib/test/infinite_reload.py | CEOALT1/RefindPlusUDK | 116b957ad735f96fbb6d80a0ba582046960ba164 | [
"BSD-2-Clause"
] | 449 | 2018-05-09T05:54:05.000Z | 2022-03-30T14:54:18.000Z | # For testing http://python.org/sf/742342, which reports that Python
# segfaults (infinite recursion in C) in the presence of infinite
# reload()ing. This module is imported by test_import.py:test_infinite_reload
# to make sure this doesn't happen any more.
import imp
import infinite_reload
imp.reload(infinite_reload)
| 36.888889 | 79 | 0.771084 |
7943cd2e5d949d4f95cac104cd86bdde812e6cb3 | 1,043 | py | Python | src/config/middleware.py | vaporyorg/safe-config-service | b90d0dce0a528738859d42e90766f621ca37e89c | [
"MIT"
] | null | null | null | src/config/middleware.py | vaporyorg/safe-config-service | b90d0dce0a528738859d42e90766f621ca37e89c | [
"MIT"
] | 40 | 2021-06-14T06:56:59.000Z | 2022-03-21T10:07:27.000Z | src/config/middleware.py | vaporyorg/safe-config-service | b90d0dce0a528738859d42e90766f621ca37e89c | [
"MIT"
] | null | null | null | import logging
import time
from django.http import HttpRequest
class LoggingMiddleware:
def __init__(self, get_response):
self.get_response = get_response
self.logger = logging.getLogger("LoggingMiddleware")
@staticmethod
def get_milliseconds_now():
return int(time.time() * 1000)
def __call__(self, request: HttpRequest):
# before view (and other middleware) are called
milliseconds = self.get_milliseconds_now()
response = self.get_response(request)
# after view is called
if request.resolver_match:
route = (
request.resolver_match.route[1:]
if request.resolver_match
else request.path
)
self.logger.info(
"MT::%s::%s::%s::%d::%s",
request.method,
route,
self.get_milliseconds_now() - milliseconds,
response.status_code,
request.path,
)
return response
| 27.447368 | 60 | 0.571429 |
7943cd72a638ac648c144e3cfa0a08f8b2a8ed55 | 1,594 | py | Python | tests/test_execution.py | bpannier/TahomaProtocol | be78f68be8776134a2923389aa3e31cce445b676 | [
"Apache-2.0"
] | 3 | 2016-08-30T09:02:08.000Z | 2018-01-09T09:35:55.000Z | tests/test_execution.py | bpannier/TahomaProtocol | be78f68be8776134a2923389aa3e31cce445b676 | [
"Apache-2.0"
] | null | null | null | tests/test_execution.py | bpannier/TahomaProtocol | be78f68be8776134a2923389aa3e31cce445b676 | [
"Apache-2.0"
] | null | null | null | import unittest
from tahoma.execution import Execution
from tahoma.eventState import EventState
class TestExecution(unittest.TestCase):
def test_parse(self):
data = {
"startTime": 1448329821365,
"owner": "[email protected]",
"actionGroup": {
"label": "DeviceName - Action - AppDevice",
"shortcut": False,
"actions": [{
"deviceURL": "io://1234-1234-1234/12345678",
"commands": [{
"type": 1,
"name": "setClosure",
"parameters": [21]
}]
}]
},
"id": "12345678-1234-1234-1234-1234567890",
"executionType": "Immediate execution",
"executionSubType": "MANUAL_CONTROL",
"state": "NOT_TRANSMITTED"
}
exe = Execution(data)
self.assertEqual(exe.startTime, 1448329821365)
self.assertEqual(exe.id, "12345678-1234-1234-1234-1234567890")
self.assertEqual(exe.state, EventState.NotTransmitted)
self.assertEqual(exe.name, "DeviceName - Action - AppDevice")
self.assertEqual(len(exe.actions), 1 )
self.assertEqual(exe.actions[0].deviceURL, "io://1234-1234-1234/12345678")
self.assertEqual(len(exe.actions[0].commands), 1)
self.assertEqual(exe.actions[0].commands[0].name, "setClosure")
self.assertEqual(len(exe.actions[0].commands[0].parameter), 1)
self.assertEqual(exe.actions[0].commands[0].parameter[0], 21)
| 36.227273 | 82 | 0.556462 |
7943cda13881dc8f37bbbdde6b0f2c3c81b4c748 | 515 | py | Python | env/lib/python3.8/site-packages/plotly/validators/parcoords/line/colorbar/_ticklen.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 76 | 2020-07-06T14:44:05.000Z | 2022-02-14T15:30:21.000Z | env/lib/python3.8/site-packages/plotly/validators/parcoords/line/colorbar/_ticklen.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 11 | 2020-08-09T02:30:14.000Z | 2022-03-12T00:50:14.000Z | env/lib/python3.8/site-packages/plotly/validators/parcoords/line/colorbar/_ticklen.py | acrucetta/Chicago_COVI_WebApp | a37c9f492a20dcd625f8647067394617988de913 | [
"MIT",
"Unlicense"
] | 11 | 2020-07-12T16:18:07.000Z | 2022-02-05T16:48:35.000Z | import _plotly_utils.basevalidators
class TicklenValidator(_plotly_utils.basevalidators.NumberValidator):
def __init__(
self, plotly_name="ticklen", parent_name="parcoords.line.colorbar", **kwargs
):
super(TicklenValidator, self).__init__(
plotly_name=plotly_name,
parent_name=parent_name,
edit_type=kwargs.pop("edit_type", "colorbars"),
min=kwargs.pop("min", 0),
role=kwargs.pop("role", "style"),
**kwargs
)
| 32.1875 | 84 | 0.629126 |
7943ce3232027e9c0568463f9b27b52d0c14aca9 | 2,493 | py | Python | jax/abstract_arrays.py | aldragan0/jax | 3ddd3905a4c8f59480c43c24bb1f9374238bb2a0 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-07-08T16:58:56.000Z | 2021-07-08T16:58:56.000Z | jax/abstract_arrays.py | aldragan0/jax | 3ddd3905a4c8f59480c43c24bb1f9374238bb2a0 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | jax/abstract_arrays.py | aldragan0/jax | 3ddd3905a4c8f59480c43c24bb1f9374238bb2a0 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-07-17T18:17:31.000Z | 2020-07-17T18:17:31.000Z | # Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from functools import partial
import numpy as np
from . import ad_util
from . import core
from . import dtypes
_DIMENSION_TYPES = core._DIMENSION_TYPES
UnshapedArray = core.UnshapedArray
ShapedArray = core.ShapedArray
ConcreteArray = core.ConcreteArray
AbstractToken = core.AbstractToken
abstract_token = core.abstract_token
canonicalize_shape = core.canonicalize_shape
raise_to_shaped = core.raise_to_shaped
def make_shaped_array(x):
dtype = dtypes.canonicalize_dtype(dtypes.result_type(x))
return ShapedArray(np.shape(x), dtype)
def zeros_like_array(x):
dtype = dtypes.canonicalize_dtype(dtypes.result_type(x))
return zeros_like_shaped_array(ShapedArray(np.shape(x), dtype))
array_types = {np.ndarray, np.bool_,
np.int8, np.int16, np.int32, np.int64,
np.uint8, np.uint16, np.uint32, np.uint64,
dtypes.bfloat16, np.float16, np.float32, np.float64,
np.complex64, np.complex128,
np.longlong}
for t in array_types:
core.pytype_aval_mappings[t] = ConcreteArray
ad_util.jaxval_zeros_likers[t] = zeros_like_array
def zeros_like_shaped_array(aval):
assert isinstance(aval, ShapedArray)
if aval.dtype == dtypes.float0:
return np.zeros(aval.shape, dtypes.float0)
return np.broadcast_to(np.array(0, aval.dtype), aval.shape)
ad_util.aval_zeros_likers[ShapedArray] = zeros_like_shaped_array
core.literalable_types.update(array_types)
def _zeros_like_python_scalar(t, x):
return np.array(0, dtypes.python_scalar_dtypes[t])
def _make_concrete_python_scalar(t, x):
return ConcreteArray(
np.array(x, dtype=dtypes.python_scalar_dtypes[t]),
weak_type=True)
for t in dtypes.python_scalar_dtypes:
core.pytype_aval_mappings[t] = partial(_make_concrete_python_scalar, t)
ad_util.jaxval_zeros_likers[t] = partial(_zeros_like_python_scalar, t)
core.literalable_types.update(dtypes.python_scalar_dtypes.keys())
| 32.376623 | 74 | 0.764541 |
7943ce3f07cb07214649b36ee85a6c003703d9cc | 123 | py | Python | ukb/weak_supervision/numbskull/numbskull/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 19 | 2018-05-30T22:13:17.000Z | 2022-01-18T14:04:40.000Z | ukb/weak_supervision/numbskull/numbskull/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 1 | 2019-08-07T07:29:07.000Z | 2019-08-07T08:54:10.000Z | ukb/weak_supervision/numbskull/numbskull/__init__.py | wi905252/ukb-cardiac-mri | 3177dde898a65b1d7f385b78e4f134de3852bea5 | [
"Apache-2.0"
] | 8 | 2019-07-03T23:19:43.000Z | 2021-11-15T17:09:24.000Z | """TODO."""
from .numbskull import NumbSkull
from .numbskull import main
__all__ = ('numbskull', 'factorgraph', 'timer')
| 17.571429 | 47 | 0.707317 |
7943cfe26e97bc043278d8f170a11cae053eb661 | 5,332 | py | Python | models/pointnet_seg_svm.py | OmidPoursaeed/Self_supervised_Learning_Point_Clouds | 4f684cc761347f329eb967823f80522a8a3aedc0 | [
"MIT"
] | 11 | 2020-12-16T16:27:36.000Z | 2021-12-01T04:07:56.000Z | models/pointnet_seg_svm.py | OmidPoursaeed/Self_supervised_Learning_Point_Clouds | 4f684cc761347f329eb967823f80522a8a3aedc0 | [
"MIT"
] | 2 | 2021-02-09T11:35:01.000Z | 2021-08-06T01:39:42.000Z | models/pointnet_seg_svm.py | OmidPoursaeed/Self_supervised_Learning_Point_Clouds | 4f684cc761347f329eb967823f80522a8a3aedc0 | [
"MIT"
] | 1 | 2021-08-05T14:07:51.000Z | 2021-08-05T14:07:51.000Z | import tensorflow as tf
import numpy as np
import math
import sys
import os
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(BASE_DIR)
sys.path.append(os.path.join(BASE_DIR, '../utils'))
import tf_util
from transform_nets import input_transform_net, feature_transform_net
def placeholder_inputs(batch_size, num_point):
pointclouds_pl = tf.placeholder(tf.float32,
shape=(batch_size, num_point, 3))
labels_pl = tf.placeholder(tf.int32,
shape=(batch_size, num_point))
return pointclouds_pl, labels_pl
def get_model(point_cloud, is_training, bn_decay=None, num_classes=50, use_input_trans=True, use_feature_trans=True):
""" Classification PointNet, input is BxNx3, output BxNx50 """
batch_size = point_cloud.get_shape()[0].value
num_point = point_cloud.get_shape()[1].value
end_points = {}
if use_input_trans:
with tf.variable_scope('transform_net1') as sc:
transform = input_transform_net(point_cloud, is_training, bn_decay, K=3)
point_cloud_transformed = tf.matmul(point_cloud, transform)
else:
point_cloud_transformed = point_cloud
input_image = tf.expand_dims(point_cloud_transformed, -1)
with tf.variable_scope('pointnet_cls_rotation'):
net = tf_util.conv2d(input_image, 64, [1,3],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv1', bn_decay=bn_decay)
net = tf_util.conv2d(net, 64, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv2', bn_decay=bn_decay)
if use_feature_trans:
with tf.variable_scope('transform_net2') as sc:
transform = feature_transform_net(net, is_training, bn_decay, K=64)
end_points['transform'] = transform
net_transformed = tf.matmul(tf.squeeze(net, axis=[2]), transform)
point_feat = tf.expand_dims(net_transformed, [2])
else:
point_feat = tf.expand_dims(net, [2])
with tf.variable_scope('pointnet_cls_rotation'):
net = tf_util.conv2d(point_feat, 64, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv3', bn_decay=bn_decay)
net = tf_util.conv2d(net, 128, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv4', bn_decay=bn_decay)
net = tf_util.conv2d(net, 1024, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv5', bn_decay=bn_decay)
global_feat = tf_util.max_pool2d(net, [num_point,1],
padding='VALID', scope='maxpool')
print(global_feat)
global_feat_expand = tf.tile(global_feat, [1, num_point, 1, 1])
concat_feat = tf.concat([point_feat, global_feat_expand], axis=3)
print(concat_feat)
net = tf_util.conv2d(concat_feat, 512, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv6', bn_decay=bn_decay)
net = tf_util.conv2d(net, 256, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv7', bn_decay=bn_decay)
net = tf_util.conv2d(net, 128, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv8', bn_decay=bn_decay)
net = tf_util.conv2d(net, 128, [1,1],
padding='VALID', stride=[1,1],
bn=True, is_training=is_training,
scope='conv9', bn_decay=bn_decay)
net = tf_util.conv2d(net, num_classes, [1,1],
padding='VALID', stride=[1,1], activation_fn=None,
scope='conv10')
net = tf.squeeze(net, [2]) # BxNxC
return net, end_points
def get_loss(pred, label, end_points, reg_weight=0.001, use_trans_loss=True, use_angle_loss=False):
""" pred: BxNxC,
label: BxN, """
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=pred, labels=label)
classify_loss = tf.reduce_mean(loss)
tf.summary.scalar('classify loss', classify_loss)
if not use_trans_loss:
return classify_loss
# Enforce the transformation as orthogonal matrix
transform = end_points['transform'] # BxKxK
K = transform.get_shape()[1].value
mat_diff = tf.matmul(transform, tf.transpose(transform, perm=[0,2,1]))
mat_diff -= tf.constant(np.eye(K), dtype=tf.float32)
mat_diff_loss = tf.nn.l2_loss(mat_diff)
tf.summary.scalar('mat_loss', mat_diff_loss)
return classify_loss + mat_diff_loss * reg_weight
if __name__=='__main__':
with tf.Graph().as_default():
inputs = tf.zeros((32,1024,3))
outputs = get_model(inputs, tf.constant(True))
print(outputs)
| 43 | 117 | 0.589085 |
7943d003750fc6b2887903922762b0e1e8f657fb | 3,508 | py | Python | middlewear/parallel_proces.py | astronaut71/KU-TII | f46e493bb1ffe83dbcc9aa21deb5a8fb219e088b | [
"MIT"
] | null | null | null | middlewear/parallel_proces.py | astronaut71/KU-TII | f46e493bb1ffe83dbcc9aa21deb5a8fb219e088b | [
"MIT"
] | null | null | null | middlewear/parallel_proces.py | astronaut71/KU-TII | f46e493bb1ffe83dbcc9aa21deb5a8fb219e088b | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
from sys import ps1
import rospy
import numpy as np
from os import system
import time
import threading
import Microcontroller_Manager_Serial as Serial
import IMU_Functions as IMU
import Pressure_Functions as Pressure
import Modem_Functions as Modem
import threading
import time
from time import sleep
from std_msgs.msg import Float32
from std_msgs.msg import String
from beginner_tutorials.srv import AddTwoInts,AddTwoIntsResponse
class RepeatedTimer(object):
def __init__(self, interval, function, *args, **kwargs):
self._timer = None
self.interval = interval
self.function = function
self.args = args
self.kwargs = kwargs
self.is_running = False
self.next_call = time.time()
self.start()
def _run(self):
self.is_running = False
self.start()
self.function(*self.args, **self.kwargs)
def start(self):
if not self.is_running:
self.next_call += self.interval
self._timer = threading.Timer(self.next_call - time.time(), self._run)
self._timer.start()
self.is_running = True
def stop(self):
self._timer.cancel()
self.is_running = False
mutex = threading.Lock()
Communication_Mode_ = 0
pub_pressure = rospy.Publisher('depth',Float32,queue_size=1)
pub_modem = rospy.Publisher('modem_data',Float32,queue_size=1)
P0 = 1.01325 #Default Pressure
def callback(data):
global P0
rospy.init_node('talker', anonymous=True)
mutex.acquire()
while not rospy.is_shutdown():
try:
data_received_pressure = Pressure.Pressure_Get_Final_Values(1,1)
data_received_imu = IMU.IMU_Get_Values(1, 1)
P1 = (np.int16((data_received_pressure[6]<<24) | (data_received_pressure[7]<<16) | (data_received_pressure[8]<<8) | (data_received_pressure[9])))/10000
P0 = (np.int16((data_received_pressure[6]<<24) | (data_received_pressure[7]<<16) | (data_received_pressure[8]<<8) | (data_received_pressure[9])))/10000
P = P1 - P0 # Relative Measured Pressure
pressure = P
pub_pressure.publish(pressure)
except:
print ("pressure not obtained")
def callback_modem(data_in):
data_in = Serial.Serial_Port_Receive_Data(20,0.2)
    if data_in[0] == 91:  # Received data from acoustic modem
        rt = RepeatedTimer(1, data_in, "Received Data")  # it auto-starts, no need of rt.start()
        modem_data = data_in
        pub_modem.publish(modem_data)
try:
sleep(0.3) # your long-running job goes here...
finally:
rt.stop() # better in a try/finally block to make sure the program ends!
def talker():
pub = rospy.Publisher('chatter', String, queue_size=10)
rospy.init_node('talker', anonymous=True)
rate = rospy.Rate(10) # 10hz
while not rospy.is_shutdown():
hello_str = "hello world %s" % rospy.get_time()
rospy.loginfo(hello_str)
pub.publish(hello_str)
rate.sleep()
def handle_add_two_ints(req):
print("Returning [%s + %s = %s]"%(req.a, req.b, (req.a + req.b)))
return AddTwoIntsResponse(req.a + req.b)
def add_two_ints_server():
rospy.init_node('add_two_ints_server')
s = rospy.Service('add_two_ints', AddTwoInts, handle_add_two_ints)
print("Ready to add two ints.")
rospy.spin()
if __name__ == "__main__":
add_two_ints_server() | 28.991736 | 163 | 0.663056 |
7943d1498bbbf0e253d96160d9480e4d052d6cb8 | 17,166 | py | Python | tests/test_attributes_managers.py | srama2512/habitat-sim | 1a0a36451bc0a94c98d9f9fa497b4c3cfc095638 | [
"MIT"
] | 1 | 2022-01-22T09:04:44.000Z | 2022-01-22T09:04:44.000Z | tests/test_attributes_managers.py | Shubodh/habitat-sim | 83b97f42199855e63407d898e20e99780dfc79c0 | [
"MIT"
] | null | null | null | tests/test_attributes_managers.py | Shubodh/habitat-sim | 83b97f42199855e63407d898e20e99780dfc79c0 | [
"MIT"
] | null | null | null | #!/usr/bin/env python3
import magnum as mn
import examples.settings
import habitat_sim
def perform_general_tests(attr_mgr, search_string):
# get size of template library
orig_num_templates = attr_mgr.get_num_templates()
# make sure this is same value as size of template handles list
assert orig_num_templates > 0
# search for attributes with expected string
template_handles = attr_mgr.get_template_handles(search_string)
# verify handle exists
assert len(template_handles) > 0
template_handle = template_handles[0]
# get id from handle
template_id = attr_mgr.get_template_id_by_handle(template_handle)
assert search_string in template_handle
# verify both handle and id exist in the manager
assert attr_mgr.get_library_has_handle(template_handle)
assert attr_mgr.get_library_has_id(template_id)
# verify that access is the same for ID and handle lookup
template0 = attr_mgr.get_template_by_handle(template_handle)
template1 = attr_mgr.get_template_by_id(template_id)
# verify template 0 and template 1 are copies of the same template
assert template0.handle == template1.handle
assert template0.template_id == template1.template_id
# modify template, register, then verify that
# retrieved template is not same as previous template
template0.set("test_key", "template0_test")
template1.set("test_key", "template1_test")
# save modified template
attr_mgr.register_template(template0, template_handle)
# retrieve registered template
template2 = attr_mgr.get_template_by_handle(template_handle)
# verify templates have the same identifiers
assert template1.handle == template2.handle
assert template1.template_id == template2.template_id
# verify the templates hold different data and are not the
# same object
assert template1.get_string("test_key") != template2.get_string("test_key")
# verify 0 and 2 hold same user-set value
assert template0.get_string("test_key") == template2.get_string("test_key")
# change retrieved template, verify it is not same object as template0
template2.set("test_key", "template2_test")
# verify the templates hold different data and are not the
# same object
assert template0.get_string("test_key") != template2.get_string("test_key")
# add new template with specified handle
new_template_handle = "new_template_0"
attr_mgr.register_template(template0, new_template_handle)
# register new template and verify size is greater than original
curr_num_templates = attr_mgr.get_num_templates()
assert curr_num_templates != orig_num_templates
# lock template
attr_mgr.set_template_lock(new_template_handle, True)
# attempt to delete
template3 = attr_mgr.remove_template_by_handle(new_template_handle)
# verify that template was not deleted
assert template3 == None
# unlock template
attr_mgr.set_template_lock(new_template_handle, False)
# remove template that has been unlocked; retrieves removed template
template3 = attr_mgr.remove_template_by_handle(new_template_handle)
# verify not NONE
assert template3 != None
# verify has expected handle
assert template3.handle == new_template_handle
# get new size of library after remove and verify same as original
curr_num_templates = attr_mgr.get_num_templates()
assert curr_num_templates == orig_num_templates
# add many templates with specified handles, then remove them
new_handle_stub = "new_template_"
num_to_add = 10
for i in range(num_to_add):
new_iter_handle = new_handle_stub + str(i)
tmplt_id = attr_mgr.register_template(template3, new_iter_handle)
assert tmplt_id != -1
# lock all added templates
locked_template_handles = attr_mgr.set_lock_by_substring(
True, new_handle_stub, True
)
# verify that the number of added and locked templates are equal
assert num_to_add == len(locked_template_handles)
# attempt to remove templates that are locked
removed_templates = attr_mgr.remove_templates_by_str(new_handle_stub, True)
# verify that no templates were removed that have the new handle stub
assert len(removed_templates) == 0
# unlock all added templates
unlocked_template_handles = attr_mgr.set_lock_by_substring(
False, new_handle_stub, True
)
# verify that the number of added and unlocked templates are equal
assert num_to_add == len(unlocked_template_handles)
# verify lists of names are same
assert sorted(unlocked_template_handles) == sorted(locked_template_handles)
# now delete all templates with handle stub
removed_templates = attr_mgr.remove_templates_by_str(new_handle_stub, True)
# verify that the number of added and removed templates are equal
assert num_to_add == len(removed_templates)
# get new size of library after remove and verify same as original
curr_num_templates = attr_mgr.get_num_templates()
assert curr_num_templates == orig_num_templates
return template0, template3
def perform_add_blank_template_test(attr_mgr, valid_render_handle=None):
# get size of template library
orig_num_templates = attr_mgr.get_num_templates()
# add new blank template
new_template_handle = "new template"
# create new default template, do not register it
new_template0 = attr_mgr.create_new_template(new_template_handle, False)
# change new template field
new_template0.set("test_key", "new_template_test")
# give new template valid render asset handle, otherwise registration might fail
if valid_render_handle is not None:
new_template0.render_asset_handle = valid_render_handle
# add new template
attr_mgr.register_template(new_template0, new_template_handle)
# get new size of library after remove and verify not same as original
curr_num_templates = attr_mgr.get_num_templates()
assert curr_num_templates > orig_num_templates
# verify new template added properly
new_template1 = attr_mgr.get_template_by_handle(new_template_handle)
# verify template 0 and template 1 are copies of the same template
assert new_template0.handle == new_template1.handle
assert new_template0.template_id == new_template1.template_id
assert new_template0.get_string("test_key") == new_template1.get_string("test_key")
# remove newly added default template
new_template2 = attr_mgr.remove_template_by_handle(new_template_handle)
# verify added template was one removed
assert new_template0.handle == new_template2.handle
assert new_template0.template_id == new_template2.template_id
assert new_template0.get_string("test_key") == new_template2.get_string("test_key")
# test addition of user-configurations and verify values
# create new template, do not register it
new_template_usr = attr_mgr.create_new_template(new_template_handle, False)
# give new template valid render asset handle, otherwise registration might fail
if valid_render_handle is not None:
new_template_usr.render_asset_handle = valid_render_handle
usr_template_handle = "new_usr_cfg_handle"
new_template_usr.handle = usr_template_handle
# get user configs and set key
new_template_usr.set_user_config_val("my_custom_key0", "my_custom_string")
assert (
new_template_usr.get_user_config_string("my_custom_key0") == "my_custom_string"
)
new_template_usr.set_user_config_val("my_custom_key1", True)
assert new_template_usr.get_user_config_bool("my_custom_key1") == True
new_template_usr.set_user_config_val("my_custom_key2", 10)
assert new_template_usr.get_user_config_int("my_custom_key2") == 10
new_template_usr.set_user_config_val("my_custom_key3", 5.8)
assert new_template_usr.get_user_config_double("my_custom_key3") == 5.8
new_template_usr.set_user_config_val("my_custom_key4", mn.Vector3(1.0, -2.8, 3.0))
assert new_template_usr.get_user_config_vec3("my_custom_key4") == mn.Vector3(
1.0, -2.8, 3.0
)
quat_val = mn.Quaternion.rotation(mn.Deg(-115), mn.Vector3.y_axis())
new_template_usr.set_user_config_val("my_custom_key5", quat_val)
assert new_template_usr.num_user_configs == 6
# add new template - should use template-specified name as handle
usr_tmplt_ID = attr_mgr.register_template(new_template_usr, "")
assert usr_tmplt_ID != -1
#
reg_template_usr = attr_mgr.get_template_by_handle(usr_template_handle)
assert reg_template_usr != None
assert reg_template_usr.num_user_configs == new_template_usr.num_user_configs
assert (
reg_template_usr.get_user_config_string("my_custom_key0") == "my_custom_string"
)
assert reg_template_usr.get_user_config_bool("my_custom_key1") == True
assert reg_template_usr.get_user_config_int("my_custom_key2") == 10
assert reg_template_usr.get_user_config_double("my_custom_key3") == 5.8
assert reg_template_usr.get_user_config_vec3("my_custom_key4") == mn.Vector3(
1.0, -2.8, 3.0
)
assert reg_template_usr.get_user_config_quat("my_custom_key5") == quat_val
rmv_template_usr = attr_mgr.remove_template_by_handle(usr_template_handle)
assert rmv_template_usr != None
assert new_template_usr.handle == rmv_template_usr.handle
assert new_template_usr.template_id == rmv_template_usr.template_id
# get new size of library after remove and verify same as original
curr_num_templates = attr_mgr.get_num_templates()
assert curr_num_templates == orig_num_templates
def test_physics_attributes_managers():
cfg_settings = examples.settings.default_sim_settings.copy()
cfg_settings["scene"] = "data/scene_datasets/habitat-test-scenes/van-gogh-room.glb"
cfg_settings["enable_physics"] = True
hab_cfg = examples.settings.make_cfg(cfg_settings)
with habitat_sim.Simulator(hab_cfg) as sim:
# get attribute managers
phys_attr_mgr = sim.get_physics_template_manager()
# perform general tests for this attributes manager
template0, _ = perform_general_tests(
phys_attr_mgr, cfg_settings["physics_config_file"]
)
# verify that physics template matches expected values in file
assert template0.timestep == 0.008
assert template0.simulator == "bullet"
# verify creating new template
perform_add_blank_template_test(phys_attr_mgr)
def test_stage_attributes_managers():
cfg_settings = examples.settings.default_sim_settings.copy()
cfg_settings["scene"] = "data/scene_datasets/habitat-test-scenes/van-gogh-room.glb"
cfg_settings["enable_physics"] = True
hab_cfg = examples.settings.make_cfg(cfg_settings)
with habitat_sim.Simulator(hab_cfg) as sim:
stage_name = cfg_settings["scene"]
# get attribute managers
stage_mgr = sim.get_stage_template_manager()
# perform general tests for this attributes manager
template0, _ = perform_general_tests(stage_mgr, stage_name)
# verify gravity in template is as expected
assert template0.gravity == mn.Vector3(0.0, -9.8, 0.0)
# verify creating new template
perform_add_blank_template_test(stage_mgr, template0.render_asset_handle)
def test_object_attributes_managers():
cfg_settings = examples.settings.default_sim_settings.copy()
cfg_settings["scene"] = "data/scene_datasets/habitat-test-scenes/van-gogh-room.glb"
cfg_settings["enable_physics"] = True
hab_cfg = examples.settings.make_cfg(cfg_settings)
with habitat_sim.Simulator(hab_cfg) as sim:
# get object attribute managers
obj_mgr = sim.get_object_template_manager()
# get object template random handle
rand_obj_handle = obj_mgr.get_random_template_handle()
# perform general tests for object attribute manager
template0, _ = perform_general_tests(obj_mgr, rand_obj_handle)
# verify creating new template
perform_add_blank_template_test(obj_mgr, template0.render_asset_handle)
def perform_asset_attrib_mgr_tests(
attr_mgr, default_attribs, ctor_mod_field, legalVal, illegalVal
):
# get size of template library
orig_num_templates = attr_mgr.get_num_templates()
# make sure this is same value as size of template handles list
assert orig_num_templates > 0
# get old handle - based on how default_attribs was constructed
old_handle = default_attribs.handle
# verify that default_attribs is valid
assert default_attribs.is_valid_template
# modify field values that impact primitive construction, if exist
if "none" != ctor_mod_field:
default_attribs.set(ctor_mod_field, illegalVal)
# verify that this is now an illegal template
assert not (default_attribs.is_valid_template)
# modify to hold legal value
default_attribs.set(ctor_mod_field, legalVal)
# verify that default_attribs is valid
assert default_attribs.is_valid_template
# build new handle reflected in modified template.
# This is only necessary because we are bypassing setters
default_attribs.build_handle()
# verify that new handle is different from old handle
new_handle = default_attribs.handle
assert new_handle != old_handle
# register new template
attr_mgr.register_template(default_attribs)
# verify that both templates now exist in library
assert attr_mgr.get_library_has_handle(new_handle)
assert attr_mgr.get_library_has_handle(old_handle)
# get new and old templates
old_template = attr_mgr.get_template_by_handle(old_handle)
new_template = attr_mgr.get_template_by_handle(new_handle)
# verify they do not hold the same values in the important fields
assert old_template.get_int(ctor_mod_field) != new_template.get_int(ctor_mod_field)
# verify we have more templates than when we started
assert orig_num_templates != attr_mgr.get_num_templates()
# delete new template
deleted_template = attr_mgr.remove_template_by_handle(new_handle)
assert deleted_template != None
# verify we are back where we started
assert orig_num_templates == attr_mgr.get_num_templates()
def test_asset_attributes_managers():
cfg_settings = examples.settings.default_sim_settings.copy()
cfg_settings["scene"] = "data/scene_datasets/habitat-test-scenes/van-gogh-room.glb"
cfg_settings["enable_physics"] = True
hab_cfg = examples.settings.make_cfg(cfg_settings)
with habitat_sim.Simulator(hab_cfg) as sim:
# legal and illegal vals for primitives based on wireframe or solid
legal_mod_val_wf = 64
illegal_mod_val_wf = 25
legal_mod_val_solid = 5
illegal_mod_val_solid = 0
# get object attribute managers
attr_mgr = sim.get_asset_template_manager()
# capsule
print("Test Capsule attributes construction, modification, saving and removal.")
dflt_solid_attribs = attr_mgr.get_default_capsule_template(False)
perform_asset_attrib_mgr_tests(
attr_mgr,
dflt_solid_attribs,
"segments",
legal_mod_val_solid,
illegal_mod_val_solid,
)
dflt_wf_attribs = attr_mgr.get_default_capsule_template(True)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_wf_attribs, "segments", legal_mod_val_wf, illegal_mod_val_wf
)
# cone
print("Test Cone attributes construction, modification, saving and removal.")
dflt_solid_attribs = attr_mgr.get_default_cone_template(False)
perform_asset_attrib_mgr_tests(
attr_mgr,
dflt_solid_attribs,
"segments",
legal_mod_val_solid,
illegal_mod_val_solid,
)
dflt_wf_attribs = attr_mgr.get_default_cone_template(True)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_wf_attribs, "segments", legal_mod_val_wf, illegal_mod_val_wf
)
# cylinder
print(
"Test Cylinder attributes construction, modification, saving and removal."
)
dflt_solid_attribs = attr_mgr.get_default_cylinder_template(False)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_solid_attribs, "segments", 5, illegal_mod_val_solid
)
dflt_wf_attribs = attr_mgr.get_default_cylinder_template(True)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_wf_attribs, "segments", legal_mod_val_wf, illegal_mod_val_wf
)
# UVSphere
print(
"Test UVSphere attributes construction, modification, saving and removal."
)
dflt_solid_attribs = attr_mgr.get_default_UVsphere_template(False)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_solid_attribs, "segments", 5, illegal_mod_val_solid
)
dflt_wf_attribs = attr_mgr.get_default_UVsphere_template(True)
perform_asset_attrib_mgr_tests(
attr_mgr, dflt_wf_attribs, "segments", legal_mod_val_wf, illegal_mod_val_wf
)
| 40.390588 | 88 | 0.743446 |
7943d16639e24add06156081e9fee532c520fc82 | 9,523 | py | Python | geokit/test/test_03_geom.py | ENSYSTRA/geokit | 510ec5c3fe3c034f1dff776c813eb28c6cd07c40 | [
"MIT"
] | null | null | null | geokit/test/test_03_geom.py | ENSYSTRA/geokit | 510ec5c3fe3c034f1dff776c813eb28c6cd07c40 | [
"MIT"
] | null | null | null | geokit/test/test_03_geom.py | ENSYSTRA/geokit | 510ec5c3fe3c034f1dff776c813eb28c6cd07c40 | [
"MIT"
] | null | null | null | from .helpers import MASK_DATA, np, pointInAachen3035, pointsInAachen4326, EPSG3035, EPSG4326, POLY, GEOM, SUB_GEOMS, SUB_GEOM, result
from geokit import geom
import matplotlib.pyplot as plt
import pytest
import pandas as pd
# box
def test_box():
# fun func
b1 = geom.box(0, 0, 5, 10, srs=EPSG3035)
assert np.isclose(b1.Area(), 50)
def test_tile():
# fun func
t1 = geom.tile(xi=4250, yi=2775, zoom=13)
envelope = t1.GetEnvelope()
assert np.isclose(envelope[0], 753363.3507786973)
assert np.isclose(envelope[1], 758255.3205889486)
assert np.isclose(envelope[2], 6457400.14953169)
assert np.isclose(envelope[3], 6462292.119341941)
def test_subTiles():
tiles = list(geom.subTiles(GEOM,
zoom=5,
checkIntersect=False,
asGeom=False))
assert len(tiles) == 4
assert tiles[0] == (16, 12, 5)
assert tiles[1] == (16, 13, 5)
assert tiles[2] == (17, 12, 5)
assert tiles[3] == (17, 13, 5)
tiles = list(geom.subTiles(GEOM,
zoom=7,
checkIntersect=True,
asGeom=False))
assert len(tiles) == 7
assert tiles[0] == (67, 50, 7)
assert tiles[1] == (67, 51, 7)
assert tiles[2] == (67, 52, 7)
assert tiles[3] == (68, 49, 7)
assert tiles[4] == (68, 50, 7)
assert tiles[5] == (68, 51, 7)
assert tiles[6] == (69, 49, 7)
def test_tileize():
geoms = list(geom.tileize(GEOM, zoom=7))
assert np.isclose(geoms[0].Area(), 6185440214.480698)
assert np.isclose(geoms[1].Area(), 22669806295.02369)
assert np.isclose(geoms[2].Area(), 4971343426.690063)
assert np.isclose(geoms[3].Area(), 11085156736.902699)
assert np.isclose(geoms[4].Area(), 60694504952.24364)
assert np.isclose(geoms[5].Area(), 8127832949.697159)
assert np.isclose(geoms[6].Area(), 4469553269.708176)
def test_point():
x, y = pointInAachen3035
# test separate input
p1 = geom.point(x, y, srs=EPSG3035)
assert np.isclose(p1.GetX(), x)
assert np.isclose(p1.GetY(), y)
assert p1.GetSpatialReference().IsSame(EPSG3035)
# test tuple input
p2 = geom.point((x, y), srs=EPSG3035)
assert np.isclose(p2.GetX(), x)
assert np.isclose(p2.GetY(), y)
assert p2.GetSpatialReference().IsSame(EPSG3035)
@pytest.mark.skip("No test implemented for: geom.empty")
def test_empty(): assert False
def test_convertWKT():
g1 = geom.convertWKT(POLY, srs=EPSG4326)
assert np.isclose(g1.Area(), 7.8149999999999995)
assert g1.GetSpatialReference().IsSame(EPSG4326)
def test_polygonizeMatrix():
# test a simple box
boxmatrix = np.array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 0, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=np.int)
g1 = geom.polygonizeMatrix(boxmatrix, shrink=None)
assert np.isclose(g1.geom[0].Area(), 8.0) # polygonizeMatrix: simple area
# polygonizeMatrix: empty srs
assert g1.geom[0].GetSpatialReference() is None
assert g1.value[0] == 1 # polygonizeMatrix: Value retention
# test shrink
g1b = geom.polygonizeMatrix(boxmatrix, shrink=0.0001)
# polygonizeMatrix: shrunk area
assert np.isclose(g1b.geom[0].Area(), 7.99984000)
# test a more complex area
complexmatrix = np.array([[0, 2, 0, 0, 0],
[2, 2, 0, 1, 0],
[0, 0, 0, 1, 1],
[1, 1, 0, 1, 0],
[3, 1, 0, 0, 0]], dtype=np.int)
g2 = geom.polygonizeMatrix(complexmatrix, shrink=None)
assert np.isclose(g2.shape[0], 4) # polygonizeMatrix: geometry count
assert np.isclose(sum([g.Area() for g in g2.geom]),
11.0) # polygonizeMatrix: area"
assert np.isclose(g2.value[0], 2) # polygonizeMatrix: Value retention
# flatten the complex area
g3 = geom.polygonizeMatrix(complexmatrix, flat=True, shrink=None)
assert np.isclose(g3.shape[0], 3) # polygonizeMatrix: geometry count
# polygonizeMatrix: flattened area
assert np.isclose(g3.geom[0].Area(), 7.0)
# set a boundary and srs context
g4 = geom.polygonizeMatrix(
complexmatrix, bounds=(-3, 10, 22, 35), srs=EPSG3035, flat=True, shrink=None)
# polygonizeMatrix: contexted area
assert np.isclose(g4.geom[0].Area(), 175.0)
assert g4.geom[0].GetSpatialReference().IsSame(
EPSG3035) # polygonizeMatrix: contexted srs
def test_polygonizeMask():
# test a simple box
boxmask = np.array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 0, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]], dtype=np.bool)
g1 = geom.polygonizeMask(boxmask, shrink=None)
assert np.isclose(g1.Area(), 8.0) # polygonizeMask: simple area
assert g1.GetSpatialReference() is None # polygonizeMask: empty srs
# test shrink
g1b = geom.polygonizeMask(boxmask, shrink=0.0001)
assert np.isclose(g1b.Area(), 7.99984000) # polygonizeMask: shrunk area
# test a more complex area
complexmask = np.array([[0, 1, 0, 0, 0],
[1, 1, 0, 1, 0],
[0, 0, 0, 1, 1],
[1, 1, 0, 1, 0],
[0, 1, 0, 0, 0]], dtype=np.bool)
g2 = geom.polygonizeMask(complexmask, shrink=None, flat=False)
assert np.isclose(len(g2), 3) # polygonizeMask: geometry count
assert np.isclose(sum([g.Area() for g in g2]),
10.0) # polygonizeMask: area
# flatten the complex area
g3 = geom.polygonizeMask(complexmask, flat=True, shrink=None)
assert np.isclose(g3.Area(), 10.0) # polygonizeMask: flattened area
# set a boundary and srs context
g4 = geom.polygonizeMask(
complexmask, bounds=(-3, 10, 22, 35), srs=EPSG3035, flat=True, shrink=None)
assert np.isclose(g4.Area(), 250.0) # polygonizeMask: contexted area
    assert g4.GetSpatialReference().IsSame(
        EPSG3035)  # polygonizeMask: contexted srs
def test_flatten():
# Overlapping polygons
bounds = [(i, i, i + 2, i + 2) for i in range(5)]
# test basic combination
geomList = [geom.box(b, srs=EPSG4326) for b in bounds]
f1 = geom.flatten(geomList)
assert np.isclose(f1.Area(), 16.0) # flattened area
env = f1.GetEnvelope()
assert np.isclose(env[0], 0)
assert np.isclose(env[1], 6)
assert np.isclose(env[2], 0)
assert np.isclose(env[3], 6)
assert f1.GetSpatialReference().IsSame(EPSG4326) # flattened srs
def test_transform():
# test a single point
pt = geom.point(7, 48, srs=EPSG4326)
t1 = geom.transform(pt, toSRS=EPSG3035)
assert np.isclose(t1.GetX(), 4097075.016)
assert np.isclose(t1.GetY(), 2769703.15423898)
# make a collection of polygons using polygonizeMask
complexmask = np.array([[0, 1, 0, 0, 0],
[1, 1, 0, 1, 0],
[0, 0, 0, 1, 1],
[1, 1, 0, 1, 0],
[0, 1, 0, 0, 0]], dtype=np.bool)
polygons = geom.polygonizeMask(complexmask, bounds=(
6, 45, 11, 50), flat=False, srs=EPSG4326, shrink=None)
t2 = geom.transform(polygons, toSRS='europe_m', segment=0.1)
assert (len(t2) == 3) # "Transform Count
assert t2[0].GetSpatialReference().IsSame(EPSG3035) # "Transform srs
assert np.isclose(sum([t.Area() for t in t2]),
83747886418.48529) # "Transform Area
def test_extractVerticies():
# Test polygon
pts1 = geom.extractVerticies(GEOM)
assert np.isclose(pts1[5, 1], 35.1)
assert pts1.shape == (10, 2)
# Test multipolygon
pts2 = geom.extractVerticies(geom.flatten(SUB_GEOMS))
assert pts2.shape == (12, 2)
# Test linestring
pts3 = geom.extractVerticies(GEOM.Boundary())
assert np.isclose(pts3[5, 1], 35.1)
assert pts3.shape == (10, 2)
# Test multilinestring
assert np.isclose(pts3[5, 1], 35.1)
assert pts3.shape == (10, 2)
# Test Point
pts5 = geom.extractVerticies(geom.point(5, 20))
assert np.isclose(pts5[0, 0], 5)
assert pts5.shape == (1, 2)
def test_drawGeoms():
# Draw single polygon
r = geom.drawGeoms(SUB_GEOM)
plt.savefig(result("drawGeoms-1.png"), dpi=100)
# Draw single linestring
r = geom.drawGeoms(SUB_GEOM.Boundary())
plt.savefig(result("drawGeoms-2.png"), dpi=100)
# Draw a multipolygon
r = geom.drawGeoms(geom.flatten(SUB_GEOMS))
plt.savefig(result("drawGeoms-3.png"), dpi=100)
# Draw a list of polygons and set an MPL argument
r = geom.drawGeoms(SUB_GEOMS, fc='b')
plt.savefig(result("drawGeoms-4.png"), dpi=100)
# Change projection systems
r = geom.drawGeoms(SUB_GEOMS, fc='r', srs=3035)
plt.savefig(result("drawGeoms-5.png"), dpi=100)
# Draw from a dataframe
df = pd.DataFrame(dict(geom=SUB_GEOMS, hats=[1, 2, 3]))
r = geom.drawGeoms(df, srs=3035)
plt.savefig(result("drawGeoms-6.png"), dpi=100)
# Set individual mpl args
df["MPL:hatch"] = ["//", "+", None]
r = geom.drawGeoms(df, srs=3035)
plt.savefig(result("drawGeoms-7.png"), dpi=100)
# Test colorby
r = geom.drawGeoms(df, srs=3035, colorBy="hats")
plt.savefig(result("drawGeoms-8.png"), dpi=100)
assert True
| 33.297203 | 134 | 0.59393 |
7943d166bc52544970d236e9d18710ad17117dec | 807 | bzl | Python | bazel/cc_fuzz_target.bzl | bianpengyuan/opencensus-cpp | 4ffcd168f165d9fc44b4581337275a071ff7a0da | [
"Apache-2.0"
] | 139 | 2017-08-15T01:03:37.000Z | 2022-02-10T11:53:30.000Z | bazel/cc_fuzz_target.bzl | bianpengyuan/opencensus-cpp | 4ffcd168f165d9fc44b4581337275a071ff7a0da | [
"Apache-2.0"
] | 235 | 2017-12-12T09:28:12.000Z | 2022-03-30T06:36:14.000Z | bazel/cc_fuzz_target.bzl | bianpengyuan/opencensus-cpp | 4ffcd168f165d9fc44b4581337275a071ff7a0da | [
"Apache-2.0"
] | 68 | 2017-11-08T02:35:56.000Z | 2022-03-28T21:27:21.000Z | # Copyright 2019, OpenCensus Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Placeholder rule for fuzzing.
Currently, fuzzing is done using the CMake build system.
Please refer to ../opencensus/doc/fuzzing.md
"""
def cc_fuzz_target(name, srcs, corpus, deps):
"""TODO: Implement."""
pass
| 33.625 | 74 | 0.748451 |
7943d1f2c2c0a40a25e16c3b82d6ba25b405da03 | 632 | py | Python | edd/rest/paginators.py | TeselaGen/jbei-edd | 92792fb30bbd504143b2f75bf08d05b141a7ef6f | [
"BSD-3-Clause-LBNL"
] | null | null | null | edd/rest/paginators.py | TeselaGen/jbei-edd | 92792fb30bbd504143b2f75bf08d05b141a7ef6f | [
"BSD-3-Clause-LBNL"
] | null | null | null | edd/rest/paginators.py | TeselaGen/jbei-edd | 92792fb30bbd504143b2f75bf08d05b141a7ef6f | [
"BSD-3-Clause-LBNL"
] | null | null | null | from rest_framework.pagination import PageNumberPagination
from jbei.rest.clients.edd.api import (
DEFAULT_PAGE_SIZE, PAGE_NUMBER_URL_PARAM, PAGE_SIZE_QUERY_PARAM,
)
class ClientConfigurablePagination(PageNumberPagination):
"""
Overrides defaults to enable client-configurable control (up to a limit) of result pagination
by EDD's REST API. Note that specific REST views may override this behavior. See REST_FRAMEWORK
setting in edd.settings.py.
"""
page_size = DEFAULT_PAGE_SIZE
page_size_query_param = PAGE_SIZE_QUERY_PARAM
page_query_param = PAGE_NUMBER_URL_PARAM
max_page_size = 10000
| 35.111111 | 99 | 0.787975 |
7943d3054d3bd7e3d2dd13662f10719a0c4b996a | 99 | py | Python | test/arena1/generator.py | da-x/crumb | ff2bd8a38949c50a4bdb590ea98804b386402a53 | [
"BSD-3-Clause"
] | 1 | 2015-01-22T18:45:32.000Z | 2015-01-22T18:45:32.000Z | test/arena1/generator.py | da-x/crumb | ff2bd8a38949c50a4bdb590ea98804b386402a53 | [
"BSD-3-Clause"
] | null | null | null | test/arena1/generator.py | da-x/crumb | ff2bd8a38949c50a4bdb590ea98804b386402a53 | [
"BSD-3-Clause"
] | null | null | null | import someimport
import sys
someimport.knows_how_to_generate(sys.argv[1], "and-another-dep.txt")
| 19.8 | 68 | 0.808081 |
7943d3bcf77eb7a09f1ac2b821d5e95cc36b4f25 | 1,332 | py | Python | market_maker/lion_fx/update.py | teacup123123/fx-predictor | 5c25b6f0cb5435a2294262f4ac2681b38abb1dac | [
"MIT"
] | null | null | null | market_maker/lion_fx/update.py | teacup123123/fx-predictor | 5c25b6f0cb5435a2294262f4ac2681b38abb1dac | [
"MIT"
] | null | null | null | market_maker/lion_fx/update.py | teacup123123/fx-predictor | 5c25b6f0cb5435a2294262f4ac2681b38abb1dac | [
"MIT"
] | null | null | null | import requests
from quick_query.xe import grab
import os
_dir, _ = os.path.split(__file__)
file = r'data_lionfx_swap.csv' # manually downloaded from https://hirose-fx.co.jp/swap/lionfx_swap.csv
def update_swap_data():
got = requests.get(r'https://hirose-fx.co.jp/swap/lionfx_swap.csv')
string = got.content.decode('shift-jis')
lines = string.splitlines()
lines = [line.strip() for line in lines]
header = lines[0]
header = header.split(',')
header[0] = 'date'
for i in range(len(header) - 1, 0, -1):
if i % 2 == 0:
header[i] = header[i - 1]
for i, (word, type) in enumerate(zip(header, lines[2].split(','))):
        if type.endswith('売り'):  # '売り' marks a sell-side swap column
            header[i] += '_sell'
        elif type.endswith('買い'):  # '買い' marks a buy-side swap column
            header[i] += '_buy'
del lines[1:3]
lines[0] = ','.join(header)
with open('data_lionfx_swap.csv', 'w') as f:
f.write('\n'.join(lines))
def update():
rates = {}
with open(rf'{_dir}/data_available_currencies.txt', 'r') as f:
currencies = f.read(-1).split()
for c in currencies:
rate = grab(c, 'JPY')
rates[c] = rate
with open(rf'{_dir}/data_currency_now.txt', 'w') as f:
f.write('\n'.join(f'{k} {v}' for k, v in rates.items()))
if __name__ == '__main__':
update_swap_data()
| 27.75 | 103 | 0.585586 |
7943d56193072cf119970d9c0e719d7caa7e3e22 | 2,165 | py | Python | test/test_matmul.py | vishalbelsare/pytorch_sparse | efc98089dde782f3600a60265c1013e0f002465d | [
"MIT"
] | 623 | 2018-07-29T10:45:05.000Z | 2022-03-29T04:35:08.000Z | test/test_matmul.py | vishalbelsare/pytorch_sparse | efc98089dde782f3600a60265c1013e0f002465d | [
"MIT"
] | 201 | 2018-08-15T14:11:03.000Z | 2022-03-31T14:14:01.000Z | test/test_matmul.py | vishalbelsare/pytorch_sparse | efc98089dde782f3600a60265c1013e0f002465d | [
"MIT"
] | 101 | 2018-10-14T04:08:11.000Z | 2022-03-23T21:33:37.000Z | from itertools import product
import pytest
import torch
from torch_sparse.matmul import matmul
from torch_sparse.tensor import SparseTensor
import torch_scatter
from .utils import reductions, devices, grad_dtypes
@pytest.mark.parametrize('dtype,device,reduce',
product(grad_dtypes, devices, reductions))
def test_spmm(dtype, device, reduce):
src = torch.randn((10, 8), dtype=dtype, device=device)
src[2:4, :] = 0 # Remove multiple rows.
src[:, 2:4] = 0 # Remove multiple columns.
src = SparseTensor.from_dense(src).requires_grad_()
row, col, value = src.coo()
other = torch.randn((2, 8, 2), dtype=dtype, device=device,
requires_grad=True)
src_col = other.index_select(-2, col) * value.unsqueeze(-1)
expected = torch_scatter.scatter(src_col, row, dim=-2, reduce=reduce)
if reduce == 'min':
expected[expected > 1000] = 0
if reduce == 'max':
expected[expected < -1000] = 0
grad_out = torch.randn_like(expected)
expected.backward(grad_out)
expected_grad_value = value.grad
value.grad = None
expected_grad_other = other.grad
other.grad = None
out = matmul(src, other, reduce)
out.backward(grad_out)
assert torch.allclose(expected, out, atol=1e-2)
assert torch.allclose(expected_grad_value, value.grad, atol=1e-2)
assert torch.allclose(expected_grad_other, other.grad, atol=1e-2)
@pytest.mark.parametrize('dtype,device', product(grad_dtypes, devices))
def test_spspmm(dtype, device):
src = torch.tensor([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=dtype,
device=device)
src = SparseTensor.from_dense(src)
out = matmul(src, src)
assert out.sizes() == [3, 3]
assert out.has_value()
rowptr, col, value = out.csr()
assert rowptr.tolist() == [0, 1, 2, 3]
assert col.tolist() == [0, 1, 2]
assert value.tolist() == [1, 1, 1]
src.set_value_(None)
out = matmul(src, src)
assert out.sizes() == [3, 3]
assert not out.has_value()
rowptr, col, value = out.csr()
assert rowptr.tolist() == [0, 1, 2, 3]
assert col.tolist() == [0, 1, 2]
| 31.376812 | 73 | 0.642032 |
7943d6af152456334b359a441712f2d2fd8afda1 | 3,241 | py | Python | monai/config/deviceconfig.py | ntenenz/MONAI | e7bdea0934ad08ef47de183eaa558942962fbaf4 | [
"Apache-2.0"
] | 2 | 2020-06-23T16:03:45.000Z | 2020-06-25T05:30:45.000Z | monai/config/deviceconfig.py | ntenenz/MONAI | e7bdea0934ad08ef47de183eaa558942962fbaf4 | [
"Apache-2.0"
] | null | null | null | monai/config/deviceconfig.py | ntenenz/MONAI | e7bdea0934ad08ef47de183eaa558942962fbaf4 | [
"Apache-2.0"
] | 1 | 2020-09-14T13:16:01.000Z | 2020-09-14T13:16:01.000Z | # Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from collections import OrderedDict
import monai
import numpy as np
import torch
try:
import ignite
ignite_version = ignite.__version__
del ignite
except (ImportError, AttributeError):
ignite_version = "NOT INSTALLED or UNKNOWN VERSION."
try:
import nibabel
nibabel_version = nibabel.__version__
del nibabel
except (ImportError, AttributeError):
nibabel_version = "NOT INSTALLED or UNKNOWN VERSION."
try:
import skimage
skimage_version = skimage.__version__
del skimage
except (ImportError, AttributeError):
skimage_version = "NOT INSTALLED or UNKNOWN VERSION."
try:
import PIL
PIL_version = PIL.__version__
del PIL
except (ImportError, AttributeError):
PIL_version = "NOT INSTALLED or UNKNOWN VERSION."
try:
import tensorboard
tensorboard_version = tensorboard.__version__
del tensorboard
except (ImportError, AttributeError):
tensorboard_version = "NOT INSTALLED or UNKNOWN VERSION."
def get_config_values():
"""
Read the package versions into a dictionary.
"""
output = OrderedDict()
output["MONAI"] = monai.__version__
output["Python"] = sys.version.replace("\n", " ")
output["Numpy"] = np.version.full_version
output["Pytorch"] = torch.__version__
return output
def get_optional_config_values():
"""
Read the optional package versions into a dictionary.
"""
output = OrderedDict()
output["Pytorch Ignite"] = ignite_version
output["Nibabel"] = nibabel_version
output["scikit-image"] = skimage_version
output["Pillow"] = PIL_version
output["Tensorboard"] = tensorboard_version
return output
def print_config(file=sys.stdout):
"""
Print the package versions to `file`.
Defaults to `sys.stdout`.
"""
for k, v in get_config_values().items():
print(f"{k} version: {v}", file=file, flush=True)
print("\nOptional dependencies:", file=file, flush=True)
for k, v in get_optional_config_values().items():
print(f"{k} version: {v}", file=file, flush=True)
print("\nFor details about installing the optional dependencies, please visit:", file=file, flush=True)
print(
" http://monai.rtfd.io/en/latest/installation.html#installing-the-recommended-dependencies",
file=file,
flush=True,
)
def set_visible_devices(*dev_inds):
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, dev_inds))
def get_torch_version_tuple():
"""
Returns:
        Tuple of ints representing the PyTorch major/minor version.
"""
return tuple([int(x) for x in torch.__version__.split(".")[:2]])
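# Illustrative use of the helpers above (a sketch, not part of the original module):
#   print_config()  # dump mandatory and optional dependency versions
#   if get_torch_version_tuple() >= (1, 6):  # guard version-specific features
#       ...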
| 27.008333 | 107 | 0.703178 |
7943d6e96742628e6817aa56c3cfda2560ecde1a | 7,037 | py | Python | mailchimp_marketing_asyncio/models/rss_options3.py | john-parton/mailchimp-asyncio | 3865ca0867bec8f537dc1e3256aa3a160c00f8a2 | [
"Apache-2.0"
] | null | null | null | mailchimp_marketing_asyncio/models/rss_options3.py | john-parton/mailchimp-asyncio | 3865ca0867bec8f537dc1e3256aa3a160c00f8a2 | [
"Apache-2.0"
] | null | null | null | mailchimp_marketing_asyncio/models/rss_options3.py | john-parton/mailchimp-asyncio | 3865ca0867bec8f537dc1e3256aa3a160c00f8a2 | [
"Apache-2.0"
] | 1 | 2022-03-09T14:52:22.000Z | 2022-03-09T14:52:22.000Z | # coding: utf-8
"""
Mailchimp Marketing API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 3.0.74
Contact: [email protected]
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class RSSOptions3(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'feed_url': 'str',
'frequency': 'str',
'schedule': 'SendingSchedule1',
'last_sent': 'datetime',
'constrain_rss_img': 'bool'
}
attribute_map = {
'feed_url': 'feed_url',
'frequency': 'frequency',
'schedule': 'schedule',
'last_sent': 'last_sent',
'constrain_rss_img': 'constrain_rss_img'
}
def __init__(self, feed_url=None, frequency=None, schedule=None, last_sent=None, constrain_rss_img=None): # noqa: E501
"""RSSOptions3 - a model defined in Swagger""" # noqa: E501
self._feed_url = None
self._frequency = None
self._schedule = None
self._last_sent = None
self._constrain_rss_img = None
self.discriminator = None
self.feed_url = feed_url
self.frequency = frequency
if schedule is not None:
self.schedule = schedule
if last_sent is not None:
self.last_sent = last_sent
if constrain_rss_img is not None:
self.constrain_rss_img = constrain_rss_img
@property
def feed_url(self):
"""Gets the feed_url of this RSSOptions3. # noqa: E501
The URL for the RSS feed. # noqa: E501
:return: The feed_url of this RSSOptions3. # noqa: E501
:rtype: str
"""
return self._feed_url
@feed_url.setter
def feed_url(self, feed_url):
"""Sets the feed_url of this RSSOptions3.
The URL for the RSS feed. # noqa: E501
:param feed_url: The feed_url of this RSSOptions3. # noqa: E501
:type: str
"""
if feed_url is None:
raise ValueError("Invalid value for `feed_url`, must not be `None`") # noqa: E501
self._feed_url = feed_url
@property
def frequency(self):
"""Gets the frequency of this RSSOptions3. # noqa: E501
The frequency of the RSS Campaign. # noqa: E501
:return: The frequency of this RSSOptions3. # noqa: E501
:rtype: str
"""
return self._frequency
@frequency.setter
def frequency(self, frequency):
"""Sets the frequency of this RSSOptions3.
The frequency of the RSS Campaign. # noqa: E501
:param frequency: The frequency of this RSSOptions3. # noqa: E501
:type: str
"""
if frequency is None:
raise ValueError("Invalid value for `frequency`, must not be `None`") # noqa: E501
allowed_values = ["daily", "weekly", "monthly"] # noqa: E501
if frequency not in allowed_values:
raise ValueError(
"Invalid value for `frequency` ({0}), must be one of {1}" # noqa: E501
.format(frequency, allowed_values)
)
self._frequency = frequency
@property
def schedule(self):
"""Gets the schedule of this RSSOptions3. # noqa: E501
:return: The schedule of this RSSOptions3. # noqa: E501
:rtype: SendingSchedule1
"""
return self._schedule
@schedule.setter
def schedule(self, schedule):
"""Sets the schedule of this RSSOptions3.
:param schedule: The schedule of this RSSOptions3. # noqa: E501
:type: SendingSchedule1
"""
self._schedule = schedule
@property
def last_sent(self):
"""Gets the last_sent of this RSSOptions3. # noqa: E501
The date the campaign was last sent. # noqa: E501
:return: The last_sent of this RSSOptions3. # noqa: E501
:rtype: datetime
"""
return self._last_sent
@last_sent.setter
def last_sent(self, last_sent):
"""Sets the last_sent of this RSSOptions3.
The date the campaign was last sent. # noqa: E501
:param last_sent: The last_sent of this RSSOptions3. # noqa: E501
:type: datetime
"""
self._last_sent = last_sent
@property
def constrain_rss_img(self):
"""Gets the constrain_rss_img of this RSSOptions3. # noqa: E501
Whether to add CSS to images in the RSS feed to constrain their width in campaigns. # noqa: E501
:return: The constrain_rss_img of this RSSOptions3. # noqa: E501
:rtype: bool
"""
return self._constrain_rss_img
@constrain_rss_img.setter
def constrain_rss_img(self, constrain_rss_img):
"""Sets the constrain_rss_img of this RSSOptions3.
Whether to add CSS to images in the RSS feed to constrain their width in campaigns. # noqa: E501
:param constrain_rss_img: The constrain_rss_img of this RSSOptions3. # noqa: E501
:type: bool
"""
self._constrain_rss_img = constrain_rss_img
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(RSSOptions3, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, RSSOptions3):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| 29.817797 | 123 | 0.589314 |
7943d713c8b00e74e1599bc3662dc50b21ce2ffe | 162 | py | Python | bin/twigs/polytwigs-1234-hex-2.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | null | null | null | bin/twigs/polytwigs-1234-hex-2.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | null | null | null | bin/twigs/polytwigs-1234-hex-2.py | tiwo/puzzler | 7ad3d9a792f0635f7ec59ffa85fb46b54fd77a7e | [
"Intel"
] | 1 | 2022-01-02T16:54:14.000Z | 2022-01-02T16:54:14.000Z | #!/usr/bin/env python
# $Id$
"""
1506 solutions.
"""
import puzzler
from puzzler.puzzles.polytwigs1234 import Polytwigs1234Hex2
puzzler.run(Polytwigs1234Hex2)
| 13.5 | 59 | 0.765432 |
7943d75a3a6ed5dad7eab7d082799dd15265651d | 2,285 | py | Python | api/tests/opentrons/hardware_control/test_execution_manager.py | anuwrag/opentrons | 28c8d76a19e367c6bd38f5290faaa32abf378715 | [
"Apache-2.0"
] | null | null | null | api/tests/opentrons/hardware_control/test_execution_manager.py | anuwrag/opentrons | 28c8d76a19e367c6bd38f5290faaa32abf378715 | [
"Apache-2.0"
] | null | null | null | api/tests/opentrons/hardware_control/test_execution_manager.py | anuwrag/opentrons | 28c8d76a19e367c6bd38f5290faaa32abf378715 | [
"Apache-2.0"
] | null | null | null | import asyncio
import pytest
from opentrons.hardware_control import (
ExecutionManager,
ExecutionState,
ExecutionCancelledError,
)
async def test_state_machine(loop):
"""
Test that an execution manager's state is RUNNING on init
    and PAUSED when pause is called, unless CANCELLED
"""
exec_mgr = ExecutionManager()
assert await exec_mgr.get_state() == ExecutionState.RUNNING
# passes through on wait_for_is_running if state is RUNNING
await asyncio.wait_for(exec_mgr.wait_for_is_running(), timeout=0.2)
await exec_mgr.pause()
assert await exec_mgr.get_state() == ExecutionState.PAUSED
with pytest.raises(asyncio.TimeoutError):
# should stall on wait_for_is_running when state is PAUSED
await asyncio.wait_for(exec_mgr.wait_for_is_running(), timeout=0.2)
await exec_mgr.resume()
assert await exec_mgr.get_state() == ExecutionState.RUNNING
await exec_mgr.cancel()
assert await exec_mgr.get_state() == ExecutionState.CANCELLED
with pytest.raises(ExecutionCancelledError):
# attempting to pause when CANCELLED should raise
await exec_mgr.pause()
with pytest.raises(ExecutionCancelledError):
# should raise on wait_for_is_running when state is CANCELLED
await asyncio.wait_for(exec_mgr.wait_for_is_running(), timeout=0.2)
await exec_mgr.reset()
assert await exec_mgr.get_state() == ExecutionState.RUNNING
async def test_cancel_tasks(loop):
"""
Test that an execution manager cancels all un-protected
running asyncio Tasks when cancel is called
"""
async def fake_task():
while True:
await asyncio.sleep(1)
exec_mgr = ExecutionManager()
cancellable_task = loop.create_task(fake_task())
await exec_mgr.register_cancellable_task(cancellable_task)
other_task = loop.create_task(fake_task())
# current, cancellable, and other
assert len(asyncio.all_tasks(loop)) == 3
assert len([t for t in asyncio.all_tasks(loop) if t.cancelled()]) == 0
await exec_mgr.cancel()
await asyncio.sleep(0.1)
all_tasks = asyncio.all_tasks(loop)
assert len(all_tasks) == 2 # current and other
assert other_task in all_tasks
assert cancellable_task not in all_tasks
| 30.878378 | 75 | 0.721663 |
7943d7faa6320938f4176c6398be9f9f098c8c85 | 3,334 | py | Python | python/ray/serve/handle.py | carlos-aguayo/ray | fedbdd5dc6a47aa9cba170816f8c0950193b4fd6 | [
"Apache-2.0"
] | null | null | null | python/ray/serve/handle.py | carlos-aguayo/ray | fedbdd5dc6a47aa9cba170816f8c0950193b4fd6 | [
"Apache-2.0"
] | null | null | null | python/ray/serve/handle.py | carlos-aguayo/ray | fedbdd5dc6a47aa9cba170816f8c0950193b4fd6 | [
"Apache-2.0"
] | null | null | null | from typing import Optional, Dict, Any, Union
from ray.serve.context import TaskContext
from ray.serve.router import RequestMetadata
class RayServeHandle:
"""A handle to a service endpoint.
Invoking this endpoint with .remote is equivalent to pinging
an HTTP endpoint.
Example:
>>> handle = serve.get_handle("my_endpoint")
>>> handle
RayServeHandle(
Endpoint="my_endpoint",
Traffic=...
)
>>> handle.remote(my_request_content)
ObjectRef(...)
>>> ray.get(handle.remote(...))
# result
>>> ray.get(handle.remote(let_it_crash_request))
# raises RayTaskError Exception
"""
def __init__(
self,
router_handle,
endpoint_name,
*,
method_name=None,
shard_key=None,
http_method=None,
http_headers=None,
):
self.router_handle = router_handle
self.endpoint_name = endpoint_name
self.method_name = method_name
self.shard_key = shard_key
self.http_method = http_method
self.http_headers = http_headers
def remote(self, request_data: Optional[Union[Dict, Any]] = None,
**kwargs):
"""Issue an asynchrounous request to the endpoint.
Returns a Ray ObjectRef whose results can be waited for or retrieved
using ray.wait or ray.get, respectively.
Returns:
ray.ObjectRef
Input:
request_data(dict, Any): If it's a dictionary, the data will be
available in ``request.json()`` or ``request.form()``. Otherwise,
it will be available in ``request.data``.
``**kwargs``: All keyword arguments will be available in
``request.args``.
"""
request_metadata = RequestMetadata(
self.endpoint_name,
TaskContext.Python,
call_method=self.method_name or "__call__",
shard_key=self.shard_key,
http_method=self.http_method or "GET",
http_headers=self.http_headers or dict(),
)
return self.router_handle.enqueue_request.remote(
request_metadata, request_data, **kwargs)
def options(self,
method_name: Optional[str] = None,
*,
shard_key: Optional[str] = None,
http_method: Optional[str] = None,
http_headers: Optional[Dict[str, str]] = None):
"""Set options for this handle.
Args:
method_name(str): The method to invoke on the backend.
http_method(str): The HTTP method to use for the request.
shard_key(str): A string to use to deterministically map this
request to a backend if there are multiple for this endpoint.
"""
return RayServeHandle(
self.router_handle,
self.endpoint_name,
# Don't override existing method
method_name=self.method_name or method_name,
shard_key=self.shard_key or shard_key,
http_method=self.http_method or http_method,
http_headers=self.http_headers or http_headers,
)
def __repr__(self):
return f"RayServeHandle(endpoint='{self.endpoint_name}')"
| 33.676768 | 79 | 0.592981 |
7943da2f12d911349a210448f76eb686389570b6 | 7,518 | py | Python | fawkes/BoundaryConditions.py | bdevl/PGMCPC | cac2fe4304ae42ef2a0d94219b4349d51e86ab2d | [
"MIT"
] | 3 | 2020-10-23T13:40:56.000Z | 2022-02-10T03:42:52.000Z | fawkes/BoundaryConditions.py | pkmtum/generative-physics-informed-pde | 63ec383da0f2dbf0d8ffbbb44a670e90d07c132e | [
"MIT"
] | null | null | null | fawkes/BoundaryConditions.py | pkmtum/generative-physics-informed-pde | 63ec383da0f2dbf0d8ffbbb44a670e90d07c132e | [
"MIT"
] | null | null | null | import numpy as np
from dolfin import DirichletBC
import dolfin as df
class BoundaryEncodingEnsemble(object):
def __init__(self, boundary_encodings):
self._boundary_encodings = boundary_encodings
def __getitem__(self, item):
return self._boundary_encodings[item]
def __iter__(self):
yield from self._boundary_encodings
class BoundaryEncoding(object):
def __init__(self, dirichlet_encoding, neumann_encoding):
assert isinstance(dirichlet_encoding, DirichletBoundaryEncoding)
assert isinstance(neumann_encoding, NeumannBoundaryEncoding)
self.dirichlet_encoding = dirichlet_encoding
self.neumann_encoding = neumann_encoding
def reconstruct(self):
raise NotImplementedError
class DirichletBoundaryEncoding(object):
def __init__(self, type, data = None):
self.type = type
if data is None:
self._data = dict()
else:
assert isinstance(data, dict)
self._data = data
@property
def data(self):
return self._data
def __getitem__(self, item):
try:
return self._data[item]
except KeyError:
raise KeyError
def __setitem__(self, key, value):
self._data[key] = value
def reconstruct(self, factory):
return factory.reconstruct_dirichlet(self)
class NeumannBoundaryEncoding(object):
def __init__(self, type, data = None):
self.type = type
if data is None:
self._data = dict()
else:
assert isinstance(data, dict)
self._data = data
@property
def data(self):
return self._data
def __getitem__(self, item):
try:
return self._data[item]
except KeyError:
raise KeyError
def __setitem__(self, key, value):
self._data[key] = value
def reconstruct(self, factory):
return factory.reconstruct_neumann(self)
class DirichletSpecification(object):
def __init__(self, expression, domain, component=None, pointwise =False):
self.expression = expression
self.domain = domain
self.component = component
self.pointwise = pointwise
class DirichletBoundaryCondition(object):
def __init__(self, bcs, encoding = None, encoding_type = None, encoding_data = None):
if isinstance(bcs, list):
for bc in bcs:
if not isinstance(bc, DirichletSpecification):
raise TypeError
elif isinstance(bcs, DirichletSpecification):
bcs = [bcs]
else:
raise TypeError
self._bcs = bcs
if encoding is not None:
assert encoding_type is None
assert encoding_data is None
if encoding is not None:
assert isinstance(encoding, DirichletBoundaryEncoding)
self._encoding = encoding
if encoding_data is not None or encoding_type is not None:
assert encoding_data is not None and encoding_type is not None
assert isinstance(encoding_type, str)
assert isinstance(encoding_data, dict)
self._encoding = DirichletBoundaryEncoding(encoding_type, encoding_data)
def encode(self):
if self._encoding is None:
raise NotImplementedError
return self._encoding
def extract(self, V, ReturnFreeDofs = False):
# slow and clumsy
fbcs = self.transfer(V)
dofs = np.array([dof for bc in fbcs for dof in bc.get_boundary_values().keys()], dtype=int)
vals = np.array([val for bc in fbcs for val in bc.get_boundary_values().values()], dtype=float)
dofs, index = np.unique(dofs, return_index=True)
values = vals[index]
if ReturnFreeDofs:
all_dofs = set(V.dofmap().dofs())
free_dofs = np.array(list(all_dofs - set(dofs)), dtype=np.int)
return dofs, values, free_dofs
return dofs, values
def is_homogeneous(self, V):
dofs, values = self.extract(V)
return not any(values)
def transfer(self, V):
fenics_bcs = list()
for bc in self._bcs:
if bc.component is not None:
if not bc.pointwise:
fenics_bcs.append(DirichletBC(V.sub(bc.component), bc.expression, bc.domain))
else:
fenics_bcs.append(DirichletBC(V.sub(bc.component), bc.expression, bc.domain, method='pointwise'))
else:
if not bc.pointwise:
fenics_bcs.append(DirichletBC(V, bc.expression, bc.domain))
else:
fenics_bcs.append(DirichletBC(V, bc.expression, bc.domain, method='pointwise'))
return fenics_bcs
def mark_facets(self, mesh):
facetfct = df.MeshFunction('size_t', mesh, mesh.topology().dim() - 1)
facetfct.set_all(0)
for bc in self._bcs:
bc.domain.mark(facetfct, 1)
return facetfct
def apply(self, X):
raise NotImplementedError
class NeumannSpecification(object):
def __init__(self, type, expression, subdomain = None):
if type not in ['ds', 'dx']:
raise ValueError('Type must either be "ds" or "dx')
self._type = type # e.g. ds
self._subdomain = subdomain
self._expression = expression
@property
def type(self):
return self._type
@property
def subdomain(self):
return self._subdomain
@property
def expression(self):
return self._expression
class NeumannBoundaryCondition(object):
def __init__(self, NeumannSpecifications, encoding = None, encoding_type = None, encoding_data = None):
self._neumman_specifications = NeumannSpecifications
if encoding is not None:
assert isinstance(encoding, NeumannBoundaryEncoding)
self._encoding = encoding
if encoding_data is not None or encoding_type is not None:
assert encoding_data is not None and encoding_type is not None
assert isinstance(encoding_type, str)
assert isinstance(encoding_data, dict)
self._encoding = NeumannBoundaryEncoding(encoding_type, encoding_data)
def encode(self):
if self._encoding is None:
raise NotImplementedError
return self._encoding
def __getitem__(self, ind):
return self._neumman_specifications[ind]
def compile_form(self, V):
mesh = V.mesh()
v = df.TestFunction(V)
form = None
for ns in self._neumman_specifications:
if ns.type == 'dx':
meshfct = df.MeshFunction('size_t', mesh, mesh.topology().dim() , 0)
elif ns.type == 'ds':
meshfct = df.MeshFunction('size_t', mesh, mesh.topology().dim() - 1 , 0)
else:
raise NotImplementedError
meshfct.set_all(0)
if ns.subdomain is None:
ID = 0
else:
ns.subdomain.mark(meshfct, 1)
ID = 1
measure = df.Measure(ns.type, domain=mesh, subdomain_data = meshfct, subdomain_id=ID)
form_ = ns.expression * v * measure
if form is None:
form = form_
else:
form = form + form_
return form
def assemble_flux(self, V):
return df.assemble(self.compile_form(V)).get_local()
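# Illustrative usage sketch (the mesh, expression and subdomain below are hypothetical):
#   V = df.FunctionSpace(mesh, 'CG', 1)
#   ns = NeumannSpecification('ds', df.Constant(1.0), subdomain=some_boundary)
#   flux_vector = NeumannBoundaryCondition([ns]).assemble_flux(V)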
| 27.947955 | 117 | 0.610801 |
7943da73c6636fd9b1de8a31129fd6920c8aeb71 | 4,224 | py | Python | test/test_relationships.py | chalkchisel/neomodel | 5aeef4afaefd8fb52f1541abeb279663b985dfd0 | [
"MIT"
] | 1 | 2019-06-27T13:05:08.000Z | 2019-06-27T13:05:08.000Z | test/test_relationships.py | techdragon/neomodel | 7408a608287402138cf0082a29818f6bc28bc5d4 | [
"MIT"
] | null | null | null | test/test_relationships.py | techdragon/neomodel | 7408a608287402138cf0082a29818f6bc28bc5d4 | [
"MIT"
] | null | null | null | from neomodel import (StructuredNode, RelationshipTo, RelationshipFrom,
Relationship, StringProperty, IntegerProperty, One)
class Person(StructuredNode):
name = StringProperty(unique_index=True)
age = IntegerProperty(index=True)
is_from = RelationshipTo('Country', 'IS_FROM')
knows = Relationship('Person', 'KNOWS')
@property
def special_name(self):
return self.name
def special_power(self):
return "I have no powers"
class Country(StructuredNode):
code = StringProperty(unique_index=True)
inhabitant = RelationshipFrom(Person, 'IS_FROM')
president = RelationshipTo(Person, 'PRESIDENT', cardinality=One)
class SuperHero(Person):
power = StringProperty(index=True)
def special_power(self):
return "I have powers"
def test_actions_on_deleted_node():
u = Person(name='Jim2', age=3).save()
u.delete()
try:
u.is_from.connect(None)
except ValueError:
assert True
else:
assert False
try:
u.is_from.get()
except ValueError:
assert True
else:
assert False
try:
u.save()
except ValueError:
assert True
else:
assert False
def test_bidirectional_relationships():
u = Person(name='Jim', age=3).save()
assert u
de = Country(code='DE').save()
assert de
assert len(u.is_from) == 0
assert not u.is_from
assert u.is_from.__class__.__name__ == 'ZeroOrMore'
u.is_from.connect(de)
assert len(u.is_from) == 1
assert u.is_from
assert u.is_from.is_connected(de)
b = u.is_from.all()[0]
assert b.__class__.__name__ == 'Country'
assert b.code == 'DE'
s = b.inhabitant.all()[0]
assert s.name == 'Jim'
u.is_from.disconnect(b)
assert not u.is_from.is_connected(b)
def test_either_direction_connect():
rey = Person(name='Rey', age=3).save()
sakis = Person(name='Sakis', age=3).save()
rey.knows.connect(sakis)
assert rey.knows.is_connected(sakis)
assert sakis.knows.is_connected(rey)
sakis.knows.connect(rey)
result, meta = sakis.cypher("""START us=node({self}), them=node({them})
MATCH (us)-[r:KNOWS]-(them) RETURN COUNT(r)""",
{'them': rey.__node__._id})
assert int(result[0][0]) == 1
def test_search():
fred = Person(name='Fred', age=13).save()
zz = Country(code='ZZ').save()
zx = Country(code='ZX').save()
zt = Country(code='ZY').save()
fred.is_from.connect(zz)
fred.is_from.connect(zx)
fred.is_from.connect(zt)
result = fred.is_from.search(code='ZX')
assert result[0].code == 'ZX'
def test_custom_methods():
u = Person(name='Joe90', age=13).save()
assert u.special_power() == "I have no powers"
u = SuperHero(name='Joe91', age=13, power='xxx').save()
assert u.special_power() == "I have powers"
assert u.special_name == 'Joe91'
def test_valid_reconnection():
p = Person(name='ElPresidente', age=93).save()
assert p
pp = Person(name='TheAdversary', age=33).save()
assert pp
c = Country(code='CU').save()
assert c
c.president.connect(p)
assert c.president.is_connected(p)
# the coup d'etat
c.president.reconnect(p, pp)
assert c.president.is_connected(pp)
# reelection time
c.president.reconnect(pp, pp)
assert c.president.is_connected(pp)
def test_props_relationship():
u = Person(name='Mar', age=20).save()
assert u
c = Country(code='AT').save()
assert c
c2 = Country(code='LA').save()
assert c2
c.inhabitant.connect(u, properties={'city': 'Thessaloniki'})
assert c.inhabitant.is_connected(u)
# Check if properties were inserted
result, meta = u.cypher('START root=node:Person(name={name})' +
' MATCH root-[r:IS_FROM]->() RETURN r.city', {'name': u.name})
assert result and result[0][0] == 'Thessaloniki'
u.is_from.reconnect(c, c2)
assert u.is_from.is_connected(c2)
# Check if properties are transferred correctly
result, meta = u.cypher('START root=node:Person(name={name})' +
' MATCH root-[r:IS_FROM]->() RETURN r.city', {'name': u.name})
assert result and result[0][0] == 'Thessaloniki'
| 25.142857 | 75 | 0.638258 |
7943dccc97d4f13a7dccbfa4e9488cbda8e12d9b | 975 | py | Python | setup.py | HFOI/napalm-extreme-netiron | 6630cfe8644f302c8a3b2824d33101257e526b55 | [
"Apache-2.0"
] | 1 | 2021-01-24T00:35:42.000Z | 2021-01-24T00:35:42.000Z | setup.py | HFOI/napalm-extreme-netiron | 6630cfe8644f302c8a3b2824d33101257e526b55 | [
"Apache-2.0"
] | null | null | null | setup.py | HFOI/napalm-extreme-netiron | 6630cfe8644f302c8a3b2824d33101257e526b55 | [
"Apache-2.0"
] | 1 | 2019-02-15T09:18:49.000Z | 2019-02-15T09:18:49.000Z | """setup.py file."""
import uuid
from setuptools import setup, find_packages
from pip.req import parse_requirements
__author__ = 'Carles Kishimoto <[email protected]>'
install_reqs = parse_requirements('requirements.txt', session=uuid.uuid1())
reqs = [str(ir.req) for ir in install_reqs]
setup(
name="napalm-extreme-netiron",
version="0.1.0",
packages=find_packages(),
author="Carles Kishimoto",
author_email="[email protected]",
description="Network Automation and Programmability Abstraction Layer with Multivendor support",
classifiers=[
'Topic :: Utilities',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Operating System :: POSIX :: Linux',
'Operating System :: MacOS',
],
url="https://github.com/ckishimo/napalm-extreme-netiron",
include_package_data=True,
install_requires=reqs,
)
| 30.46875 | 100 | 0.687179 |
7943dd58269a428bd4ba2aafc3f9541503ea8dbb | 1,858 | py | Python | setup.py | vertiond/vertcoinhash-python | 679d7191490311b1c2561c19ae45275047f37709 | [
"MIT"
] | 2 | 2022-01-15T22:14:40.000Z | 2022-01-16T03:26:43.000Z | setup.py | vertiond/vertcoinhash-python | 679d7191490311b1c2561c19ae45275047f37709 | [
"MIT"
] | null | null | null | setup.py | vertiond/vertcoinhash-python | 679d7191490311b1c2561c19ae45275047f37709 | [
"MIT"
] | null | null | null | from setuptools import setup, Extension
vertcoinsources = [
'scrypt.c',
'Lyra2RE.c',
'Sponge.c',
'Lyra2.c',
'sha3/blake.c',
'sha3/groestl.c',
'sha3/keccak.c',
'sha3/cubehash.c',
'sha3/bmw.c',
'sha3/skein.c',
'h2.c',
'tiny_sha3/sha3.c'
]
vertcoinincludes = [
'.',
'./sha3',
'./tiny_sha3'
]
vtc_scrypt_hash_module = Extension('vtc_scrypt_new',
sources = vertcoinsources + ['scryptmodule.c'],
extra_compile_args=['-O3', '-msse3'],
include_dirs=vertcoinincludes)
lyra2re_hash_module = Extension('lyra2re_hash',
sources = vertcoinsources + ['lyra2remodule.c'],
include_dirs=vertcoinincludes)
lyra2re2_hash_module = Extension('lyra2re2_hash',
sources = vertcoinsources + ['lyra2re2module.c'],
include_dirs=vertcoinincludes)
lyra2re3_hash_module = Extension('lyra2re3_hash',
sources = vertcoinsources + ['lyra2re3module.c'],
include_dirs=vertcoinincludes)
verthash_module = Extension('verthash',
sources = vertcoinsources + ['verthashmodule.c'],
extra_compile_args=['-std=c99'],
include_dirs=vertcoinincludes)
setup (name = 'vertcoinhash',
version = '1.0.1',
author_email = '[email protected]',
author = 'vertion',
url = 'https://github.com/vertcoin-project/vertcoinhash-python',
description = 'Bindings for proof of work used by Vertcoin',
ext_modules = [verthash_module, lyra2re3_hash_module, lyra2re2_hash_module, lyra2re_hash_module, vtc_scrypt_hash_module])
| 33.781818 | 128 | 0.559203 |
7943dd7e89d341ea7ee291b25a307783c66e26f1 | 1,201 | py | Python | pages/index.py | AmyBeisel/Olympics-Data | ea11f429fbfb9a5554e5e9f1dfffaf89007985da | [
"MIT"
] | null | null | null | pages/index.py | AmyBeisel/Olympics-Data | ea11f429fbfb9a5554e5e9f1dfffaf89007985da | [
"MIT"
] | null | null | null | pages/index.py | AmyBeisel/Olympics-Data | ea11f429fbfb9a5554e5e9f1dfffaf89007985da | [
"MIT"
] | null | null | null | # Imports from 3rd party libraries
import dash
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.express as px
# Imports from this application
from app import app
# 2 column layout. 1st column width = 4/12
# https://dash-bootstrap-components.opensource.faculty.ai/l/components/layout
column1 = dbc.Col(
[
dcc.Markdown(
"""
## What sport would you medal in the Olympics?
            What if you wanted to compete in the Olympics?
            You can use this app: enter your gender, age, weight, and height to see
            which sport you would do best in, compared with the stats of real Olympians.
            This is solely for educational purposes, since most of us do not train day in and day out,
            but it is fun to see!
"""
),
dcc.Link(dbc.Button('Predict Your Sport!', color='primary'), href='/predictions')
],
md=4,
)
column2 = dbc.Col([html.Img(src = 'assets/olympic_rings.jpg', className = 'Olympic Rings')
],
align = 'center'
)
layout = dbc.Row([column1, column2]) | 27.295455 | 92 | 0.647794 |
7943dffdfefb027361412439c748a391eb4fe91b | 943 | py | Python | language_formatters_pre_commit_hooks/__init__.py | greggiacovelli/language-formatters-pre-commit-hooks | f6b82c7eae7b930d613fd20a2fcded0daa60cf3c | [
"Apache-2.0"
] | null | null | null | language_formatters_pre_commit_hooks/__init__.py | greggiacovelli/language-formatters-pre-commit-hooks | f6b82c7eae7b930d613fd20a2fcded0daa60cf3c | [
"Apache-2.0"
] | null | null | null | language_formatters_pre_commit_hooks/__init__.py | greggiacovelli/language-formatters-pre-commit-hooks | f6b82c7eae7b930d613fd20a2fcded0daa60cf3c | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import typing
import pkg_resources
__version__ = pkg_resources.get_distribution("language_formatters_pre_commit_hooks").version
def _get_default_version(tool_name): # pragma: no cover
# type: (typing.Text) -> typing.Text
"""
Read tool_name default version.
The method is intended to be used only from language_formatters_pre_commit_hooks modules
"""
try:
with open(
pkg_resources.resource_filename(
"language_formatters_pre_commit_hooks",
"{tool_name}.version".format(tool_name=tool_name),
)
) as f:
return f.readline().split()[0]
except: # noqa: E722 (allow usage of bare 'except')
raise RuntimeError("No default version found for {tool_name}".format(tool_name=tool_name))
| 31.433333 | 98 | 0.693531 |
7943e04260b0d410eef59a5288687601141893e4 | 6,680 | py | Python | src/ui/ui_cfg_lpspinor.py | ben-github/NXP-MCUBootUtility | 3ff9fa203d667844f83a08c855fef85723d2612e | [
"Apache-2.0"
] | null | null | null | src/ui/ui_cfg_lpspinor.py | ben-github/NXP-MCUBootUtility | 3ff9fa203d667844f83a08c855fef85723d2612e | [
"Apache-2.0"
] | null | null | null | src/ui/ui_cfg_lpspinor.py | ben-github/NXP-MCUBootUtility | 3ff9fa203d667844f83a08c855fef85723d2612e | [
"Apache-2.0"
] | null | null | null | #! /usr/bin/env python
# -*- coding: utf-8 -*-
import wx
import sys
import os
import math
import RTyyyy_uidef
import uidef
import uivar
import uilang
sys.path.append(os.path.abspath(".."))
from win import bootDeviceWin_LpspiNor
from utils import sound
class secBootUiCfgLpspiNor(bootDeviceWin_LpspiNor.bootDeviceWin_LpspiNor):
def __init__(self, parent):
bootDeviceWin_LpspiNor.bootDeviceWin_LpspiNor.__init__(self, parent)
self._setLanguage()
lpspiNorOpt0, lpspiNorOpt1 = uivar.getBootDeviceConfiguration(RTyyyy_uidef.kBootDevice_LpspiNor)
#1. Prepare SPI NOR/EEPROM option block
# bit [31:28] tag, fixed to 0x0c
# bit [27:24] Size, (bytes/4) - 1
# bit [23:20] SPI instance
# bit [19:16] PCS index
# bit [15:12] Flash type, 0-SPI NOR, 1-SPI EEPROM
# bit [11:08] Flash size(Bytes) 0 - 512K, 1-1M, 2-2M, 3-4M, 4-8M
# 13-64K, 14-128K, 15-256K, etc.
# bit [07:04] Sector size (Bytes), 0-4K, 1-8K, 2-32K, 3-64K,
# 4-128K, 5-256K
# bit [03:00] Page size (Bytes) 0-256, 1-512
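        # Worked example (an illustrative value, not taken from the original tool):
        # opt0 = 0xc0100200 decodes under this layout as tag=0xc, SPI instance 1,
        # PCS 0, SPI NOR flash, 2MB total size, 4KB sectors and 256-byte pages.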
self.lpspiNorOpt0 = lpspiNorOpt0
self.lpspiNorOpt1 = lpspiNorOpt1
self._recoverLastSettings()
def _setLanguage( self ):
runtimeSettings = uivar.getRuntimeSettings()
langIndex = runtimeSettings[3]
self.m_notebook_memOpt.SetPageText(0, uilang.kSubLanguageContentDict['panel_memOpt'][langIndex])
self.m_staticText_deviceType.SetLabel(uilang.kSubLanguageContentDict['sText_deviceType'][langIndex])
self.m_staticText_pageSize.SetLabel(uilang.kSubLanguageContentDict['sText_pageSize'][langIndex])
self.m_staticText_sectorSize.SetLabel(uilang.kSubLanguageContentDict['sText_sectorSize'][langIndex])
self.m_staticText_totalSize.SetLabel(uilang.kSubLanguageContentDict['sText_totalSize'][langIndex])
self.m_notebook_spiOpt.SetPageText(0, uilang.kSubLanguageContentDict['panel_spiOpt'][langIndex])
self.m_staticText_spiIndex.SetLabel(uilang.kSubLanguageContentDict['sText_spiIndex'][langIndex])
self.m_staticText_spiPcs.SetLabel(uilang.kSubLanguageContentDict['sText_spiPcs'][langIndex])
self.m_staticText_spiSpeed.SetLabel(uilang.kSubLanguageContentDict['sText_spiSpeed'][langIndex])
self.m_button_ok.SetLabel(uilang.kSubLanguageContentDict['button_lpspinor_ok'][langIndex])
self.m_button_cancel.SetLabel(uilang.kSubLanguageContentDict['button_lpspinor_cancel'][langIndex])
def _recoverLastSettings ( self ):
deviceType = (self.lpspiNorOpt0 & 0x0000F000) >> 12
self.m_choice_deviceType.SetSelection(deviceType)
pageSize = (self.lpspiNorOpt0 & 0x0000000F) >> 0
if pageSize <= 2:
self.m_choice_pageSize.SetSelection(pageSize + 3)
else:
self.m_choice_pageSize.SetSelection(pageSize - 3)
sectorSize = (self.lpspiNorOpt0 & 0x000000F0) >> 4
self.m_choice_sectorSize.SetSelection(sectorSize)
totalSize = (self.lpspiNorOpt0 & 0x00000F00) >> 8
if totalSize <= 11:
self.m_choice_totalSize.SetSelection(totalSize + 4)
else:
self.m_choice_totalSize.SetSelection(totalSize - 12)
spiIndex = (self.lpspiNorOpt0 & 0x00F00000) >> 20
self.m_choice_spiIndex.SetSelection(spiIndex - 1)
spiPcs = (self.lpspiNorOpt0 & 0x000F0000) >> 16
self.m_choice_spiPcs.SetSelection(spiPcs)
spiSpeed = (self.lpspiNorOpt1 & 0x0000000F) >> 0
self.m_choice_spiSpeed.SetSelection(spiSpeed)
def _getDeviceType( self ):
txt = self.m_choice_deviceType.GetString(self.m_choice_deviceType.GetSelection())
if txt == '1bit NOR Flash':
val = 0x0
elif txt == 'EEPROM':
val = 0x1
else:
pass
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFFFF0FFF) | (val << 12)
def _getPageSize( self ):
val = int(self.m_choice_pageSize.GetString(self.m_choice_pageSize.GetSelection()))
val = int(math.log(val, 2))
if val >= 8:
val -= 8
elif val >= 5:
val -= 2
else:
pass
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFFFFFFF0) | (val << 0)
def _getSectorSize( self ):
val = int(self.m_choice_sectorSize.GetString(self.m_choice_sectorSize.GetSelection()))
val = int(math.log(val, 2))
if val <= 3:
val -= 2
else:
val -= 3
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFFFFFF0F) | (val << 4)
def _getTotalSize( self ):
val = int(self.m_choice_totalSize.GetString(self.m_choice_totalSize.GetSelection()))
val = int(math.log(val, 2))
if val >= 9:
val -= 9
elif val >= 5:
val += 7
else:
pass
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFFFFF0FF) | (val << 8)
def _getSpiIndex( self ):
val = int(self.m_choice_spiIndex.GetString(self.m_choice_spiIndex.GetSelection()))
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFF0FFFFF) | (val << 20)
def _getSpiPcs( self ):
val = int(self.m_choice_spiPcs.GetString(self.m_choice_spiPcs.GetSelection()))
self.lpspiNorOpt0 = (self.lpspiNorOpt0 & 0xFFF0FFFF) | (val << 16)
def _getSpiSpeed( self ):
txt = self.m_choice_spiSpeed.GetString(self.m_choice_spiSpeed.GetSelection())
if txt == '20MHz':
val = 0x0
elif txt == '10MHz':
val = 0x1
elif txt == '5MHz':
val = 0x2
elif txt == '2MHz':
val = 0x3
else:
pass
self.lpspiNorOpt1 = (self.lpspiNorOpt1 & 0xFFFFFFF0) | (val << 0)
def callbackOk(self, event):
self._getDeviceType()
self._getPageSize()
self._getSectorSize()
self._getTotalSize()
self._getSpiIndex()
self._getSpiPcs()
self._getSpiSpeed()
uivar.setBootDeviceConfiguration(RTyyyy_uidef.kBootDevice_LpspiNor, self.lpspiNorOpt0, self.lpspiNorOpt1)
uivar.setRuntimeSettings(False)
self.Show(False)
runtimeSettings = uivar.getRuntimeSettings()
sound.playSoundEffect(runtimeSettings[1], runtimeSettings[2], uidef.kSoundEffectFilename_Progress)
def callbackCancel(self, event):
uivar.setRuntimeSettings(False)
self.Show(False)
def callbackClose( self, event ):
uivar.setRuntimeSettings(False)
self.Show(False)
| 40.981595 | 114 | 0.633683 |
7943e2c85e6c542d0c71aa336a259b42d68dd735 | 5,031 | py | Python | Python Code/Double_Pendulum_v1.py | yingranbc/Double-Pendulum-Motion-Animation | 22895c835e023b51be7d8c3e2f82e484e642469c | [
"MIT"
] | 26 | 2018-07-02T20:47:53.000Z | 2022-03-24T08:10:41.000Z | Python Code/Double_Pendulum_v1.py | yingranbc/Double-Pendulum-Motion-Animation | 22895c835e023b51be7d8c3e2f82e484e642469c | [
"MIT"
] | null | null | null | Python Code/Double_Pendulum_v1.py | yingranbc/Double-Pendulum-Motion-Animation | 22895c835e023b51be7d8c3e2f82e484e642469c | [
"MIT"
] | 12 | 2019-04-06T08:18:29.000Z | 2022-03-24T01:47:13.000Z | # -*- coding: utf-8 -*-
"""
Created on Mon Apr 13 20:40:06 2020
@author: Mohammad Asif Zaman
Double pendulum motion animation using FuncAnimation()
"""
from __future__ import print_function
from scipy.integrate import odeint
import time
import math
import numpy as np
import pylab as py
#import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
from matplotlib import pyplot as plt
m1 = 2 # mass of pendulum 1 (in kg)
m2 = 1 # mass of pendulum 2 (in kg)
L1 = 1.4 # length of pendulum 1 (in meter)
L2 = 1 # length of pendulum 2 (in meter)
g = 9.8  # gravitational acceleration constant (m/s^2)
u0 = [-np.pi/2.2, 0, np.pi/1.8, 0] # initial conditions.
# u[0] = angle of the first pendulum
# u[1] = angular velocity of the first pendulum
# u[2] = angle of the second pendulum
# u[3] = angular velocity of the second pendulum
tfinal = 25.0 # Final time. Simulation time = 0 to tfinal.
Nt = 751
t = np.linspace(0, tfinal, Nt)
# Differential equations describing the system
def double_pendulum(u,t,m1,m2,L1,L2,g):
# du = derivatives
# u = variables
# p = parameters
# t = time variable
du = np.zeros(4)
c = np.cos(u[0]-u[2]) # intermediate variables
s = np.sin(u[0]-u[2]) # intermediate variables
du[0] = u[1] # d(theta 1)
du[1] = ( m2*g*np.sin(u[2])*c - m2*s*(L1*c*u[1]**2 + L2*u[3]**2) - (m1+m2)*g*np.sin(u[0]) ) /( L1 *(m1+m2*s**2) )
du[2] = u[3] # d(theta 2)
du[3] = ((m1+m2)*(L1*u[1]**2*s - g*np.sin(u[2]) + g*np.sin(u[0])*c) + m2*L2*u[3]**2*s*c) / (L2 * (m1 + m2*s**2))
return du
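# Note: du[1] and du[3] above are the standard coupled angular accelerations of a
# planar double pendulum (angles measured from the downward vertical), as obtained
# from the Euler-Lagrange equations; they reduce to the single-pendulum equation
# when m2 = 0.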
sol = odeint(double_pendulum, u0, t, args=(m1,m2,L1,L2,g))
#sol[:,0] = u1 = Θ_1
#sol[:,1] = u2 = ω_1
#sol[:,2] = u3 = Θ_2
#sol[:,3] = u4 = ω_2
u0 = sol[:,0] # theta_1
u1 = sol[:,1] # omega 1
u2 = sol[:,2] # theta_2
u3 = sol[:,3] # omega_2
# Mapping from polar to Cartesian
x1 = L1*np.sin(u0); # First Pendulum
y1 = -L1*np.cos(u0);
x2 = x1 + L2*np.sin(u2); # Second Pendulum
y2 = y1 - L2*np.cos(u2);
py.close('all')
py.figure(1)
#py.plot(t,x1)
#py.plot(t,y1)
py.plot(x1,y1,'.',color = '#0077BE',label = 'mass 1')
py.plot(x2,y2,'.',color = '#f66338',label = 'mass 2' )
py.legend()
py.xlabel('x (m)')
py.ylabel('y (m)')
#py.figure(2)
#py.plot(t,x2)
#py.plot(t,y2)
fig = plt.figure()
ax = plt.axes(xlim=(-L1-L2-0.5, L1+L2+0.5), ylim=(-2.5, 1.5))
#line, = ax.plot([], [], lw=2,,markersize = 9, markerfacecolor = "#FDB813",markeredgecolor ="#FD7813")
line1, = ax.plot([], [], 'o-',color = '#d2eeff',markersize = 12, markerfacecolor = '#0077BE',lw=2, markevery=10000, markeredgecolor = 'k')   # trail and marker for pendulum mass 1
line2, = ax.plot([], [], 'o-',color = '#ffebd8',markersize = 12, markerfacecolor = '#f66338',lw=2, markevery=10000, markeredgecolor = 'k')   # trail and marker for pendulum mass 2
line3, = ax.plot([], [], color='k', linestyle='-', linewidth=2)
line4, = ax.plot([], [], color='k', linestyle='-', linewidth=2)
line5, = ax.plot([], [], 'o', color='k', markersize = 10)
time_template = 'Time = %.1f s'
time_string = ax.text(0.05, 0.9, '', transform=ax.transAxes)
ax.get_xaxis().set_ticks([]) # enable this to hide x axis ticks
ax.get_yaxis().set_ticks([]) # enable this to hide y axis ticks
# initialization function: plot the background of each frame
def init():
line1.set_data([], [])
line2.set_data([], [])
line3.set_data([], [])
line4.set_data([], [])
line5.set_data([], [])
time_string.set_text('')
return line3,line4, line5, line1, line2, time_string
# animation function. This is called sequentially
def animate(i):
# Motion trail sizes. Defined in terms of indices. Length will vary with the time step, dt. E.g. 5 indices will span a lower distance if the time step is reduced.
trail1 = 6 # length of motion trail of weight 1
trail2 = 8 # length of motion trail of weight 2
dt = t[2]-t[1] # time step
line1.set_data(x1[i:max(1,i-trail1):-1], y1[i:max(1,i-trail1):-1]) # marker + line of first weight
line2.set_data(x2[i:max(1,i-trail2):-1], y2[i:max(1,i-trail2):-1]) # marker + line of the second weight
line3.set_data([x1[i], x2[i]], [y1[i], y2[i]]) # line connecting weight 2 to weight 1
line4.set_data([x1[i], 0], [y1[i],0]) # line connecting origin to weight 1
line5.set_data([0, 0], [0, 0])
time_string.set_text(time_template % (i*dt))
return line3, line4,line5,line1, line2, time_string
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=Nt, interval=1000*(t[2]-t[1])*0.8, blit=True)
# Comment out the following lines if you do not want to save the animation to file
#anim.save('double_pendulum_animation.mp4', fps=30, extra_args=['-vcodec', 'libx264'])
anim.save('double_pendulum_animation.gif', fps=1.0/(t[2]-t[1]), writer = 'imagemagick')
plt.show()
| 32.044586 | 166 | 0.601272 |
7943e3313fc8d6a8742f2f6043fdc63a65d3cad9 | 2,812 | py | Python | astore.py | anton-muravev/ased | 16ddb70ac3e46556cf49569915df0165a6fb7d16 | [
"Apache-2.0"
] | null | null | null | astore.py | anton-muravev/ased | 16ddb70ac3e46556cf49569915df0165a6fb7d16 | [
"Apache-2.0"
] | null | null | null | astore.py | anton-muravev/ased | 16ddb70ac3e46556cf49569915df0165a6fb7d16 | [
"Apache-2.0"
] | 1 | 2021-12-06T08:42:59.000Z | 2021-12-06T08:42:59.000Z | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
import pickle
from collections.abc import MutableMapping
class Astore(MutableMapping):
"""Class to store the data arrays. Extends the standard dictionary,
but offers additional file operations.
Needs to be initialized before use."""
def __init__(self):
"""Initialize with an empty dictionary."""
self.dicty = dict()
def __getitem__(self, key):
return self.dicty.__getitem__(key)
def __setitem__(self, key, value):
return self.dicty.__setitem__(key, value)
def __delitem__(self, key):
return self.dicty.__delitem__(key)
def keys(self):
return self.dicty.keys()
def __iter__(self):
yield from self.dicty
def __len__(self):
return self.dicty.__len__()
def extract(self, key):
"""Return the safe copy of the given contained variable.
Arguments
----------
key: string
The key of the target variable.
Returns
----------
A memory copy of the given variable, safe for modification."""
return self.dicty[key].copy()
def load(self, filename, names=None):
"""Fill the store with the contents of a given file.
This function can operate in two modes. If the names argument is not
provided, the new file format is implied (pickled file containing only
a single dictionary). Otherwise the contents of the names argument are
used to deserialize the file contents in the given order.
Arguments
----------
filename: string
Full name of the file to load from.
names: list of strings, optional
Indicates the variable names to be loaded from the file (in the
given order).
Returns
----------
Nothing.
"""
if names is None:
with open(filename, 'rb') as f:
self.dicty = pickle.load(f)
else:
with open(filename, 'rb') as f:
for k in names:
self.dicty[k] = pickle.load(f)
def get_names(self):
"""Return the list of value names."""
return list(self.dicty.keys())
def dump(self, filename):
"""Save the store contents to the given file.
This operation just pickles the internal dictionary.
Arguments
----------
filename: string
Full name of the file to save to.
Returns
----------
Nothing."""
with open(filename, 'wb') as f:
pickle.dump(self.dicty, f)
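# Illustrative usage (file and key names are hypothetical):
#   store = Astore()
#   store['weights'] = weights_array
#   store.dump('run1.astore')
#   restored = Astore()
#   restored.load('run1.astore')  # new single-dictionary format
#   legacy = Astore()
#   legacy.load('old_run.astore', names=['weights', 'biases'])  # legacy ordered format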
| 28.989691 | 78 | 0.54303 |
7943e338e4307eb990a8f6cb96621b0a546a6801 | 635 | py | Python | app/artworks/migrations/0013_auto_20211230_0601.py | Vadee-art/backend | 9b068d6ed11c1ffeccc13c4be67f1bb87a12d6ad | [
"MIT"
] | null | null | null | app/artworks/migrations/0013_auto_20211230_0601.py | Vadee-art/backend | 9b068d6ed11c1ffeccc13c4be67f1bb87a12d6ad | [
"MIT"
] | null | null | null | app/artworks/migrations/0013_auto_20211230_0601.py | Vadee-art/backend | 9b068d6ed11c1ffeccc13c4be67f1bb87a12d6ad | [
"MIT"
] | null | null | null | # Generated by Django 3.1.7 on 2021-12-30 02:31
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('artworks', '0012_auto_20211230_0559'),
]
operations = [
migrations.RemoveField(
model_name='thetoken',
name='artwork',
),
migrations.AddField(
model_name='thetoken',
name='artwork',
field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='token_artwork', to='artworks.artwork'),
),
]
| 26.458333 | 158 | 0.626772 |
7943e36631b7aecf2f0c102c5ec7849a40c62b40 | 5,524 | py | Python | tests/unit/acquisition/test_interface.py | clinton0313/trieste | 1e4351144c6cf6853c5cbd4dd7b74714aed569b2 | [
"Apache-2.0"
] | null | null | null | tests/unit/acquisition/test_interface.py | clinton0313/trieste | 1e4351144c6cf6853c5cbd4dd7b74714aed569b2 | [
"Apache-2.0"
] | 2 | 2022-03-11T18:42:45.000Z | 2022-03-19T17:00:12.000Z | tests/unit/acquisition/test_interface.py | clinton0313/trieste | 1e4351144c6cf6853c5cbd4dd7b74714aed569b2 | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 The Trieste Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from typing import List, Optional, Tuple, cast
import pytest
from tests.util.misc import empty_dataset, raise_exc
from tests.util.models.gpflow.models import QuadraticMeanAndRBFKernel
from trieste.acquisition import (
AugmentedExpectedImprovement,
BatchMonteCarloExpectedImprovement,
ExpectedConstrainedHypervolumeImprovement,
ExpectedConstrainedImprovement,
ExpectedHypervolumeImprovement,
ExpectedImprovement,
NegativeLowerConfidenceBound,
NegativePredictiveMean,
PredictiveVariance,
ProbabilityOfFeasibility,
)
from trieste.acquisition.interface import (
AcquisitionFunction,
SingleModelAcquisitionBuilder,
SingleModelGreedyAcquisitionBuilder,
)
from trieste.data import Dataset
from trieste.models import ProbabilisticModel
from trieste.models.interfaces import SupportsPredictJoint
from trieste.types import TensorType
from trieste.utils import DEFAULTS
class _ArbitrarySingleBuilder(SingleModelAcquisitionBuilder[ProbabilisticModel]):
def prepare_acquisition_function(
self,
model: ProbabilisticModel,
dataset: Optional[Dataset] = None,
) -> AcquisitionFunction:
return raise_exc
class _ArbitraryGreedySingleBuilder(SingleModelGreedyAcquisitionBuilder[ProbabilisticModel]):
def prepare_acquisition_function(
self,
model: ProbabilisticModel,
dataset: Optional[Dataset] = None,
pending_points: Optional[TensorType] = None,
) -> AcquisitionFunction:
return raise_exc
def test_single_model_acquisition_builder_raises_immediately_for_wrong_key() -> None:
builder = _ArbitrarySingleBuilder().using("foo")
with pytest.raises(KeyError):
builder.prepare_acquisition_function(
{"bar": QuadraticMeanAndRBFKernel()}, datasets={"bar": empty_dataset([1], [1])}
)
def test_single_model_acquisition_builder_repr_includes_class_name() -> None:
builder = _ArbitrarySingleBuilder()
assert type(builder).__name__ in repr(builder)
def test_single_model_acquisition_builder_using_passes_on_correct_dataset_and_model() -> None:
class Builder(SingleModelAcquisitionBuilder[ProbabilisticModel]):
def prepare_acquisition_function(
self,
model: ProbabilisticModel,
dataset: Optional[Dataset] = None,
) -> AcquisitionFunction:
assert dataset is data["foo"]
assert model is models["foo"]
return raise_exc
data = {"foo": empty_dataset([1], [1]), "bar": empty_dataset([1], [1])}
models = {"foo": QuadraticMeanAndRBFKernel(), "bar": QuadraticMeanAndRBFKernel()}
Builder().using("foo").prepare_acquisition_function(models, datasets=data)
def test_single_model_greedy_acquisition_builder_raises_immediately_for_wrong_key() -> None:
builder = _ArbitraryGreedySingleBuilder().using("foo")
with pytest.raises(KeyError):
builder.prepare_acquisition_function(
{"bar": QuadraticMeanAndRBFKernel()}, {"bar": empty_dataset([1], [1])}, None
)
def test_single_model_greedy_acquisition_builder_repr_includes_class_name() -> None:
builder = _ArbitraryGreedySingleBuilder()
assert type(builder).__name__ in repr(builder)
@pytest.mark.parametrize(
"function, function_repr",
cast(
List[Tuple[SingleModelAcquisitionBuilder[SupportsPredictJoint]]],
[
(ExpectedImprovement(), "ExpectedImprovement()"),
(AugmentedExpectedImprovement(), "AugmentedExpectedImprovement()"),
(NegativeLowerConfidenceBound(1.96), "NegativeLowerConfidenceBound(1.96)"),
(NegativePredictiveMean(), "NegativePredictiveMean()"),
(ProbabilityOfFeasibility(0.5), "ProbabilityOfFeasibility(0.5)"),
(
ExpectedHypervolumeImprovement(),
"ExpectedHypervolumeImprovement(get_reference_point)",
),
(
BatchMonteCarloExpectedImprovement(10_000),
f"BatchMonteCarloExpectedImprovement(10000, jitter={DEFAULTS.JITTER})",
),
(PredictiveVariance(), f"PredictiveVariance(jitter={DEFAULTS.JITTER})"),
],
),
)
def test_single_model_acquisition_function_builder_reprs(
function: SingleModelAcquisitionBuilder[SupportsPredictJoint], function_repr: str
) -> None:
assert repr(function) == function_repr
assert repr(function.using("TAG")) == f"{function_repr} using tag 'TAG'"
assert (
repr(ExpectedConstrainedImprovement("TAG", function.using("TAG"), 0.0))
== f"ExpectedConstrainedImprovement('TAG', {function_repr} using tag 'TAG', 0.0)"
)
assert (
repr(ExpectedConstrainedHypervolumeImprovement("TAG", function.using("TAG"), 0.0))
== f"ExpectedConstrainedHypervolumeImprovement('TAG', "
f"{function_repr} using tag 'TAG', 0.0, get_reference_point)"
)
| 38.096552 | 94 | 0.722303 |
7943e3bde0c682af9627d7d997c5ef206ef49e0a | 1,580 | py | Python | tests/unit/altimeter/aws/resource/elbv1/test_load_balancer.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 48 | 2019-11-06T03:20:53.000Z | 2022-02-22T21:10:45.000Z | tests/unit/altimeter/aws/resource/elbv1/test_load_balancer.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 27 | 2020-01-07T23:48:30.000Z | 2022-02-26T00:24:04.000Z | tests/unit/altimeter/aws/resource/elbv1/test_load_balancer.py | elliotsegler/altimeter | c3924524938b4bae86b1acda2a4fc3f79ac523ff | [
"MIT"
] | 21 | 2019-12-20T03:06:35.000Z | 2021-12-15T23:26:00.000Z | import boto3
from botocore.exceptions import ClientError
from unittest import TestCase
from moto import mock_elb
from unittest.mock import patch
from altimeter.aws.resource.elbv1.load_balancer import ClassicLoadBalancerResourceSpec
from altimeter.aws.scan.aws_accessor import AWSAccessor
class TestLB(TestCase):
@mock_elb
def test_disappearing_elb_race_condition(self):
account_id = "123456789012"
region_name = "us-east-1"
lb_name = "foo"
session = boto3.Session()
client = session.client("elb", region_name=region_name)
client.create_load_balancer(
LoadBalancerName=lb_name,
Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80, "InstancePort": 80}],
Tags=[{"Key": "Name", "Value": lb_name}],
)
scan_accessor = AWSAccessor(session=session, account_id=account_id, region_name=region_name)
with patch(
"altimeter.aws.resource.elbv1.load_balancer.ClassicLoadBalancerResourceSpec.get_lb_attrs"
) as mock_get_lb_attrs:
mock_get_lb_attrs.side_effect = ClientError(
operation_name="DescribeLoadBalancerAttributes",
error_response={
"Error": {
"Code": "LoadBalancerNotFound",
"Message": f"There is no ACTIVE Load Balancer named '{lb_name}'",
}
},
)
resources = ClassicLoadBalancerResourceSpec.scan(scan_accessor=scan_accessor)
self.assertEqual(resources, [])
| 38.536585 | 101 | 0.643671 |
7943e4181e82b07c2a880c24bcbfa643f489a57c | 1,917 | py | Python | repos/system_upgrade/el7toel8/actors/checkosrelease/tests/test_checkosrelease.py | t184256/leapp-repository | b0d8c46eb9bd967692e8cef271587739dced38e8 | [
"Apache-2.0"
] | null | null | null | repos/system_upgrade/el7toel8/actors/checkosrelease/tests/test_checkosrelease.py | t184256/leapp-repository | b0d8c46eb9bd967692e8cef271587739dced38e8 | [
"Apache-2.0"
] | null | null | null | repos/system_upgrade/el7toel8/actors/checkosrelease/tests/test_checkosrelease.py | t184256/leapp-repository | b0d8c46eb9bd967692e8cef271587739dced38e8 | [
"Apache-2.0"
] | null | null | null | import os
import pytest
from leapp import reporting
from leapp.exceptions import StopActorExecution, StopActorExecutionError
from leapp.libraries.actor import library
from leapp.libraries.common.config import version
from leapp.libraries.common.testutils import (create_report_mocked,
produce_mocked)
from leapp.libraries.stdlib import api
from leapp.models import OSReleaseFacts
def test_skip_check(monkeypatch):
monkeypatch.setattr(os, "getenv", lambda _unused: True)
monkeypatch.setattr(reporting, "create_report", create_report_mocked())
assert library.skip_check()
assert reporting.create_report.called == 1
assert 'Skipped OS release check' in reporting.create_report.report_fields['title']
assert reporting.create_report.report_fields['severity'] == 'high'
assert 'flags' not in reporting.create_report.report_fields
def test_no_skip_check(monkeypatch):
monkeypatch.setattr(os, "getenv", lambda _unused: False)
monkeypatch.setattr(reporting, "create_report", create_report_mocked())
assert not library.skip_check()
assert reporting.create_report.called == 0
def test_not_supported_release(monkeypatch):
monkeypatch.setattr(version, "is_supported_version", lambda: False)
monkeypatch.setattr(reporting, "create_report", create_report_mocked())
library.check_os_version()
assert reporting.create_report.called == 1
assert 'Unsupported OS' in reporting.create_report.report_fields['title']
assert 'flags' in reporting.create_report.report_fields
assert 'inhibitor' in reporting.create_report.report_fields['flags']
def test_supported_release(monkeypatch):
monkeypatch.setattr(version, "is_supported_version", lambda: True)
monkeypatch.setattr(reporting, "create_report", create_report_mocked())
library.check_os_version()
assert reporting.create_report.called == 0
| 37.588235 | 87 | 0.771518 |
7943e57c189544849f99576e224a9fe2e59a0f3f | 7,780 | py | Python | modules/music_race/cog.py | kevslinger/bot-be-named | a476598044a06e446e5d0a52da66fff1d7b69a27 | [
"MIT"
] | 4 | 2021-10-30T15:42:33.000Z | 2021-11-29T20:02:45.000Z | modules/music_race/cog.py | kevslinger/bot-be-named | a476598044a06e446e5d0a52da66fff1d7b69a27 | [
"MIT"
] | 97 | 2021-10-30T14:40:34.000Z | 2022-03-01T15:19:29.000Z | modules/music_race/cog.py | kevslinger/bot-be-named | a476598044a06e446e5d0a52da66fff1d7b69a27 | [
"MIT"
] | 1 | 2022-03-05T11:53:27.000Z | 2022-03-05T11:53:27.000Z | import constants
import discord
import re
from discord.ext import commands
from utils import discord_utils, logging_utils
from modules.music_race import music_race_constants
import os
import numpy as np
import string
def get_partition_mapping():
map = {}
for answer in music_race_constants.ANSWERS:
for idx, letter in enumerate(answer):
if letter in map:
map[letter].append(f"{answer}_part_{idx}")
else:
map[letter] = [f"{answer}_part_{idx}"]
for letter in filter(lambda x: x not in map, string.ascii_uppercase):
map[letter] = ["silence"]
return map
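# The mapping produced above is keyed by single letters: each answer word
# contributes one "<ANSWER>_part_<index>" clip per letter position, and any
# letter that never appears in an answer maps to the "silence" clip.
# Illustration with a hypothetical answer "CAB":
#   map["C"] would contain "CAB_part_0"
#   map["A"] would contain "CAB_part_1"
#   map["B"] would contain "CAB_part_2"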
class MusicRace(commands.Cog, name="Music Race"):
"""Puzzle for Arithmancy June 2020! Identify each of the movies based on their theme songs!"""
def __init__(self, bot):
self.bot = bot
self.partition_map = get_partition_mapping()
@commands.command(name="hint")
async def hint(self, ctx):
"""Hint
Usage: `~hint`
"""
logging_utils.log_command("hint", ctx.guild, ctx.channel, ctx.author)
embed = discord_utils.create_embed()
embed.add_field(
name="This is not a hint",
value="*Hints will always be given at Hogwarts to those who ask for it.*",
)
await ctx.send(embed=embed)
@commands.command(name="notesaw", aliases=["musicpuzzleinfo"])
async def musicpuzzleinfo(self, ctx):
"""Give the users everything they need to know about the puzzle
Usage: `~notesaw`
"""
logging_utils.log_command("notesaw", ctx.guild, ctx.channel, ctx.author)
embed = discord_utils.create_embed()
embed.add_field(
name=f"Welcome to Notesaw!",
value=f"To start the puzzle, use `{ctx.prefix}guesstune`. "
f"For example, try `{ctx.prefix}guesstune PIANO`. Have fun!",
inline=False,
)
embed.add_field(
name=f"Notice",
value=f"Headphone users! We recommend turning the volume down a bit. Some minor glitches might "
f"hurt your ear",
inline=False,
)
await ctx.send(embed=embed)
@commands.command(name="guesstune")
async def guesstune(self, ctx, *args):
"""Take a user's guess and give them a response based on what letters they provided
Usage: `~guesstune (WORD)`
"""
logging_utils.log_command("guesstune", ctx.guild, ctx.channel, ctx.author)
embed = discord_utils.create_embed()
if len(args) < 1:
embed = discord_utils.create_no_argument_embed("word")
await ctx.send(embed=embed)
return
# Replace any non-letters
word = re.sub("[^A-Z]+", "", "".join(args).upper())
if len(word) < 1 or len(word) > 20:
embed.add_field(
name=f"{constants.FAILED}!",
value=f"Word provided `{word}` is not between 1-20 letters",
)
await ctx.send(embed=embed)
return
if word in music_race_constants.ANSWERS:
final_song_path = os.path.join(
music_race_constants.PUZZLE_OUTPUTS_DIR,
word + f"_final{music_race_constants.MP3_EXTENSION}",
)
if not os.path.exists(final_song_path):
delay = (
music_race_constants.ANSWERS[word][music_race_constants.DELAY]
* 1000
)
os.system(
f"ffmpeg -y -hide_banner -loglevel error -i {os.path.join(music_race_constants.PUZZLE_FULL_SONGS_DIR, word + music_race_constants.MP3_EXTENSION)} -filter_complex 'adelay={delay}|{delay}' {final_song_path}"
)
# TODO: ffmpeg-normalize is too slow for now. Try to optimize later.
# os.system(
# f"ffmpeg-normalize -f -c:a libmp3lame {output_path} -o {output_path}"
# )
embed.add_field(
name=f"Well done! You guessed {word}!",
value=f"\nTo play this tune yourself, use this command. (See {ctx.prefix}playtunehelp for more help)"
f"\n\n`{music_race_constants.ANSWERS[word][music_race_constants.TUNE]}`",
)
await ctx.send(embed=embed)
await ctx.send(
file=discord.File(
final_song_path,
filename=f"{list(music_race_constants.ANSWERS).index(word)+1} of {len(music_race_constants.ANSWERS)}.mp3",
)
)
return
        # We need to figure out the longest prefix of the guess that matches the
        # start of an answer, and add that portion of the song. For the remaining
        # letters we add random partitions containing them, or silence.
finalanswer = []
delay = 0
# For each letter we match, we can add that to the start of our output song
for i in range(len(word)):
found_song = False
x = word[0 : (i + 1)]
# Matches one of the songs
for answer in music_race_constants.ANSWERS:
if answer.startswith(x):
finalanswer.append((f"{answer}_part_{i}", delay))
found_song = True
break
# Newly added character is not the start to a song
if not found_song:
# Get a random clip with that letter
if word[i] in self.partition_map:
finalanswer.append(
(np.random.choice(self.partition_map[word[i]]), delay)
)
# Increments
delay += 3
# debug_output_msg = ""
# for ans in finalanswer:
# debug_output_msg += f"{ans[1]}-{ans[1]+3}: {ans[0]}\n"
# TODO: Remove once we are more certain about how this works. It ruins the puzzle, obviously
# await ctx.send(debug_output_msg)
# print(word)
# print(debug_output_msg)
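        # Build a single ffmpeg invocation: every selected partial clip becomes one
        # input (-i ...); the filter graph trims each input to the snippet length,
        # delays it into its time slot (plus a 0.5s gap per position), lowers its
        # volume, and amix then combines all of the labelled streams into one file.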
inputs = "".join(
[
f"-i {os.path.join(music_race_constants.PUZZLE_PARTIAL_SONGS_DIR, finalanswer[idx][0] + '.mp3')} "
for idx in range(len(finalanswer))
]
)
# Otherwise, we just chop each song into 3s bits, with 0.5s between them
filter_complex = "".join(
[
f"[{idx}]atrim=0:{music_race_constants.SONG_SNIPPET_LENGTH},adelay={finalanswer[idx][1]*1000+500*idx}|{finalanswer[idx][1]*1000+500*idx},volume={music_race_constants.VOLUME/2}[{letter}];"
for idx, letter in zip(range(len(finalanswer)), string.ascii_lowercase)
]
)
mix = "".join(
[
f"[{letter}]"
for _, letter in zip(finalanswer, list(string.ascii_lowercase))
]
)
output_dir = os.path.join(
music_race_constants.PUZZLE_OUTPUTS_DIR, ctx.channel.name
)
if not os.path.exists(output_dir):
os.mkdir(output_dir)
output_path = os.path.join(output_dir, f"{word}.mp3")
os.system(
f"ffmpeg -y -hide_banner -loglevel error {inputs} -preset veryfast "
+ f"-filter_complex '{filter_complex}{mix}amix=inputs={len(finalanswer)}:dropout_transition=1000,volume={music_race_constants.VOLUME/2},loudnorm' "
f"{output_path}"
)
# TODO: ffmpeg-normalize is too slow for now. Try to optimize later.
# os.system(
# f"ffmpeg-normalize -f -c:a libmp3lame {output_path} -o {output_path}"
# )
await ctx.send(file=discord.File(output_path))
def setup(bot):
bot.add_cog(MusicRace(bot))
| 38.9 | 225 | 0.574165 |
7943e581c7b4cb8342d4adc829fca8ec929a1b99 | 78,418 | py | Python | libs/mpfit.py | richardseifert/Hydra_pipeline | a31d782219359bae7fa82fa9b081fb72bef69fce | [
"MIT"
] | 1 | 2017-11-04T15:08:42.000Z | 2017-11-04T15:08:42.000Z | libs/mpfit.py | richardseifert/Hydra_pipeline | a31d782219359bae7fa82fa9b081fb72bef69fce | [
"MIT"
] | 1 | 2018-11-05T17:28:58.000Z | 2018-11-05T18:20:00.000Z | libs/mpfit.py | richardseifert/Hydra_pipeline | a31d782219359bae7fa82fa9b081fb72bef69fce | [
"MIT"
] | null | null | null | """
Perform Levenberg-Marquardt least-squares minimization, based on MINPACK-1.
AUTHORS
The original version of this software, called LMFIT, was written in FORTRAN
as part of the MINPACK-1 package by XXX.
Craig Markwardt converted the FORTRAN code to IDL. The information for the
IDL version is:
Craig B. Markwardt, NASA/GSFC Code 662, Greenbelt, MD 20770
[email protected]
UPDATED VERSIONs can be found on my WEB PAGE:
http://cow.physics.wisc.edu/~craigm/idl/idl.html
Mark Rivers created this Python version from Craig's IDL version.
Mark Rivers, University of Chicago
Building 434A, Argonne National Laboratory
9700 South Cass Avenue, Argonne, IL 60439
[email protected]
Updated versions can be found at http://cars.uchicago.edu/software
Sergey Koposov converted the Mark's Python version from Numeric to numpy
Sergey Koposov, University of Cambridge, Institute of Astronomy,
Madingley road, CB3 0HA, Cambridge, UK
[email protected]
Updated versions can be found at http://code.google.com/p/astrolibpy/source/browse/trunk/
DESCRIPTION
MPFIT uses the Levenberg-Marquardt technique to solve the
least-squares problem. In its typical use, MPFIT will be used to
fit a user-supplied function (the "model") to user-supplied data
points (the "data") by adjusting a set of parameters. MPFIT is
based upon MINPACK-1 (LMDIF.F) by More' and collaborators.
For example, a researcher may think that a set of observed data
points is best modelled with a Gaussian curve. A Gaussian curve is
parameterized by its mean, standard deviation and normalization.
MPFIT will, within certain constraints, find the set of parameters
which best fits the data. The fit is "best" in the least-squares
sense; that is, the sum of the weighted squared differences between
the model and data is minimized.
The Levenberg-Marquardt technique is a particular strategy for
iteratively searching for the best fit. This particular
implementation is drawn from MINPACK-1 (see NETLIB), and is much faster
and more accurate than the version provided in the Scientific Python package
in Scientific.Functions.LeastSquares.
This version allows upper and lower bounding constraints to be placed on each
parameter, or the parameter can be held fixed.
The user-supplied Python function should return an array of weighted
deviations between model and data. In a typical scientific problem
the residuals should be weighted so that each deviate has a
gaussian sigma of 1.0. If X represents values of the independent
variable, Y represents a measurement for each value of X, and ERR
represents the error in the measurements, then the deviates could
be calculated as follows:
DEVIATES = (Y - F(X)) / ERR
where F is the analytical function representing the model. You are
recommended to use the convenience functions MPFITFUN and
MPFITEXPR, which are driver functions that calculate the deviates
for you. If ERR are the 1-sigma uncertainties in Y, then
TOTAL( DEVIATES^2 )
will be the total chi-squared value. MPFIT will minimize the
chi-square value. The values of X, Y and ERR are passed through
MPFIT to the user-supplied function via the FUNCTKW keyword.
Simple constraints can be placed on parameter values by using the
PARINFO keyword to MPFIT. See below for a description of this
keyword.
MPFIT does not perform more general optimization tasks. See TNMIN
instead. MPFIT is customized, based on MINPACK-1, to the
least-squares minimization problem.
USER FUNCTION
The user must define a function which returns the appropriate
values as specified above. The function should return the weighted
deviations between the model and the data. It should also return a status
flag and an optional partial derivative array. For applications which
use finite-difference derivatives -- the default -- the user
function should be declared in the following way:
def myfunct(p, fjac=None, x=None, y=None, err=None)
# Parameter values are passed in "p"
# If fjac==None then partial derivatives should not be
# computed. It will always be None if MPFIT is called with default
# flag.
model = F(x, p)
# Non-negative status value means MPFIT should continue, negative means
# stop the calculation.
status = 0
       return [status, (y-model)/err]
See below for applications with analytical derivatives.
The keyword parameters X, Y, and ERR in the example above are
suggestive but not required. Any parameters can be passed to
MYFUNCT by using the functkw keyword to MPFIT. Use MPFITFUN and
MPFITEXPR if you need ideas on how to do that. The function *must*
accept a parameter list, P.
In general there are no restrictions on the number of dimensions in
X, Y or ERR. However the deviates *must* be returned in a
one-dimensional Numeric array of type Float.
User functions may also indicate a fatal error condition using the
status return described above. If status is set to a number between
-15 and -1 then MPFIT will stop the calculation and return to the caller.
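   For illustration, a minimal sketch of a user function that aborts the fit
   when the model misbehaves (x, y and err are assumed to be supplied through
   FUNCTKW, and numpy to be imported, as in the examples below):
      def myfunct(p, fjac=None, x=None, y=None, err=None):
         model = F(x, p)
         if not numpy.all(numpy.isfinite(model)):
            # any status between -15 and -1 terminates the fit
            return [-1, (y-model)/err]
         return [0, (y-model)/err]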
ANALYTIC DERIVATIVES
In the search for the best-fit solution, MPFIT by default
calculates derivatives numerically via a finite difference
approximation. The user-supplied function need not calculate the
derivatives explicitly. However, if you desire to compute them
analytically, then the AUTODERIVATIVE=0 keyword must be passed to MPFIT.
As a practical matter, it is often sufficient and even faster to allow
MPFIT to calculate the derivatives numerically, and so
AUTODERIVATIVE=0 is not necessary.
If AUTODERIVATIVE=0 is used then the user function must check the parameter
FJAC, and if FJAC!=None then return the partial derivative array in the
return list.
def myfunct(p, fjac=None, x=None, y=None, err=None)
# Parameter values are passed in "p"
       # If FJAC!=None then partial derivatives must be computed.
# FJAC contains an array of len(p), where each entry
# is 1 if that parameter is free and 0 if it is fixed.
model = F(x, p)
       # Non-negative status value means MPFIT should continue, negative means
# stop the calculation.
status = 0
       if fjac is not None:
pderiv = zeros([len(x), len(p)], Float)
for j in range(len(p)):
pderiv[:,j] = FGRAD(x, p, j)
else:
pderiv = None
       return [status, (y-model)/err, pderiv]
where FGRAD(x, p, i) is a user function which must compute the
derivative of the model with respect to parameter P[i] at X. When
finite differencing is used for computing derivatives (ie, when
AUTODERIVATIVE=1), or when MPFIT needs only the errors but not the
derivatives the parameter FJAC=None.
Derivatives should be returned in the PDERIV array. PDERIV should be an m x
n array, where m is the number of data points and n is the number
of parameters. dp[i,j] is the derivative at the ith point with
respect to the jth parameter.
The derivatives with respect to fixed parameters are ignored; zero
is an appropriate value to insert for those derivatives. Upon
input to the user function, FJAC is set to a vector with the same
length as P, with a value of 1 for a parameter which is free, and a
value of zero for a parameter which is fixed (and hence no
derivative needs to be calculated).
If the data is higher than one dimensional, then the *last*
dimension should be the parameter dimension. Example: fitting a
50x50 image, "dp" should be 50x50xNPAR.
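   As a minimal sketch, consider a straight-line model y = p[0] + p[1]*x, whose
   derivatives with respect to the two parameters are simply 1 and x. Following
   the pattern of the example above (numpy is assumed to be imported):
      def myfunct(p, fjac=None, x=None, y=None, err=None):
         model = p[0] + p[1]*x
         status = 0
         if fjac is not None:
            pderiv = numpy.zeros([len(x), len(p)])
            pderiv[:,0] = 1.
            pderiv[:,1] = x
         else:
            pderiv = None
         return [status, (y-model)/err, pderiv]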
CONSTRAINING PARAMETER VALUES WITH THE PARINFO KEYWORD
The behavior of MPFIT can be modified with respect to each
parameter to be fitted. A parameter value can be fixed; simple
boundary constraints can be imposed; limitations on the parameter
changes can be imposed; properties of the automatic derivative can
be modified; and parameters can be tied to one another.
These properties are governed by the PARINFO structure, which is
passed as a keyword parameter to MPFIT.
PARINFO should be a list of dictionaries, one list entry for each parameter.
Each parameter is associated with one element of the array, in
numerical order. The dictionary can have the following keys
(none are required, keys are case insensitive):
'value' - the starting parameter value (but see the START_PARAMS
parameter for more information).
'fixed' - a boolean value, whether the parameter is to be held
fixed or not. Fixed parameters are not varied by
MPFIT, but are passed on to MYFUNCT for evaluation.
'limited' - a two-element boolean array. If the first/second
element is set, then the parameter is bounded on the
lower/upper side. A parameter can be bounded on both
sides. Both LIMITED and LIMITS must be given
together.
'limits' - a two-element float array. Gives the
parameter limits on the lower and upper sides,
respectively. Zero, one or two of these values can be
set, depending on the values of LIMITED. Both LIMITED
and LIMITS must be given together.
'parname' - a string, giving the name of the parameter. The
fitting code of MPFIT does not use this tag in any
way. However, the default iterfunct will print the
parameter name if available.
'step' - the step size to be used in calculating the numerical
derivatives. If set to zero, then the step size is
computed automatically. Ignored when AUTODERIVATIVE=0.
'mpside' - the sidedness of the finite difference when computing
numerical derivatives. This field can take four
values:
0 - one-sided derivative computed automatically
1 - one-sided derivative (f(x+h) - f(x) )/h
-1 - one-sided derivative (f(x) - f(x-h))/h
2 - two-sided derivative (f(x+h) - f(x-h))/(2*h)
Where H is the STEP parameter described above. The
"automatic" one-sided derivative method will chose a
direction for the finite difference which does not
violate any constraints. The other methods do not
perform this check. The two-sided method is in
principle more precise, but requires twice as many
function evaluations. Default: 0.
'mpmaxstep' - the maximum change to be made in the parameter
value. During the fitting process, the parameter
will never be changed by more than this value in
one iteration.
A value of 0 indicates no maximum. Default: 0.
'tied' - a string expression which "ties" the parameter to other
free or fixed parameters. Any expression involving
constants and the parameter array P are permitted.
Example: if parameter 2 is always to be twice parameter
          1 then use the following: parinfo[2]['tied'] = '2 * p[1]'.
Since they are totally constrained, tied parameters are
considered to be fixed; no errors are computed for them.
[ NOTE: the PARNAME can't be used in expressions. ]
'mpprint' - if set to 1, then the default iterfunct will print the
parameter value. If set to 0, the parameter value
will not be printed. This tag can be used to
selectively print only a few parameter values out of
many. Default: 1 (all parameters printed)
Future modifications to the PARINFO structure, if any, will involve
adding dictionary tags beginning with the two letters "MP".
Therefore programmers are urged to avoid using tags starting with
the same letters; otherwise they are free to include their own
fields within the PARINFO structure, and they will be ignored.
PARINFO Example:
parinfo = [{'value':0., 'fixed':0, 'limited':[0,0], 'limits':[0.,0.]}
for i in range(5)]
parinfo[0]['fixed'] = 1
parinfo[4]['limited'][0] = 1
parinfo[4]['limits'][0] = 50.
values = [5.7, 2.2, 500., 1.5, 2000.]
for i in range(5): parinfo[i]['value']=values[i]
A total of 5 parameters, with starting values of 5.7,
2.2, 500, 1.5, and 2000 are given. The first parameter
is fixed at a value of 5.7, and the last parameter is
constrained to be above 50.
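   As a further sketch, the same structure can carry a 'tied' constraint
   (parameter 2 held at twice parameter 1) and an explicit finite-difference
   step for parameter 3; the tied expression is ordinary Python indexing into
   the parameter array p:
      parinfo[2]['tied'] = '2 * p[1]'
      parinfo[3]['step'] = 0.1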
EXAMPLE
import mpfit
   import numpy
   x = numpy.arange(100, dtype=float)
   p0 = [5.7, 2.2, 500., 1.5, 2000.]
   y = (p0[0] + p0[1]*x + p0[2]*x**2 + p0[3]*numpy.sqrt(x) +
        p0[4]*numpy.log(x))
   fa = {'x':x, 'y':y, 'err':err}
   m = mpfit.mpfit(myfunct, p0, functkw=fa)
print 'status = ', m.status
if (m.status <= 0): print 'error message = ', m.errmsg
print 'parameters = ', m.params
Minimizes sum of squares of MYFUNCT. MYFUNCT is called with the X,
Y, and ERR keyword parameters that are given by FUNCTKW. The
results can be obtained from the returned object m.
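   The user function referenced above is not shown; a minimal sketch consistent
   with the conventions described earlier (err is assumed to be an array of
   1-sigma uncertainties defined alongside x and y) would be:
      def myfunct(p, fjac=None, x=None, y=None, err=None):
         model = (p[0] + p[1]*x + p[2]*x**2 + p[3]*numpy.sqrt(x) +
                  p[4]*numpy.log(x))
         status = 0
         return [status, (y-model)/err]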
THEORY OF OPERATION
There are many specific strategies for function minimization. One
very popular technique is to use function gradient information to
realize the local structure of the function. Near a local minimum
the function value can be taylor expanded about x0 as follows:
f(x) = f(x0) + f'(x0) . (x-x0) + (1/2) (x-x0) . f''(x0) . (x-x0)
----- --------------- ------------------------------- (1)
Order 0th 1st 2nd
Here f'(x) is the gradient vector of f at x, and f''(x) is the
Hessian matrix of second derivatives of f at x. The vector x is
the set of function parameters, not the measured data vector. One
can find the minimum of f, f(xm) using Newton's method, and
arrives at the following linear equation:
f''(x0) . (xm-x0) = - f'(x0) (2)
If an inverse can be found for f''(x0) then one can solve for
(xm-x0), the step vector from the current position x0 to the new
projected minimum. Here the problem has been linearized (ie, the
gradient information is known to first order). f''(x0) is
symmetric n x n matrix, and should be positive definite.
The Levenberg - Marquardt technique is a variation on this theme.
It adds an additional diagonal term to the equation which may aid the
convergence properties:
(f''(x0) + nu I) . (xm-x0) = -f'(x0) (2a)
where I is the identity matrix. When nu is large, the overall
matrix is diagonally dominant, and the iterations follow steepest
descent. When nu is small, the iterations are quadratically
convergent.
In principle, if f''(x0) and f'(x0) are known then xm-x0 can be
determined. However the Hessian matrix is often difficult or
impossible to compute. The gradient f'(x0) may be easier to
compute, if even by finite difference techniques. So-called
quasi-Newton techniques attempt to successively estimate f''(x0)
by building up gradient information as the iterations proceed.
In the least squares problem there are further simplifications
which assist in solving eqn (2). The function to be minimized is
a sum of squares:
f = Sum(hi^2) (3)
where hi is the ith residual out of m residuals as described
above. This can be substituted back into eqn (2) after computing
the derivatives:
f' = 2 Sum(hi hi')
f'' = 2 Sum(hi' hj') + 2 Sum(hi hi'') (4)
If one assumes that the parameters are already close enough to a
minimum, then one typically finds that the second term in f'' is
negligible [or, in any case, is too difficult to compute]. Thus,
equation (2) can be solved, at least approximately, using only
gradient information.
In matrix notation, the combination of eqns (2) and (4) becomes:
hT' . h' . dx = - hT' . h (5)
Where h is the residual vector (length m), hT is its transpose, h'
is the Jacobian matrix (dimensions n x m), and dx is (xm-x0). The
user function supplies the residual vector h, and in some cases h'
when it is not found by finite differences (see MPFIT_FDJAC2,
which finds h and hT'). Even if dx is not the best absolute step
to take, it does provide a good estimate of the best *direction*,
so often a line minimization will occur along the dx vector
direction.
The method of solution employed by MINPACK is to form the Q . R
factorization of h', where Q is an orthogonal matrix such that QT .
Q = I, and R is upper right triangular. Using h' = Q . R and the
ortogonality of Q, eqn (5) becomes
(RT . QT) . (Q . R) . dx = - (RT . QT) . h
RT . R . dx = - RT . QT . h (6)
R . dx = - QT . h
where the last statement follows because R is upper triangular.
Here, R, QT and h are known so this is a matter of solving for dx.
The routine MPFIT_QRFAC provides the QR factorization of h, with
pivoting, and MPFIT_QRSOLV provides the solution for dx.
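   As a standalone numerical illustration of eqn (6), not using MPFIT's internal
   routines, the step dx can be obtained with numpy's QR factorization (here jac
   and h are placeholder arrays standing in for the Jacobian h' and the residual
   vector h):
      q, r = numpy.linalg.qr(jac)                  # jac is m x n, with m >= n
      dx = numpy.linalg.solve(r, -numpy.dot(q.T, h))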
REFERENCES
MINPACK-1, Jorge More', available from netlib (www.netlib.org).
"Optimization Software Guide," Jorge More' and Stephen Wright,
SIAM, *Frontiers in Applied Mathematics*, Number 14.
More', Jorge J., "The Levenberg-Marquardt Algorithm:
Implementation and Theory," in *Numerical Analysis*, ed. Watson,
G. A., Lecture Notes in Mathematics 630, Springer-Verlag, 1977.
MODIFICATION HISTORY
Translated from MINPACK-1 in FORTRAN, Apr-Jul 1998, CM
Copyright (C) 1997-2002, Craig Markwardt
This software is provided as is without any warranty whatsoever.
Permission to use, copy, modify, and distribute modified or
unmodified copies is granted, provided this copyright and disclaimer
are included unchanged.
Translated from MPFIT (Craig Markwardt's IDL package) to Python,
August, 2002. Mark Rivers
Converted from Numeric to numpy (Sergey Koposov, July 2008)
"""
import numpy
import types
import scipy.linalg.blas
# Original FORTRAN documentation
# **********
#
# subroutine lmdif
#
# the purpose of lmdif is to minimize the sum of the squares of
# m nonlinear functions in n variables by a modification of
# the levenberg-marquardt algorithm. the user must provide a
# subroutine which calculates the functions. the jacobian is
# then calculated by a forward-difference approximation.
#
# the subroutine statement is
#
# subroutine lmdif(fcn,m,n,x,fvec,ftol,xtol,gtol,maxfev,epsfcn,
# diag,mode,factor,nprint,info,nfev,fjac,
# ldfjac,ipvt,qtf,wa1,wa2,wa3,wa4)
#
# where
#
# fcn is the name of the user-supplied subroutine which
# calculates the functions. fcn must be declared
# in an external statement in the user calling
# program, and should be written as follows.
#
# subroutine fcn(m,n,x,fvec,iflag)
# integer m,n,iflag
# double precision x(n),fvec(m)
# ----------
# calculate the functions at x and
# return this vector in fvec.
# ----------
# return
# end
#
# the value of iflag should not be changed by fcn unless
# the user wants to terminate execution of lmdif.
# in this case set iflag to a negative integer.
#
# m is a positive integer input variable set to the number
# of functions.
#
# n is a positive integer input variable set to the number
# of variables. n must not exceed m.
#
# x is an array of length n. on input x must contain
# an initial estimate of the solution vector. on output x
# contains the final estimate of the solution vector.
#
# fvec is an output array of length m which contains
# the functions evaluated at the output x.
#
# ftol is a nonnegative input variable. termination
# occurs when both the actual and predicted relative
# reductions in the sum of squares are at most ftol.
# therefore, ftol measures the relative error desired
# in the sum of squares.
#
# xtol is a nonnegative input variable. termination
# occurs when the relative error between two consecutive
# iterates is at most xtol. therefore, xtol measures the
# relative error desired in the approximate solution.
#
# gtol is a nonnegative input variable. termination
# occurs when the cosine of the angle between fvec and
# any column of the jacobian is at most gtol in absolute
# value. therefore, gtol measures the orthogonality
# desired between the function vector and the columns
# of the jacobian.
#
# maxfev is a positive integer input variable. termination
# occurs when the number of calls to fcn is at least
# maxfev by the end of an iteration.
#
# epsfcn is an input variable used in determining a suitable
# step length for the forward-difference approximation. this
# approximation assumes that the relative errors in the
# functions are of the order of epsfcn. if epsfcn is less
# than the machine precision, it is assumed that the relative
# errors in the functions are of the order of the machine
# precision.
#
# diag is an array of length n. if mode = 1 (see
# below), diag is internally set. if mode = 2, diag
# must contain positive entries that serve as
# multiplicative scale factors for the variables.
#
# mode is an integer input variable. if mode = 1, the
# variables will be scaled internally. if mode = 2,
# the scaling is specified by the input diag. other
# values of mode are equivalent to mode = 1.
#
# factor is a positive input variable used in determining the
# initial step bound. this bound is set to the product of
# factor and the euclidean norm of diag*x if nonzero, or else
# to factor itself. in most cases factor should lie in the
# interval (.1,100.). 100. is a generally recommended value.
#
# nprint is an integer input variable that enables controlled
# printing of iterates if it is positive. in this case,
# fcn is called with iflag = 0 at the beginning of the first
# iteration and every nprint iterations thereafter and
# immediately prior to return, with x and fvec available
# for printing. if nprint is not positive, no special calls
# of fcn with iflag = 0 are made.
#
# info is an integer output variable. if the user has
# terminated execution, info is set to the (negative)
# value of iflag. see description of fcn. otherwise,
# info is set as follows.
#
# info = 0 improper input parameters.
#
# info = 1 both actual and predicted relative reductions
# in the sum of squares are at most ftol.
#
# info = 2 relative error between two consecutive iterates
# is at most xtol.
#
# info = 3 conditions for info = 1 and info = 2 both hold.
#
# info = 4 the cosine of the angle between fvec and any
# column of the jacobian is at most gtol in
# absolute value.
#
# info = 5 number of calls to fcn has reached or
# exceeded maxfev.
#
# info = 6 ftol is too small. no further reduction in
# the sum of squares is possible.
#
# info = 7 xtol is too small. no further improvement in
# the approximate solution x is possible.
#
# info = 8 gtol is too small. fvec is orthogonal to the
# columns of the jacobian to machine precision.
#
# nfev is an integer output variable set to the number of
# calls to fcn.
#
# fjac is an output m by n array. the upper n by n submatrix
# of fjac contains an upper triangular matrix r with
# diagonal elements of nonincreasing magnitude such that
#
# t t t
# p *(jac *jac)*p = r *r,
#
# where p is a permutation matrix and jac is the final
# calculated jacobian. column j of p is column ipvt(j)
# (see below) of the identity matrix. the lower trapezoidal
# part of fjac contains information generated during
# the computation of r.
#
# ldfjac is a positive integer input variable not less than m
# which specifies the leading dimension of the array fjac.
#
# ipvt is an integer output array of length n. ipvt
# defines a permutation matrix p such that jac*p = q*r,
# where jac is the final calculated jacobian, q is
# orthogonal (not stored), and r is upper triangular
# with diagonal elements of nonincreasing magnitude.
# column j of p is column ipvt(j) of the identity matrix.
#
# qtf is an output array of length n which contains
# the first n elements of the vector (q transpose)*fvec.
#
# wa1, wa2, and wa3 are work arrays of length n.
#
# wa4 is a work array of length m.
#
# subprograms called
#
# user-supplied ...... fcn
#
# minpack-supplied ... dpmpar,enorm,fdjac2,,qrfac
#
# fortran-supplied ... dabs,dmax1,dmin1,dsqrt,mod
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
class mpfit:
blas_enorm32, = scipy.linalg.blas.get_blas_funcs(['nrm2'],numpy.array([0],dtype=numpy.float32))
blas_enorm64, = scipy.linalg.blas.get_blas_funcs(['nrm2'],numpy.array([0],dtype=numpy.float64))
def __init__(self, fcn, xall=None, functkw={}, parinfo=None,
ftol=1.e-10, xtol=1.e-10, gtol=1.e-10,
damp=0., maxiter=200, factor=100., nprint=1,
iterfunct='default', iterkw={}, nocovar=0,
rescale=0, autoderivative=1, quiet=0,
diag=None, epsfcn=None, debug=0):
"""
Inputs:
fcn:
The function to be minimized. The function should return the weighted
deviations between the model and the data, as described above.
xall:
An array of starting values for each of the parameters of the model.
The number of parameters should be fewer than the number of measurements.
This parameter is optional if the parinfo keyword is used (but see
parinfo). The parinfo keyword provides a mechanism to fix or constrain
individual parameters.
Keywords:
autoderivative:
If this is set, derivatives of the function will be computed
automatically via a finite differencing procedure. If not set, then
fcn must provide the (analytical) derivatives.
Default: set (=1)
NOTE: to supply your own analytical derivatives,
explicitly pass autoderivative=0
ftol:
A nonnegative input variable. Termination occurs when both the actual
and predicted relative reductions in the sum of squares are at most
ftol (and status is accordingly set to 1 or 3). Therefore, ftol
measures the relative error desired in the sum of squares.
Default: 1E-10
functkw:
A dictionary which contains the parameters to be passed to the
user-supplied function specified by fcn via the standard Python
keyword dictionary mechanism. This is the way you can pass additional
data to your user-supplied function without using global variables.
Consider the following example:
if functkw = {'xval':[1.,2.,3.], 'yval':[1.,4.,9.],
'errval':[1.,1.,1.] }
then the user supplied function should be declared like this:
def myfunct(p, fjac=None, xval=None, yval=None, errval=None):
Default: {} No extra parameters are passed to the user-supplied
function.
gtol:
A nonnegative input variable. Termination occurs when the cosine of
the angle between fvec and any column of the jacobian is at most gtol
in absolute value (and status is accordingly set to 4). Therefore,
gtol measures the orthogonality desired between the function vector
and the columns of the jacobian.
Default: 1e-10
iterkw:
The keyword arguments to be passed to iterfunct via the dictionary
keyword mechanism. This should be a dictionary and is similar in
operation to FUNCTKW.
Default: {} No arguments are passed.
iterfunct:
The name of a function to be called upon each NPRINT iteration of the
MPFIT routine. It should be declared in the following way:
def iterfunct(myfunct, p, iter, fnorm, functkw=None,
parinfo=None, quiet=0, dof=None, [iterkw keywords here])
# perform custom iteration update
iterfunct must accept all three keyword parameters (FUNCTKW, PARINFO
and QUIET).
myfunct: The user-supplied function to be minimized,
p: The current set of model parameters
iter: The iteration number
functkw: The arguments to be passed to myfunct.
fnorm: The chi-squared value.
quiet: Set when no textual output should be printed.
dof: The number of degrees of freedom, normally the number of points
less the number of free parameters.
See below for documentation of parinfo.
In implementation, iterfunct can perform updates to the terminal or
graphical user interface, to provide feedback while the fit proceeds.
     If the fit is to be stopped for any reason, then iterfunct should return
a status value between -15 and -1. Otherwise it should return None
(e.g. no return statement) or 0.
In principle, iterfunct should probably not modify the parameter values,
because it may interfere with the algorithm's stability. In practice it
is allowed.
Default: an internal routine is used to print the parameter values.
Set iterfunct=None if there is no user-defined routine and you don't
     want the internal default routine to be called.
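     A minimal sketch of such a routine, printing only the chi-squared value
     (the positional and keyword arguments match the call made by MPFIT):
        def my_iterfunct(myfunct, p, iter, fnorm, functkw=None,
                         parinfo=None, quiet=0, dof=None):
           if not quiet:
              print 'iter %d: chi-square = %g (dof = %s)' % (iter, fnorm, dof)
           return 0
     It would be passed in as mpfit(myfunct, p0, functkw=fa,
     iterfunct=my_iterfunct).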
maxiter:
The maximum number of iterations to perform. If the number is exceeded,
then the status value is set to 5 and MPFIT returns.
Default: 200 iterations
nocovar:
Set this keyword to prevent the calculation of the covariance matrix
before returning (see COVAR)
Default: clear (=0) The covariance matrix is returned
nprint:
The frequency with which iterfunct is called. A value of 1 indicates
that iterfunct is called with every iteration, while 2 indicates every
other iteration, etc. Note that several Levenberg-Marquardt attempts
can be made in a single iteration.
Default value: 1
parinfo
Provides a mechanism for more sophisticated constraints to be placed on
parameter values. When parinfo is not passed, then it is assumed that
all parameters are free and unconstrained. Values in parinfo are never
modified during a call to MPFIT.
See description above for the structure of PARINFO.
Default value: None All parameters are free and unconstrained.
quiet:
Set this keyword when no textual output should be printed by MPFIT
damp:
A scalar number, indicating the cut-off value of residuals where
"damping" will occur. Residuals with magnitudes greater than this
number will be replaced by their hyperbolic tangent. This partially
mitigates the so-called large residual problem inherent in
least-squares solvers (as for the test problem CURVI,
http://www.maxthis.com/curviex.htm).
A value of 0 indicates no damping.
Default: 0
Note: DAMP doesn't work with autoderivative=0
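     A small illustration of the transform applied internally (residuals are
     replaced by tanh(residual/damp)), with damp = 1:
        numpy.tanh(0.1)    # ~0.0997: small residuals pass through nearly unchanged
        numpy.tanh(10.0)   # ~1.0:    large residuals are clipped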
xtol:
A nonnegative input variable. Termination occurs when the relative error
between two consecutive iterates is at most xtol (and status is
accordingly set to 2 or 3). Therefore, xtol measures the relative error
desired in the approximate solution.
Default: 1E-10
Outputs:
Returns an object of type mpfit. The results are attributes of this class,
e.g. mpfit.status, mpfit.errmsg, mpfit.params, npfit.niter, mpfit.covar.
.status
An integer status code is returned. All values greater than zero can
represent success (however .status == 5 may indicate failure to
converge). It can have one of the following values:
-16
A parameter or function value has become infinite or an undefined
number. This is usually a consequence of numerical overflow in the
user's model function, which must be avoided.
-15 to -1
These are error codes that either MYFUNCT or iterfunct may return to
terminate the fitting process. Values from -15 to -1 are reserved
for the user functions and will not clash with MPFIT.
0 Improper input parameters.
1 Both actual and predicted relative reductions in the sum of squares
are at most ftol.
2 Relative error between two consecutive iterates is at most xtol
3 Conditions for status = 1 and status = 2 both hold.
4 The cosine of the angle between fvec and any column of the jacobian
is at most gtol in absolute value.
5 The maximum number of iterations has been reached.
6 ftol is too small. No further reduction in the sum of squares is
possible.
7 xtol is too small. No further improvement in the approximate solution
x is possible.
8 gtol is too small. fvec is orthogonal to the columns of the jacobian
to machine precision.
.fnorm
The value of the summed squared residuals for the returned parameter
values.
.covar
The covariance matrix for the set of parameters returned by MPFIT.
The matrix is NxN where N is the number of parameters. The square root
of the diagonal elements gives the formal 1-sigma statistical errors on
the parameters if errors were treated "properly" in fcn.
Parameter errors are also returned in .perror.
To compute the correlation matrix, pcor, use this example:
cov = mpfit.covar
pcor = cov * 0.
for i in range(n):
for j in range(n):
pcor[i,j] = cov[i,j]/sqrt(cov[i,i]*cov[j,j])
If nocovar is set or MPFIT terminated abnormally, then .covar is set to
a scalar with value None.
.errmsg
A string error or warning message is returned.
.nfev
The number of calls to MYFUNCT performed.
.niter
The number of iterations completed.
.perror
The formal 1-sigma errors in each parameter, computed from the
covariance matrix. If a parameter is held fixed, or if it touches a
boundary, then the error is reported as zero.
If the fit is unweighted (i.e. no errors were given, or the weights
were uniformly set to unity), then .perror will probably not represent
the true parameter uncertainties.
*If* you can assume that the true reduced chi-squared value is unity --
meaning that the fit is implicitly assumed to be of good quality --
then the estimated parameter uncertainties can be computed by scaling
.perror by the measured chi-squared value.
dof = len(x) - len(mpfit.params) # deg of freedom
# scaled uncertainties
pcerror = mpfit.perror * sqrt(mpfit.fnorm / dof)
"""
self.niter = 0
self.params = None
self.covar = None
self.perror = None
self.status = 0 # Invalid input flag set while we check inputs
self.debug = debug
self.errmsg = ''
self.nfev = 0
self.damp = damp
self.dof=0
if fcn==None:
self.errmsg = "Usage: parms = mpfit('myfunt', ... )"
return
if iterfunct == 'default':
iterfunct = self.defiter
# Parameter damping doesn't work when user is providing their own
# gradients.
if (self.damp != 0) and (autoderivative == 0):
self.errmsg = 'ERROR: keywords DAMP and AUTODERIVATIVE are mutually exclusive'
return
# Parameters can either be stored in parinfo, or x. x takes precedence if it exists
if (xall is None) and (parinfo is None):
self.errmsg = 'ERROR: must pass parameters in P or PARINFO'
return
# Be sure that PARINFO is of the right type
if parinfo is not None:
if type(parinfo) != types.ListType:
self.errmsg = 'ERROR: PARINFO must be a list of dictionaries.'
return
else:
if type(parinfo[0]) != types.DictionaryType:
self.errmsg = 'ERROR: PARINFO must be a list of dictionaries.'
return
if ((xall is not None) and (len(xall) != len(parinfo))):
self.errmsg = 'ERROR: number of elements in PARINFO and P must agree'
return
# If the parameters were not specified at the command line, then
# extract them from PARINFO
if xall is None:
xall = self.parinfo(parinfo, 'value')
if xall is None:
self.errmsg = 'ERROR: either P or PARINFO(*)["value"] must be supplied.'
return
# Make sure parameters are numpy arrays
xall = numpy.asarray(xall)
		# If xall is not floating point, or is a float with fewer than 64 bits,
		# convert it to double precision
if xall.dtype.kind != 'f' or xall.dtype.itemsize<=4:
xall = xall.astype(numpy.float)
npar = len(xall)
self.fnorm = -1.
fnorm1 = -1.
# TIED parameters?
ptied = self.parinfo(parinfo, 'tied', default='', n=npar)
self.qanytied = 0
for i in range(npar):
ptied[i] = ptied[i].strip()
if ptied[i] != '':
self.qanytied = 1
self.ptied = ptied
# FIXED parameters ?
pfixed = self.parinfo(parinfo, 'fixed', default=0, n=npar)
pfixed = (pfixed == 1)
for i in range(npar):
pfixed[i] = pfixed[i] or (ptied[i] != '') # Tied parameters are also effectively fixed
# Finite differencing step, absolute and relative, and sidedness of deriv.
step = self.parinfo(parinfo, 'step', default=0., n=npar)
dstep = self.parinfo(parinfo, 'relstep', default=0., n=npar)
dside = self.parinfo(parinfo, 'mpside', default=0, n=npar)
# Maximum and minimum steps allowed to be taken in one iteration
maxstep = self.parinfo(parinfo, 'mpmaxstep', default=0., n=npar)
minstep = self.parinfo(parinfo, 'mpminstep', default=0., n=npar)
qmin = minstep != 0
qmin[:] = False # Remove minstep for now!!
qmax = maxstep != 0
if numpy.any(qmin & qmax & (maxstep<minstep)):
self.errmsg = 'ERROR: MPMINSTEP is greater than MPMAXSTEP'
return
wh = (numpy.nonzero((qmin!=0.) | (qmax!=0.)))[0]
		qminmax = len(wh) > 0
# Finish up the free parameters
ifree = (numpy.nonzero(pfixed != 1))[0]
nfree = len(ifree)
if nfree == 0:
self.errmsg = 'ERROR: no free parameters'
return
# Compose only VARYING parameters
self.params = xall.copy() # self.params is the set of parameters to be returned
x = self.params[ifree] # x is the set of free parameters
# LIMITED parameters ?
limited = self.parinfo(parinfo, 'limited', default=[0,0], n=npar)
limits = self.parinfo(parinfo, 'limits', default=[0.,0.], n=npar)
if (limited is not None) and (limits is not None):
# Error checking on limits in parinfo
if numpy.any((limited[:,0] & (xall < limits[:,0])) |
(limited[:,1] & (xall > limits[:,1]))):
self.errmsg = 'ERROR: parameters are not within PARINFO limits'
return
if numpy.any((limited[:,0] & limited[:,1]) &
(limits[:,0] >= limits[:,1]) &
(pfixed == 0)):
self.errmsg = 'ERROR: PARINFO parameter limits are not consistent'
return
# Transfer structure values to local variables
qulim = (limited[:,1])[ifree]
ulim = (limits [:,1])[ifree]
qllim = (limited[:,0])[ifree]
llim = (limits [:,0])[ifree]
if numpy.any((qulim!=0.) | (qllim!=0.)):
qanylim = 1
else:
qanylim = 0
else:
# Fill in local variables with dummy values
qulim = numpy.zeros(nfree)
ulim = x * 0.
qllim = qulim
llim = x * 0.
qanylim = 0
n = len(x)
# Check input parameters for errors
if (n < 0) or (ftol <= 0) or (xtol <= 0) or (gtol <= 0) \
or (maxiter < 0) or (factor <= 0):
self.errmsg = 'ERROR: input keywords are inconsistent'
return
if rescale != 0:
self.errmsg = 'ERROR: DIAG parameter scales are inconsistent'
if len(diag) < n:
return
if numpy.any(diag <= 0):
return
self.errmsg = ''
[self.status, fvec] = self.call(fcn, self.params, functkw)
if self.status < 0:
self.errmsg = 'ERROR: first call to "'+str(fcn)+'" failed'
return
		# If the returned fvec has an itemsize of more than four bytes, assume
		# that we have double precision
# It is important that the machar is determined by the precision of
# the returned value, not by the precision of the input array
if numpy.array([fvec]).dtype.itemsize>4:
self.machar = machar(double=1)
self.blas_enorm = mpfit.blas_enorm64
else:
self.machar = machar(double=0)
self.blas_enorm = mpfit.blas_enorm32
machep = self.machar.machep
m = len(fvec)
if m < n:
self.errmsg = 'ERROR: number of parameters must not exceed data'
return
self.dof = m-nfree
self.fnorm = self.enorm(fvec)
		# Initialize Levenberg-Marquardt parameter and iteration counter
par = 0.
self.niter = 1
qtf = x * 0.
self.status = 0
# Beginning of the outer loop
while(1):
# If requested, call fcn to enable printing of iterates
self.params[ifree] = x
if self.qanytied:
self.params = self.tie(self.params, ptied)
if (nprint > 0) and (iterfunct is not None):
if ((self.niter-1) % nprint) == 0:
mperr = 0
xnew0 = self.params.copy()
dof = numpy.max([len(fvec) - len(x), 0])
status = iterfunct(fcn, self.params, self.niter, self.fnorm**2,
functkw=functkw, parinfo=parinfo, quiet=quiet,
dof=dof, **iterkw)
if status is not None:
self.status = status
# Check for user termination
if self.status < 0:
self.errmsg = 'WARNING: premature termination by ' + str(iterfunct)
return
# If parameters were changed (grrr..) then re-tie
if numpy.max(numpy.abs(xnew0-self.params)) > 0:
if self.qanytied:
self.params = self.tie(self.params, ptied)
x = self.params[ifree]
# Calculate the jacobian matrix
self.status = 2
catch_msg = 'calling MPFIT_FDJAC2'
fjac = self.fdjac2(fcn, x, fvec, step, qulim, ulim, dside,
epsfcn=epsfcn,
autoderivative=autoderivative, dstep=dstep,
functkw=functkw, ifree=ifree, xall=self.params)
if fjac is None:
self.errmsg = 'WARNING: premature termination by FDJAC2'
return
# Determine if any of the parameters are pegged at the limits
if qanylim:
catch_msg = 'zeroing derivatives of pegged parameters'
whlpeg = (numpy.nonzero(qllim & (x == llim)))[0]
nlpeg = len(whlpeg)
whupeg = (numpy.nonzero(qulim & (x == ulim)))[0]
nupeg = len(whupeg)
# See if any "pegged" values should keep their derivatives
if nlpeg > 0:
# Total derivative of sum wrt lower pegged parameters
for i in range(nlpeg):
sum0 = sum(fvec * fjac[:,whlpeg[i]])
if sum0 > 0:
fjac[:,whlpeg[i]] = 0
if nupeg > 0:
# Total derivative of sum wrt upper pegged parameters
for i in range(nupeg):
sum0 = sum(fvec * fjac[:,whupeg[i]])
if sum0 < 0:
fjac[:,whupeg[i]] = 0
# Compute the QR factorization of the jacobian
[fjac, ipvt, wa1, wa2] = self.qrfac(fjac, pivot=1)
# On the first iteration if "diag" is unspecified, scale
# according to the norms of the columns of the initial jacobian
catch_msg = 'rescaling diagonal elements'
if self.niter == 1:
if (rescale==0) or (len(diag) < n):
diag = wa2.copy()
diag[diag == 0] = 1.
# On the first iteration, calculate the norm of the scaled x
# and initialize the step bound delta
wa3 = diag * x
xnorm = self.enorm(wa3)
delta = factor*xnorm
if delta == 0.:
delta = factor
# Form (q transpose)*fvec and store the first n components in qtf
catch_msg = 'forming (q transpose)*fvec'
wa4 = fvec.copy()
for j in range(n):
lj = ipvt[j]
temp3 = fjac[j,lj]
if temp3 != 0:
fj = fjac[j:,lj]
wj = wa4[j:]
# *** optimization wa4(j:*)
wa4[j:] = wj - fj * sum(fj*wj) / temp3
fjac[j,lj] = wa1[j]
qtf[j] = wa4[j]
# From this point on, only the square matrix, consisting of the
# triangle of R, is needed.
fjac = fjac[0:n, 0:n]
fjac.shape = [n, n]
temp = fjac.copy()
for i in range(n):
temp[:,i] = fjac[:, ipvt[i]]
fjac = temp.copy()
# Check for overflow. This should be a cheap test here since FJAC
# has been reduced to a (small) square matrix, and the test is
# O(N^2).
#wh = where(finite(fjac) EQ 0, ct)
#if ct GT 0 then goto, FAIL_OVERFLOW
# Compute the norm of the scaled gradient
catch_msg = 'computing the scaled gradient'
gnorm = 0.
if self.fnorm != 0:
for j in range(n):
l = ipvt[j]
if wa2[l] != 0:
sum0 = sum(fjac[0:j+1,j]*qtf[0:j+1])/self.fnorm
gnorm = numpy.max([gnorm,numpy.abs(sum0/wa2[l])])
# Test for convergence of the gradient norm
if gnorm <= gtol:
self.status = 4
break
if maxiter == 0:
self.status = 5
break
# Rescale if necessary
if rescale == 0:
diag = numpy.choose(diag>wa2, (wa2, diag))
# Beginning of the inner loop
while(1):
# Determine the levenberg-marquardt parameter
catch_msg = 'calculating LM parameter (MPFIT_)'
[fjac, par, wa1, wa2] = self.lmpar(fjac, ipvt, diag, qtf,
delta, wa1, wa2, par=par)
# Store the direction p and x+p. Calculate the norm of p
wa1 = -wa1
if (qanylim == 0) and (qminmax == 0):
# No parameter limits, so just move to new position WA2
alpha = 1.
wa2 = x + wa1
else:
# Respect the limits. If a step were to go out of bounds, then
# we should take a step in the same direction but shorter distance.
# The step should take us right to the limit in that case.
alpha = 1.
if qanylim:
# Do not allow any steps out of bounds
catch_msg = 'checking for a step out of bounds'
if nlpeg > 0:
wa1[whlpeg] = numpy.clip( wa1[whlpeg], 0., numpy.max(wa1))
if nupeg > 0:
wa1[whupeg] = numpy.clip(wa1[whupeg], numpy.min(wa1), 0.)
dwa1 = numpy.abs(wa1) > machep
whl = (numpy.nonzero(((dwa1!=0.) & qllim) & ((x + wa1) < llim)))[0]
if len(whl) > 0:
t = ((llim[whl] - x[whl]) /
wa1[whl])
alpha = numpy.min([alpha, numpy.min(t)])
whu = (numpy.nonzero(((dwa1!=0.) & qulim) & ((x + wa1) > ulim)))[0]
if len(whu) > 0:
t = ((ulim[whu] - x[whu]) /
wa1[whu])
alpha = numpy.min([alpha, numpy.min(t)])
# Obey any max step values.
if qminmax:
nwa1 = wa1 * alpha
whmax = (numpy.nonzero((qmax != 0.) & (maxstep > 0)))[0]
if len(whmax) > 0:
mrat = numpy.max(numpy.abs(nwa1[whmax]) /
numpy.abs(maxstep[ifree[whmax]]))
if mrat > 1:
alpha = alpha / mrat
# Scale the resulting vector
wa1 = wa1 * alpha
wa2 = x + wa1
# Adjust the final output values. If the step put us exactly
# on a boundary, make sure it is exact.
sgnu = (ulim >= 0) * 2. - 1.
sgnl = (llim >= 0) * 2. - 1.
# Handles case of
# ... nonzero *LIM ... ...zero * LIM
ulim1 = ulim * (1 - sgnu * machep) - (ulim == 0) * machep
llim1 = llim * (1 + sgnl * machep) + (llim == 0) * machep
wh = (numpy.nonzero((qulim!=0) & (wa2 >= ulim1)))[0]
if len(wh) > 0:
wa2[wh] = ulim[wh]
wh = (numpy.nonzero((qllim!=0.) & (wa2 <= llim1)))[0]
if len(wh) > 0:
wa2[wh] = llim[wh]
# endelse
wa3 = diag * wa1
pnorm = self.enorm(wa3)
# On the first iteration, adjust the initial step bound
if self.niter == 1:
delta = numpy.min([delta,pnorm])
self.params[ifree] = wa2
# Evaluate the function at x+p and calculate its norm
mperr = 0
catch_msg = 'calling '+str(fcn)
[self.status, wa4] = self.call(fcn, self.params, functkw)
if self.status < 0:
self.errmsg = 'WARNING: premature termination by "'+fcn+'"'
return
fnorm1 = self.enorm(wa4)
# Compute the scaled actual reduction
catch_msg = 'computing convergence criteria'
actred = -1.
if (0.1 * fnorm1) < self.fnorm:
actred = - (fnorm1/self.fnorm)**2 + 1.
# Compute the scaled predicted reduction and the scaled directional
# derivative
for j in range(n):
wa3[j] = 0
wa3[0:j+1] = wa3[0:j+1] + fjac[0:j+1,j]*wa1[ipvt[j]]
# Remember, alpha is the fraction of the full LM step actually
# taken
temp1 = self.enorm(alpha*wa3)/self.fnorm
temp2 = (numpy.sqrt(alpha*par)*pnorm)/self.fnorm
prered = temp1*temp1 + (temp2*temp2)/0.5
dirder = -(temp1*temp1 + temp2*temp2)
# Compute the ratio of the actual to the predicted reduction.
ratio = 0.
if prered != 0:
ratio = actred/prered
# Update the step bound
if ratio <= 0.25:
if actred >= 0:
temp = .5
else:
temp = .5*dirder/(dirder + .5*actred)
if ((0.1*fnorm1) >= self.fnorm) or (temp < 0.1):
temp = 0.1
delta = temp*numpy.min([delta,pnorm/0.1])
par = par/temp
else:
if (par == 0) or (ratio >= 0.75):
delta = pnorm/.5
par = .5*par
# Test for successful iteration
if ratio >= 0.0001:
# Successful iteration. Update x, fvec, and their norms
x = wa2
wa2 = diag * x
fvec = wa4
xnorm = self.enorm(wa2)
self.fnorm = fnorm1
self.niter = self.niter + 1
# Tests for convergence
if (numpy.abs(actred) <= ftol) and (prered <= ftol) \
and (0.5 * ratio <= 1):
self.status = 1
if delta <= xtol*xnorm:
self.status = 2
if (numpy.abs(actred) <= ftol) and (prered <= ftol) \
and (0.5 * ratio <= 1) and (self.status == 2):
self.status = 3
if self.status != 0:
break
# Tests for termination and stringent tolerances
if self.niter >= maxiter:
self.status = 5
if (numpy.abs(actred) <= machep) and (prered <= machep) \
and (0.5*ratio <= 1):
self.status = 6
if delta <= machep*xnorm:
self.status = 7
if gnorm <= machep:
self.status = 8
if self.status != 0:
break
# End of inner loop. Repeat if iteration unsuccessful
if ratio >= 0.0001:
break
# Check for over/underflow
if ~numpy.all(numpy.isfinite(wa1) & numpy.isfinite(wa2) & \
numpy.isfinite(x)) or ~numpy.isfinite(ratio):
					self.errmsg = ('ERROR: parameter or function value(s) have become '
						'infinite; check model function for over- and underflow')
self.status = -16
break
#wh = where(finite(wa1) EQ 0 OR finite(wa2) EQ 0 OR finite(x) EQ 0, ct)
#if ct GT 0 OR finite(ratio) EQ 0 then begin
if self.status != 0:
break;
# End of outer loop.
catch_msg = 'in the termination phase'
# Termination, either normal or user imposed.
if len(self.params) == 0:
return
if nfree == 0:
self.params = xall.copy()
else:
self.params[ifree] = x
if (nprint > 0) and (self.status > 0):
catch_msg = 'calling ' + str(fcn)
[status, fvec] = self.call(fcn, self.params, functkw)
catch_msg = 'in the termination phase'
self.fnorm = self.enorm(fvec)
if (self.fnorm is not None) and (fnorm1 is not None):
self.fnorm = numpy.max([self.fnorm, fnorm1])
self.fnorm = self.fnorm**2.
self.covar = None
self.perror = None
# (very carefully) set the covariance matrix COVAR
if (self.status > 0) and (nocovar==0) and (n is not None) \
and (fjac is not None) and (ipvt is not None):
sz = fjac.shape
if (n > 0) and (sz[0] >= n) and (sz[1] >= n) \
and (len(ipvt) >= n):
catch_msg = 'computing the covariance matrix'
cv = self.calc_covar(fjac[0:n,0:n], ipvt[0:n])
cv.shape = [n, n]
nn = len(xall)
# Fill in actual covariance matrix, accounting for fixed
# parameters.
self.covar = numpy.zeros([nn, nn], dtype=float)
for i in range(n):
self.covar[ifree,ifree[i]] = cv[:,i]
# Compute errors in parameters
catch_msg = 'computing parameter errors'
self.perror = numpy.zeros(nn, dtype=float)
d = numpy.diagonal(self.covar)
wh = (numpy.nonzero(d >= 0))[0]
if len(wh) > 0:
self.perror[wh] = numpy.sqrt(d[wh])
return
def __str__(self):
return {'params': self.params,
'niter': self.niter,
'covar': self.covar,
'perror': self.perror,
'status': self.status,
'debug': self.debug,
'errmsg': self.errmsg,
'nfev': self.nfev,
'damp': self.damp
#,'machar':self.machar
}.__str__()
# Default procedure to be called every iteration. It simply prints
# the parameter values.
def defiter(self, fcn, x, iter, fnorm=None, functkw=None,
quiet=0, iterstop=None, parinfo=None,
format=None, pformat='%.10g', dof=1):
if self.debug:
print 'Entering defiter...'
if quiet:
return
if fnorm is None:
[status, fvec] = self.call(fcn, x, functkw)
fnorm = self.enorm(fvec)**2
# Determine which parameters to print
nprint = len(x)
print "Iter ", ('%6i' % iter)," CHI-SQUARE = ",('%.10g' % fnorm)," DOF = ", ('%i' % dof)
for i in range(nprint):
if (parinfo is not None) and (parinfo[i].has_key('parname')):
p = ' ' + parinfo[i]['parname'] + ' = '
else:
p = ' P' + str(i) + ' = '
if (parinfo is not None) and (parinfo[i].has_key('mpprint')):
iprint = parinfo[i]['mpprint']
else:
iprint = 1
if iprint:
print p + (pformat % x[i]) + ' '
return 0
# DO_ITERSTOP:
# if keyword_set(iterstop) then begin
# k = get_kbrd(0)
# if k EQ string(byte(7)) then begin
# message, 'WARNING: minimization not complete', /info
# print, 'Do you want to terminate this procedure? (y/n)', $
# format='(A,$)'
# k = ''
# read, k
# if strupcase(strmid(k,0,1)) EQ 'Y' then begin
# message, 'WARNING: Procedure is terminating.', /info
# mperr = -1
# endif
# endif
# endif
# Procedure to parse the parameter values in PARINFO, which is a list of dictionaries
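    # A hypothetical illustration (not from the original code): with
    #   parinfo = [{'parname': 'amp', 'mpprint': 1}, {'parname': 'width'}]
    # a call such as self.parinfo(parinfo, key='parname', default='') would
    # return ['amp', 'width'], falling back to the default for entries that
    # do not carry the requested key.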
def parinfo(self, parinfo=None, key='a', default=None, n=0):
if self.debug:
print 'Entering parinfo...'
if (n == 0) and (parinfo is not None):
n = len(parinfo)
if n == 0:
values = default
return values
values = []
for i in range(n):
if (parinfo is not None) and (parinfo[i].has_key(key)):
values.append(parinfo[i][key])
else:
values.append(default)
# Convert to numeric arrays if possible
test = default
if type(default) == types.ListType:
test=default[0]
if isinstance(test, types.IntType):
values = numpy.asarray(values, int)
elif isinstance(test, types.FloatType):
values = numpy.asarray(values, float)
return values
# Call user function or procedure, with _EXTRA or not, with
# derivatives or not.
def call(self, fcn, x, functkw, fjac=None):
if self.debug:
print 'Entering call...'
if self.qanytied:
x = self.tie(x, self.ptied)
self.nfev = self.nfev + 1
if fjac is None:
[status, f] = fcn(x, fjac=fjac, **functkw)
if self.damp > 0:
# Apply the damping if requested. This replaces the residuals
# with their hyperbolic tangent. Thus residuals larger than
# DAMP are essentially clipped.
f = numpy.tanh(f/self.damp)
return [status, f]
else:
return fcn(x, fjac=fjac, **functkw)
def enorm(self, vec):
ans = self.blas_enorm(vec)
return ans
def fdjac2(self, fcn, x, fvec, step=None, ulimited=None, ulimit=None, dside=None,
epsfcn=None, autoderivative=1,
functkw=None, xall=None, ifree=None, dstep=None):
if self.debug:
print 'Entering fdjac2...'
machep = self.machar.machep
if epsfcn is None:
epsfcn = machep
if xall is None:
xall = x
if ifree is None:
ifree = numpy.arange(len(xall))
if step is None:
step = x * 0.
nall = len(xall)
eps = numpy.sqrt(numpy.max([epsfcn, machep]))
m = len(fvec)
n = len(x)
# Compute analytical derivative if requested
if autoderivative == 0:
mperr = 0
fjac = numpy.zeros(nall, dtype=float)
fjac[ifree] = 1.0 # Specify which parameters need derivatives
[status, fp] = self.call(fcn, xall, functkw, fjac=fjac)
if len(fjac) != m*nall:
print 'ERROR: Derivative matrix was not computed properly.'
return None
# This definition is consistent with CURVEFIT
# Sign error found (thanks Jesus Fernandez <[email protected]>)
fjac.shape = [m,nall]
fjac = -fjac
# Select only the free parameters
if len(ifree) < nall:
fjac = fjac[:,ifree]
fjac.shape = [m, n]
return fjac
fjac = numpy.zeros([m, n], dtype=float)
h = eps * numpy.abs(x)
# if STEP is given, use that
# STEP includes the fixed parameters
if step is not None:
stepi = step[ifree]
wh = (numpy.nonzero(stepi > 0))[0]
if len(wh) > 0:
h[wh] = stepi[wh]
# if relative step is given, use that
# DSTEP includes the fixed parameters
if len(dstep) > 0:
dstepi = dstep[ifree]
wh = (numpy.nonzero(dstepi > 0))[0]
if len(wh) > 0:
h[wh] = numpy.abs(dstepi[wh]*x[wh])
# In case any of the step values are zero
h[h == 0] = eps
# Reverse the sign of the step if we are up against the parameter
# limit, or if the user requested it.
# DSIDE includes the fixed parameters (ULIMITED/ULIMIT have only
# varying ones)
mask = dside[ifree] == -1
if len(ulimited) > 0 and len(ulimit) > 0:
mask = (mask | ((ulimited!=0) & (x > ulimit-h)))
wh = (numpy.nonzero(mask))[0]
if len(wh) > 0:
h[wh] = - h[wh]
# Loop through parameters, computing the derivative for each
for j in range(n):
xp = xall.copy()
xp[ifree[j]] = xp[ifree[j]] + h[j]
[status, fp] = self.call(fcn, xp, functkw)
if status < 0:
return None
if numpy.abs(dside[ifree[j]]) <= 1:
# COMPUTE THE ONE-SIDED DERIVATIVE
# Note optimization fjac(0:*,j)
fjac[0:,j] = (fp-fvec)/h[j]
else:
# COMPUTE THE TWO-SIDED DERIVATIVE
xp[ifree[j]] = xall[ifree[j]] - h[j]
mperr = 0
[status, fm] = self.call(fcn, xp, functkw)
if status < 0:
return None
# Note optimization fjac(0:*,j)
fjac[0:,j] = (fp-fm)/(2*h[j])
return fjac
# Original FORTRAN documentation
# **********
#
# subroutine qrfac
#
# this subroutine uses householder transformations with column
# pivoting (optional) to compute a qr factorization of the
# m by n matrix a. that is, qrfac determines an orthogonal
# matrix q, a permutation matrix p, and an upper trapezoidal
# matrix r with diagonal elements of nonincreasing magnitude,
# such that a*p = q*r. the householder transformation for
# column k, k = 1,2,...,min(m,n), is of the form
#
# t
# i - (1/u(k))*u*u
#
# where u has zeros in the first k-1 positions. the form of
# this transformation and the method of pivoting first
# appeared in the corresponding linpack subroutine.
#
# the subroutine statement is
#
# subroutine qrfac(m,n,a,lda,pivot,ipvt,lipvt,rdiag,acnorm,wa)
#
# where
#
# m is a positive integer input variable set to the number
# of rows of a.
#
# n is a positive integer input variable set to the number
# of columns of a.
#
# a is an m by n array. on input a contains the matrix for
# which the qr factorization is to be computed. on output
# the strict upper trapezoidal part of a contains the strict
# upper trapezoidal part of r, and the lower trapezoidal
# part of a contains a factored form of q (the non-trivial
# elements of the u vectors described above).
#
# lda is a positive integer input variable not less than m
# which specifies the leading dimension of the array a.
#
# pivot is a logical input variable. if pivot is set true,
# then column pivoting is enforced. if pivot is set false,
# then no column pivoting is done.
#
# ipvt is an integer output array of length lipvt. ipvt
# defines the permutation matrix p such that a*p = q*r.
# column j of p is column ipvt(j) of the identity matrix.
# if pivot is false, ipvt is not referenced.
#
# lipvt is a positive integer input variable. if pivot is false,
# then lipvt may be as small as 1. if pivot is true, then
# lipvt must be at least n.
#
# rdiag is an output array of length n which contains the
# diagonal elements of r.
#
# acnorm is an output array of length n which contains the
# norms of the corresponding columns of the input matrix a.
# if this information is not needed, then acnorm can coincide
# with rdiag.
#
# wa is a work array of length n. if pivot is false, then wa
# can coincide with rdiag.
#
# subprograms called
#
# minpack-supplied ... dpmpar,enorm
#
# fortran-supplied ... dmax1,dsqrt,min0
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
#
# PIVOTING / PERMUTING:
#
# Upon return, A(*,*) is in standard parameter order, A(*,IPVT) is in
# permuted order.
#
# RDIAG is in permuted order.
# ACNORM is in standard parameter order.
#
#
# NOTE: in IDL the factors appear slightly differently than described
# above. The matrix A is still m x n where m >= n.
#
# The "upper" triangular matrix R is actually stored in the strict
# lower left triangle of A under the standard notation of IDL.
#
# The reflectors that generate Q are in the upper trapezoid of A upon
# output.
#
# EXAMPLE: decompose the matrix [[9.,2.,6.],[4.,8.,7.]]
# aa = [[9.,2.,6.],[4.,8.,7.]]
# mpfit_qrfac, aa, aapvt, rdiag, aanorm
# IDL> print, aa
# 1.81818* 0.181818* 0.545455*
# -8.54545+ 1.90160* 0.432573*
# IDL> print, rdiag
# -11.0000+ -7.48166+
#
# The components marked with a * are the components of the
# reflectors, and those marked with a + are components of R.
#
# To reconstruct Q and R we proceed as follows. First R.
# r = fltarr(m, n)
# for i = 0, n-1 do r(0:i,i) = aa(0:i,i) # fill in lower diag
# r(lindgen(n)*(m+1)) = rdiag
#
# Next, Q, which are composed from the reflectors. Each reflector v
# is taken from the upper trapezoid of aa, and converted to a matrix
# via (I - 2 vT . v / (v . vT)).
#
# hh = ident # identity matrix
# for i = 0, n-1 do begin
# v = aa(*,i) & if i GT 0 then v(0:i-1) = 0 # extract reflector
# hh = hh # (ident - 2*(v # v)/total(v * v)) # generate matrix
# endfor
#
# Test the result:
# IDL> print, hh # transpose(r)
# 9.00000 4.00000
# 2.00000 8.00000
# 6.00000 7.00000
#
# Note that it is usually never necessary to form the Q matrix
# explicitly, and MPFIT does not.
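    #
    # A rough numpy/scipy sketch of the same check (illustrative only; the
    # example values and library calls are assumptions, not part of the
    # original code). The matrix is the IDL example above in row-major layout:
    #
    #   import numpy, scipy.linalg
    #   a = numpy.array([[9., 4.], [2., 8.], [6., 7.]])   # m x n with m >= n
    #   q, r, piv = scipy.linalg.qr(a, pivoting=True)
    #   numpy.allclose(a[:, piv], q @ r)                  # -> True
    #
    # qrfac below stores the same information more compactly (the Householder
    # reflectors plus rdiag) instead of forming Q explicitly.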
def qrfac(self, a, pivot=0):
if self.debug: print 'Entering qrfac...'
machep = self.machar.machep
sz = a.shape
m = sz[0]
n = sz[1]
# Compute the initial column norms and initialize arrays
acnorm = numpy.zeros(n, dtype=float)
for j in range(n):
acnorm[j] = self.enorm(a[:,j])
rdiag = acnorm.copy()
wa = rdiag.copy()
ipvt = numpy.arange(n)
# Reduce a to r with householder transformations
minmn = numpy.min([m,n])
for j in range(minmn):
if pivot != 0:
# Bring the column of largest norm into the pivot position
rmax = numpy.max(rdiag[j:])
kmax = (numpy.nonzero(rdiag[j:] == rmax))[0]
ct = len(kmax)
kmax = kmax + j
if ct > 0:
kmax = kmax[0]
# Exchange rows via the pivot only. Avoid actually exchanging
# the rows, in case there is lots of memory transfer. The
# exchange occurs later, within the body of MPFIT, after the
# extraneous columns of the matrix have been shed.
if kmax != j:
temp = ipvt[j] ; ipvt[j] = ipvt[kmax] ; ipvt[kmax] = temp
rdiag[kmax] = rdiag[j]
wa[kmax] = wa[j]
# Compute the householder transformation to reduce the jth
# column of A to a multiple of the jth unit vector
lj = ipvt[j]
ajj = a[j:,lj]
ajnorm = self.enorm(ajj)
if ajnorm == 0:
break
if a[j,lj] < 0:
ajnorm = -ajnorm
ajj = ajj / ajnorm
ajj[0] = ajj[0] + 1
# *** Note optimization a(j:*,j)
a[j:,lj] = ajj
# Apply the transformation to the remaining columns
# and update the norms
# NOTE to SELF: tried to optimize this by removing the loop,
# but it actually got slower. Reverted to "for" loop to keep
# it simple.
if j+1 < n:
for k in range(j+1, n):
lk = ipvt[k]
ajk = a[j:,lk]
# *** Note optimization a(j:*,lk)
# (corrected 20 Jul 2000)
if a[j,lj] != 0:
a[j:,lk] = ajk - ajj * sum(ajk*ajj)/a[j,lj]
if (pivot != 0) and (rdiag[k] != 0):
temp = a[j,lk]/rdiag[k]
rdiag[k] = rdiag[k] * numpy.sqrt(numpy.max([(1.-temp**2), 0.]))
temp = rdiag[k]/wa[k]
if (0.05*temp*temp) <= machep:
rdiag[k] = self.enorm(a[j+1:,lk])
wa[k] = rdiag[k]
rdiag[j] = -ajnorm
return [a, ipvt, rdiag, acnorm]
# Original FORTRAN documentation
# **********
#
# subroutine qrsolv
#
# given an m by n matrix a, an n by n diagonal matrix d,
# and an m-vector b, the problem is to determine an x which
# solves the system
#
# a*x = b , d*x = 0 ,
#
# in the least squares sense.
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then qrsolv expects
# the full upper triangle of r, the permutation matrix p,
# and the first n components of (q transpose)*b. the system
# a*x = b, d*x = 0, is then equivalent to
#
# t t
# r*z = q *b , p *d*p*z = 0 ,
#
# where x = p*z. if this system does not have full rank,
# then a least squares solution is obtained. on output qrsolv
# also provides an upper triangular matrix s such that
#
# t t t
# p *(a *a + d*d)*p = s *s .
#
# s is computed within qrsolv and may be of separate interest.
#
# the subroutine statement is
#
# subroutine qrsolv(n,r,ldr,ipvt,diag,qtb,x,sdiag,wa)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle
# must contain the full upper triangle of the matrix r.
# on output the full upper triangle is unaltered, and the
# strict lower triangle contains the strict upper triangle
# (transposed) of the upper triangular matrix s.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# diag is an input array of length n which must contain the
# diagonal elements of the matrix d.
#
# qtb is an input array of length n which must contain the first
# n elements of the vector (q transpose)*b.
#
# x is an output array of length n which contains the least
# squares solution of the system a*x = b, d*x = 0.
#
# sdiag is an output array of length n which contains the
# diagonal elements of the upper triangular matrix s.
#
# wa is a work array of length n.
#
# subprograms called
#
# fortran-supplied ... dabs,dsqrt
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
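    # Illustrative cross-check (an assumption for exposition, not part of the
    # original code): the augmented least squares problem solved here,
    # min ||a*x - b||^2 + ||d*x||^2, can also be solved directly with numpy:
    #
    #   import numpy
    #   aug_a = numpy.vstack([a, d])                   # d is the n x n diagonal matrix
    #   aug_b = numpy.concatenate([b, numpy.zeros(n)])
    #   x_ref = numpy.linalg.lstsq(aug_a, aug_b, rcond=None)[0]
    #
    # qrsolv reaches the same x using only the QR factors of a, which is what
    # makes the repeated solves inside lmpar cheap.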
def qrsolv(self, r, ipvt, diag, qtb, sdiag):
if self.debug:
print 'Entering qrsolv...'
sz = r.shape
m = sz[0]
n = sz[1]
# copy r and (q transpose)*b to preserve input and initialize s.
# in particular, save the diagonal elements of r in x.
for j in range(n):
r[j:n,j] = r[j,j:n]
x = numpy.diagonal(r).copy()
wa = qtb.copy()
# Eliminate the diagonal matrix d using a givens rotation
for j in range(n):
l = ipvt[j]
if diag[l] == 0:
break
sdiag[j:] = 0
sdiag[j] = diag[l]
# The transformations to eliminate the row of d modify only a
# single element of (q transpose)*b beyond the first n, which
# is initially zero.
qtbpj = 0.
for k in range(j,n):
if sdiag[k] == 0:
break
if numpy.abs(r[k,k]) < numpy.abs(sdiag[k]):
cotan = r[k,k]/sdiag[k]
sine = 0.5/numpy.sqrt(.25 + .25*cotan*cotan)
cosine = sine*cotan
else:
tang = sdiag[k]/r[k,k]
cosine = 0.5/numpy.sqrt(.25 + .25*tang*tang)
sine = cosine*tang
# Compute the modified diagonal element of r and the
# modified element of ((q transpose)*b,0).
r[k,k] = cosine*r[k,k] + sine*sdiag[k]
temp = cosine*wa[k] + sine*qtbpj
qtbpj = -sine*wa[k] + cosine*qtbpj
wa[k] = temp
# Accumulate the transformation in the row of s
if n > k+1:
temp = cosine*r[k+1:n,k] + sine*sdiag[k+1:n]
sdiag[k+1:n] = -sine*r[k+1:n,k] + cosine*sdiag[k+1:n]
r[k+1:n,k] = temp
sdiag[j] = r[j,j]
r[j,j] = x[j]
# Solve the triangular system for z. If the system is singular
# then obtain a least squares solution
nsing = n
wh = (numpy.nonzero(sdiag == 0))[0]
if len(wh) > 0:
nsing = wh[0]
wa[nsing:] = 0
if nsing >= 1:
wa[nsing-1] = wa[nsing-1]/sdiag[nsing-1] # Degenerate case
# *** Reverse loop ***
for j in range(nsing-2,-1,-1):
sum0 = sum(r[j+1:nsing,j]*wa[j+1:nsing])
wa[j] = (wa[j]-sum0)/sdiag[j]
# Permute the components of z back to components of x
x[ipvt] = wa
return (r, x, sdiag)
# Original FORTRAN documentation
#
# subroutine lmpar
#
# given an m by n matrix a, an n by n nonsingular diagonal
# matrix d, an m-vector b, and a positive number delta,
# the problem is to determine a value for the parameter
# par such that if x solves the system
#
# a*x = b , sqrt(par)*d*x = 0 ,
#
# in the least squares sense, and dxnorm is the euclidean
# norm of d*x, then either par is zero and
#
# (dxnorm-delta) .le. 0.1*delta ,
#
# or par is positive and
#
# abs(dxnorm-delta) .le. 0.1*delta .
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# qr factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then lmpar expects
# the full upper triangle of r, the permutation matrix p,
# and the first n components of (q transpose)*b. on output
# lmpar also provides an upper triangular matrix s such that
#
# t t t
# p *(a *a + par*d*d)*p = s *s .
#
# s is employed within lmpar and may be of separate interest.
#
# only a few iterations are generally needed for convergence
# of the algorithm. if, however, the limit of 10 iterations
# is reached, then the output par will contain the best
# value obtained so far.
#
# the subroutine statement is
#
# subroutine lmpar(n,r,ldr,ipvt,diag,qtb,delta,par,x,sdiag,
# wa1,wa2)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle
# must contain the full upper triangle of the matrix r.
# on output the full upper triangle is unaltered, and the
# strict lower triangle contains the strict upper triangle
# (transposed) of the upper triangular matrix s.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# diag is an input array of length n which must contain the
# diagonal elements of the matrix d.
#
# qtb is an input array of length n which must contain the first
# n elements of the vector (q transpose)*b.
#
# delta is a positive input variable which specifies an upper
# bound on the euclidean norm of d*x.
#
# par is a nonnegative variable. on input par contains an
# initial estimate of the levenberg-marquardt parameter.
# on output par contains the final estimate.
#
# x is an output array of length n which contains the least
# squares solution of the system a*x = b, sqrt(par)*d*x = 0,
# for the output par.
#
# sdiag is an output array of length n which contains the
# diagonal elements of the upper triangular matrix s.
#
# wa1 and wa2 are work arrays of length n.
#
# subprograms called
#
# minpack-supplied ... dpmpar,enorm,qrsolv
#
# fortran-supplied ... dabs,dmax1,dmin1,dsqrt
#
# argonne national laboratory. minpack project. march 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
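    # In other words, lmpar searches for the scalar par such that the damped
    # solution x(par) of min ||a*x - b||^2 + par*||d*x||^2 satisfies
    # ||d*x(par)|| ~= delta to within 10%. A direct check of one trial value
    # (an assumed illustration, not part of the original code):
    #
    #   x_par = numpy.linalg.lstsq(
    #       numpy.vstack([a, numpy.sqrt(par) * d]),
    #       numpy.concatenate([b, numpy.zeros(n)]), rcond=None)[0]
    #   abs(numpy.linalg.norm(d @ x_par) - delta) <= 0.1 * delta   # acceptance test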
def lmpar(self, r, ipvt, diag, qtb, delta, x, sdiag, par=None):
if self.debug:
print 'Entering lmpar...'
dwarf = self.machar.minnum
machep = self.machar.machep
sz = r.shape
m = sz[0]
n = sz[1]
# Compute and store in x the gauss-newton direction. If the
# jacobian is rank-deficient, obtain a least-squares solution
nsing = n
wa1 = qtb.copy()
rthresh = numpy.max(numpy.abs(numpy.diagonal(r))) * machep
wh = (numpy.nonzero(numpy.abs(numpy.diagonal(r)) < rthresh))[0]
if len(wh) > 0:
nsing = wh[0]
wa1[wh[0]:] = 0
if nsing >= 1:
# *** Reverse loop ***
for j in range(nsing-1,-1,-1):
wa1[j] = wa1[j]/r[j,j]
if j-1 >= 0:
wa1[0:j] = wa1[0:j] - r[0:j,j]*wa1[j]
# Note: ipvt here is a permutation array
x[ipvt] = wa1
# Initialize the iteration counter. Evaluate the function at the
# origin, and test for acceptance of the gauss-newton direction
iter = 0
wa2 = diag * x
dxnorm = self.enorm(wa2)
fp = dxnorm - delta
if fp <= 0.1*delta:
return [r, 0., x, sdiag]
# If the jacobian is not rank deficient, the newton step provides a
# lower bound, parl, for the zero of the function. Otherwise set
# this bound to zero.
parl = 0.
if nsing >= n:
wa1 = diag[ipvt] * wa2[ipvt] / dxnorm
wa1[0] = wa1[0] / r[0,0] # Degenerate case
for j in range(1,n): # Note "1" here, not zero
sum0 = sum(r[0:j,j]*wa1[0:j])
wa1[j] = (wa1[j] - sum0)/r[j,j]
temp = self.enorm(wa1)
parl = ((fp/delta)/temp)/temp
# Calculate an upper bound, paru, for the zero of the function
for j in range(n):
sum0 = sum(r[0:j+1,j]*qtb[0:j+1])
wa1[j] = sum0/diag[ipvt[j]]
gnorm = self.enorm(wa1)
paru = gnorm/delta
if paru == 0:
paru = dwarf/numpy.min([delta,0.1])
# If the input par lies outside of the interval (parl,paru), set
# par to the closer endpoint
par = numpy.max([par,parl])
par = numpy.min([par,paru])
if par == 0:
par = gnorm/dxnorm
# Beginning of an iteration
while(1):
iter = iter + 1
# Evaluate the function at the current value of par
if par == 0:
par = numpy.max([dwarf, paru*0.001])
temp = numpy.sqrt(par)
wa1 = temp * diag
[r, x, sdiag] = self.qrsolv(r, ipvt, wa1, qtb, sdiag)
wa2 = diag*x
dxnorm = self.enorm(wa2)
temp = fp
fp = dxnorm - delta
if (numpy.abs(fp) <= 0.1*delta) or \
((parl == 0) and (fp <= temp) and (temp < 0)) or \
(iter == 10):
break;
# Compute the newton correction
wa1 = diag[ipvt] * wa2[ipvt] / dxnorm
for j in range(n-1):
wa1[j] = wa1[j]/sdiag[j]
wa1[j+1:n] = wa1[j+1:n] - r[j+1:n,j]*wa1[j]
wa1[n-1] = wa1[n-1]/sdiag[n-1] # Degenerate case
temp = self.enorm(wa1)
parc = ((fp/delta)/temp)/temp
# Depending on the sign of the function, update parl or paru
if fp > 0:
parl = numpy.max([parl,par])
if fp < 0:
paru = numpy.min([paru,par])
# Compute an improved estimate for par
par = numpy.max([parl, par+parc])
# End of an iteration
# Termination
return [r, par, x, sdiag]
# Procedure to tie one parameter to another.
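    # For example (hypothetical values): ptied = ['', '2 * p[0]'] leaves p[0]
    # untouched and re-computes p[1] as twice p[0] each time the model is
    # evaluated; empty strings mark parameters that are not tied.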
def tie(self, p, ptied=None):
if self.debug:
print 'Entering tie...'
if ptied is None:
return
for i in range(len(ptied)):
if ptied[i] == '':
continue
cmd = 'p[' + str(i) + '] = ' + ptied[i]
exec(cmd)
return p
# Original FORTRAN documentation
# **********
#
# subroutine covar
#
# given an m by n matrix a, the problem is to determine
# the covariance matrix corresponding to a, defined as
#
# t
# inverse(a *a) .
#
# this subroutine completes the solution of the problem
# if it is provided with the necessary information from the
# qr factorization, with column pivoting, of a. that is, if
# a*p = q*r, where p is a permutation matrix, q has orthogonal
# columns, and r is an upper triangular matrix with diagonal
# elements of nonincreasing magnitude, then covar expects
# the full upper triangle of r and the permutation matrix p.
# the covariance matrix is then computed as
#
# t t
# p*inverse(r *r)*p .
#
# if a is nearly rank deficient, it may be desirable to compute
# the covariance matrix corresponding to the linearly independent
# columns of a. to define the numerical rank of a, covar uses
# the tolerance tol. if l is the largest integer such that
#
# abs(r(l,l)) .gt. tol*abs(r(1,1)) ,
#
# then covar computes the covariance matrix corresponding to
# the first l columns of r. for k greater than l, column
# and row ipvt(k) of the covariance matrix are set to zero.
#
# the subroutine statement is
#
# subroutine covar(n,r,ldr,ipvt,tol,wa)
#
# where
#
# n is a positive integer input variable set to the order of r.
#
# r is an n by n array. on input the full upper triangle must
# contain the full upper triangle of the matrix r. on output
# r contains the square symmetric covariance matrix.
#
# ldr is a positive integer input variable not less than n
# which specifies the leading dimension of the array r.
#
# ipvt is an integer input array of length n which defines the
# permutation matrix p such that a*p = q*r. column j of p
# is column ipvt(j) of the identity matrix.
#
# tol is a nonnegative input variable used to define the
# numerical rank of a in the manner described above.
#
# wa is a work array of length n.
#
# subprograms called
#
# fortran-supplied ... dabs
#
# argonne national laboratory. minpack project. august 1980.
# burton s. garbow, kenneth e. hillstrom, jorge j. more
#
# **********
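    # Illustrative numpy check of the identity used here (example values and
    # library calls are assumptions, not part of the original code):
    #
    #   import numpy, scipy.linalg
    #   a = numpy.random.rand(6, 3)
    #   q, r, piv = scipy.linalg.qr(a, pivoting=True)
    #   p = numpy.eye(3)[:, piv]                              # permutation matrix
    #   cov_qr = p @ numpy.linalg.inv(r[:3].T @ r[:3]) @ p.T
    #   numpy.allclose(cov_qr, numpy.linalg.inv(a.T @ a))     # -> True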
def calc_covar(self, rr, ipvt=None, tol=1.e-14):
if self.debug:
print 'Entering calc_covar...'
if numpy.rank(rr) != 2:
print 'ERROR: r must be a two-dimensional matrix'
return -1
s = rr.shape
n = s[0]
if s[0] != s[1]:
print 'ERROR: r must be a square matrix'
return -1
if ipvt is None:
ipvt = numpy.arange(n)
r = rr.copy()
r.shape = [n,n]
# Form the inverse of r in the full upper triangle of r
l = -1
tolr = tol * numpy.abs(r[0,0])
for k in range(n):
if numpy.abs(r[k,k]) <= tolr:
break
r[k,k] = 1./r[k,k]
for j in range(k):
temp = r[k,k] * r[j,k]
r[j,k] = 0.
r[0:j+1,k] = r[0:j+1,k] - temp*r[0:j+1,j]
l = k
# Form the full upper triangle of the inverse of (r transpose)*r
# in the full upper triangle of r
if l >= 0:
for k in range(l+1):
for j in range(k):
temp = r[j,k]
r[0:j+1,j] = r[0:j+1,j] + temp*r[0:j+1,k]
temp = r[k,k]
r[0:k+1,k] = temp * r[0:k+1,k]
# Form the full lower triangle of the covariance matrix
# in the strict lower triangle of r and in wa
wa = numpy.repeat([r[0,0]], n)
for j in range(n):
jj = ipvt[j]
sing = j > l
for i in range(j+1):
if sing:
r[i,j] = 0.
ii = ipvt[i]
if ii > jj:
r[ii,jj] = r[i,j]
if ii < jj:
r[jj,ii] = r[i,j]
wa[jj] = r[j,j]
# Symmetrize the covariance matrix in r
for j in range(n):
r[0:j+1,j] = r[j,0:j+1]
r[j,j] = wa[j]
return r
class machar:
def __init__(self, double=1):
if double == 0:
info = numpy.finfo(numpy.float32)
else:
info = numpy.finfo(numpy.float64)
self.machep = info.eps
self.maxnum = info.max
self.minnum = info.tiny
self.maxlog = numpy.log(self.maxnum)
self.minlog = numpy.log(self.minnum)
self.rdwarf = numpy.sqrt(self.minnum*1.5) * 10
self.rgiant = numpy.sqrt(self.maxnum) * 0.1
| 33.483348 | 96 | 0.666926 |
7943e794740abd67cbe3fa09614cde67eb5ed75e | 381 | py | Python | bugtests/test254c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | ["CNRI-Jython"] | null | null | null | bugtests/test254c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | ["CNRI-Jython"] | null | null | null | bugtests/test254c.py | doom38/jython_v2.2.1 | 0803a0c953c294e6d14f9fc7d08edf6a3e630a15 | ["CNRI-Jython"] | null | null | null |
from java import awt, applet
import java
print "Hi! One stacktrace expected:"
try:
raise java.lang.Exception()
except java.lang.Exception,e:
e.printStackTrace()
class test254c(applet.Applet):
def paint(self, g):
g.setColor(awt.Color.black)
g.fill3DRect(5,5,590,100,0)
g.setFont(awt.Font('Arial', 0, 80))
g.setColor(awt.Color.blue)
g.drawString('Hello World', 90, 80)
| 21.166667 | 36 | 0.721785 |
7943e83d836c398e977030c34aff38281093b0a8 | 3,940 | py | Python | wagtail_transfer/serializers.py | KalobTaulien/wagtail-transfer | c4ec94ac8a18df354462e2528070feaccd65c493 | ["BSD-3-Clause"] | 3 | 2020-11-06T12:35:47.000Z | 2021-03-26T08:13:54.000Z | wagtail_transfer/serializers.py | KalobTaulien/wagtail-transfer | c4ec94ac8a18df354462e2528070feaccd65c493 | ["BSD-3-Clause"] | null | null | null | wagtail_transfer/serializers.py | KalobTaulien/wagtail-transfer | c4ec94ac8a18df354462e2528070feaccd65c493 | ["BSD-3-Clause"] | null | null | null | from functools import lru_cache
from django.db import models
from modelcluster.fields import ParentalKey
from treebeard.mp_tree import MP_Node
from wagtail.core.models import Page
from .field_adapters import get_field_adapter
from .models import get_base_model
class ModelSerializer:
ignored_fields = []
def __init__(self, model):
self.model = model
self.base_model = get_base_model(model)
self.field_adapters = []
for field in self.model._meta.get_fields():
if field.name in self.ignored_fields:
continue
if isinstance(field, models.Field):
# this is a genuine field rather than a reverse relation
# ignore primary keys (including MTI parent pointers)
if field.primary_key:
continue
else:
# this is probably a reverse relation, so fetch its related field
try:
related_field = field.field
except AttributeError:
# we don't know what sort of pseudo-field this is, so skip it
continue
# ignore relations other than ParentalKey
if not isinstance(related_field, ParentalKey):
continue
self.field_adapters.append(get_field_adapter(field))
def get_objects_by_ids(self, ids):
"""
Given a list of IDs, return a queryset of model instances that we can
run serialize and get_object_references on
"""
return self.model.objects.filter(pk__in=ids)
def serialize_fields(self, instance):
return {
field_adapter.name: field_adapter.serialize(instance)
for field_adapter in self.field_adapters
}
def serialize(self, instance):
return {
'model': self.model._meta.label_lower,
'pk': instance.pk,
'fields': self.serialize_fields(instance)
}
def get_object_references(self, instance):
refs = {
# always include the primary key as an object reference
(self.base_model, instance.pk)
}
for f in self.field_adapters:
refs.update(f.get_object_references(instance))
return refs
class TreeModelSerializer(ModelSerializer):
ignored_fields = ['path', 'depth', 'numchild']
def serialize(self, instance):
result = super().serialize(instance)
if instance.is_root():
result['parent_id'] = None
else:
result['parent_id'] = instance.get_parent().pk
return result
def get_object_references(self, instance):
refs = super().get_object_references(instance)
if not instance.is_root():
# add a reference for the parent ID
refs.add(
(self.base_model, instance.get_parent().pk)
)
return refs
class PageSerializer(TreeModelSerializer):
ignored_fields = TreeModelSerializer.ignored_fields + [
'url_path', 'content_type', 'draft_title', 'has_unpublished_changes', 'owner',
'go_live_at', 'expire_at', 'expired', 'locked', 'first_published_at', 'last_published_at',
'latest_revision_created_at', 'live_revision',
]
def get_objects_by_ids(self, ids):
# serialize method needs the instance in its specific form
return super().get_objects_by_ids(ids).specific()
SERIALIZERS_BY_MODEL_CLASS = {
models.Model: ModelSerializer,
MP_Node: TreeModelSerializer,
Page: PageSerializer,
}
@lru_cache(maxsize=None)
def get_model_serializer(model):
# find the serializer class for the most specific class in the model's inheritance tree
for cls in model.__mro__:
if cls in SERIALIZERS_BY_MODEL_CLASS:
serializer_class = SERIALIZERS_BY_MODEL_CLASS[cls]
return serializer_class(model)
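# Illustrative behaviour (not part of the original module; SomeModel is a
# hypothetical plain Django model):
#
#   get_model_serializer(Page)        # -> PageSerializer instance
#   get_model_serializer(SomeModel)   # -> ModelSerializer, unless the model
#                                     #    subclasses MP_Node or Page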
| 32.295082 | 98 | 0.633756 |
7943e8fafc672c179d403f3d8e4ea2ba7d0840b2 | 16,208 | py | Python | tradingsystems/marketdata.py | GBERESEARCH/tradingsystems | 5158d41d32b48d35db34a6e132c7fa2f259987c1 | ["MIT"] | 1 | 2021-09-10T04:28:37.000Z | 2021-09-10T04:28:37.000Z | tradingsystems/marketdata.py | GBERESEARCH/tradingsystems | 5158d41d32b48d35db34a6e132c7fa2f259987c1 | ["MIT"] | null | null | null | tradingsystems/marketdata.py | GBERESEARCH/tradingsystems | 5158d41d32b48d35db34a6e132c7fa2f259987c1 | ["MIT"] | 1 | 2021-09-10T04:28:38.000Z | 2021-09-10T04:28:37.000Z | """
Market Data functions
"""
import os
import norgatedata
import numpy as np
import pandas as pd
import requests
from yahoofinancials import YahooFinancials
class Markets():
"""
Methods for collecting data from Norgate Data, Yahoo Finance and
AlphaVantage and extracting the long names of the Norgate Data tickers.
"""
@classmethod
def create_base_data(
cls, ticker=None, source=None, params=None):
"""
Create DataFrame of OHLC prices from NorgateData or Yahoo Finance
Parameters
----------
ticker : Str, optional
    Underlying to return. The default '$SPX'.
source : Str, optional
    The data source to use: 'norgate', 'yahoo' or 'alpha'. The default
    is 'norgate'.
params : Dict
    ccy_1 : Str
        Primary currency of pair to return. The default 'GBP'.
    ccy_2 : Str
        Secondary currency of pair to return. The default 'USD'.
    start_date : Str
        Date to begin backtest. Format is 'YYYY-MM-DD'.
    end_date : Str
        Date to end backtest. Format is 'YYYY-MM-DD'.
    api_key : Str
        AlphaVantage API key. If not provided will look for
        'ALPHAVANTAGE_API_KEY' in the environment variables.
Returns
-------
prices : DataFrame
    Returns OHLC DataFrame.
params : Dict
    Dictionary of key parameters with the asset type set.
"""
# If a valid source has been provided
#try:
# Extract data from Norgate
if source == 'norgate':
prices = cls.return_norgate_data(
ticker=ticker, params=params)
params['asset_type'] = 'commodity'
# Extract data from Yahoo Finance
elif source == 'yahoo':
prices = cls.return_yahoo_data(
ticker=ticker, params=params)
params['asset_type'] = 'equity'
# Extract data from AlphaVantage
elif source == 'alpha':
prices = cls.return_alphavantage_data(
ticker=ticker, params=params)
else:
raise ValueError(
'Select a data source from yahoo, norgate or alpha')
return prices, params
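    # Usage sketch for create_base_data above (illustrative only; the ticker
    # and the params contents shown are assumptions, not part of the library):
    #
    #   params = {'start_date': '2020-01-01', 'end_date': '2020-12-31',
    #             'ccy_1': 'GBP', 'ccy_2': 'USD', 'asset_type': 'fx',
    #             'api_key': ''}
    #   prices, params = Markets.create_base_data(
    #       ticker='&ES_CCB', source='norgate', params=params)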
@classmethod
def reset_data(cls, tables, params):
"""
Reset price table to initial data
Parameters
----------
tables : Dict
Dictionary of key tables.
params : Dict
Dictionary of key parameters.
Returns
-------
tables_reset : Dict
Dictionary with prices table reset.
params : Dict
Dictionary of key parameters with contract data updated.
"""
tables_reset = {}
tables_reset['prices'] = tables['prices'][
['Open', 'High', 'Low', 'Close']]
tables_reset['benchmark'] = tables['benchmark'][
['Open', 'High', 'Low', 'Close']]
params = cls.contract_data(
ticker=params['ticker'], prices=tables['prices'],
params=params)
return tables_reset, params
@staticmethod
def return_norgate_data(ticker, params):
"""
Create DataFrame of historic prices for specified ticker using Norgate
Data as the source.
Parameters
----------
ticker : Str
Norgate data ticker.
params : Dict
start_date : Str, optional
Date to begin backtest. Format is 'YYYY-MM-DD'.
end_date : Str, optional
Date to end backtest. Format is 'YYYY-MM-DD'.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
timeseriesformat = 'pandas-dataframe'
prices = norgatedata.price_timeseries(
symbol=ticker,
start_date=params['start_date'],
end_date=params['end_date'],
format=timeseriesformat)
return prices
@staticmethod
def return_yahoo_data(ticker, params):
"""
Create DataFrame of historic prices for specified ticker using Yahoo
Finance as the source.
Parameters
----------
ticker : Int
Stock to be returned in the form of Reuters RIC code as a
string.
params : Dict
start_date : Str, optional
Date to begin backtest. Format is 'YYYY-MM-DD'.
end_date : Str, optional
Date to end backtest. Format is 'YYYY-MM-DD'.
freq : Str
Frequency of data - set to 'daily'.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
# Initialise data class
yahoo_financials = YahooFinancials(ticker)
freq='daily'
# Extract historic prices
prices = yahoo_financials.get_historical_price_data(
params['start_date'], params['end_date'], freq)
# Reformat columns
prices = pd.DataFrame(
prices[ticker]['prices']).drop(['date'], axis=1) \
.rename(columns={'formatted_date':'Date',
'open': 'Open',
'high': 'High',
'low': 'Low',
'close': 'Close',
'volume': 'Volume'}) \
.loc[:, ['Date','Open','High','Low','Close','Volume']] \
.set_index('Date')
# Set Index to Datetime
prices.index = pd.to_datetime(prices.index)
return prices
@classmethod
def return_alphavantage_data(cls, params, ticker=None):
"""
Create DataFrame of historic prices for specified ticker using
AlphaVantage as the source.
Parameters
----------
params : Dict
    ccy_1 : Str
        Primary currency of pair to return. The default 'GBP'.
    ccy_2 : Str
        Secondary currency of pair to return. The default 'USD'.
    asset_type : Str
        The alphavantage asset class type: 'fx', 'crypto' or 'equity'.
    start_date : Str, optional
        Date to begin backtest. Format is 'YYYY-MM-DD'.
    end_date : Str, optional
        Date to end backtest. Format is 'YYYY-MM-DD'.
    api_key : Str
        AlphaVantage API key. If not provided will look for
        'ALPHAVANTAGE_API_KEY' in the environment variables.
ticker : Str, optional
    Underlying to return (used when asset_type is 'equity'). The default
    is None.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
# Set API key
if params['api_key'] == '':
params['api_key'] = os.getenv('ALPHAVANTAGE_API_KEY')
# FX pair
if params['asset_type'] == 'fx':
prices = cls._alphavantage_fx(
ccy_1=params['ccy_1'],
ccy_2=params['ccy_2'],
api_key=params['api_key'])
# Cryptocurrency
elif params['asset_type'] == 'crypto':
prices = cls._alphavantage_crypto(
ccy_1=params['ccy_1'],
ccy_2=params['ccy_2'],
api_key=params['api_key'])
# Equity Single stock or Index
elif params['asset_type'] == 'equity':
prices = cls._alphavantage_equity(
ticker=ticker, api_key=params['api_key'])
# Otherwise raise an error
else:
raise ValueError("Please enter a valid asset type")
# Set Index to Datetime
prices.index = pd.to_datetime(prices.index)
# Sort data in ascending order
prices = prices[::-1]
# If a start date has been provided
if params['start_date'] is not None:
# Set the start variable to this, converting to datetime format
start = pd.to_datetime(params['start_date'])
# If no start date is provided
else:
# Set the start variable to the first row in the DataFrame
start = prices.index[0]
# If an end date has been provided
if params['end_date'] is not None:
# Set the end variable to this, converting to datetime format
end = pd.to_datetime(params['end_date'])
# If no end date is provided
else:
# Set the end variable to the last row in the DataFrame
end = prices.index[-1]
# Trim data to specified dates
prices = prices.loc[start:end]
return prices
@staticmethod
def _alphavantage_fx(ccy_1, ccy_2, api_key):
"""
Create DataFrame of historic prices for an fx pair using
AlphaVantage as the source.
Parameters
----------
ccy_1 : Str
Primary currency of pair to return. The default 'GBP'.
ccy_2 : Str
Secondary currency of pair to return. The default 'USD'.
api_key : Str
AlphaVantage API key. If not provided will look for
'ALPHAVANTAGE_API_KEY' in the environment variables.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
# Set url to extract prices from
base_url = 'https://www.alphavantage.co/query?'
# Set fx params
params = {'function': 'FX_DAILY',
'from_symbol': ccy_1,
'to_symbol': ccy_2,
'outputsize':'full',
'apikey': api_key}
response = requests.get(base_url, params=params)
response_dict = response.json()
_, header = response.json()
#Convert to pandas dataframe
prices = pd.DataFrame.from_dict(
response_dict[header], orient='index')
#Clean up column names
df_cols = [i.split(' ')[1].title() for i in prices.columns]
prices.columns = df_cols
# Set datatype to float
prices = prices.astype(float)
return prices
@staticmethod
def _alphavantage_crypto(ccy_1, ccy_2, api_key):
"""
Create DataFrame of historic prices for a cryptocurrency pair using
AlphaVantage as the source.
Parameters
----------
ccy_1 : Str
Primary currency of pair to return.
ccy_2 : Str
Secondary currency of pair to return.
api_key : Str
AlphaVantage API key. If not provided will look for
'ALPHAVANTAGE_API_KEY' in the environment variables.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
# Set url to extract prices from
base_url = 'https://www.alphavantage.co/query?'
# Set crypto params
params = {'function': 'DIGITAL_CURRENCY_DAILY',
'symbol': ccy_1,
'market': ccy_2,
'apikey': api_key}
response = requests.get(base_url, params=params)
response_dict = response.json()
_, header = response.json()
#Convert to pandas dataframe
prices = pd.DataFrame.from_dict(
response_dict[header], orient='index')
# Select the USD OHLC columns
prices = prices[
[prices.columns[1], prices.columns[3], prices.columns[5],
prices.columns[7]]]
# Set column names
prices.columns = ['Open', 'High', 'Low', 'Close']
# Set datatype to float
prices = prices.astype(float)
return prices
@staticmethod
def _alphavantage_equity(ticker, api_key):
"""
Create DataFrame of historic prices for an equity ticker using
AlphaVantage as the source.
Parameters
----------
ticker : Str
Underlying to return. The default '$SPX'.
api_key : Str
AlphaVantage API key. If not provided will look for
'ALPHAVANTAGE_API_KEY' in the environment variables.
Returns
-------
prices : DataFrame
DataFrame of historic prices for given ticker.
"""
# Set url to extract prices from
base_url = 'https://www.alphavantage.co/query?'
# Set equity params
params = {'function': 'TIME_SERIES_DAILY_ADJUSTED',
'symbol': ticker,
'outputsize':'full',
'apikey': api_key}
response = requests.get(base_url, params=params)
response_dict = response.json()
_, header = response.json()
#Convert to pandas dataframe
prices = pd.DataFrame.from_dict(
response_dict[header], orient='index')
#Clean up column names
df_cols = [i.split(' ')[1].title() for i in prices.columns]
prices.columns = df_cols
# Set datatype to float
prices = prices.astype(float)
# Calculate stock split multiplier
prices['split_mult'] = np.array([1.0]*len(prices))
for row in range(1, len(prices)):
if prices['Split'][row] == 1:
prices['split_mult'][row] = prices['split_mult'][row-1]
else:
prices['split_mult'][row] = (prices['split_mult'][row-1]
* prices['Split'][row])
# Adjust OHLC prices for splits
prices['O'] = np.round(prices['Open'] / prices['split_mult'], 2)
prices['H'] = np.round(prices['High'] / prices['split_mult'], 2)
prices['L'] = np.round(prices['Low'] / prices['split_mult'], 2)
prices['C'] = np.round(prices['Close'] / prices['split_mult'], 2)
# Select only OHLC columns
prices = prices[['O', 'H', 'L', 'C']]
# Set column names
prices.columns = ['Open', 'High', 'Low', 'Close']
return prices
@staticmethod
def contract_data(ticker, prices, params):
if ticker[0] == '&':
if ticker[-4:] == '_CCB':
ticker = ticker[:-4]
params['front_ticker'] = (
ticker[1:]
+'-'
+str(prices['Delivery Month'][-1])[:4]
+params['contract_months'][
str(prices['Delivery Month'][-1])[4:6]])
params['per_contract_margin'] = norgatedata.margin(
params['front_ticker'])
params['contract_point_value'] = norgatedata.point_value(
params['front_ticker'])
else:
params['contract_point_value'] = 1
return params
@staticmethod
def norgate_name_dict():
"""
Create a dictionary of the long names of the Norgate tickers.
Returns
-------
norgate_name_dict : Dict
Dictionary lookup of Norgate tickers to long names.
"""
# Get list of the available databases
alldatabasenames = norgatedata.databases()
# Create empty dictionary to store names
norgate_name_dict = {}
# For each of the available databases
for database in alldatabasenames:
# Extract dictionary of dictionaries, one for each ticker in the
# database
databasecontents = norgatedata.database(database)
# For each ticker in the database
for dicto in databasecontents:
# Set the key-value pair of the new dictionary to the ticker
# and long name respectively
key = dicto['symbol']
value = dicto['securityname']
# Whether to include backadjusted / regular continuous futures
if database == 'Continuous Futures':
#if '_CCB' in key:
norgate_name_dict[key] = value
# Don't include the individual futures contracts
elif database == 'Futures':
pass
# Store the values in the dictionary
else:
norgate_name_dict[key] = value
return norgate_name_dict
| 30.466165 | 78 | 0.547878 |
7943e9f41fe97ec417c49d925aa77dd02ea99b64 | 5,912 | py | Python | raisimGymTorch/raisimGymTorch/env/envs/rsg_anymal/runner.py | gsiddhant/raisimLib | ad844774595189d5d432a5631c05884af4adebdb | ["Apache-2.0"] | null | null | null | raisimGymTorch/raisimGymTorch/env/envs/rsg_anymal/runner.py | gsiddhant/raisimLib | ad844774595189d5d432a5631c05884af4adebdb | ["Apache-2.0"] | null | null | null | raisimGymTorch/raisimGymTorch/env/envs/rsg_anymal/runner.py | gsiddhant/raisimLib | ad844774595189d5d432a5631c05884af4adebdb | ["Apache-2.0"] | null | null | null | from ruamel.yaml import YAML, dump, RoundTripDumper
from raisimGymTorch.env.bin import rsg_anymal
from raisimGymTorch.env.RaisimGymVecEnv import RaisimGymVecEnv as VecEnv
from raisimGymTorch.helper.raisim_gym_helper import ConfigurationSaver, load_param, tensorboard_launcher
import os
import math
import time
import raisimGymTorch.algo.ppo.module as ppo_module
import raisimGymTorch.algo.ppo.ppo as PPO
import torch.nn as nn
import numpy as np
import torch
import datetime
import argparse
# task specification
task_name = "anymal_locomotion"
# configuration
parser = argparse.ArgumentParser()
parser.add_argument('-m', '--mode', help='set mode either train or test', type=str, default='train')
parser.add_argument('-w', '--weight', help='pre-trained weight path', type=str, default='')
args = parser.parse_args()
mode = args.mode
weight_path = args.weight
# check if gpu is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# directories
task_path = os.path.dirname(os.path.realpath(__file__))
home_path = task_path + "/../../../../.."
# config
cfg = YAML().load(open(task_path + "/cfg.yaml", 'r'))
# create environment from the configuration file
env = VecEnv(rsg_anymal.RaisimGymEnv(home_path + "/rsc", dump(cfg['environment'], Dumper=RoundTripDumper)), cfg['environment'])
# shortcuts
ob_dim = env.num_obs
act_dim = env.num_acts
# Training
n_steps = math.floor(cfg['environment']['max_time'] / cfg['environment']['control_dt'])
total_steps = n_steps * env.num_envs
avg_rewards = []
actor = ppo_module.Actor(ppo_module.MLP(cfg['architecture']['policy_net'], nn.LeakyReLU, ob_dim, act_dim),
ppo_module.MultivariateGaussianDiagonalCovariance(act_dim, 1.0),
device)
critic = ppo_module.Critic(ppo_module.MLP(cfg['architecture']['value_net'], nn.LeakyReLU, ob_dim, 1),
device)
saver = ConfigurationSaver(log_dir=home_path + "/raisimGymTorch/data/"+task_name,
save_items=[task_path + "/cfg.yaml", task_path + "/Environment.hpp"])
tensorboard_launcher(saver.data_dir+"/..") # press refresh (F5) after the first ppo update
ppo = PPO.PPO(actor=actor,
critic=critic,
num_envs=cfg['environment']['num_envs'],
num_transitions_per_env=n_steps,
num_learning_epochs=4,
gamma=0.996,
lam=0.95,
num_mini_batches=4,
device=device,
log_dir=saver.data_dir,
shuffle_batch=False,
)
if mode == 'retrain':
load_param(weight_path, env, actor, critic, ppo.optimizer, saver.data_dir)
for update in range(1000000):
start = time.time()
env.reset()
reward_ll_sum = 0
done_sum = 0
average_dones = 0.
if update % cfg['environment']['eval_every_n'] == 0:
print("Visualizing and evaluating the current policy")
torch.save({
'actor_architecture_state_dict': actor.architecture.state_dict(),
'actor_distribution_state_dict': actor.distribution.state_dict(),
'critic_architecture_state_dict': critic.architecture.state_dict(),
'optimizer_state_dict': ppo.optimizer.state_dict(),
}, saver.data_dir+"/full_"+str(update)+'.pt')
# we create another graph just to demonstrate the save/load method
loaded_graph = ppo_module.MLP(cfg['architecture']['policy_net'], nn.LeakyReLU, ob_dim, act_dim)
loaded_graph.load_state_dict(torch.load(saver.data_dir+"/full_"+str(update)+'.pt')['actor_architecture_state_dict'])
env.turn_on_visualization()
env.start_video_recording(datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S") + "policy_"+str(update)+'.mp4')
for step in range(n_steps*2):
frame_start = time.time()
obs = env.observe(False)
action_ll = loaded_graph.architecture(torch.from_numpy(obs).cpu())
reward_ll, dones = env.step(action_ll.cpu().detach().numpy())
frame_end = time.time()
wait_time = cfg['environment']['control_dt'] - (frame_end-frame_start)
if wait_time > 0.:
time.sleep(wait_time)
env.stop_video_recording()
env.turn_off_visualization()
env.reset()
env.save_scaling(saver.data_dir, str(update))
# actual training
for step in range(n_steps):
obs = env.observe()
action = ppo.observe(obs)
reward, dones = env.step(action)
ppo.step(value_obs=obs, rews=reward, dones=dones)
done_sum = done_sum + sum(dones)
reward_ll_sum = reward_ll_sum + sum(reward)
# take one more observation to get the value obs for the update
obs = env.observe()
ppo.update(actor_obs=obs, value_obs=obs, log_this_iteration=update % 10 == 0, update=update)
average_ll_performance = reward_ll_sum / total_steps
average_dones = done_sum / total_steps
avg_rewards.append(average_ll_performance)
actor.distribution.enforce_minimum_std((torch.ones(12)*0.2).to(device))
end = time.time()
print('----------------------------------------------------')
print('{:>6}th iteration'.format(update))
print('{:<40} {:>6}'.format("average ll reward: ", '{:0.10f}'.format(average_ll_performance)))
print('{:<40} {:>6}'.format("dones: ", '{:0.6f}'.format(average_dones)))
print('{:<40} {:>6}'.format("time elapsed in this iteration: ", '{:6.4f}'.format(end - start)))
print('{:<40} {:>6}'.format("fps: ", '{:6.0f}'.format(total_steps / (end - start))))
print('{:<40} {:>6}'.format("real time factor: ", '{:6.0f}'.format(total_steps / (end - start)
* cfg['environment']['control_dt'])))
print('std: ')
print(np.exp(actor.distribution.std.cpu().detach().numpy()))
print('----------------------------------------------------\n')
| 40.493151 | 127 | 0.640054 |
7943e9f94a73a729cdde3b56a4080f3c7fcf5516 | 2,514 | py | Python | drklauns/timetable/export.py | Ameriks/drklauns | bc8febd72ed6d3f685cf9ad48b487d5c9bb4170e | ["MIT"] | null | null | null | drklauns/timetable/export.py | Ameriks/drklauns | bc8febd72ed6d3f685cf9ad48b487d5c9bb4170e | ["MIT"] | null | null | null | drklauns/timetable/export.py | Ameriks/drklauns | bc8febd72ed6d3f685cf9ad48b487d5c9bb4170e | ["MIT"] | null | null | null | import datetime
import pytz
import xlwt
from io import BytesIO
from django.conf import settings
from slugify import slugify
from drklauns.timetable.models import Summary, Work
riga_tz = pytz.timezone(settings.TIME_ZONE)
def monthly_excel(year: int, month: int):
output = BytesIO()
items = Summary.objects.filter(date=datetime.date(year, month, 1)).order_by('employee__first_name', 'employee__last_name', )
wbk = xlwt.Workbook()
sheet = wbk.add_sheet("Kopsavilkums %s-%s" % (year, month))
sheet.write(0, 0, "Kopsavilkums par %s - %s" % (year, month))
row = 1
header_row = (
'#', 'ID', 'Vārds', 'Uzvārds', 'Pers.kods', 'Līguma NR', 'Likme', 'Stundas', 'Alga', 'Procedūru skaits', 'Kontaktu skaits')
for col, value in enumerate(header_row):
sheet.write(row, col, value)
row = 2
for index, item in enumerate(items, start=1):
salary = round(float(item.employee.contract_rate) * item.hours_worked, 2)
row_values = (
index, item.employee_id, item.employee.first_name, item.employee.last_name, item.employee.ssn, item.employee.contract_no, item.employee.contract_rate,
item.hours_worked, salary, item.total_procedures, item.total_contacts,)
for col, value in enumerate(row_values):
sheet.write(row, col, value)
row += 1
for item in items:
sheet = wbk.add_sheet(slugify(str(item)))
sheet.write(0, 0, "Darbinieka %s visi darbi par %s - %s" % (item.employee, year, month))
row = 1
header_row = ('#', 'ID', 'No', 'Līdz', 'Stundu skaits', 'Slimnīca', 'Nodaļa', 'Procedūru skaits', 'Kontaktu skaits', 'Komentāri', 'Pievienots')
for col, value in enumerate(header_row):
sheet.write(row, col, value)
works = Work.objects.filter(employee=item.employee, start__year=item.date.year, start__month=item.date.month)
row = 2
for index, work in enumerate(works, start=1):
row_values = (
index, work.id, work.start.astimezone(riga_tz).strftime("%Y-%m-%d %H:%M"), work.end.astimezone(riga_tz).strftime("%Y-%m-%d %H:%M"), work.hours_worked, work.department.hospital.name, work.department.name,
work.number_of_procedures, work.number_of_contacts, work.comments, work.created.astimezone(riga_tz).strftime("%Y-%m-%d %H:%M"))
for col, value in enumerate(row_values):
sheet.write(row, col, value)
row += 1
wbk.save(output)
return output
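# Usage sketch (illustrative; the output path is an assumption, not part of
# the original code):
#
#   workbook_bytes = monthly_excel(2021, 3)
#   with open('summary_2021_03.xls', 'wb') as fh:
#       fh.write(workbook_bytes.getvalue())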
| 41.9 | 219 | 0.647971 |
7943ea11eb37f0a21419c25dc79a6e006b173dd0 | 10,839 | py | Python | train-net.py | CrispyHarder/deep-weight-prior | b87e61d6ad590c61b90e188ec86bfb956073be65 | ["MIT"] | null | null | null | train-net.py | CrispyHarder/deep-weight-prior | b87e61d6ad590c61b90e188ec86bfb956073be65 | ["MIT"] | null | null | null | train-net.py | CrispyHarder/deep-weight-prior | b87e61d6ad590c61b90e188ec86bfb956073be65 | ["MIT"] | null | null | null | import torch
from torch import nn
import utils
import numpy as np
import os
import time
from models.lenet import FConvMNIST
from models.cifarnet import CIFARNet, CIFARNetNew
from models.cifar import ResNet
import utils
from logger import Logger
from utils import tonp
from torch.optim.lr_scheduler import MultiStepLR
import myexman
from torch.nn import functional as F
from torch.utils.tensorboard import SummaryWriter
from my_utils import MultistepMultiGammaLR
def adjust_learning_rate(optimizer, lr):
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def lr_linear(epoch):
lr = args.lr * np.minimum((args.decrease_from - epoch) * 1. / (args.epochs - args.decrease_from) + 1, 1.)
return max(0, lr)
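# Worked example (illustrative): with the defaults defined below (lr=0.1,
# epochs=120, decrease_from=0) this decays the rate linearly, e.g. epoch 60
# gives 0.1 * min(1 - 60/120, 1) = 0.05 and epoch 120 gives 0. It is only
# applied when --milestones is empty; otherwise the multi-step scheduler is
# used in the training loop.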
def predict(data, net):
pred = []
l = []
for x, y in data:
l.append(y.numpy())
x = x.to(device)
p = F.log_softmax(net(x), dim=1)
pred.append(p.data.cpu().numpy())
return np.concatenate(pred), np.concatenate(l)
parser = myexman.ExParser(file=__file__)
#general settings
parser.add_argument('--name', default='')
parser.add_argument('--data', default='cifar')
parser.add_argument('--gpu_id', default='0')
parser.add_argument('--num_examples', default=None, type=int)
parser.add_argument('--data_split_seed', default=456, type=int)
parser.add_argument('--seed', default=5743, type=int)
parser.add_argument('--resume', default='')
parser.add_argument('--epochs', default=120, type=int, help='Number of epochs')
parser.add_argument('--bs', default=128, type=int, help='Batch size')
parser.add_argument('--test_bs', default=500, type=int, help='Batch size for test dataloader')
#model settings
parser.add_argument('--model', default='resnet20')
parser.add_argument('--model_size', default=1., type=float)
parser.add_argument('--net_cfg', default='E')
parser.add_argument('--hid_dim', default=[32, 64], type=int, nargs='+')
parser.add_argument('--n_classes', default=10, type=int)
parser.add_argument('--do', default=[], type=float, nargs='*')
#model init settings SINGLE init
parser.add_argument('--pretrained', default='')
parser.add_argument('--filters_list', default=[], nargs='*', type=str)
parser.add_argument('--init', default='xavier')
parser.add_argument('--init_list', type=str, nargs='*', default=[])
parser.add_argument('--vae', default='')
parser.add_argument('--vae_list', type=str, nargs='*', default=[])
#model init settings MULTI init (if used, single init is ignored)
parser.add_argument('--mult_init', default= 1, type = int)
parser.add_argument('--mult_init_mode', default= 'xavier', type = str,
help = '''such as vqvae1.3''')
parser.add_argument('--mult_init_root', type=str, default=os.path.join('data','resnet20','3x3'))
parser.add_argument('--mult_init_prior', type=str, default='',
help='''such as pixelcnn0''')
#optimizer settings
parser.add_argument('--lr', default=0.1, type=float, help='Initial learning rate')
parser.add_argument('--weight_decay', default=1e-4, type=float)
parser.add_argument('--momentum', default=0.9, type=float)
parser.add_argument('--milestones', type=int, nargs='*', default=[80,100])
parser.add_argument('--gammas', default=[0.5,0.2], nargs='*', type=float)
parser.add_argument('--decrease_from', default=0, type=int) #unused
# loss function settings
parser.add_argument('--l2', default=0., type=float)
parser.add_argument('--dwp_reg', default=0., type=float)
#evaluation and leftovers
parser.add_argument('--eval_freq', default=1, type=int)
parser.add_argument('--dwp_samples', default=1, type=int)
parser.add_argument('--rfe', default=0, type=int)
parser.add_argument('--fastconv', default=0, type=int)
parser.add_argument('--aug', default=0, type=int)
args = parser.parse_args()
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu_id
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.cuda.manual_seed_all(args.seed)
torch.manual_seed(args.seed)
np.random.seed(args.seed)
fmt = {
'lr': '.5f',
'sec': '.0f',
}
logger = Logger('logs', base=args.root, fmt=fmt)
# Load Datasets
trainloader, testloader = utils.load_dataset(data=args.data, train_bs=args.bs, test_bs=args.test_bs,
num_examples=args.num_examples, seed=args.data_split_seed,
augmentation=(args.aug == 1))
if args.model == 'fconv':
net = FConvMNIST(args.net_cfg, device=device, hid_dim=args.hid_dim, do=args.do)
elif args.model == 'cifarnet':
net = CIFARNet(args.net_cfg, device=device, n_classes=args.n_classes, do=args.do, k=args.model_size)
elif args.model == 'cifarnetnew':
net = CIFARNetNew(args.net_cfg, device=device, n_classes=args.n_classes, do=args.do, k=args.model_size,
vae_list=args.vae_list)
elif args.model == 'resnet20':
net = ResNet([3,3,3],num_classes=args.n_classes).to(device)
else:
raise NotImplementedError
# Initialization
if args.mult_init == 1 :
net.mult_weights_init(args.mult_init_mode, args.mult_init_root, device=device, prior=args.mult_init_prior)
else:
if hasattr(net, 'weights_init'):
net.weights_init(args.init_list, args.vae_list, pretrained=args.pretrained, filters_list=args.filters_list)
else:
utils.net_init(net, args.init, args.vae)
if args.dwp_reg != 0:
net.set_dwp_regularizer(args.vae_list)
# Optimizer
train_params = []
if args.rfe == 0:
train_params = net.parameters()
elif args.rfe == 1:
        tr_modules = [net.classifier]
train_params = list(net.classifier.parameters())
for m in net.features.modules():
if isinstance(m, nn.BatchNorm2d):
train_params += list(m.parameters())
                tr_modules += [m]
        print('==> Random Feature Extraction mode')
        print(*tr_modules)
else:
raise NotImplementedError
    opt = torch.optim.SGD(train_params, args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
lrscheduler = MultistepMultiGammaLR(opt, milestones=args.milestones,
gamma=args.gammas)
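    # Note (assumption, not from this file): MultistepMultiGammaLR is a project-specific
    # scheduler; judging by its arguments it presumably scales the learning rate by
    # gammas[i] when the epoch reaches milestones[i], e.g. with the defaults
    # lr: 0.1 -> 0.05 at epoch 80 -> 0.01 at epoch 100.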
# Load params if fine-tuning
if args.resume:
net.load_state_dict(torch.load(os.path.join(args.resume, 'model.torch')))
opt.load_state_dict(torch.load(os.path.join(args.resume, 'opt.torch')))
N = len(trainloader.dataset)
t0 = time.time()
it = 0
#add a tensorboard writer
writer = SummaryWriter(args.root)
best_acc = 0.
for e in range(1, args.epochs + 1):
if args.milestones:
lrscheduler.step()
else:
adjust_learning_rate(opt, lr_linear(e - 1))
net.train()
train_acc = utils.MovingMetric()
train_nll = utils.MovingMetric()
train_loss = utils.MovingMetric()
opt.zero_grad()
for x, y in trainloader:
opt.zero_grad()
it += 1
x = x.to(device)
y = y.to(device)
p = net(x)
data_term = F.cross_entropy(p, y)
l2_norm = torch.FloatTensor([0.]).to(device)
if args.l2 != 0:
l2_norm = torch.sum(torch.stack([torch.sum(p**2) for p in net.features.parameters()]))
dwp_reg = 0.
if args.dwp_reg != 0.:
dwp_reg = net.get_dwp_reg(backward=True, n_tries=args.dwp_samples, weight=args.dwp_reg)
loss = data_term + args.l2 * l2_norm
loss.backward()
opt.step()
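        # dwp_reg is folded into `loss` only after the optimizer step, so it changes the
        # logged value below rather than this update; its gradient is presumably already
        # accumulated inside net.get_dwp_reg(backward=True) above.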
loss += args.dwp_reg * dwp_reg
acc = torch.sum(p.max(1)[1] == y)
train_acc.add(acc.item(), p.size(0))
train_nll.add(data_term.item() * x.size(0), x.size(0))
train_loss.add(loss.item() * x.size(0), x.size(0))
if args.fastconv == 1:
if (it % args.eval_freq) == 0 or it == 1:
net.eval()
logp_test, labels = predict(testloader, net)
test_acc = np.mean(logp_test.argmax(1) == labels)
test_nll = -logp_test[np.arange(len(labels)), labels].mean()
logger.add_scalar(it, 'loss', train_loss.get_val())
logger.add_scalar(it, 'train_nll', train_nll.get_val())
logger.add_scalar(it, 'test_nll', test_nll)
logger.add_scalar(it, 'train_acc', train_acc.get_val())
logger.add_scalar(it, 'test_acc', test_acc)
logger.add_scalar(it, 'lr', opt.param_groups[0]['lr'])
logger.add_scalar(it, 'sec', time.time() - t0)
logger.add_scalar(it, 'l2_norm', l2_norm.item())
logger.iter_info()
logger.save()
torch.save(net.state_dict(), os.path.join(args.root, 'model.torch'))
torch.save(opt.state_dict(), os.path.join(args.root, 'opt.torch'))
t0 = time.time()
net.train()
if ((e % args.eval_freq) == 0 or e == 1) and (args.fastconv == 0):
net.eval()
logp_test, labels = predict(testloader, net)
test_acc = np.mean(logp_test.argmax(1) == labels)
test_nll = -logp_test[np.arange(len(labels)), labels].mean()
logger.add_scalar(e, 'loss', train_loss.get_val())
logger.add_scalar(e, 'train_nll', train_nll.get_val())
logger.add_scalar(e, 'test_nll', test_nll)
logger.add_scalar(e, 'train_acc', train_acc.get_val())
logger.add_scalar(e, 'test_acc', test_acc)
logger.add_scalar(e, 'lr', opt.param_groups[0]['lr'])
logger.add_scalar(e, 'sec', time.time() - t0)
logger.add_scalar(e, 'l2_norm', l2_norm.item())
logger.add_scalar(e, 'dwp_reg', dwp_reg)
logger.iter_info()
logger.save()
        writer.add_scalar('train/loss', train_loss.get_val(), e)
        writer.add_scalar('train/nll', train_nll.get_val(), e)
        writer.add_scalar('test/nll', test_nll, e)
        writer.add_scalar('train/acc', train_acc.get_val(), e)
        writer.add_scalar('test/acc', test_acc, e)
        writer.add_scalar('lr', opt.param_groups[0]['lr'], e)
        writer.add_scalar('sec', time.time() - t0, e)
        writer.add_scalar('l2_norm', l2_norm.item(), e)
        writer.add_scalar('dwp_reg', dwp_reg, e)
epoch = e
if (epoch-1) % 10 == 0:
torch.save(net.state_dict() , os.path.join(args.root, 'net_params_epoch_{}.torch'.format(epoch)))
torch.save(opt.state_dict(), os.path.join(args.root, 'opt_params_epoch_{}.torch'.format(epoch)))
is_best = best_acc < test_acc
if is_best:
best_acc = test_acc
torch.save(net.state_dict(), os.path.join(args.root, 'net_params.torch'))
t0 = time.time()
torch.save(net.state_dict(), os.path.join(args.root, 'vae_params_lastepoch.torch'))
torch.save(opt.state_dict(), os.path.join(args.root, 'opt_params_lastepoch.torch'))
writer.flush()
| 37.247423 | 115 | 0.646739 |
7943ea70400a1f8b54cc5d8e364a96d18ba41819 | 563 | py | Python | nfvparser/toscaparser/tests/__init__.py | onap/modeling-toscaparsers | 803b8d4ee8cc38e941cbc1ec26af2336f02fd20c | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | nfvparser/toscaparser/tests/__init__.py | onap/modeling-toscaparsers | 803b8d4ee8cc38e941cbc1ec26af2336f02fd20c | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | nfvparser/toscaparser/tests/__init__.py | onap/modeling-toscaparsers | 803b8d4ee8cc38e941cbc1ec26af2336f02fd20c | [
"Apache-2.0",
"CC-BY-4.0"
] | 1 | 2020-06-16T14:47:06.000Z | 2020-06-16T14:47:06.000Z | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
description = ""
| 40.214286 | 75 | 0.758437 |
7943eb4a186791bbcb9bd7613e3addb5dd92a350 | 3,238 | py | Python | programs/django/mysite/mysite/settings.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | 2 | 2017-02-19T15:11:06.000Z | 2017-02-22T18:34:10.000Z | programs/django/mysite/mysite/settings.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | null | null | null | programs/django/mysite/mysite/settings.py | Dilmuratjan/MyProject | 26f4ee708eb4a7ceef780842ad737fef64a39d7e | [
"WTFPL"
] | 4 | 2017-02-26T08:10:30.000Z | 2017-05-02T10:02:03.000Z | """
Django settings for mysite project.
Generated by 'django-admin startproject' using Django 2.0.3.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'lr@(%1^4_775&s#s(h=5+zrs=@)tf1=iyrsd=%4=^&yc)rc_vn'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls.apps.PollsConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'mysql',
'USER': 'root',
'PASSWORD': 'di.1995',
'HOST': '127.0.0.1',
'PORT': '3306',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
| 25.698413 | 91 | 0.680976 |
7943ebb9b5efbb867df3965e81233b7d1f5d5018 | 2,861 | py | Python | torchdyn/layers/utility.py | willw625731/torchdyn | 7a13ff3e7d2314ab98c41ebdd8654ac2ca48fca0 | [
"Apache-2.0"
] | null | null | null | torchdyn/layers/utility.py | willw625731/torchdyn | 7a13ff3e7d2314ab98c41ebdd8654ac2ca48fca0 | [
"Apache-2.0"
] | null | null | null | torchdyn/layers/utility.py | willw625731/torchdyn | 7a13ff3e7d2314ab98c41ebdd8654ac2ca48fca0 | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
class Augmenter(nn.Module):
"""Augmentation class. Can handle several types of augmentation strategies for Neural DEs.
:param augment_dims: number of augmented dimensions to initialize
:type augment_dims: int
:param augment_idx: index of dimension to augment
:type augment_idx: int
:param augment_func: nn.Module applied to the input data of dimension `d` to determine the augmented initial condition of dimension `d + a`.
`a` is defined implicitly in `augment_func` e.g. augment_func=nn.Linear(2, 5) augments a 2 dimensional input with 3 additional dimensions.
:type augment_func: nn.Module
:param order: whether to augment before data [augmentation, x] or after [x, augmentation] along dimension `augment_idx`. Options: ('first', 'last')
:type order: str
"""
def __init__(self, augment_idx:int=1, augment_dims:int=5, augment_func=None, order='first'):
super().__init__()
self.augment_dims, self.augment_idx, self.augment_func = augment_dims, augment_idx, augment_func
self.order = order
def forward(self, x: torch.Tensor):
if not self.augment_func:
new_dims = list(x.shape)
new_dims[self.augment_idx] = self.augment_dims
# if-else check for augmentation order
if self.order == 'first':
x = torch.cat([torch.zeros(new_dims).to(x), x],
self.augment_idx)
else:
x = torch.cat([x, torch.zeros(new_dims).to(x)],
self.augment_idx)
else:
# if-else check for augmentation order
if self.order == 'first':
x = torch.cat([self.augment_func(x).to(x), x],
self.augment_idx)
else:
x = torch.cat([x, self.augment_func(x).to(x)],
self.augment_idx)
return x
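# Minimal usage sketch (shapes below are illustrative, not taken from this file):
#   aug = Augmenter(augment_idx=1, augment_dims=5, order='first')
#   x = torch.randn(64, 3)      # batch of 64 samples, 3 features
#   aug(x).shape                # torch.Size([64, 8]); 5 zero-valued dims prepended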
class DepthCat(nn.Module):
"""Depth variable `s` concatenation module. Allows for easy concatenation of `s` each call of the numerical solver, at specified layers of the DEFunc.
:param idx_cat: index of the data dimension to concatenate `s` to.
:type idx_cat: int
"""
def __init__(self, idx_cat=1):
super().__init__()
self.idx_cat = idx_cat ; self.s = None
def forward(self, x):
s_shape = list(x.shape);
s_shape[self.idx_cat] = 1
self.s = self.s * torch.ones(s_shape).to(x)
return torch.cat([x, self.s], self.idx_cat).to(x)
class DataControl(nn.Module):
"""Data-control module. Allows for data-control inputs at arbitrary points of the DEFunc
"""
def __init__(self):
super().__init__()
self.u = None
def forward(self, x):
return torch.cat([x, self.u], 1).to(x) | 40.871429 | 162 | 0.613422 |
7943ec211d50f4e4b206759f1d4a51ddfbe7281b | 654 | py | Python | lib/third_party/concurrent/futures/_base.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | 2 | 2019-11-10T09:17:07.000Z | 2019-12-18T13:44:08.000Z | lib/third_party/concurrent/futures/_base.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | 11 | 2020-02-29T02:51:12.000Z | 2022-03-30T23:20:08.000Z | lib/third_party/concurrent/futures/_base.py | kustodian/google-cloud-sdk | b6bae4137d4b58030adb3dcb1271216dfb19f96d | [
"Apache-2.0"
] | 1 | 2020-07-24T18:47:35.000Z | 2020-07-24T18:47:35.000Z | # Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from concurrent.python2.concurrent.futures._base import *
| 40.875 | 74 | 0.770642 |
7943ec6e7d49784b642fd3dfcdbd0da705a8dbb1 | 587 | py | Python | qulab/drivers/PSG_SignalGenerator.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | qulab/drivers/PSG_SignalGenerator.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | qulab/drivers/PSG_SignalGenerator.py | ParanoiaSYT/Qulab-backup | 09ec5457145b3789d4c1ac02c43dd3e6dfafc96f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import numpy as np
from qulab import BaseDriver, QInteger, QOption, QReal, QString, QVector
class Driver(BaseDriver):
support_models = ['E8257D', 'SMF100A', 'SMB100A','SGS100A']
quants = [
QReal('Frequency', unit='Hz',
set_cmd=':FREQ %(value).13e%(unit)s',
get_cmd=':FREQ?'),
QReal('Power', unit='dBm',
set_cmd=':POWER %(value).8e%(unit)s',
get_cmd=':POWER?'),
QOption('Output',
set_cmd=':OUTP %(option)s', options=[('OFF', 'OFF'), ('ON', 'ON')]),
]
| 26.681818 | 79 | 0.524702 |
7943ed68bec567fb50840e9d65b3936c244c365f | 2,302 | py | Python | Graphly_v1.py | souviksn7/Graphly | 3055be4d11e74b8ef2156afdd93ca4cc4859643f | [
"MIT"
] | null | null | null | Graphly_v1.py | souviksn7/Graphly | 3055be4d11e74b8ef2156afdd93ca4cc4859643f | [
"MIT"
] | null | null | null | Graphly_v1.py | souviksn7/Graphly | 3055be4d11e74b8ef2156afdd93ca4cc4859643f | [
"MIT"
] | null | null | null | from tkinter import*
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import (FigureCanvasTkAgg,
NavigationToolbar2Tk)
root=Tk()
root.title('Graphly')
root.geometry("700x850")
def eq_maker(ls1,ls2):
fin='y= '
for i in range(len(ls1)):
if i>0 and ls1[i]>0:
fin+='+'
if ls1[i]==1:
if ls2[i]==0:
fin+='1'
elif ls2[i]==1:
fin+='x'
else:
fin+='x^'+str(ls2[i])
else:
if ls2[i]==0:
fin+=str(ls1[i])
elif ls2[i]==1:
fin+=str(ls1[i])+'x'
else:
fin+=str(ls1[i])+'x^'+str(ls2[i])
return fin
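# Example (illustrative): eq_maker([2, -3, 1], [2, 1, 0]) returns 'y= 2x^2-3x+1'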
def coef():
lst1=e1.get().split(",")
for i in range(len(lst1)):
lst1[i]=int(lst1[i])
return lst1
def power():
lst2=e2.get().split(",")
for i in range(len(lst2)):
lst2[i]=int(lst2[i])
return lst2
def restart():
e1.delete(0,END)
def plot():
a1=coef()
a2=power()
e1.delete(0,END)
e2.delete(0,END)
if len(a1)!=len(a2):
        e1.insert(0,'Error \n click on restart')
else:
y=np.zeros(20)
x=np.arange(1,21,1)
for i in range(len(a1)):
m=np.ones(20)
m=np.power(x,a2[i])
m=np.multiply(m,a1[i])
y=np.add(y,m)
string=eq_maker(a1,a2)
fig=Figure(figsize=(5,5))
plot1=fig.add_subplot(111)
plot1.plot(x,y)
plot1.title.set_text("Plot of : "+string)
canvas=FigureCanvasTkAgg(fig,master=root)
canvas.draw()
canvas.get_tk_widget().place(x=50,y=250)
        toolbar=NavigationToolbar2Tk(canvas, root)
toolbar.update()
l1=Label(text='Enter the coefficients of powers of x \nseparating with commas')
l2=Label(text='Enter the powers of x \nseparating with commas')
e1=Entry(width=50)
e2=Entry(width=50)
b1=Button(master=root,height=2,width=10,text='Plot',command=plot)
b2=Button(master=root,height=2,width=10,text='Restart',command=restart)
l1.place(x=30,y=10)
l2.place(x=70,y=70)
e1.place(x=240,y=10)
e2.place(x=240,y=70)
b1.place(x=100,y=130)
b2.place(x=100,y=170)
root.mainloop()
| 25.577778 | 79 | 0.552129 |
7943edaa51fa29e82e80c2eda18b696ea5b92fed | 55,456 | py | Python | qa/rpc-tests/test_framework/mininode.py | anhtienlk/bluecoin-new | 669ec70192718187913f364cd8d1234edd7b8964 | [
"MIT"
] | 3 | 2017-12-09T16:04:29.000Z | 2018-05-20T21:46:30.000Z | qa/rpc-tests/test_framework/mininode.py | JUDOKICK/bluecoin-new | 669ec70192718187913f364cd8d1234edd7b8964 | [
"MIT"
] | 2 | 2018-01-01T06:07:41.000Z | 2019-12-20T17:39:29.000Z | qa/rpc-tests/test_framework/mininode.py | JUDOKICK/bluecoin-new | 669ec70192718187913f364cd8d1234edd7b8964 | [
"MIT"
] | 3 | 2017-12-09T17:06:50.000Z | 2019-05-09T09:42:02.000Z | #!/usr/bin/env python3
# Copyright (c) 2010 ArtForz -- public domain half-a-node
# Copyright (c) 2012 Jeff Garzik
# Copyright (c) 2010-2016 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
#
# mininode.py - Bitcoin P2P network half-a-node
#
# This python code was modified from ArtForz' public domain half-a-node, as
# found in the mini-node branch of http://github.com/jgarzik/pynode.
#
# NodeConn: an object which manages p2p connectivity to a bitcoin node
# NodeConnCB: a base class that describes the interface for receiving
# callbacks with network messages from a NodeConn
# CBlock, CTransaction, CBlockHeader, CTxIn, CTxOut, etc....:
# data structures that should map to corresponding structures in
# bitcoin/primitives
# msg_block, msg_tx, msg_headers, etc.:
# data structures that represent network messages
# ser_*, deser_*: functions that handle serialization/deserialization
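#
# A minimal usage sketch (the address/port are placeholders, not taken from this file):
#   test_node = SingleNodeConnCB()
#   conn = NodeConn('127.0.0.1', 18444, rpc=None, callback=test_node)
#   test_node.add_connection(conn)
#   NetworkThread().start()        # asyncore loop driver referenced in the comments below
#   test_node.wait_for_verack()
#   test_node.sync_with_ping()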
import struct
import socket
import asyncore
import time
import sys
import random
from .util import hex_str_to_bytes, bytes_to_hex_str
from io import BytesIO
from codecs import encode
import hashlib
from threading import RLock
from threading import Thread
import logging
import copy
import bluecoin_scrypt
from test_framework.siphash import siphash256
BIP0031_VERSION = 60000
MY_VERSION = 80014 # past bip-31 for ping/pong
MY_SUBVERSION = b"/python-mininode-tester:0.0.3/"
MY_RELAY = 1 # from version 70001 onwards, fRelay should be appended to version messages (BIP37)
MAX_INV_SZ = 50000
MAX_BLOCK_BASE_SIZE = 1000000
COIN = 100000000 # 1 btc in satoshis
NODE_NETWORK = (1 << 0)
NODE_GETUTXO = (1 << 1)
NODE_BLOOM = (1 << 2)
NODE_WITNESS = (1 << 3)
# Keep our own socket map for asyncore, so that we can track disconnects
# ourselves (to workaround an issue with closing an asyncore socket when
# using select)
mininode_socket_map = dict()
# One lock for synchronizing all data access between the networking thread (see
# NetworkThread below) and the thread running the test logic. For simplicity,
# NodeConn acquires this lock whenever delivering a message to a NodeConnCB,
# and whenever adding anything to the send buffer (in send_message()). This
# lock should be acquired in the thread running the test logic to synchronize
# access to any data shared with the NodeConnCB or NodeConn.
mininode_lock = RLock()
# Serialization/deserialization tools
def sha256(s):
return hashlib.new('sha256', s).digest()
def ripemd160(s):
return hashlib.new('ripemd160', s).digest()
def hash256(s):
return sha256(sha256(s))
def ser_compact_size(l):
r = b""
if l < 253:
r = struct.pack("B", l)
elif l < 0x10000:
r = struct.pack("<BH", 253, l)
elif l < 0x100000000:
r = struct.pack("<BI", 254, l)
else:
r = struct.pack("<BQ", 255, l)
return r
def deser_compact_size(f):
nit = struct.unpack("<B", f.read(1))[0]
if nit == 253:
nit = struct.unpack("<H", f.read(2))[0]
elif nit == 254:
nit = struct.unpack("<I", f.read(4))[0]
elif nit == 255:
nit = struct.unpack("<Q", f.read(8))[0]
return nit
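# CompactSize encoding examples (hand-checkable):
#   ser_compact_size(252)     == b'\xfc'                    # single byte
#   ser_compact_size(253)     == b'\xfd\xfd\x00'            # 0xfd marker + uint16 LE
#   ser_compact_size(0x10000) == b'\xfe\x00\x00\x01\x00'    # 0xfe marker + uint32 LE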
def deser_string(f):
nit = deser_compact_size(f)
return f.read(nit)
def ser_string(s):
return ser_compact_size(len(s)) + s
def deser_uint256(f):
r = 0
for i in range(8):
t = struct.unpack("<I", f.read(4))[0]
r += t << (i * 32)
return r
def ser_uint256(u):
rs = b""
for i in range(8):
rs += struct.pack("<I", u & 0xFFFFFFFF)
u >>= 32
return rs
def uint256_from_str(s):
r = 0
t = struct.unpack("<IIIIIIII", s[:32])
for i in range(8):
r += t[i] << (i * 32)
return r
def uint256_from_compact(c):
nbytes = (c >> 24) & 0xFF
v = (c & 0xFFFFFF) << (8 * (nbytes - 3))
return v
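# e.g. the classic compact target 0x1d00ffff expands to 0xffff << 208, i.e. a 256-bit
# integer of the form 0x00000000ffff0000...0000.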
def deser_vector(f, c):
nit = deser_compact_size(f)
r = []
for i in range(nit):
t = c()
t.deserialize(f)
r.append(t)
return r
# ser_function_name: Allow for an alternate serialization function on the
# entries in the vector (we use this for serializing the vector of transactions
# for a witness block).
def ser_vector(l, ser_function_name=None):
r = ser_compact_size(len(l))
for i in l:
if ser_function_name:
r += getattr(i, ser_function_name)()
else:
r += i.serialize()
return r
def deser_uint256_vector(f):
nit = deser_compact_size(f)
r = []
for i in range(nit):
t = deser_uint256(f)
r.append(t)
return r
def ser_uint256_vector(l):
r = ser_compact_size(len(l))
for i in l:
r += ser_uint256(i)
return r
def deser_string_vector(f):
nit = deser_compact_size(f)
r = []
for i in range(nit):
t = deser_string(f)
r.append(t)
return r
def ser_string_vector(l):
r = ser_compact_size(len(l))
for sv in l:
r += ser_string(sv)
return r
def deser_int_vector(f):
nit = deser_compact_size(f)
r = []
for i in range(nit):
t = struct.unpack("<i", f.read(4))[0]
r.append(t)
return r
def ser_int_vector(l):
r = ser_compact_size(len(l))
for i in l:
r += struct.pack("<i", i)
return r
# Deserialize from a hex string representation (eg from RPC)
def FromHex(obj, hex_string):
obj.deserialize(BytesIO(hex_str_to_bytes(hex_string)))
return obj
# Convert a binary-serializable object to hex (eg for submission via RPC)
def ToHex(obj):
return bytes_to_hex_str(obj.serialize())
# Objects that map to bitcoind objects, which can be serialized/deserialized
class CAddress(object):
def __init__(self):
self.nServices = 1
self.pchReserved = b"\x00" * 10 + b"\xff" * 2
self.ip = "0.0.0.0"
self.port = 0
def deserialize(self, f):
self.nServices = struct.unpack("<Q", f.read(8))[0]
self.pchReserved = f.read(12)
self.ip = socket.inet_ntoa(f.read(4))
self.port = struct.unpack(">H", f.read(2))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.nServices)
r += self.pchReserved
r += socket.inet_aton(self.ip)
r += struct.pack(">H", self.port)
return r
def __repr__(self):
return "CAddress(nServices=%i ip=%s port=%i)" % (self.nServices,
self.ip, self.port)
MSG_WITNESS_FLAG = 1<<30
class CInv(object):
typemap = {
0: "Error",
1: "TX",
2: "Block",
1|MSG_WITNESS_FLAG: "WitnessTx",
2|MSG_WITNESS_FLAG : "WitnessBlock",
4: "CompactBlock"
}
def __init__(self, t=0, h=0):
self.type = t
self.hash = h
def deserialize(self, f):
self.type = struct.unpack("<i", f.read(4))[0]
self.hash = deser_uint256(f)
def serialize(self):
r = b""
r += struct.pack("<i", self.type)
r += ser_uint256(self.hash)
return r
def __repr__(self):
return "CInv(type=%s hash=%064x)" \
% (self.typemap[self.type], self.hash)
class CBlockLocator(object):
def __init__(self):
self.nVersion = MY_VERSION
self.vHave = []
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.vHave = deser_uint256_vector(f)
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256_vector(self.vHave)
return r
def __repr__(self):
return "CBlockLocator(nVersion=%i vHave=%s)" \
% (self.nVersion, repr(self.vHave))
class COutPoint(object):
def __init__(self, hash=0, n=0):
self.hash = hash
self.n = n
def deserialize(self, f):
self.hash = deser_uint256(f)
self.n = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = b""
r += ser_uint256(self.hash)
r += struct.pack("<I", self.n)
return r
def __repr__(self):
return "COutPoint(hash=%064x n=%i)" % (self.hash, self.n)
class CTxIn(object):
def __init__(self, outpoint=None, scriptSig=b"", nSequence=0):
if outpoint is None:
self.prevout = COutPoint()
else:
self.prevout = outpoint
self.scriptSig = scriptSig
self.nSequence = nSequence
def deserialize(self, f):
self.prevout = COutPoint()
self.prevout.deserialize(f)
self.scriptSig = deser_string(f)
self.nSequence = struct.unpack("<I", f.read(4))[0]
def serialize(self):
r = b""
r += self.prevout.serialize()
r += ser_string(self.scriptSig)
r += struct.pack("<I", self.nSequence)
return r
def __repr__(self):
return "CTxIn(prevout=%s scriptSig=%s nSequence=%i)" \
% (repr(self.prevout), bytes_to_hex_str(self.scriptSig),
self.nSequence)
class CTxOut(object):
def __init__(self, nValue=0, scriptPubKey=b""):
self.nValue = nValue
self.scriptPubKey = scriptPubKey
def deserialize(self, f):
self.nValue = struct.unpack("<q", f.read(8))[0]
self.scriptPubKey = deser_string(f)
def serialize(self):
r = b""
r += struct.pack("<q", self.nValue)
r += ser_string(self.scriptPubKey)
return r
def __repr__(self):
return "CTxOut(nValue=%i.%08i scriptPubKey=%s)" \
% (self.nValue // COIN, self.nValue % COIN,
bytes_to_hex_str(self.scriptPubKey))
class CScriptWitness(object):
def __init__(self):
# stack is a vector of strings
self.stack = []
def __repr__(self):
return "CScriptWitness(%s)" % \
(",".join([bytes_to_hex_str(x) for x in self.stack]))
def is_null(self):
if self.stack:
return False
return True
class CTxInWitness(object):
def __init__(self):
self.scriptWitness = CScriptWitness()
def deserialize(self, f):
self.scriptWitness.stack = deser_string_vector(f)
def serialize(self):
return ser_string_vector(self.scriptWitness.stack)
def __repr__(self):
return repr(self.scriptWitness)
def is_null(self):
return self.scriptWitness.is_null()
class CTxWitness(object):
def __init__(self):
self.vtxinwit = []
def deserialize(self, f):
for i in range(len(self.vtxinwit)):
self.vtxinwit[i].deserialize(f)
def serialize(self):
r = b""
# This is different than the usual vector serialization --
# we omit the length of the vector, which is required to be
# the same length as the transaction's vin vector.
for x in self.vtxinwit:
r += x.serialize()
return r
def __repr__(self):
return "CTxWitness(%s)" % \
(';'.join([repr(x) for x in self.vtxinwit]))
def is_null(self):
for x in self.vtxinwit:
if not x.is_null():
return False
return True
class CTransaction(object):
def __init__(self, tx=None):
if tx is None:
self.nVersion = 1
self.vin = []
self.vout = []
self.wit = CTxWitness()
self.nLockTime = 0
self.sha256 = None
self.hash = None
else:
self.nVersion = tx.nVersion
self.vin = copy.deepcopy(tx.vin)
self.vout = copy.deepcopy(tx.vout)
self.nLockTime = tx.nLockTime
self.sha256 = tx.sha256
self.hash = tx.hash
self.wit = copy.deepcopy(tx.wit)
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.vin = deser_vector(f, CTxIn)
flags = 0
if len(self.vin) == 0:
flags = struct.unpack("<B", f.read(1))[0]
# Not sure why flags can't be zero, but this
# matches the implementation in bitcoind
if (flags != 0):
self.vin = deser_vector(f, CTxIn)
self.vout = deser_vector(f, CTxOut)
else:
self.vout = deser_vector(f, CTxOut)
if flags != 0:
self.wit.vtxinwit = [CTxInWitness() for i in range(len(self.vin))]
self.wit.deserialize(f)
self.nLockTime = struct.unpack("<I", f.read(4))[0]
self.sha256 = None
self.hash = None
def serialize_without_witness(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_vector(self.vin)
r += ser_vector(self.vout)
r += struct.pack("<I", self.nLockTime)
return r
# Only serialize with witness when explicitly called for
def serialize_with_witness(self):
flags = 0
if not self.wit.is_null():
flags |= 1
r = b""
r += struct.pack("<i", self.nVersion)
if flags:
dummy = []
r += ser_vector(dummy)
r += struct.pack("<B", flags)
r += ser_vector(self.vin)
r += ser_vector(self.vout)
if flags & 1:
if (len(self.wit.vtxinwit) != len(self.vin)):
# vtxinwit must have the same length as vin
self.wit.vtxinwit = self.wit.vtxinwit[:len(self.vin)]
for i in range(len(self.wit.vtxinwit), len(self.vin)):
self.wit.vtxinwit.append(CTxInWitness())
r += self.wit.serialize()
r += struct.pack("<I", self.nLockTime)
return r
# Regular serialization is without witness -- must explicitly
# call serialize_with_witness to include witness data.
def serialize(self):
return self.serialize_without_witness()
# Recalculate the txid (transaction hash without witness)
def rehash(self):
self.sha256 = None
self.calc_sha256()
# We will only cache the serialization without witness in
# self.sha256 and self.hash -- those are expected to be the txid.
def calc_sha256(self, with_witness=False):
if with_witness:
# Don't cache the result, just return it
return uint256_from_str(hash256(self.serialize_with_witness()))
if self.sha256 is None:
self.sha256 = uint256_from_str(hash256(self.serialize_without_witness()))
self.hash = encode(hash256(self.serialize())[::-1], 'hex_codec').decode('ascii')
def is_valid(self):
self.calc_sha256()
for tout in self.vout:
if tout.nValue < 0 or tout.nValue > 21000000 * COIN:
return False
return True
def __repr__(self):
return "CTransaction(nVersion=%i vin=%s vout=%s wit=%s nLockTime=%i)" \
% (self.nVersion, repr(self.vin), repr(self.vout), repr(self.wit), self.nLockTime)
class CBlockHeader(object):
def __init__(self, header=None):
if header is None:
self.set_null()
else:
self.nVersion = header.nVersion
self.hashPrevBlock = header.hashPrevBlock
self.hashMerkleRoot = header.hashMerkleRoot
self.nTime = header.nTime
self.nBits = header.nBits
self.nNonce = header.nNonce
self.sha256 = header.sha256
self.hash = header.hash
self.scrypt256 = header.scrypt256
self.calc_sha256()
def set_null(self):
self.nVersion = 1
self.hashPrevBlock = 0
self.hashMerkleRoot = 0
self.nTime = 0
self.nBits = 0
self.nNonce = 0
self.sha256 = None
self.hash = None
self.scrypt256 = None
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.hashPrevBlock = deser_uint256(f)
self.hashMerkleRoot = deser_uint256(f)
self.nTime = struct.unpack("<I", f.read(4))[0]
self.nBits = struct.unpack("<I", f.read(4))[0]
self.nNonce = struct.unpack("<I", f.read(4))[0]
self.sha256 = None
self.hash = None
self.scrypt256 = None
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime)
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
return r
def calc_sha256(self):
if self.sha256 is None:
r = b""
r += struct.pack("<i", self.nVersion)
r += ser_uint256(self.hashPrevBlock)
r += ser_uint256(self.hashMerkleRoot)
r += struct.pack("<I", self.nTime)
r += struct.pack("<I", self.nBits)
r += struct.pack("<I", self.nNonce)
self.sha256 = uint256_from_str(hash256(r))
self.hash = encode(hash256(r)[::-1], 'hex_codec').decode('ascii')
self.scrypt256 = uint256_from_str(bluecoin_scrypt.getPoWHash(r))
def rehash(self):
self.sha256 = None
self.scrypt256 = None
self.calc_sha256()
return self.sha256
def __repr__(self):
return "CBlockHeader(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x)" \
% (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot,
time.ctime(self.nTime), self.nBits, self.nNonce)
class CBlock(CBlockHeader):
def __init__(self, header=None):
super(CBlock, self).__init__(header)
self.vtx = []
def deserialize(self, f):
super(CBlock, self).deserialize(f)
self.vtx = deser_vector(f, CTransaction)
def serialize(self, with_witness=False):
r = b""
r += super(CBlock, self).serialize()
if with_witness:
r += ser_vector(self.vtx, "serialize_with_witness")
else:
r += ser_vector(self.vtx)
return r
# Calculate the merkle root given a vector of transaction hashes
def get_merkle_root(self, hashes):
while len(hashes) > 1:
newhashes = []
for i in range(0, len(hashes), 2):
i2 = min(i+1, len(hashes)-1)
newhashes.append(hash256(hashes[i] + hashes[i2]))
hashes = newhashes
return uint256_from_str(hashes[0])
def calc_merkle_root(self):
hashes = []
for tx in self.vtx:
tx.calc_sha256()
hashes.append(ser_uint256(tx.sha256))
return self.get_merkle_root(hashes)
def calc_witness_merkle_root(self):
# For witness root purposes, the hash of the
# coinbase, with witness, is defined to be 0...0
hashes = [ser_uint256(0)]
for tx in self.vtx[1:]:
# Calculate the hashes with witness data
hashes.append(ser_uint256(tx.calc_sha256(True)))
return self.get_merkle_root(hashes)
def is_valid(self):
self.calc_sha256()
target = uint256_from_compact(self.nBits)
if self.scrypt256 > target:
return False
for tx in self.vtx:
if not tx.is_valid():
return False
if self.calc_merkle_root() != self.hashMerkleRoot:
return False
return True
def solve(self):
self.rehash()
target = uint256_from_compact(self.nBits)
while self.scrypt256 > target:
self.nNonce += 1
self.rehash()
def __repr__(self):
return "CBlock(nVersion=%i hashPrevBlock=%064x hashMerkleRoot=%064x nTime=%s nBits=%08x nNonce=%08x vtx=%s)" \
% (self.nVersion, self.hashPrevBlock, self.hashMerkleRoot,
time.ctime(self.nTime), self.nBits, self.nNonce, repr(self.vtx))
class CUnsignedAlert(object):
def __init__(self):
self.nVersion = 1
self.nRelayUntil = 0
self.nExpiration = 0
self.nID = 0
self.nCancel = 0
self.setCancel = []
self.nMinVer = 0
self.nMaxVer = 0
self.setSubVer = []
self.nPriority = 0
self.strComment = b""
self.strStatusBar = b""
self.strReserved = b""
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
self.nRelayUntil = struct.unpack("<q", f.read(8))[0]
self.nExpiration = struct.unpack("<q", f.read(8))[0]
self.nID = struct.unpack("<i", f.read(4))[0]
self.nCancel = struct.unpack("<i", f.read(4))[0]
self.setCancel = deser_int_vector(f)
self.nMinVer = struct.unpack("<i", f.read(4))[0]
self.nMaxVer = struct.unpack("<i", f.read(4))[0]
self.setSubVer = deser_string_vector(f)
self.nPriority = struct.unpack("<i", f.read(4))[0]
self.strComment = deser_string(f)
self.strStatusBar = deser_string(f)
self.strReserved = deser_string(f)
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += struct.pack("<q", self.nRelayUntil)
r += struct.pack("<q", self.nExpiration)
r += struct.pack("<i", self.nID)
r += struct.pack("<i", self.nCancel)
r += ser_int_vector(self.setCancel)
r += struct.pack("<i", self.nMinVer)
r += struct.pack("<i", self.nMaxVer)
r += ser_string_vector(self.setSubVer)
r += struct.pack("<i", self.nPriority)
r += ser_string(self.strComment)
r += ser_string(self.strStatusBar)
r += ser_string(self.strReserved)
return r
def __repr__(self):
return "CUnsignedAlert(nVersion %d, nRelayUntil %d, nExpiration %d, nID %d, nCancel %d, nMinVer %d, nMaxVer %d, nPriority %d, strComment %s, strStatusBar %s, strReserved %s)" \
% (self.nVersion, self.nRelayUntil, self.nExpiration, self.nID,
self.nCancel, self.nMinVer, self.nMaxVer, self.nPriority,
self.strComment, self.strStatusBar, self.strReserved)
class CAlert(object):
def __init__(self):
self.vchMsg = b""
self.vchSig = b""
def deserialize(self, f):
self.vchMsg = deser_string(f)
self.vchSig = deser_string(f)
def serialize(self):
r = b""
r += ser_string(self.vchMsg)
r += ser_string(self.vchSig)
return r
def __repr__(self):
return "CAlert(vchMsg.sz %d, vchSig.sz %d)" \
% (len(self.vchMsg), len(self.vchSig))
class PrefilledTransaction(object):
def __init__(self, index=0, tx = None):
self.index = index
self.tx = tx
def deserialize(self, f):
self.index = deser_compact_size(f)
self.tx = CTransaction()
self.tx.deserialize(f)
def serialize(self, with_witness=False):
r = b""
r += ser_compact_size(self.index)
if with_witness:
r += self.tx.serialize_with_witness()
else:
r += self.tx.serialize_without_witness()
return r
def serialize_with_witness(self):
return self.serialize(with_witness=True)
def __repr__(self):
return "PrefilledTransaction(index=%d, tx=%s)" % (self.index, repr(self.tx))
# This is what we send on the wire, in a cmpctblock message.
class P2PHeaderAndShortIDs(object):
def __init__(self):
self.header = CBlockHeader()
self.nonce = 0
self.shortids_length = 0
self.shortids = []
self.prefilled_txn_length = 0
self.prefilled_txn = []
def deserialize(self, f):
self.header.deserialize(f)
self.nonce = struct.unpack("<Q", f.read(8))[0]
self.shortids_length = deser_compact_size(f)
for i in range(self.shortids_length):
# shortids are defined to be 6 bytes in the spec, so append
# two zero bytes and read it in as an 8-byte number
self.shortids.append(struct.unpack("<Q", f.read(6) + b'\x00\x00')[0])
self.prefilled_txn = deser_vector(f, PrefilledTransaction)
self.prefilled_txn_length = len(self.prefilled_txn)
# When using version 2 compact blocks, we must serialize with_witness.
def serialize(self, with_witness=False):
r = b""
r += self.header.serialize()
r += struct.pack("<Q", self.nonce)
r += ser_compact_size(self.shortids_length)
for x in self.shortids:
# We only want the first 6 bytes
r += struct.pack("<Q", x)[0:6]
if with_witness:
r += ser_vector(self.prefilled_txn, "serialize_with_witness")
else:
r += ser_vector(self.prefilled_txn)
return r
def __repr__(self):
return "P2PHeaderAndShortIDs(header=%s, nonce=%d, shortids_length=%d, shortids=%s, prefilled_txn_length=%d, prefilledtxn=%s" % (repr(self.header), self.nonce, self.shortids_length, repr(self.shortids), self.prefilled_txn_length, repr(self.prefilled_txn))
# P2P version of the above that will use witness serialization (for compact
# block version 2)
class P2PHeaderAndShortWitnessIDs(P2PHeaderAndShortIDs):
def serialize(self):
return super(P2PHeaderAndShortWitnessIDs, self).serialize(with_witness=True)
# Calculate the BIP 152-compact blocks shortid for a given transaction hash
def calculate_shortid(k0, k1, tx_hash):
expected_shortid = siphash256(k0, k1, tx_hash)
expected_shortid &= 0x0000ffffffffffff
return expected_shortid
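# The shortid is the low 6 bytes (48 bits) of SipHash-2-4 over the transaction hash,
# keyed by (k0, k1) derived from the block header and nonce
# (see HeaderAndShortIDs.get_siphash_keys below).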
# This version gets rid of the array lengths, and reinterprets the differential
# encoding into indices that can be used for lookup.
class HeaderAndShortIDs(object):
def __init__(self, p2pheaders_and_shortids = None):
self.header = CBlockHeader()
self.nonce = 0
self.shortids = []
self.prefilled_txn = []
self.use_witness = False
if p2pheaders_and_shortids != None:
self.header = p2pheaders_and_shortids.header
self.nonce = p2pheaders_and_shortids.nonce
self.shortids = p2pheaders_and_shortids.shortids
last_index = -1
for x in p2pheaders_and_shortids.prefilled_txn:
self.prefilled_txn.append(PrefilledTransaction(x.index + last_index + 1, x.tx))
last_index = self.prefilled_txn[-1].index
def to_p2p(self):
if self.use_witness:
ret = P2PHeaderAndShortWitnessIDs()
else:
ret = P2PHeaderAndShortIDs()
ret.header = self.header
ret.nonce = self.nonce
ret.shortids_length = len(self.shortids)
ret.shortids = self.shortids
ret.prefilled_txn_length = len(self.prefilled_txn)
ret.prefilled_txn = []
last_index = -1
for x in self.prefilled_txn:
ret.prefilled_txn.append(PrefilledTransaction(x.index - last_index - 1, x.tx))
last_index = x.index
return ret
def get_siphash_keys(self):
header_nonce = self.header.serialize()
header_nonce += struct.pack("<Q", self.nonce)
hash_header_nonce_as_str = sha256(header_nonce)
key0 = struct.unpack("<Q", hash_header_nonce_as_str[0:8])[0]
key1 = struct.unpack("<Q", hash_header_nonce_as_str[8:16])[0]
return [ key0, key1 ]
# Version 2 compact blocks use wtxid in shortids (rather than txid)
def initialize_from_block(self, block, nonce=0, prefill_list = [0], use_witness = False):
self.header = CBlockHeader(block)
self.nonce = nonce
self.prefilled_txn = [ PrefilledTransaction(i, block.vtx[i]) for i in prefill_list ]
self.shortids = []
self.use_witness = use_witness
[k0, k1] = self.get_siphash_keys()
for i in range(len(block.vtx)):
if i not in prefill_list:
tx_hash = block.vtx[i].sha256
if use_witness:
tx_hash = block.vtx[i].calc_sha256(with_witness=True)
self.shortids.append(calculate_shortid(k0, k1, tx_hash))
def __repr__(self):
return "HeaderAndShortIDs(header=%s, nonce=%d, shortids=%s, prefilledtxn=%s" % (repr(self.header), self.nonce, repr(self.shortids), repr(self.prefilled_txn))
class BlockTransactionsRequest(object):
def __init__(self, blockhash=0, indexes = None):
self.blockhash = blockhash
self.indexes = indexes if indexes != None else []
def deserialize(self, f):
self.blockhash = deser_uint256(f)
indexes_length = deser_compact_size(f)
for i in range(indexes_length):
self.indexes.append(deser_compact_size(f))
def serialize(self):
r = b""
r += ser_uint256(self.blockhash)
r += ser_compact_size(len(self.indexes))
for x in self.indexes:
r += ser_compact_size(x)
return r
# helper to set the differentially encoded indexes from absolute ones
def from_absolute(self, absolute_indexes):
self.indexes = []
last_index = -1
for x in absolute_indexes:
self.indexes.append(x-last_index-1)
last_index = x
def to_absolute(self):
absolute_indexes = []
last_index = -1
for x in self.indexes:
absolute_indexes.append(x+last_index+1)
last_index = absolute_indexes[-1]
return absolute_indexes
def __repr__(self):
return "BlockTransactionsRequest(hash=%064x indexes=%s)" % (self.blockhash, repr(self.indexes))
class BlockTransactions(object):
def __init__(self, blockhash=0, transactions = None):
self.blockhash = blockhash
self.transactions = transactions if transactions != None else []
def deserialize(self, f):
self.blockhash = deser_uint256(f)
self.transactions = deser_vector(f, CTransaction)
def serialize(self, with_witness=False):
r = b""
r += ser_uint256(self.blockhash)
if with_witness:
r += ser_vector(self.transactions, "serialize_with_witness")
else:
r += ser_vector(self.transactions)
return r
def __repr__(self):
return "BlockTransactions(hash=%064x transactions=%s)" % (self.blockhash, repr(self.transactions))
# Objects that correspond to messages on the wire
class msg_version(object):
command = b"version"
def __init__(self):
self.nVersion = MY_VERSION
self.nServices = 1
self.nTime = int(time.time())
self.addrTo = CAddress()
self.addrFrom = CAddress()
self.nNonce = random.getrandbits(64)
self.strSubVer = MY_SUBVERSION
self.nStartingHeight = -1
self.nRelay = MY_RELAY
def deserialize(self, f):
self.nVersion = struct.unpack("<i", f.read(4))[0]
if self.nVersion == 10300:
self.nVersion = 300
self.nServices = struct.unpack("<Q", f.read(8))[0]
self.nTime = struct.unpack("<q", f.read(8))[0]
self.addrTo = CAddress()
self.addrTo.deserialize(f)
if self.nVersion >= 106:
self.addrFrom = CAddress()
self.addrFrom.deserialize(f)
self.nNonce = struct.unpack("<Q", f.read(8))[0]
self.strSubVer = deser_string(f)
else:
self.addrFrom = None
self.nNonce = None
self.strSubVer = None
self.nStartingHeight = None
if self.nVersion >= 209:
self.nStartingHeight = struct.unpack("<i", f.read(4))[0]
else:
self.nStartingHeight = None
if self.nVersion >= 70001:
# Relay field is optional for version 70001 onwards
try:
self.nRelay = struct.unpack("<b", f.read(1))[0]
except:
self.nRelay = 0
else:
self.nRelay = 0
def serialize(self):
r = b""
r += struct.pack("<i", self.nVersion)
r += struct.pack("<Q", self.nServices)
r += struct.pack("<q", self.nTime)
r += self.addrTo.serialize()
r += self.addrFrom.serialize()
r += struct.pack("<Q", self.nNonce)
r += ser_string(self.strSubVer)
r += struct.pack("<i", self.nStartingHeight)
r += struct.pack("<b", self.nRelay)
return r
def __repr__(self):
return 'msg_version(nVersion=%i nServices=%i nTime=%s addrTo=%s addrFrom=%s nNonce=0x%016X strSubVer=%s nStartingHeight=%i nRelay=%i)' \
% (self.nVersion, self.nServices, time.ctime(self.nTime),
repr(self.addrTo), repr(self.addrFrom), self.nNonce,
self.strSubVer, self.nStartingHeight, self.nRelay)
class msg_verack(object):
command = b"verack"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_verack()"
class msg_addr(object):
command = b"addr"
def __init__(self):
self.addrs = []
def deserialize(self, f):
self.addrs = deser_vector(f, CAddress)
def serialize(self):
return ser_vector(self.addrs)
def __repr__(self):
return "msg_addr(addrs=%s)" % (repr(self.addrs))
class msg_alert(object):
command = b"alert"
def __init__(self):
self.alert = CAlert()
def deserialize(self, f):
self.alert = CAlert()
self.alert.deserialize(f)
def serialize(self):
r = b""
r += self.alert.serialize()
return r
def __repr__(self):
return "msg_alert(alert=%s)" % (repr(self.alert), )
class msg_inv(object):
command = b"inv"
def __init__(self, inv=None):
if inv is None:
self.inv = []
else:
self.inv = inv
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_inv(inv=%s)" % (repr(self.inv))
class msg_getdata(object):
command = b"getdata"
def __init__(self, inv=None):
self.inv = inv if inv != None else []
def deserialize(self, f):
self.inv = deser_vector(f, CInv)
def serialize(self):
return ser_vector(self.inv)
def __repr__(self):
return "msg_getdata(inv=%s)" % (repr(self.inv))
class msg_getblocks(object):
command = b"getblocks"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = b""
r += self.locator.serialize()
r += ser_uint256(self.hashstop)
return r
def __repr__(self):
return "msg_getblocks(locator=%s hashstop=%064x)" \
% (repr(self.locator), self.hashstop)
class msg_tx(object):
command = b"tx"
def __init__(self, tx=CTransaction()):
self.tx = tx
def deserialize(self, f):
self.tx.deserialize(f)
def serialize(self):
return self.tx.serialize_without_witness()
def __repr__(self):
return "msg_tx(tx=%s)" % (repr(self.tx))
class msg_witness_tx(msg_tx):
def serialize(self):
return self.tx.serialize_with_witness()
class msg_block(object):
command = b"block"
def __init__(self, block=None):
if block is None:
self.block = CBlock()
else:
self.block = block
def deserialize(self, f):
self.block.deserialize(f)
def serialize(self):
return self.block.serialize()
def __repr__(self):
return "msg_block(block=%s)" % (repr(self.block))
# for cases where a user needs tighter control over what is sent over the wire
# note that the user must supply the name of the command, and the data
class msg_generic(object):
def __init__(self, command, data=None):
self.command = command
self.data = data
def serialize(self):
return self.data
def __repr__(self):
return "msg_generic()"
class msg_witness_block(msg_block):
def serialize(self):
r = self.block.serialize(with_witness=True)
return r
class msg_getaddr(object):
command = b"getaddr"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_getaddr()"
class msg_ping_prebip31(object):
command = b"ping"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_ping() (pre-bip31)"
class msg_ping(object):
command = b"ping"
def __init__(self, nonce=0):
self.nonce = nonce
def deserialize(self, f):
self.nonce = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.nonce)
return r
def __repr__(self):
return "msg_ping(nonce=%08x)" % self.nonce
class msg_pong(object):
command = b"pong"
def __init__(self, nonce=0):
self.nonce = nonce
def deserialize(self, f):
self.nonce = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.nonce)
return r
def __repr__(self):
return "msg_pong(nonce=%08x)" % self.nonce
class msg_mempool(object):
command = b"mempool"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_mempool()"
class msg_sendheaders(object):
command = b"sendheaders"
def __init__(self):
pass
def deserialize(self, f):
pass
def serialize(self):
return b""
def __repr__(self):
return "msg_sendheaders()"
# getheaders message has
# number of entries
# vector of hashes
# hash_stop (hash of last desired block header, 0 to get as many as possible)
class msg_getheaders(object):
command = b"getheaders"
def __init__(self):
self.locator = CBlockLocator()
self.hashstop = 0
def deserialize(self, f):
self.locator = CBlockLocator()
self.locator.deserialize(f)
self.hashstop = deser_uint256(f)
def serialize(self):
r = b""
r += self.locator.serialize()
r += ser_uint256(self.hashstop)
return r
def __repr__(self):
return "msg_getheaders(locator=%s, stop=%064x)" \
% (repr(self.locator), self.hashstop)
# headers message has
# <count> <vector of block headers>
class msg_headers(object):
command = b"headers"
def __init__(self):
self.headers = []
def deserialize(self, f):
# comment in bitcoind indicates these should be deserialized as blocks
blocks = deser_vector(f, CBlock)
for x in blocks:
self.headers.append(CBlockHeader(x))
def serialize(self):
blocks = [CBlock(x) for x in self.headers]
return ser_vector(blocks)
def __repr__(self):
return "msg_headers(headers=%s)" % repr(self.headers)
class msg_reject(object):
command = b"reject"
REJECT_MALFORMED = 1
def __init__(self):
self.message = b""
self.code = 0
self.reason = b""
self.data = 0
def deserialize(self, f):
self.message = deser_string(f)
self.code = struct.unpack("<B", f.read(1))[0]
self.reason = deser_string(f)
if (self.code != self.REJECT_MALFORMED and
(self.message == b"block" or self.message == b"tx")):
self.data = deser_uint256(f)
def serialize(self):
r = ser_string(self.message)
r += struct.pack("<B", self.code)
r += ser_string(self.reason)
if (self.code != self.REJECT_MALFORMED and
(self.message == b"block" or self.message == b"tx")):
r += ser_uint256(self.data)
return r
def __repr__(self):
return "msg_reject: %s %d %s [%064x]" \
% (self.message, self.code, self.reason, self.data)
# Helper function
def wait_until(predicate, *, attempts=float('inf'), timeout=float('inf')):
attempt = 0
elapsed = 0
while attempt < attempts and elapsed < timeout:
with mininode_lock:
if predicate():
return True
attempt += 1
elapsed += 0.05
time.sleep(0.05)
return False
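# Typical use (illustrative): wait_until(lambda: test_node.verack_received, timeout=10)
# evaluates the predicate under mininode_lock roughly every 50 ms until it returns True
# or the attempt/timeout budget is exhausted.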
class msg_feefilter(object):
command = b"feefilter"
def __init__(self, feerate=0):
self.feerate = feerate
def deserialize(self, f):
self.feerate = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<Q", self.feerate)
return r
def __repr__(self):
return "msg_feefilter(feerate=%08x)" % self.feerate
class msg_sendcmpct(object):
command = b"sendcmpct"
def __init__(self):
self.announce = False
self.version = 1
def deserialize(self, f):
self.announce = struct.unpack("<?", f.read(1))[0]
self.version = struct.unpack("<Q", f.read(8))[0]
def serialize(self):
r = b""
r += struct.pack("<?", self.announce)
r += struct.pack("<Q", self.version)
return r
def __repr__(self):
return "msg_sendcmpct(announce=%s, version=%lu)" % (self.announce, self.version)
class msg_cmpctblock(object):
command = b"cmpctblock"
def __init__(self, header_and_shortids = None):
self.header_and_shortids = header_and_shortids
def deserialize(self, f):
self.header_and_shortids = P2PHeaderAndShortIDs()
self.header_and_shortids.deserialize(f)
def serialize(self):
r = b""
r += self.header_and_shortids.serialize()
return r
def __repr__(self):
return "msg_cmpctblock(HeaderAndShortIDs=%s)" % repr(self.header_and_shortids)
class msg_getblocktxn(object):
command = b"getblocktxn"
def __init__(self):
self.block_txn_request = None
def deserialize(self, f):
self.block_txn_request = BlockTransactionsRequest()
self.block_txn_request.deserialize(f)
def serialize(self):
r = b""
r += self.block_txn_request.serialize()
return r
def __repr__(self):
return "msg_getblocktxn(block_txn_request=%s)" % (repr(self.block_txn_request))
class msg_blocktxn(object):
command = b"blocktxn"
def __init__(self):
self.block_transactions = BlockTransactions()
def deserialize(self, f):
self.block_transactions.deserialize(f)
def serialize(self):
r = b""
r += self.block_transactions.serialize()
return r
def __repr__(self):
return "msg_blocktxn(block_transactions=%s)" % (repr(self.block_transactions))
class msg_witness_blocktxn(msg_blocktxn):
def serialize(self):
r = b""
r += self.block_transactions.serialize(with_witness=True)
return r
# This is what a callback should look like for NodeConn
# Reimplement the on_* functions to provide handling for events
class NodeConnCB(object):
def __init__(self):
self.verack_received = False
# deliver_sleep_time is helpful for debugging race conditions in p2p
# tests; it causes message delivery to sleep for the specified time
# before acquiring the global lock and delivering the next message.
self.deliver_sleep_time = None
# Remember the services our peer has advertised
self.peer_services = None
def set_deliver_sleep_time(self, value):
with mininode_lock:
self.deliver_sleep_time = value
def get_deliver_sleep_time(self):
with mininode_lock:
return self.deliver_sleep_time
# Spin until verack message is received from the node.
# Tests may want to use this as a signal that the test can begin.
# This can be called from the testing thread, so it needs to acquire the
# global lock.
def wait_for_verack(self):
while True:
with mininode_lock:
if self.verack_received:
return
time.sleep(0.05)
def deliver(self, conn, message):
deliver_sleep = self.get_deliver_sleep_time()
if deliver_sleep is not None:
time.sleep(deliver_sleep)
with mininode_lock:
try:
getattr(self, 'on_' + message.command.decode('ascii'))(conn, message)
except:
print("ERROR delivering %s (%s)" % (repr(message),
sys.exc_info()[0]))
def on_version(self, conn, message):
if message.nVersion >= 209:
conn.send_message(msg_verack())
conn.ver_send = min(MY_VERSION, message.nVersion)
if message.nVersion < 209:
conn.ver_recv = conn.ver_send
conn.nServices = message.nServices
def on_verack(self, conn, message):
conn.ver_recv = conn.ver_send
self.verack_received = True
def on_inv(self, conn, message):
want = msg_getdata()
for i in message.inv:
if i.type != 0:
want.inv.append(i)
if len(want.inv):
conn.send_message(want)
def on_addr(self, conn, message): pass
def on_alert(self, conn, message): pass
def on_getdata(self, conn, message): pass
def on_getblocks(self, conn, message): pass
def on_tx(self, conn, message): pass
def on_block(self, conn, message): pass
def on_getaddr(self, conn, message): pass
def on_headers(self, conn, message): pass
def on_getheaders(self, conn, message): pass
def on_ping(self, conn, message):
if conn.ver_send > BIP0031_VERSION:
conn.send_message(msg_pong(message.nonce))
def on_reject(self, conn, message): pass
def on_open(self, conn): pass
def on_close(self, conn): pass
def on_mempool(self, conn): pass
def on_pong(self, conn, message): pass
def on_feefilter(self, conn, message): pass
def on_sendheaders(self, conn, message): pass
def on_sendcmpct(self, conn, message): pass
def on_cmpctblock(self, conn, message): pass
def on_getblocktxn(self, conn, message): pass
def on_blocktxn(self, conn, message): pass
# More useful callbacks and functions for NodeConnCB's which have a single NodeConn
class SingleNodeConnCB(NodeConnCB):
def __init__(self):
NodeConnCB.__init__(self)
self.connection = None
self.ping_counter = 1
self.last_pong = msg_pong()
def add_connection(self, conn):
self.connection = conn
# Wrapper for the NodeConn's send_message function
def send_message(self, message):
self.connection.send_message(message)
def send_and_ping(self, message):
self.send_message(message)
self.sync_with_ping()
def on_pong(self, conn, message):
self.last_pong = message
# Sync up with the node
def sync_with_ping(self, timeout=30):
def received_pong():
return (self.last_pong.nonce == self.ping_counter)
self.send_message(msg_ping(nonce=self.ping_counter))
success = wait_until(received_pong, timeout=timeout)
self.ping_counter += 1
return success
# The actual NodeConn class
# This class provides an interface for a p2p connection to a specified node
class NodeConn(asyncore.dispatcher):
messagemap = {
b"version": msg_version,
b"verack": msg_verack,
b"addr": msg_addr,
b"alert": msg_alert,
b"inv": msg_inv,
b"getdata": msg_getdata,
b"getblocks": msg_getblocks,
b"tx": msg_tx,
b"block": msg_block,
b"getaddr": msg_getaddr,
b"ping": msg_ping,
b"pong": msg_pong,
b"headers": msg_headers,
b"getheaders": msg_getheaders,
b"reject": msg_reject,
b"mempool": msg_mempool,
b"feefilter": msg_feefilter,
b"sendheaders": msg_sendheaders,
b"sendcmpct": msg_sendcmpct,
b"cmpctblock": msg_cmpctblock,
b"getblocktxn": msg_getblocktxn,
b"blocktxn": msg_blocktxn
}
MAGIC_BYTES = {
"mainnet": b"\xfb\xc0\xb6\xdb", # mainnet
"testnet3": b"\xfc\xc1\xb7\xdc", # testnet3
"regtest": b"\xfa\xbf\xb5\xda", # regtest
}
def __init__(self, dstaddr, dstport, rpc, callback, net="regtest", services=NODE_NETWORK, send_version=True):
asyncore.dispatcher.__init__(self, map=mininode_socket_map)
self.log = logging.getLogger("NodeConn(%s:%d)" % (dstaddr, dstport))
self.dstaddr = dstaddr
self.dstport = dstport
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
self.sendbuf = b""
self.recvbuf = b""
self.ver_send = 209
self.ver_recv = 209
self.last_sent = 0
self.state = "connecting"
self.network = net
self.cb = callback
self.disconnect = False
self.nServices = 0
if send_version:
# stuff version msg into sendbuf
vt = msg_version()
vt.nServices = services
vt.addrTo.ip = self.dstaddr
vt.addrTo.port = self.dstport
vt.addrFrom.ip = "0.0.0.0"
vt.addrFrom.port = 0
self.send_message(vt, True)
print('MiniNode: Connecting to Bluecoin Node IP # ' + dstaddr + ':' \
+ str(dstport))
try:
self.connect((dstaddr, dstport))
except:
self.handle_close()
self.rpc = rpc
def show_debug_msg(self, msg):
self.log.debug(msg)
def handle_connect(self):
if self.state != "connected":
self.show_debug_msg("MiniNode: Connected & Listening: \n")
self.state = "connected"
self.cb.on_open(self)
def handle_close(self):
self.show_debug_msg("MiniNode: Closing Connection to %s:%d... "
% (self.dstaddr, self.dstport))
self.state = "closed"
self.recvbuf = b""
self.sendbuf = b""
try:
self.close()
except:
pass
self.cb.on_close(self)
def handle_read(self):
try:
t = self.recv(8192)
if len(t) > 0:
self.recvbuf += t
self.got_data()
except:
pass
def readable(self):
return True
def writable(self):
with mininode_lock:
pre_connection = self.state == "connecting"
length = len(self.sendbuf)
return (length > 0 or pre_connection)
def handle_write(self):
with mininode_lock:
# asyncore does not expose socket connection, only the first read/write
# event, thus we must check connection manually here to know when we
# actually connect
if self.state == "connecting":
self.handle_connect()
if not self.writable():
return
try:
sent = self.send(self.sendbuf)
except:
self.handle_close()
return
self.sendbuf = self.sendbuf[sent:]
def got_data(self):
try:
while True:
if len(self.recvbuf) < 4:
return
if self.recvbuf[:4] != self.MAGIC_BYTES[self.network]:
raise ValueError("got garbage %s" % repr(self.recvbuf))
if self.ver_recv < 209:
if len(self.recvbuf) < 4 + 12 + 4:
return
command = self.recvbuf[4:4+12].split(b"\x00", 1)[0]
msglen = struct.unpack("<i", self.recvbuf[4+12:4+12+4])[0]
checksum = None
if len(self.recvbuf) < 4 + 12 + 4 + msglen:
return
msg = self.recvbuf[4+12+4:4+12+4+msglen]
self.recvbuf = self.recvbuf[4+12+4+msglen:]
else:
if len(self.recvbuf) < 4 + 12 + 4 + 4:
return
command = self.recvbuf[4:4+12].split(b"\x00", 1)[0]
msglen = struct.unpack("<i", self.recvbuf[4+12:4+12+4])[0]
checksum = self.recvbuf[4+12+4:4+12+4+4]
if len(self.recvbuf) < 4 + 12 + 4 + 4 + msglen:
return
msg = self.recvbuf[4+12+4+4:4+12+4+4+msglen]
th = sha256(msg)
h = sha256(th)
if checksum != h[:4]:
raise ValueError("got bad checksum " + repr(self.recvbuf))
self.recvbuf = self.recvbuf[4+12+4+4+msglen:]
if command in self.messagemap:
f = BytesIO(msg)
t = self.messagemap[command]()
t.deserialize(f)
self.got_message(t)
else:
self.show_debug_msg("Unknown command: '" + command + "' " +
repr(msg))
except Exception as e:
print('got_data:', repr(e))
# import traceback
# traceback.print_tb(sys.exc_info()[2])
def send_message(self, message, pushbuf=False):
if self.state != "connected" and not pushbuf:
raise IOError('Not connected, no pushbuf')
self.show_debug_msg("Send %s" % repr(message))
command = message.command
data = message.serialize()
tmsg = self.MAGIC_BYTES[self.network]
tmsg += command
tmsg += b"\x00" * (12 - len(command))
tmsg += struct.pack("<I", len(data))
if self.ver_send >= 209:
th = sha256(data)
h = sha256(th)
tmsg += h[:4]
tmsg += data
with mininode_lock:
self.sendbuf += tmsg
self.last_sent = time.time()
def got_message(self, message):
if message.command == b"version":
if message.nVersion <= BIP0031_VERSION:
self.messagemap[b'ping'] = msg_ping_prebip31
if self.last_sent + 30 * 60 < time.time():
self.send_message(self.messagemap[b'ping']())
self.show_debug_msg("Recv %s" % repr(message))
self.cb.deliver(self, message)
def disconnect_node(self):
self.disconnect = True
class NetworkThread(Thread):
def run(self):
while mininode_socket_map:
# We check whether to disconnect outside of the asyncore loop to
# work around the behavior of asyncore when using select
disconnected = []
for fd, obj in mininode_socket_map.items():
if obj.disconnect:
disconnected.append(obj)
[ obj.handle_close() for obj in disconnected ]
asyncore.loop(0.1, use_poll=True, map=mininode_socket_map, count=1)
# An exception we can raise if we detect a potential disconnect
# (p2p or rpc) before the test is complete
class EarlyDisconnectError(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
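# --- Editor's sketch (not part of the original file): minimal lifecycle wiring
# the classes above together. `rpc` is a hypothetical RPC proxy placeholder;
# NetworkThread drives asyncore so the NodeConn socket actually sends/receives.
def _example_p2p_session(dstaddr, dstport, rpc):
    cb = SingleNodeConnCB()
    conn = NodeConn(dstaddr, dstport, rpc, cb)
    cb.add_connection(conn)
    net_thread = NetworkThread()
    net_thread.start()
    cb.sync_with_ping()        # round-trip a ping/pong to confirm the link is up
    conn.disconnect_node()     # NetworkThread notices the flag and closes the socket
    net_thread.join()
    return cb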
| 30.571114 | 262 | 0.593137 |
7943effab02ce25e8966e086149e31761a4b9d12 | 1,827 | py | Python | telethon/tl/custom/forward.py | pgjones/Telethon | 1be54035e81ee2e7f869fc2a8c20a16760924895 | [
"MIT"
] | 1 | 2019-06-21T19:19:50.000Z | 2019-06-21T19:19:50.000Z | telethon/tl/custom/forward.py | pgjones/Telethon | 1be54035e81ee2e7f869fc2a8c20a16760924895 | [
"MIT"
] | 1 | 2020-06-30T20:56:35.000Z | 2020-06-30T20:56:35.000Z | telethon/tl/custom/forward.py | SlavikMIPT/Telethon | fece5660f46b1f5b8f464c162914cc51bff6550f | [
"MIT"
] | null | null | null | from .chatgetter import ChatGetter
from .sendergetter import SenderGetter
from ... import utils
from ...tl import types
class Forward(ChatGetter, SenderGetter):
"""
Custom class that encapsulates a :tl:`MessageFwdHeader` providing an
abstraction to easily access information like the original sender.
Remember that this class implements `ChatGetter
<telethon.tl.custom.chatgetter.ChatGetter>` and `SenderGetter
<telethon.tl.custom.sendergetter.SenderGetter>` which means you
have access to all their sender and chat properties and methods.
Attributes:
original_fwd (:tl:`MessageFwdHeader`):
The original :tl:`MessageFwdHeader` instance.
Any other attribute:
Attributes not described here are the same as those available
in the original :tl:`MessageFwdHeader`.
"""
def __init__(self, client, original, entities):
# Copy all the fields, not reference! It would cause memory cycles:
# self.original_fwd.original_fwd.original_fwd.original_fwd
# ...would be valid if we referenced.
self.__dict__ = dict(original.__dict__)
self._client = client
self.original_fwd = original
sender, input_sender = utils._get_entity_pair(
original.from_id, entities, client._entity_cache)
if not original.channel_id:
peer = chat = input_chat = None
else:
peer = types.PeerChannel(original.channel_id)
chat, input_chat = utils._get_entity_pair(
utils.get_peer_id(peer), entities, client._entity_cache)
ChatGetter.__init__(self, peer, chat=chat, input_chat=input_chat)
SenderGetter.__init__(self, original.from_id, sender=sender, input_sender=input_sender)
# TODO We could reload the message
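# --- Editor's sketch (not part of the original file): typical read-side usage.
# A Message exposes its forward header as `message.forward`; because Forward
# implements ChatGetter/SenderGetter, the original sender and chat can be
# resolved lazily. The handler name and attribute access are illustrative only.
#
#     async def handler(event):
#         fwd = event.message.forward
#         if fwd:
#             sender = await fwd.get_sender()   # resolves the original sender if needed
#             print(fwd.date, getattr(sender, 'username', None))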
| 38.0625 | 95 | 0.692392 |
7943f075104ea1e2be24449516b1c7491458e92e | 16,782 | py | Python | Explorer/widgets.py | steveknipmeyer/ModelRelief | a3d067e0ed39a3a8ca78896c21eaa3e7293b15a2 | [
"MIT"
] | null | null | null | Explorer/widgets.py | steveknipmeyer/ModelRelief | a3d067e0ed39a3a8ca78896c21eaa3e7293b15a2 | [
"MIT"
] | null | null | null | Explorer/widgets.py | steveknipmeyer/ModelRelief | a3d067e0ed39a3a8ca78896c21eaa3e7293b15a2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
"""
.. module:: widgets
:synopsis: QT UI controls for Explorer.
.. moduleauthor:: Steve Knipmeyer <[email protected]>
"""
import os
# First, and before importing any Enthought packages, set the ETS_TOOLKIT environment variable to qt4 to tell Traits that we will use Qt.
# https://github.com/enthought/traitsui/issues/407
os.environ['ETS_TOOLKIT'] = 'qt4'
# By default, mayavi uses the PySide bindings. For the PyQt bindings, set the QT_API environment variable to 'pyqt5'
os.environ['QT_API'] = 'pyqt5'
# To be able to use PySide or PyQt4 and not run in conflicts with traits, we need to import QtGui and QtCore from pyface.qt
from PyQt5 import QtGui, QtCore, QtWidgets
import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar
from matplotlib.figure import Figure
from matplotlib.transforms import Bbox
from traits.api import HasTraits, Instance, on_trait_change
from traitsui.api import View, Item
from mayavi.core.ui.api import MayaviScene, MlabSceneModel, SceneEditor
from mayavi import mlab
import numpy as np
from numpy import pi, sin, cos, mgrid
from enum import Enum
from typing import Any, Callable, Dict, Optional
from results import Results, DataSource
# ------------------------------------------#
# Images #
# ------------------------------------------#
class ImageType(Enum):
"""
A class representing the various UI image view types.
"""
DepthBuffer = 1,
Relief = 2,
BackgroundMask = 3,
GradientX = 4,
GradientXMask = 5,
GradientY = 6,
GradientYMask = 7,
CompositeMask = 8,
GradientXUnsharp = 9,
GradientYUnsharp = 10,
# workbench
Image1 = 11,
Image2 = 12,
Image3 = 13,
Image4 = 14,
Image5 = 15,
Image6 = 16,
Image7 = 17,
Image8 = 18,
class ImageTab():
""" A UI tab of an image view. """
def __init__(self, widget: QtWidgets.QWidget, image_type: ImageType, cmap: str, content_ctor: Callable[[Figure, plt.Axes, np.ndarray, str, str], Figure], source: DataSource) -> None:
""" A UI image tab in the Explorer.
Parameters
----------
widget
QWidget of the tab.
image_type
The type of the image.
cmap
The matplotlib colormap.
content_ctor
The content constructor function that populates the given figure.
source
The DataSource holding the image data and title.
"""
self.widget = widget
self.image_type = image_type
self.cmap = cmap
self.figure: Figure = None
self.canvas: FigureCanvas = None
self.scroll: QtWidgets.QScrollArea = None
self.nav: NavigationToolbar = None
self.content_ctor = content_ctor
self.source = source
def get_view_extents(self, figure: Figure)->Bbox:
""" Returns the bounding box extents of a figure
Parameters
---------
figure
The Figure to query.
"""
# N.B. get_axes() returns a List; axes[0] = data axes; axes[1] = normalized axes?
axes = figure.get_axes()[0]
return axes.viewLim
def set_view_extents(self, figure: Figure, limits: Bbox)->Bbox:
""" Sets the bounding box extents of a figure
Parameters
---------
figure
The Figure to update.
limits
The new bounding box.
"""
axes = figure.get_axes()[0]
# points: a 2x2 numpy array of the form [[x0, y0], [x1, y1]]
points = limits.get_points()
axes.set_xlim(points[0][0], points[1][0])
axes.set_ylim(points[0][1], points[1][1])
def construct (self):
""" Constructs the UI tab with the image content.
Regenerates the matplotlib Figure.
"""
figure_exists = self.figure is not None
if figure_exists:
viewLim = self.get_view_extents(self.figure)
plt.close(self.figure)
# construct image figure
data = self.source.data
self.figure = self.construct_subplot_figures ([data.image], 1, [data.title], [self.cmap])
# restore extents
if figure_exists:
self.set_view_extents(self.figure, viewLim)
self.canvas = FigureCanvas(self.figure)
self.canvas.draw()
# navigation toolbar
if (self.nav is None):
self.nav = NavigationToolbar(self.canvas, self.widget)
self.widget.layout().addWidget(self.nav)
# scroll area
if (self.scroll is None):
self.scroll = QtWidgets.QScrollArea(self.canvas)
self.widget.layout().addWidget(self.scroll)
# update associated controls
self.scroll.setWidget(self.canvas)
self.nav.canvas = self.canvas
def update (self):
""" Updates the UI tab with the image content.
"""
if self.source.dirty:
self.construct()
self.source.dirty = False
@staticmethod
def add_image(figure: Figure, subplot: plt.Axes, image: np.ndarray, title: str, cmap: str) -> plt.Figure:
""" Adds an image to the given Figure.
Parameters
---------
figure
The Figure to which the image will be added.
subplot
The subplot Axes of the Figure.
image
The image array.
title
The title of the image Figure.
cmap
The colormap to be used.
Returns
-------
A Figure.
"""
# flip; first row is at minimum Y
image = np.flipud(image)
plot = plt.imshow(image, cmap)
# title
title_obj = subplot.set_title(title)
plt.setp(title_obj, color='w') # set the color of title to white
# axes
plt.setp(plt.getp(subplot, 'yticklabels'), color='w') # set yticklabels color
plt.setp(plt.getp(subplot, 'xticklabels'), color='w') # set xticklabels color
# colorbar
# https://matplotlib.org/examples/images_contours_and_fields/pcolormesh_levels.html
colorbar = figure.colorbar(plot, ax=subplot, drawedges=True)
plt.setp(plt.getp(colorbar.ax.axes, 'yticklabels'), color='w') # set colorbar yticklabels color to match Y labels
colorbar.outline.set_edgecolor('w') # set colorbar box color
colorbar.outline.set_linewidth(2)
colorbar.ax.yaxis.set_tick_params(color='w') # set colorbar ticks color
colorbar.dividers.set_linewidth(0)
return figure
def size_figure(self, figure: Figure, n_subplots: int) -> None:
"""
Sizes a figure to fit the aspect ratio and dimensions of the parent tab.
Parameters
----------
figure
The Figure to resize.
n_subplots
Number of subplots.
"""
xdpi = self.widget.logicalDpiX()
ydpi = self.widget.logicalDpiY()
dpi = max(xdpi, ydpi)
widget_height = self.widget.parent().height() / dpi
widget_width = self.widget.parent().width() / dpi
widget_aspect_ratio = widget_height / widget_width
baseline_height, baseline_width = figure.get_size_inches()
# add height of navigation bar
if self.nav is not None:
nav_height = self.nav.height() / dpi
baseline_height += nav_height
figure_aspect_ratio = baseline_height / baseline_width
# widget is "flatter" than figure
widget_aspect_ratio_smaller = widget_aspect_ratio < figure_aspect_ratio
display_height = widget_height if widget_aspect_ratio_smaller else widget_width * figure_aspect_ratio
display_width = display_height / figure_aspect_ratio
# if (self.widget.objectName() == "depthBufferTab"):
# print (f"Widget: AR = {widget_aspect_ratio}, height = {widget_height}, width = {widget_width}")
# print (f"Figure: AR = {figure_aspect_ratio}, height = {baseline_height}, width = {baseline_width}")
# print (f"Display: height = {display_height}, width = {display_width}")
# print ()
figure.set_size_inches(n_subplots * display_width, display_height)
try:
figure.tight_layout()
except ValueError:
pass
def construct_subplot_figures(self, data, rows=1, titles=None, cmaps=None) -> plt.Figure:
"""Display a list of subplots in a single figure with matplotlib.
https://gist.github.com/soply/f3eec2e79c165e39c9d540e916142ae1
https://stackoverflow.com/questions/9662995/matplotlib-change-title-and-colorbar-text-and-tick-colors
Parameters
---------
data
List of np.arrays holding the data.
rows (Default = 1)
Number of rows in figure (number of columns is set to np.ceil(n_subplots/float(rows))).
titles
List of titles corresponding to each subplot. Must have the same length as data.
cmaps
List of color maps corresponding to each figure. Must have the same length as data.
Returns
-------
A Figure.
"""
assert((titles is None) or (len(data) == len(titles)))
n_subplots = len(data)
if titles is None: titles = ['Figure (%d)' % i for i in range(1, n_subplots + 1)]
figure = plt.figure(facecolor='black')
columns = int(np.ceil(n_subplots/float(rows)))
for n, (data_array, title, cmap) in enumerate(zip(data, titles, cmaps)):
# make a subplot active
subplot = figure.add_subplot(rows, columns, n + 1)
figure = self.content_ctor(figure, subplot, data_array, title, cmap)
self.size_figure(figure, n_subplots)
return figure
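# --- Editor's sketch (not part of the original file): how an ImageTab is
# typically assembled. `tab_widget` must be a QWidget that already has a
# layout, and `depth_source` a DataSource whose .data carries the image array
# and title; both names are hypothetical placeholders.
#
#     tab = ImageTab(tab_widget, ImageType.DepthBuffer, 'gray',
#                    ImageTab.add_image, depth_source)
#     tab.update()                 # builds the Figure/FigureCanvas on first call
#     depth_source.dirty = True
#     tab.update()                 # rebuilds the figure, preserving pan/zoom extents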
# ------------------------------------------#
# Meshes #
# ------------------------------------------#
class MeshType(Enum):
"""
A class representing the various UI mesh view types.
"""
Model = 1,
ModelScaled = 2,
Relief = 3
class Camera:
"""
A class representing the Mayavi scene camera.
"""
def __init__(self, figure) ->None:
"""
Initialization
"""
self.figure = figure
self.azimuth, self.elevation, self.distance, self.focalpoint = mlab.view(figure=self.figure)
self.roll = mlab.roll(figure=self.figure)
def apply (self, figure=None) -> None:
"""
Apply the camera settings to the given figure.
"""
figure = self.figure if figure is None else figure
mlab.view(azimuth=self.azimuth, elevation=self.elevation, distance=self.distance, focalpoint=self.focalpoint, roll=self.roll, figure=figure)
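# --- Editor's sketch (not part of the original file): the intended
# capture-then-restore pattern, which MeshContent.construct()/update() below
# rely on so the user's viewpoint survives a scene rebuild.
#
#     camera = Camera(figure=scene.mayavi_scene)   # snapshot the current view
#     mlab.clf(figure=scene.mayavi_scene)          # ...rebuild the scene...
#     camera.apply()                               # restore azimuth/elevation/roll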
class MeshContent(HasTraits):
""" Holds an instance of a 3D Mesh """
def __init__ (self, source: DataSource, mesh_type: MeshType) -> None:
""" Initialization.
Parameters
----------
source
The Solver data source for the mesh.
mesh_type
The type of the mesh.
"""
super().__init__()
self.source = source
self.mesh_type = mesh_type
self.camera: Optional[Camera] = None
def update(self, preserve_camera:bool = True):
"""
Update the mesh if necessary.
Parameters
----------
preserve_camera
Preserve the existing camera settings in the view.
"""
if self.source.dirty:
self.construct(self.scene)
if preserve_camera and self.camera is not None:
self.camera.apply()
self.source.dirty = False
def construct(self, scene):
# This function is called when the view is opened. We don't populate the scene
# when the view is not yet open, as some VTK features require a GLContext.
shape = self.source.data.image.shape
width = shape[1]
height = shape[0]
X = np.arange(0, width, 1.0)
Y = np.arange(0, height, 1.0)
X, Y = np.meshgrid(X, Y)
Z = self.source.data.image
colors = np.empty(X.shape, dtype=str)
colors.fill('b')
# figure for this MeshContent
current_figure = scene.mayavi_scene
# get active camera
self.camera = Camera(figure=current_figure)
# clear figure
mlab.clf(figure=current_figure)
mlab.figure(figure=current_figure, bgcolor=(0, 0, 0))
# create new figure
cyan = (0.25, 0.95, 0.92)
mlab.mesh(X, Y, Z, figure=current_figure, color=cyan)
#mlab.surf(Z, figure=current_figure, warp_scale="auto")
class ModelMeshContent(MeshContent, HasTraits):
""" Holds an instance of a Model Mesh """
# N.B. These must be class variables to maintain scene independence.
scene = Instance(MlabSceneModel, ())
view = View(Item('scene', editor=SceneEditor(scene_class=MayaviScene), show_label=False),
resizable=True # We need this to resize with the parent widget
)
@on_trait_change('scene.activated')
def update_content(self):
super().construct(self.scene)
class ModelMeshScaledContent(MeshContent, HasTraits):
""" Holds an instance of a Model Mesh that has been only scaled (not transformed)."""
# N.B. These must be class variables to maintain scene independence.
scene = Instance(MlabSceneModel, ())
view = View(Item('scene', editor=SceneEditor(scene_class=MayaviScene), show_label=False),
resizable=True # We need this to resize with the parent widget
)
@on_trait_change('scene.activated')
def update_content(self):
super().construct(self.scene)
class ReliefMeshContent(MeshContent, HasTraits):
""" Holds an instance of a Relief Mesh """
# N.B. These must be class variables to maintain scene independence.
scene = Instance(MlabSceneModel, ())
view = View(Item('scene', editor=SceneEditor(scene_class=MayaviScene), show_label=False),
resizable=True # We need this to resize with the parent widget
)
@on_trait_change('scene.activated')
def update_content(self):
super().construct(self.scene)
class MeshWidget(QtWidgets.QWidget):
""" The QWidget containing the visualization, this is pure PyQt5 code. """
def __init__(self, source: DataSource, mesh_type: MeshType, parent=None) -> None:
"""
Initialization.
Parameters
----------
source
The Solver data source for the mesh.
mesh_type
The type of the mesh.
"""
super().__init__(parent)
self.mesh_type = mesh_type
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0,0,0,0)
layout.setSpacing(0)
if self.mesh_type == MeshType.Model:
self.mesh_content = ModelMeshContent(source, self.mesh_type)
if self.mesh_type == MeshType.ModelScaled:
self.mesh_content = ModelMeshScaledContent(source, self.mesh_type)
if self.mesh_type == MeshType.Relief:
self.mesh_content = ReliefMeshContent(source, self.mesh_type)
# If you want to debug, beware that you need to remove the Qt input hook.
#QtCore.pyqtRemoveInputHook()
#import pdb ; pdb.set_trace()
#QtCore.pyqtRestoreInputHook()
# The edit_traits call will generate the widget to embed.
self.ui = self.mesh_content.edit_traits(parent=self, kind='subpanel').control
layout.addWidget(self.ui)
self.ui.setParent(self)
class MeshTab():
""" A UI tab of a mesh view. """
def __init__(self, widget: QtWidgets.QWidget, mesh_type: MeshType, title: str, cmap: str, source: DataSource) -> None:
""" A UI mesh tab in the Explorer.
Parameters
----------
widget
QWidget of the tab.
mesh_type
The type of the mesh.
title
The title of the view.
cmap
The matplotlib colormap.
source
The Solver data source for the mesh.
"""
self.widget = widget
self.mesh_type = mesh_type
self.title = title
self.cmap = cmap
self.source = source
self.mesh_widget = MeshWidget(source, self.mesh_type)
self.widget.layout().addWidget(self.mesh_widget)
| 34.602062 | 186 | 0.605649 |
7943f0de87113eeae537abcbce8087366c778f19 | 4,582 | py | Python | test/test_report_structured.py | laundmo/tidypy | 3d08b4f95ed7c8827789222c9670a131cdf965b7 | [
"MIT"
] | null | null | null | test/test_report_structured.py | laundmo/tidypy | 3d08b4f95ed7c8827789222c9670a131cdf965b7 | [
"MIT"
] | null | null | null | test/test_report_structured.py | laundmo/tidypy | 3d08b4f95ed7c8827789222c9670a131cdf965b7 | [
"MIT"
] | null | null | null |
import sys
from tidypy import execute_reports, get_default_config, Collector, TidyPyIssue
ISSUES = [
TidyPyIssue(
'code1',
'Message 1',
u'someproject/foo.py',
5,
23,
),
TidyPyIssue(
'code2',
'Message 2',
u'someproject/foo.py',
2,
),
TidyPyIssue(
'code1',
'Message 1',
'someproject/blah/bar.py',
28,
),
TidyPyIssue(
'code3',
'Message 3',
'someproject/subdir/foobar.json',
5,
23,
),
]
EXPECTED_JSON = '''{
"tidypy": "0.21.0",
"issues": {
"blah/bar.py": [
{
"line": 28,
"character": 0,
"code": "code1",
"tool": "tidypy",
"message": "Message 1"
}
],
"foo.py": [
{
"line": 2,
"character": 0,
"code": "code2",
"tool": "tidypy",
"message": "Message 2"
},
{
"line": 5,
"character": 23,
"code": "code1",
"tool": "tidypy",
"message": "Message 1"
}
],
"subdir/foobar.json": [
{
"line": 5,
"character": 23,
"code": "code3",
"tool": "tidypy",
"message": "Message 3"
}
]
}
}
'''
def test_json_execute(capsys):
cfg = get_default_config()
cfg['requested_reports'] = [{'type': 'json'}]
collector = Collector(cfg)
collector.add_issues(ISSUES)
execute_reports(cfg, 'someproject', collector)
out, err = capsys.readouterr()
assert EXPECTED_JSON == out.replace('\r\n', '\n')
assert err == ''
EXPECTED_TOML = '''tidypy = "0.21.0"
[issues]
[[issues."blah/bar.py"]]
line = 28
character = 0
code = "code1"
tool = "tidypy"
message = "Message 1"
[[issues."foo.py"]]
line = 2
character = 0
code = "code2"
tool = "tidypy"
message = "Message 2"
[[issues."foo.py"]]
line = 5
character = 23
code = "code1"
tool = "tidypy"
message = "Message 1"
[[issues."subdir/foobar.json"]]
line = 5
character = 23
code = "code3"
tool = "tidypy"
message = "Message 3"
'''
def test_toml_execute(capsys):
cfg = get_default_config()
cfg['requested_reports'] = [{'type': 'toml'}]
collector = Collector(cfg)
collector.add_issues(ISSUES)
execute_reports(cfg, 'someproject', collector)
out, err = capsys.readouterr()
assert EXPECTED_TOML == out.replace('\r\n', '\n')
assert err == ''
EXPECTED_YAML = '''tidypy: 0.21.0
issues:
blah/bar.py:
- line: 28
character: 0
code: code1
tool: tidypy
message: Message 1
foo.py:
- line: 2
character: 0
code: code2
tool: tidypy
message: Message 2
- line: 5
character: 23
code: code1
tool: tidypy
message: Message 1
subdir/foobar.json:
- line: 5
character: 23
code: code3
tool: tidypy
message: Message 3
'''
def test_yaml_execute(capsys):
cfg = get_default_config()
cfg['requested_reports'] = [{'type': 'yaml'}]
collector = Collector(cfg)
collector.add_issues(ISSUES)
execute_reports(cfg, 'someproject', collector)
out, err = capsys.readouterr()
assert EXPECTED_YAML == out.replace('\r\n', '\n')
assert err == ''
EXPECTED_CSV = '''filename,line,character,tool,code,message
blah/bar.py,28,0,tidypy,code1,Message 1
foo.py,2,0,tidypy,code2,Message 2
foo.py,5,23,tidypy,code1,Message 1
subdir/foobar.json,5,23,tidypy,code3,Message 3
'''
def test_csv_execute(capsys):
cfg = get_default_config()
cfg['requested_reports'] = [{'type': 'csv'}]
collector = Collector(cfg)
collector.add_issues(ISSUES)
execute_reports(cfg, 'someproject', collector)
out, err = capsys.readouterr()
assert EXPECTED_CSV == out.replace('\r\n', '\n')
assert err == ''
def test_csv_file_output(capsys, tmpdir):
target_dir = tmpdir.mkdir('reports')
cfg = get_default_config()
cfg['requested_reports'] = [{'type': 'csv'}]
collector = Collector(cfg)
collector.add_issues(ISSUES)
test_file = str(target_dir) + 'test1'
with open(test_file, 'w') as fp:
execute_reports(cfg, 'someproject', collector, output_file=fp)
out, err = capsys.readouterr()
assert out == ''
assert err == ''
assert EXPECTED_CSV == open(test_file, 'r').read()
test_file = str(target_dir) + 'test2'
cfg['requested_reports'] = [{'type': 'csv', 'file': test_file}]
execute_reports(cfg, 'someproject', collector)
out, err = capsys.readouterr()
assert out == ''
assert err == ''
assert EXPECTED_CSV == open(test_file, 'r').read()
| 19.75 | 78 | 0.576604 |
7943f11928fc738272093e7e5d8c03261f088667 | 2,421 | py | Python | src/spaceone/identity/model/domain_model.py | spaceone-dev/identity | 63a3a8db1d8d7d1e2c17d53fb3dc7aad35aef917 | [
"Apache-2.0"
] | 13 | 2020-05-20T13:14:33.000Z | 2021-12-23T12:02:40.000Z | src/spaceone/identity/model/domain_model.py | whdalsrnt/identity | 6aeaf7ea405d9e2f7c4f24c7518445b955cebec6 | [
"Apache-2.0"
] | 7 | 2020-06-02T08:11:18.000Z | 2022-03-15T02:26:07.000Z | src/spaceone/identity/model/domain_model.py | whdalsrnt/identity | 6aeaf7ea405d9e2f7c4f24c7518445b955cebec6 | [
"Apache-2.0"
] | 9 | 2020-06-01T10:08:05.000Z | 2021-03-19T06:34:57.000Z | from mongoengine import *
from datetime import datetime
from spaceone.core.error import *
from spaceone.core.model.mongo_model import MongoModel
class PluginInfo(EmbeddedDocument):
plugin_id = StringField(max_length=40)
version = StringField(max_length=255)
options = DictField(default={})
metadata = DictField(default={})
secret_id = StringField(max_length=40, null=True, default=None)
schema = StringField(max_length=255, null=True, default=None)
upgrade_mode = StringField(max_length=20, default='AUTO', choices=('AUTO', 'MANUAL'))
def to_dict(self):
return dict(self.to_mongo())
class DomainTag(EmbeddedDocument):
key = StringField(max_length=255)
value = StringField(max_length=255)
class Domain(MongoModel):
domain_id = StringField(max_length=40, generate_id='domain', unique=True)
name = StringField(max_length=255)
state = StringField(max_length=20, default='ENABLED')
plugin_info = EmbeddedDocumentField(PluginInfo, default=None, null=True)
config = DictField()
tags = ListField(EmbeddedDocumentField(DomainTag))
created_at = DateTimeField(auto_now_add=True)
deleted_at = DateTimeField(default=None, null=True)
meta = {
'updatable_fields': [
'name',
'state',
'plugin_info',
'config',
'tags',
'deleted_at'
],
'minimal_fields': [
'domain_id',
'name',
'state',
],
'ordering': ['name'],
'indexes': [
'domain_id',
'state',
('tags.key', 'tags.value')
]
}
@queryset_manager
def objects(doc_cls, queryset):
return queryset.filter(state__ne='DELETED')
@classmethod
def create(cls, data):
domain_vos = cls.filter(name=data['name'])
if domain_vos.count() > 0:
raise ERROR_NOT_UNIQUE(key='name', value=data['name'])
return super().create(data)
def update(self, data):
if 'name' in data:
domain_vos = self.filter(name=data['name'], domain_id__ne=self.domain_id)
if domain_vos.count() > 0:
raise ERROR_NOT_UNIQUE(key='name', value=data['name'])
return super().update(data)
def delete(self):
self.update({
'state': 'DELETED',
'deleted_at': datetime.utcnow()
})
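# --- Editor's sketch (not part of the original file): the lifecycle implied by
# the overrides above. Field values are hypothetical examples.
def _example_domain_lifecycle():
    domain_vo = Domain.create({
        'name': 'example-domain',
        'tags': [{'key': 'env', 'value': 'dev'}],
    })                                             # raises ERROR_NOT_UNIQUE on duplicate name
    domain_vo.update({'name': 'renamed-domain'})   # name re-checked against other domains
    domain_vo.delete()                             # soft delete: state=DELETED, deleted_at set
    return domain_vo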
| 29.52439 | 89 | 0.611318 |
7943f23f7aefc7fd50cdb8b5739084d25b2fc8e9 | 1,141 | py | Python | get_power.py | FraserTooth/echonet_interface | e009a6371543a65ac8c5ed895e7258fb072612f8 | [
"MIT"
] | 1 | 2021-09-22T10:25:32.000Z | 2021-09-22T10:25:32.000Z | get_power.py | FraserTooth/echonet_interface | e009a6371543a65ac8c5ed895e7258fb072612f8 | [
"MIT"
] | null | null | null | get_power.py | FraserTooth/echonet_interface | e009a6371543a65ac8c5ed895e7258fb072612f8 | [
"MIT"
] | null | null | null | import meter.echonet as echonet
from meter.common import byte2str, hex2int
import meter.b_route as b_route
from meter.serial_connection import connect_to_serial_port
import time
import logging.handlers
import meter.influx as db
# Get a logger
logger = logging.getLogger("main")
fmt = "%(asctime)s %(levelname)s %(name)s :%(message)s"
logging.basicConfig(level=10, format=fmt)
ser = connect_to_serial_port()
ipv6_address = b_route.connect_to_broute(ser)
# The smart meter sends an instance list notification
# (ECHONET-Lite_Ver.1.12_02.pdf p.4-16)
logger.info(byte2str(ser.readline()))
while True:
command = echonet.get_serial_command(
[echonet.SmartMeterActions.NOW_POWER], ipv6_address
)
# Send Command
ser.write(command)
line = ""
# Find the line we care about
while line.startswith("ERXUDP") is False:
line = byte2str(ser.readline())
if len(line) == 0:
# Serial Connection has Hung
print("Looks like we've hung...")
break
if len(line) > 0:
data = echonet.handle_line(ser, line)
if data is not None:
db.write_to_influx(data)
time.sleep(3)
| 25.931818 | 59 | 0.683611 |
7943f31f3067d044097556d01e85bfb664d12e5f | 6,511 | py | Python | sources/simulators/ray_based_simulator/ray_simulate.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/ray_based_simulator/ray_simulate.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | sources/simulators/ray_based_simulator/ray_simulate.py | M4rukku/impact_of_non_iid_data_in_federated_learning | c818db03699c82e42217d56f8ddd4cc2081c8bb1 | [
"MIT"
] | null | null | null | # Modified from Adap
# Copyright 2020 Adap GmbH. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Flower simulation app."""
import sys
from logging import ERROR, INFO
from typing import Any, Callable, Dict, List, Optional
import ray
from flwr.client.client import Client
from flwr.common.logger import log
from flwr.server import Server
from flwr.server.app import _fl, _init_defaults
from flwr.server.strategy import Strategy
from flwr.simulation.ray_transport.ray_client_proxy import RayClientProxy
INVALID_ARGUMENTS_START_SIMULATION = """
INVALID ARGUMENTS ERROR
Invalid Arguments in method:
`start_simulation(
*,
client_fn: Callable[[str], Client],
num_clients: Optional[int] = None,
clients_ids: Optional[List[str]] = None,
client_resources: Optional[Dict[str, int]] = None,
num_rounds: int = 1,
strategy: Optional[Strategy] = None,
ray_init_args: Optional[Dict[str, Any]] = None,
) -> None:`
REASON:
Method requires:
- Either `num_clients`[int] or `clients_ids`[List[str]]
to be set exclusively.
OR
- `len(clients_ids)` == `num_clients`
"""
def start_simulation( # pylint: disable=too-many-arguments
*,
client_fn: Callable[[str], Client],
num_clients: Optional[int] = None,
clients_ids: Optional[List[str]] = None,
client_resources: Optional[Dict[str, int]] = None,
num_rounds: int = 1,
strategy: Optional[Strategy] = None,
ray_init_args: Optional[Dict[str, Any]] = None,
server: Optional[Server] = None,
ray_callbacks: Optional[List[Callable[[], None]]]=None
) -> None:
"""Start a Ray-based Flower simulation server.
Parameters
----------
client_fn : Callable[[str], Client]
A function creating client instances. The function must take a single
str argument called `cid`. It should return a single client instance.
Note that the created client instances are ephemeral and will often be
destroyed after a single method invocation. Since client instances are
not long-lived, they should not attempt to carry state over method
invocations. Any state required by the instance (model, dataset,
hyperparameters, ...) should be (re-)created in either the call to
`client_fn` or the call to any of the client methods (e.g., load
evaluation data in the `evaluate` method itself).
num_clients : Optional[int]
The total number of clients in this simulation. This must be set if
`clients_ids` is not set and vice-versa.
clients_ids : Optional[List[str]]
List `client_id`s for each client. This is only required if
`num_clients` is not set. Setting both `num_clients` and `clients_ids`
with `len(clients_ids)` not equal to `num_clients` generates an error.
client_resources : Optional[Dict[str, int]] (default: None)
CPU and GPU resources for a single client. Supported keys are
`num_cpus` and `num_gpus`. Example: `{"num_cpus": 4, "num_gpus": 1}`.
To understand the GPU utilization caused by `num_gpus`, consult the Ray
documentation on GPU support.
num_rounds : int (default: 1)
The number of rounds to train.
strategy : Optional[flwr.server.Strategy] (default: None)
An implementation of the abstract base class `flwr.server.Strategy`. If
no strategy is provided, then `start_server` will use
`flwr.server.strategy.FedAvg`.
ray_init_args : Optional[Dict[str, Any]] (default: None)
Optional dictionary containing arguments for the call to `ray.init`.
If ray_init_args is None (the default), Ray will be initialized with
the following default args:
{
"ignore_reinit_error": True,
"include_dashboard": False,
}
An empty dictionary can be used (ray_init_args={}) to prevent any
arguments from being passed to ray.init.
"""
cids: List[str]
# clients_ids takes precedence
if clients_ids is not None:
if (num_clients is not None) and (len(clients_ids) != num_clients):
log(ERROR, INVALID_ARGUMENTS_START_SIMULATION)
sys.exit()
else:
cids = clients_ids
else:
if num_clients is None:
log(ERROR, INVALID_ARGUMENTS_START_SIMULATION)
sys.exit()
else:
cids = [str(x) for x in range(num_clients)]
# Default arguments for Ray initialization
if not ray_init_args:
ray_init_args = {
"ignore_reinit_error": True,
"include_dashboard": False,
}
# If Ray is already initialized, connect to the existing cluster instead of starting a new one
if ray.is_initialized():
ray_init_args = {**ray_init_args}
ray_init_args["address"] = "auto"
# Initialize Ray
ray.init(**ray_init_args)
log(
INFO,
"Ray initialized with resources: %s",
ray.cluster_resources(),
)
if ray_callbacks is not None:
for callback in ray_callbacks:
callback()
# Initialize server and server config
config = {"num_rounds": num_rounds}
initialized_server, initialized_config = _init_defaults(server, config, strategy)
log(
INFO,
"Starting Flower simulation running: %s",
initialized_config,
)
# Register one RayClientProxy object for each client with the ClientManager
resources = client_resources if client_resources is not None else {}
for cid in cids:
client_proxy = RayClientProxy(
client_fn=client_fn,
cid=cid,
resources=resources,
)
initialized_server.client_manager().register(client=client_proxy)
# Start training
_fl(
server=initialized_server,
config=initialized_config,
force_final_distributed_eval=False,
)
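# --- Editor's sketch (not part of the original file): a minimal caller of
# start_simulation, following the docstring above. `make_client` and
# MyFlowerClient are hypothetical placeholders the user must supply.
#
#     def make_client(cid: str) -> Client:
#         return MyFlowerClient(cid)          # user-defined flwr.client.Client
#
#     start_simulation(
#         client_fn=make_client,
#         num_clients=10,
#         client_resources={"num_cpus": 1},
#         num_rounds=3,
#         ray_callbacks=[lambda: print("ray ready")],
#     )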
| 36.172222 | 85 | 0.658424 |
7943f3c6c44c7bf86810740444594d5429abe8f5 | 170 | py | Python | app/api/index.py | benranderson/fmeca | aedbde0ec20417cace3df01e09f193dd525ba0e2 | [
"MIT"
] | null | null | null | app/api/index.py | benranderson/fmeca | aedbde0ec20417cace3df01e09f193dd525ba0e2 | [
"MIT"
] | 11 | 2017-11-09T22:23:50.000Z | 2017-11-30T16:40:22.000Z | app/api/index.py | benranderson/fmeca | aedbde0ec20417cace3df01e09f193dd525ba0e2 | [
"MIT"
] | 3 | 2017-11-10T09:52:07.000Z | 2022-01-28T11:00:17.000Z | from flask import jsonify
from . import api
@api.route('/', methods=['GET'])
def index():
return jsonify("Welcome to the fmeca API. Check out '/api/facilities/'.")
| 21.25 | 77 | 0.676471 |
7943f43c477717e8563af684c11c3de16306bca5 | 11,918 | py | Python | odps/df/backends/analyzer.py | walker83/aliyun-odps-python-sdk | f69c2520d346554131f4129360cb7ae1211699ce | [
"Apache-2.0"
] | null | null | null | odps/df/backends/analyzer.py | walker83/aliyun-odps-python-sdk | f69c2520d346554131f4129360cb7ae1211699ce | [
"Apache-2.0"
] | null | null | null | odps/df/backends/analyzer.py | walker83/aliyun-odps-python-sdk | f69c2520d346554131f4129360cb7ae1211699ce | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 1999-2017 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
from .core import Backend
from ..utils import traverse_until_source
from ..expr.expressions import Scalar, SequenceExpr, CollectionExpr
from ..expr.reduction import GroupedSequenceReduction
from ..expr.element import Switch
from .. import output
from ... import compat
from ...models import Schema
from .utils import refresh_dynamic
from ..types import DynamicSchema
from ...compat import six
class BaseAnalyzer(Backend):
"""
Analyzer is used before optimzing,
which analyze some operation that is not supported for this execution backend.
"""
def __init__(self, expr_dag, traversed=None, on_sub=None):
self._dag = expr_dag
self._indexer = itertools.count(0)
self._traversed = traversed or set()
self._on_sub = on_sub
def analyze(self):
for node in self._iter():
self._traversed.add(id(node))
self._visit_node(node)
return self._dag.root
def _iter(self):
for node in traverse_until_source(self._dag, top_down=True,
traversed=self._traversed):
yield node
while True:
all_traversed = True
for node in traverse_until_source(self._dag, top_down=True):
if id(node) not in self._traversed:
all_traversed = False
yield node
if all_traversed:
break
def _visit_node(self, node):
try:
node.accept(self)
except NotImplementedError:
return
def _sub(self, expr, sub, parents=None):
self._dag.substitute(expr, sub, parents=parents)
if self._on_sub:
self._on_sub(expr, sub)
@staticmethod
def _get_moment_sub_expr(expr, _input, order, center):
def _group_mean(e):
m = e.mean()
if isinstance(expr, GroupedSequenceReduction):
m = m.to_grouped_reduction(expr._grouped)
return m
def _order(e, o):
if o == 1:
return e
else:
return e ** o
if not center:
if order == 0:
sub = Scalar(1)
else:
sub = _group_mean(_input ** order)
else:
if order == 0:
sub = Scalar(1)
elif order == 1:
sub = Scalar(0)
else:
sub = _group_mean(_input ** order)
divided = 1
divisor = 1
for o in compat.irange(1, order):
divided *= order - o + 1
divisor *= o
part_item = divided // divisor * _group_mean(_order(_input, order - o)) \
* (_order(_group_mean(_input), o))
if o & 1:
sub -= part_item
else:
sub += part_item
part_item = _group_mean(_input) ** order
if order & 1:
sub -= part_item
else:
sub += part_item
return sub
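# Editor's note (not in the original source): the loop above expands the
# central moment via the binomial theorem,
#   E[(X - m)^k] = sum_{o=0..k} C(k, o) * (-1)^o * E[X^(k-o)] * m^o,  with m = E[X];
# the o = 0 term seeds `sub`, each loop iteration adds the o = 1 .. k-1 term
# with alternating sign, and the o = k term (+/- m^k) is added after the loop.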
@classmethod
def _get_cut_sub_expr(cls, expr):
is_seq = isinstance(expr, SequenceExpr)
kw = dict()
if is_seq:
kw['_data_type'] = expr.dtype
else:
kw['_value_type'] = expr.dtype
conditions = []
thens = []
if expr.include_under:
bin = expr.bins[0]
if expr.right and not expr.include_lowest:
conditions.append(expr.input <= bin)
else:
conditions.append(expr.input < bin)
thens.append(expr.labels[0])
for i, bin in enumerate(expr.bins[1:]):
lower_bin = expr.bins[i]
if not expr.right or (i == 0 and expr.include_lowest):
condition = lower_bin <= expr.input
else:
condition = lower_bin < expr.input
if expr.right:
condition = (condition & (expr.input <= bin))
else:
condition = (condition & (expr.input < bin))
conditions.append(condition)
if expr.include_under:
thens.append(expr.labels[i + 1])
else:
thens.append(expr.labels[i])
if expr.include_over:
bin = expr.bins[-1]
if expr.right:
conditions.append(bin < expr.input)
else:
conditions.append(bin <= expr.input)
thens.append(expr.labels[-1])
return Switch(_conditions=conditions, _thens=thens,
_default=None, _input=None, **kw)
@classmethod
def _get_value_counts_sub_expr(cls, expr):
collection = expr.input
by = expr._by
sort = expr._sort.value
ascending = expr._ascending.value
dropna = expr._dropna.value
sub = collection.groupby(by).agg(count=by.count())
if sort:
sub = sub.sort('count', ascending=ascending)
if dropna:
sub = sub.filter(sub[by.name].notnull())
return sub
def _get_pivot_sub_expr(self, expr):
columns_expr = expr.input.distinct([c.copy() for c in expr._columns])
group_names = [g.name for g in expr._group]
group_types = [g.dtype for g in expr._group]
exprs = [expr]
def callback(result, new_expr):
expr = exprs[0]
columns = [r[0] for r in result]
if len(expr._values) > 1:
names = group_names + \
['{0}_{1}'.format(v.name, c)
for v in expr._values for c in columns]
types = group_types + \
list(itertools.chain(*[[n.dtype] * len(columns)
for n in expr._values]))
else:
names = group_names + columns
types = group_types + [expr._values[0].dtype] * len(columns)
new_expr._schema = Schema.from_lists(names, types)
column_name = expr._columns[0].name # column's size can only be 1
values_names = [v.name for v in expr._values]
@output(names, types)
def reducer(keys):
values = [None] * len(columns) * len(values_names)
def h(row, done):
col = getattr(row, column_name)
for val_idx, value_name in enumerate(values_names):
val = getattr(row, value_name)
idx = len(columns) * val_idx + columns.index(col)
if values[idx] is not None:
raise ValueError(
'Row contains duplicate entries, rows: {0}, column: {1}'.format(keys, col))
values[idx] = val
if done:
yield keys + tuple(values)
return h
fields = expr._group + expr._columns + expr._values
pivoted = expr.input.select(fields).map_reduce(reducer=reducer, group=group_names)
self._sub(new_expr, pivoted)
# trigger refresh of dynamic operations
refresh_dynamic(pivoted, self._dag)
return CollectionExpr(_schema=DynamicSchema.from_lists(group_names, group_types),
_deps=[(columns_expr, callback)])
def _get_pivot_table_sub_expr_without_columns(self, expr):
def get_agg(field, agg_func, agg_func_name, fill_value):
if isinstance(agg_func, six.string_types):
aggregated = getattr(field, agg_func)()
else:
aggregated = field.agg(agg_func)
if fill_value is not None:
aggregated.fillna(fill_value)
return aggregated.rename('{0}_{1}'.format(field.name, agg_func_name))
grouped = expr.input.groupby(expr._group)
aggs = []
for agg_func, agg_func_name in zip(expr._agg_func, expr._agg_func_names):
for value in expr._values:
agg = get_agg(value, agg_func, agg_func_name, expr.fill_value)
aggs.append(agg)
return grouped.aggregate(aggs, sort_by_name=False)
def _get_pivot_table_sub_expr_with_columns(self, expr):
columns_expr = expr.input.distinct([c.copy() for c in expr._columns])
group_names = [g.name for g in expr._group]
group_types = [g.dtype for g in expr._group]
exprs = [expr]
def callback(result, new_expr):
expr = exprs[0]
columns = [r[0] for r in result]
names = list(group_names)
tps = list(group_types)
aggs = []
for agg_func_name, agg_func in zip(expr._agg_func_names, expr._agg_func):
for value_col in expr._values:
for col in columns:
base = '{0}_'.format(col) if col is not None else ''
name = '{0}{1}_{2}'.format(base, value_col.name, agg_func_name)
names.append(name)
tps.append(value_col.dtype)
col = col.item() if hasattr(col, 'item') else col
field = (expr._columns[0] == col).ifelse(
value_col, Scalar(_value_type=value_col.dtype))
if isinstance(agg_func, six.string_types):
agg = getattr(field, agg_func)()
else:
func = agg_func()
class ActualAgg(object):
def buffer(self):
return func.buffer()
def __call__(self, buffer, value):
if value is None:
return
func(buffer, value)
def merge(self, buffer, pbuffer):
func.merge(buffer, pbuffer)
def getvalue(self, buffer):
return func.getvalue(buffer)
agg = field.agg(ActualAgg)
if expr.fill_value is not None:
agg = agg.fillna(expr.fill_value)
agg = agg.rename(name)
aggs.append(agg)
new_expr._schema = Schema.from_lists(names, tps)
pivoted = expr.input.groupby(expr._group).aggregate(aggs, sort_by_name=False)
self._sub(new_expr, pivoted)
# trigger refresh of dynamic operations
refresh_dynamic(pivoted, self._dag)
return CollectionExpr(_schema=DynamicSchema.from_lists(group_names, group_types),
_deps=[(columns_expr, callback)])
def _get_pivot_table_sub_expr(self, expr):
if expr._columns is None:
return self._get_pivot_table_sub_expr_without_columns(expr)
else:
return self._get_pivot_table_sub_expr_with_columns(expr)
| 37.012422 | 107 | 0.529116 |
7943f595c674438a1cfec4698c62343f1a8c742b | 656 | py | Python | infrastructure/crypto_ml/utils/_utils.py | ATCUWgithub/CryptoML | 6010c5daf7d985217fa76197b29331457a60a306 | [
"MIT"
] | 1 | 2020-02-18T00:38:16.000Z | 2020-02-18T00:38:16.000Z | infrastructure/crypto_ml/utils/_utils.py | ATCUWgithub/CryptoML | 6010c5daf7d985217fa76197b29331457a60a306 | [
"MIT"
] | null | null | null | infrastructure/crypto_ml/utils/_utils.py | ATCUWgithub/CryptoML | 6010c5daf7d985217fa76197b29331457a60a306 | [
"MIT"
] | 1 | 2020-02-18T00:39:12.000Z | 2020-02-18T00:39:12.000Z | import json as _json
import datetime as _datetime
def parse_timestamp(dataset, time_format="%Y-%m-%dT%H:%M:%S.000Z"):
for d in dataset:
d["timestamp"] = _datetime.datetime.strptime(d["timestamp"], time_format)
return dataset
def load_json(filename, time_format="%Y-%m-%dT%H:%M:%S.000Z"):
dictionary = dict()
with open(filename) as f:
dictionary = _json.load(f)
return parse_timestamp(dictionary, time_format)
def generate_config(dataset):
start_idx = 0
end_idx = len(dataset) - 1
return {
"test_start": dataset[start_idx]["timestamp"],
"test_end": dataset[end_idx]["timestamp"]
}
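# --- Editor's sketch (not part of the original file): the intended call
# pattern. "prices.json" is a hypothetical file holding a list of records,
# each with an ISO-8601 "timestamp" field.
#
#     dataset = load_json("prices.json")    # timestamps become datetime objects
#     config = generate_config(dataset)     # {"test_start": ..., "test_end": ...}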
| 29.818182 | 81 | 0.660061 |
7943f67fbcf1be196d4617675369352657e99f99 | 5,806 | py | Python | houston/plugin/gcp.py | datasparq-intelligent-products/houston-python | c9248c1f121366ad258e8434caa6d2462d765059 | [
"MIT"
] | 7 | 2020-03-16T13:17:50.000Z | 2020-12-10T14:46:37.000Z | houston/plugin/gcp.py | datasparq-intelligent-products/houston-python | c9248c1f121366ad258e8434caa6d2462d765059 | [
"MIT"
] | null | null | null | houston/plugin/gcp.py | datasparq-intelligent-products/houston-python | c9248c1f121366ad258e8434caa6d2462d765059 | [
"MIT"
] | null | null | null | """Houston Utilities for Google Cloud Platform
PubSub utils:
Allows user to create Google Cloud Pub/Sub message according to the plan stage options, e.g.:
h.project = "my-project-1234" # set the Google Cloud project name in the client
res = h.end_stage("load-data", mission_id)
for next_stage in res['next']:
h.pubsub_trigger({'stage': next_stage, 'mission_id': mission_id}, topic=h.get_params(next_stage)['topic'])
--> sends a Pub/Sub message to the next tasks' topics. This assumes we have given each stage a 'topic' parameter.
Note: The topic name can either be provided as an argument or can be set as a parameter for the stage as 'topic' or
'psq', in which case it will be found automatically.
or:
h.project = "my-project-1234" # set the Google Cloud project name in the client
response = h.end_stage("load-data", mission_id)
h.call_stage_via_pubsub(response, mission_id) # assumes each stage has a 'psq' parameter which gives the topic name
"""
import base64
import json
import os
from google.cloud import pubsub_v1
from houston.client import Houston
class GCPHouston(Houston):
project = os.getenv("GCP_PROJECT", None)
topic = None
def pubsub_trigger(self, data, topic=None):
"""Sends a message to the provided Pub/Sub topic with the provided data payload.
:param dict data: content of the message to be sent. Should contain 'stage' and 'mission_id'. Can contain any
additional JSON serializable information.
:param string topic: Google Pub/Sub topic name, e.g. 'topic-for-stage'. This can either be provided here or be
set as a parameter for the stage as 'topic' or 'psq'.
"""
publisher_client = pubsub_v1.PublisherClient()
if self.project is None:
raise ValueError(
"Project is not set. Use GCPHouston.project = '[PROJECT]' "
"or set 'GCP_PROJECT' environment variable"
)
if 'plan' not in data:
data['plan'] = self.plan['name']
# try to find the topic name in the stage parameters
if topic is None:
if 'stage' in data:
stage_params = self.get_params(data['stage'])
if stage_params:
# use .get() so a stage that defines only one of 'topic'/'psq' does not raise KeyError
if stage_params.get('topic'):
topic = stage_params['topic']
elif stage_params.get('psq'):
topic = stage_params['psq']
if topic is None:
raise ValueError("Pub/Sub could not be determined. It can either be provided as an argument to "
"pubsub_trigger, or be a stage parameter with name 'topic' or 'psq'")
full_topic = "projects/{project}/topics/{topic}".format(
project=self.project, topic=topic
)
future = publisher_client.publish(topic=full_topic, data=json.dumps(data).encode("utf-8"))
future.result()
def call_stage_via_pubsub(self, response, mission_id):
"""Send stage details to Google Cloud Platform PubSub. Sends stage, mission_id, plan name as json in message
body, parameters as attributes
Message parameter must contain "psq" (PubSub Queue) key, this informs the function which topic is relevant
to the task
Blocks until PubSub message has been sent
:param dict response: response from Houston.end_stage
:param string mission_id: unique identifier of mission currently being completed
"""
publisher_client = pubsub_v1.PublisherClient()
if self.project is None:
raise ValueError(
"Project is not set. Use GCPHouston.project = '[PROJECT]' "
"or set 'GCP_PROJECT' environment variable"
)
# for all available tasks - trigger qs
for next_task in response["next"]:
if next_task not in response["params"]:
print(
"task: {next_task} does not have parameters, skipping".format(
next_task=next_task
)
)
continue
if "psq" not in response["params"][next_task].keys() and "topic" not in response["params"][next_task].keys():
print(
"task: {next_task} does not have psq topic set, skipping".format(
next_task=next_task
)
)
continue
task_parameters = response["params"][next_task]
if "psq" in task_parameters:
target_psq = task_parameters.pop("psq")
else:
target_psq = task_parameters.pop("topic")
data = json.dumps(
{"stage": next_task, "mission_id": mission_id, "plan": self.plan}
).encode("utf-8")
# make topic string
topic = "projects/{project}/topics/{topic}".format(
project=self.project, topic=target_psq
)
# json encode task param values
# useful for decoding in PubSub subscriber
for key, value in task_parameters.items():
task_parameters[key] = json.dumps(value)
if not task_parameters:
future = publisher_client.publish(topic=topic, data=data)
future.result()
else:
future = publisher_client.publish(
topic=topic, data=data, **task_parameters
)
future.result()
@staticmethod
def extract_stage_information(data):
"""Static method to extract stage information from sent PubSub message"""
return json.loads(base64.b64decode(data))
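# --- Editor's sketch (not part of the original file): a subscriber-side handler
# that pairs with call_stage_via_pubsub above. The (event, context) signature
# follows the background-function style Pub/Sub trigger, where event["data"] is
# the base64-encoded payload; treat that as an assumption, not Houston API.
def example_subscriber(event, context):
    stage_info = GCPHouston.extract_stage_information(event["data"])
    # stage_info holds {"stage": ..., "mission_id": ..., "plan": ...};
    # task parameters arrive separately as message attributes.
    return stage_info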
| 38.450331 | 121 | 0.59628 |
7943f76bf464889fd4c6f26155e3653298ba9583 | 5,235 | py | Python | src/bin/shipyard_airflow/shipyard_airflow/control/action/actions_steps_id_logs_api.py | openvdro/airship-shipyard | bae15294c534cf321f5c7ca37592dfa74c4ad7c2 | [
"Apache-2.0"
] | 12 | 2018-05-18T18:59:23.000Z | 2019-05-10T12:31:44.000Z | src/bin/shipyard_airflow/shipyard_airflow/control/action/actions_steps_id_logs_api.py | openvdro/airship-shipyard | bae15294c534cf321f5c7ca37592dfa74c4ad7c2 | [
"Apache-2.0"
] | 4 | 2021-07-28T14:36:57.000Z | 2022-03-22T16:39:23.000Z | src/bin/shipyard_airflow/shipyard_airflow/control/action/actions_steps_id_logs_api.py | openvdro/airship-shipyard | bae15294c534cf321f5c7ca37592dfa74c4ad7c2 | [
"Apache-2.0"
] | 9 | 2018-05-18T16:42:41.000Z | 2019-04-18T20:12:14.000Z | # Copyright 2018 AT&T Intellectual Property. All other rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import falcon
import logging
import os
import requests
from oslo_config import cfg
from shipyard_airflow import policy
from shipyard_airflow.control.base import BaseResource
from shipyard_airflow.control.helpers.action_helper import ActionsHelper
from shipyard_airflow.errors import ApiError
CONF = cfg.CONF
LOG = logging.getLogger(__name__)
class ActionsStepsLogsResource(BaseResource):
"""
The actions steps logs resource retrieves the logs for a particular
step of an action. By default, it will retrieve the logs from the
last attempt. Note that a workflow step can retry multiple times with
the names of the logs as 1.log, 2.log, 3.log, etc.
"""
@policy.ApiEnforcer(policy.GET_ACTION_STEP_LOGS)
def on_get(self, req, resp, **kwargs):
"""
Returns the logs of an action step
:returns: logs of an action step
"""
# We will set the kwarg to 'try_number' as 'try' is a
# reserved keyword
try_number = req.get_param_as_int('try',
required=False)
# Parse kwargs
action_id = ActionsHelper.parse_action_id(**kwargs)
step_id = ActionsHelper.parse_step_id(**kwargs)
# Retrieve logs for the action step
resp.body = self.get_action_step_logs(action_id,
step_id,
try_number)
resp.status = falcon.HTTP_200
def get_action_step_logs(self, action_id, step_id, try_number=None):
"""
Retrieve Airflow Logs
"""
# Set up actions helper
self.actions_helper = ActionsHelper(action_id=action_id)
# Retrieve step
step = self.actions_helper.get_step(step_id, try_number)
# Retrieve Dag ID
dag_id = step['dag_id']
# Generate Log Endpoint
log_endpoint = self.generate_log_endpoint(step,
dag_id,
step_id,
try_number)
LOG.debug("Log endpoint url is: %s", log_endpoint)
return self.retrieve_logs(log_endpoint)
def generate_log_endpoint(self, step, dag_id, step_id, try_number):
"""
Retrieve Log Endpoint
"""
# Construct worker pod URL
scheme = CONF.airflow.worker_endpoint_scheme
worker_pod_fqdn = step['hostname']
worker_pod_port = CONF.airflow.worker_port
worker_pod_url = "{}://{}:{}".format(scheme,
worker_pod_fqdn,
str(worker_pod_port))
# Define log_file
if try_number:
log_file = str(try_number) + '.log'
else:
log_file = str(step['try_number']) + '.log'
# Define dag_execution_date
dag_execution_date = (
self.actions_helper.get_formatted_dag_execution_date(step))
# Form logs query endpoint
log_endpoint = os.path.join(worker_pod_url,
'log',
dag_id,
step_id,
dag_execution_date,
log_file)
return log_endpoint
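# Editor's note (not in the original source): the resulting URL has the shape
#   {scheme}://{worker_hostname}:{worker_port}/log/{dag_id}/{step_id}/{execution_date}/{try_number}.log
# e.g. a hypothetical "http://airflow-worker-0:8793/log/deploy_site/drydock_build/2018-01-01T00:00:00/1.log".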
@staticmethod
def retrieve_logs(log_endpoint):
"""
Retrieve Logs
"""
LOG.debug("Retrieving Airflow logs...")
try:
response = requests.get(
log_endpoint,
timeout=(
CONF.requests_config.airflow_log_connect_timeout,
CONF.requests_config.airflow_log_read_timeout))
except requests.exceptions.RequestException as e:
LOG.exception(e)
raise ApiError(
title='Log retrieval error',
description='Exception happened during Airflow API request',
status=falcon.HTTP_500)
if response.status_code >= 400:
LOG.info('Airflow endpoint returned error status code %s, '
'content %s. Response code will be bubbled up',
response.status_code, response.text)
raise ApiError(
title='Log retrieval error',
description='Airflow endpoint returned error status code',
status=getattr(
falcon,
'HTTP_%d' % response.status_code,
falcon.HTTP_500))
return response.text
| 35.856164 | 76 | 0.580516 |