Dataset schema (column name, type, and observed range of values):

| column | type | values |
|---|---|---|
| blob_id | string | length 40-40 |
| directory_id | string | length 40-40 |
| path | string | length 3-616 |
| content_id | string | length 40-40 |
| detected_licenses | sequence | length 0-112 |
| license_type | string | 2 classes |
| repo_name | string | length 5-115 |
| snapshot_id | string | length 40-40 |
| revision_id | string | length 40-40 |
| branch_name | string | 777 classes |
| visit_date | timestamp[us] | 2015-08-06 10:31:46 to 2023-09-06 10:44:38 |
| revision_date | timestamp[us] | 1970-01-01 02:38:32 to 2037-05-03 13:00:00 |
| committer_date | timestamp[us] | 1970-01-01 02:38:32 to 2023-09-06 01:08:06 |
| github_id | int64 (nullable) | 4.92k to 681M |
| star_events_count | int64 | 0 to 209k |
| fork_events_count | int64 | 0 to 110k |
| gha_license_id | string | 22 classes |
| gha_event_created_at | timestamp[us] (nullable) | 2012-06-04 01:52:49 to 2023-09-14 21:59:50 |
| gha_created_at | timestamp[us] (nullable) | 2008-05-22 07:58:19 to 2023-08-21 12:35:19 |
| gha_language | string | 149 classes |
| src_encoding | string | 26 classes |
| language | string | 1 class |
| is_vendor | bool | 2 classes |
| is_generated | bool | 2 classes |
| length_bytes | int64 | 3 to 10.2M |
| extension | string | 188 classes |
| content | string | length 3 to 10.2M |
| authors | sequence | length 1-1 |
| author_id | string | length 1-132 |

Each record below is shown as path | repo_name | license (license_type) | language | length_bytes, followed by the file content.

/aliyun-python-sdk-eventbridge/aliyunsdkeventbridge/request/v20200401/ListApiDestinationsRequest.py | aliyun/aliyun-openapi-python-sdk | Apache-2.0 (permissive) | Python | 1,878 bytes

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
class ListApiDestinationsRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'eventbridge', '2020-04-01', 'ListApiDestinations')
self.set_method('POST')
def get_NextToken(self): # String
return self.get_query_params().get('NextToken')
def set_NextToken(self, NextToken): # String
self.add_query_param('NextToken', NextToken)
def get_ConnectionName(self): # String
return self.get_query_params().get('ConnectionName')
def set_ConnectionName(self, ConnectionName): # String
self.add_query_param('ConnectionName', ConnectionName)
def get_MaxResults(self): # Long
return self.get_query_params().get('MaxResults')
def set_MaxResults(self, MaxResults): # Long
self.add_query_param('MaxResults', MaxResults)
def get_ApiDestinationNamePrefix(self): # String
return self.get_query_params().get('ApiDestinationNamePrefix')
def set_ApiDestinationNamePrefix(self, ApiDestinationNamePrefix): # String
self.add_query_param('ApiDestinationNamePrefix', ApiDestinationNamePrefix)

/backend/code/archipelag/event/migrations/0005_auto_20170909_1004.py | socek/archipelag | no_license | Python | 529 bytes

# -*- coding: utf-8 -*-
# Generated by Django 1.11.5 on 2017-09-09 10:04
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('event', '0004_auto_20170909_0933'),
]
operations = [
migrations.AlterField(
model_name='event',
name='owner',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='ngo.NgoUser'),
),
]

/build/SubmissionHelpers/CMakeFiles/python_submissionControllerskubernetespycGen.py | alessio94/di-Higgs-analysis | no_license | Python | 332 bytes

import py_compile; py_compile.compile( '/afs/cern.ch/work/a/apizzini/private/2022/nov/CAFbbll/SubmissionHelpers/python/submissionControllers/kubernetes.py', cfile = '/afs/cern.ch/work/a/apizzini/private/2022/nov/CAFbbll/build/SubmissionHelpers/CMakeFiles/pythonBytecode/python/submissionControllers/kubernetes.pyc', doraise = True )

/fileIO_Includes.py | LeeHuangChen/2018_01_17_1_BlastAllToAll | no_license | Python | 1,999 bytes

import os
#A list of includes for file io
def createEmptyFiles(paths):
for path in paths:
f=open(path,"w")
f.close()
def testPath(path):
if not os.path.exists(path):
os.mkdir(path)
def appendFile(path,content):
f=open(path,"a")
f.write(content)
f.close()
def readFile(path):
f=open(path,"r")
content=f.read()
f.close()
return content
def processSeqFile(path):
read=readFile(path)
lines=read.split("\n")[1:]
processedList=[]
#lines longer then 20 assumed to be sequences
for line in lines:
if len(line)>20:
delims=[" "," "," "," "]
array=[]
for delim in delims:
if(len(line.split(delim))>1):
array=line.split(delim)
break
processedList.append(array[1])
return processedList
#generate all the directories needed for the given path
def generateDirectories(path):
folders=path.split("/")
curdir=""
for folder in folders:
curdir=os.path.join(curdir,folder)
if not os.path.exists(curdir):
os.mkdir(curdir)
def generateDirectoriesMult(paths):
for path in paths:
generateDirectories(path)
#processFusedGenes functions
#takes the protein, taxa, and sequence information and produces a FASTA-format string for that sequence
def toFASTA(prot, taxa, seq):
return ">"+prot+ " ["+taxa+"]\n"+seq+"\n\n"
#read the fusion event log and produce a dictionary to easily access the contents
def readFusionEventLog(path):
f=open(path,"r")
content=f.read()
f.close()
fusionDict={}
lines=content.split("\n")
for line in lines:
array=line.split("\t")
# if this line is not a header
if (not "#" in line) and (len(line)!=0):
fusionDict[int(array[0])]=array
return fusionDict
#A simple function to generate a name for each test case base on the parameters
def name(model,seqLen,numFamily, numFusionEvent,totalEvolutionTime, numGeneration):
name="M_"+str(model.replace("-",""))+"_SeqL_"+str(seqLen)+"_NFam_"+str(numFamily)+"_NFusions_"+str(numFusionEvent)+"_TEvo_"+str(totalEvolutionTime)+"_NGen_"+str(numGeneration)
    return name

/battles/challenges/leftover.py | heitorchang/learn-code | MIT (permissive) | Python | 541 bytes

def leftover(s):
u = s.upper()
su = sum(map(ord, u))
return sum(map(ord, s)) % su
def trymod(a, c):
solutions = []
for i in range(2, a):
if a % i == c:
solutions.append(i)
return solutions
def test():
testeql(leftover("antidisestablishmentarianism"), 27)
testeql(leftover("supercalifragilisticexpialidocious"), 27)
testeql(leftover("appetite"), 4)
testeql(leftover("hello"), 2)
testeql(leftover("cb"), 1)
testeql(leftover("2017"), 2)
testeql(leftover("watcher"), 1)

/src/fuzz_closure_3178.py | vrthra/ddset | no_license | Python | 177 bytes

import Fuzz as F
import closure_3178 as Main
if __name__ == '__main__':
F.main('./lang/js/grammar/javascript.fbjson', './lang/js/bugs/closure.3178.js', Main.my_predicate)

/665.py | higsyuhing/leetcode_easy | no_license | Python | 474 bytes

class Solution(object):
def checkPossibility(self, nums):
"""
:type nums: List[int]
:rtype: bool
"""
# wtf this problem...
p = None
for i in xrange(len(nums) - 1):
if nums[i] > nums[i+1]:
if p is not None:
return False
p = i
return (p is None or p == 0 or p == len(nums)-2 or
nums[p-1] <= nums[p+1] or nums[p] <= nums[p+2])

/misc/python/list_comp.py | zarkle/code_challenges | no_license | Python | 1,890 bytes

"""
List Comprehension practice
https://www.reddit.com/r/learnpython/comments/4d2yl7/i_need_list_comprehension_exercises_to_drill/
Tip: Practice mechanically translating a list comprehension into the equivalent for loop and back again.
"""
# Find all of the numbers from 1-1000 that are divisible by 7
seven = [num for num in range(1,1001) if num % 7 == 0]
print(seven)
# Find all of the numbers from 1-1000 that have a 3 in them
# Count the number of spaces in a string
string = 'sample string of text'
spaces = [char for char in string if char == ' ']
total = len(spaces)
# Remove all of the vowels in a string
vowels = 'aeiou'
string = 'a string with vowels'
no_vowels = [char for char in string if char not in vowels]
# Find all of the words in a string that are less than 4 letters
short = [word for word in string.split() if len(word) < 4]
# Challenge (a sketch of the first two is given below):
# Use a dictionary comprehension to count the length of each word in a sentence.
# Use a nested list comprehension to find all of the numbers from 1-1000 that are divisible by any single digit besides 1 (2-9)
# For all the numbers 1-1000, use a nested list/dictionary comprehension to find the highest single digit any of the numbers is divisible by
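# One possible sketch for the first two challenges above (the sample sentence
# is an arbitrary choice; the third challenge is left as stated):
challenge_sentence = "the quick brown fox jumps over the lazy dog"
word_length_by_word = {word: len(word) for word in challenge_sentence.split()}
# any() over an inner generator; a literal nested list comprehension works too
divisible_by_single_digit = [num for num in range(1, 1001)
                             if any(num % digit == 0 for digit in range(2, 10))]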
"""
From: http://www.learnpython.org/en/List_Comprehensions
"""
# create a list of integers which specify the length of each word in a certain sentence, but only if the word is not the word "the".
# (the sentence has to be defined before the comprehension can use it)
sentence = "the quick brown fox jumps over the lazy dog"
lengths = [len(word) for word in sentence.split() if word != 'the']
print(lengths)
# long way
words = sentence.split()
word_lengths = []
for word in words:
if word != "the":
word_lengths.append(len(word))
print(words)
print(word_lengths)
# create a new list called "newlist" out of the list "numbers", which contains only the positive numbers from the list, as integers.
numbers = [34.6, -203.4, 44.9, 68.3, -12.2, 44.6, 12.7]  # example input so the line below runs
newlist = [int(num) for num in numbers if num > 0]
print(newlist)

/solutions_python/Problem_75/166.py | dr-dos-ok/Code_Jam_Webscraper | no_license | Python | 1,578 bytes

INPUT_FILE = r'C:\Downloads\FromFirefox\B-large.in'
OUTPUT_FILE = r'C:\Users\Assaf\Fun\codeJam\B-large.out'
inputFile = file(INPUT_FILE, 'rb')
numQuestions = int(inputFile.readline())
outputFile = file(OUTPUT_FILE, 'wb')
def solveQuestion(combain, disappear, elements):
magicka = ['\x00']
for element in elements:
magicka.append(element)
pair = magicka[-2] + magicka[-1]
while pair in combain:
magicka = magicka[:-2] + [combain[pair]]
pair = magicka[-2] + magicka[-1]
for d in disappear:
if d[0] in magicka and d[1] in magicka:
magicka = ['\x00']
return `magicka[1:]`.replace("'", '').replace('"', '')
for q in xrange(numQuestions):
outputFile.write("Case #%d: " % (q+1))
line = inputFile.readline().replace('\r', '').replace('\n', '').replace('\t', ' ').split(' ')
C = int(line[0])
line = line[1:]
combain = {}
for i in xrange(C):
combain[line[0][:2]] = line[0][2]
combain[line[0][:2][::-1]] = line[0][2]
line = line[1:]
D = int(line[0])
line = line[1:]
disappear = []
for i in xrange(D):
disappear.append((line[0][0], line[0][1]))
line = line[1:]
N = int(line[0])
line = line[1:]
if len(line[0]) != N:
raise Exception("Input error at N")
result = solveQuestion(combain, disappear, line[0])
outputFile.write(result)
outputFile.write("\n")
outputFile.close()
inputFile.close()
# print file(OUTPUT_FILE, 'rb').read()

/catalog/api_router.py | grogsy/local-library | no_license | Python | 205 bytes

from rest_framework.routers import DefaultRouter
from .api_views import AuthorViewSet, BookViewSet
router = DefaultRouter()
router.register('authors', AuthorViewSet)
router.register('books', BookViewSet)

/Ampere-Law-Example.py | buckees/ICP-field-solver | no_license | Python | 3,119 bytes

# -*- coding: utf-8 -*-
"""
ICP Field Solver
Ampere's Law
"""
import numpy as np
from math import pi
import matplotlib.pyplot as plt
from matplotlib import colors, ticker, cm
from Constants import MU0
from Mesh import MESHGRID
#set infinite-long wire to position (wx, wy) with infinitesimal radius
I = 1.0 # wire current in A
# According to Ampere's law in integral form
# B(r|r>r0) = mu0*I/(2*pi*r)
#The earth's magnetic field is about 0.5 gauss.
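# Quick sanity check of the formula above (illustrative only, not used by the
# solver below; it assumes Constants.MU0 is the vacuum permeability 4*pi*1e-7):
# at r = 1 m from a 1 A wire, B = MU0*I/(2*pi*r) = 2e-7 T = 2e-3 gauss,
# i.e. far weaker than the earth's ~0.5 gauss quoted above (1 T = 1e4 gauss).
print('B at 1 m from a 1 A wire: %.1e T' % (MU0*1.0/(2.0*pi*1.0)))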
width, height, nx, ny = 10.0, 10.0, 101, 101
mesh = MESHGRID(width, height, nx, ny)
mesh.init_mesh()
def calc_bf(position, I):
dist, vecx, vecy = mesh.calc_dist(position, I)
bf = MU0*I/(2.0*pi)
dist_min = min(width/(nx-1), height/(ny-1))
bf_max = np.ones_like(dist)*bf/dist_min
bf = np.divide(bf, dist, where=dist>dist_min, out=bf_max)
bf = abs(bf)
print('B field min = %.2e max = %.2e' % (bf.min(), bf.max()))
# fig, ax = plt.subplots(figsize=(3,3))
# ax.plot(mesh.posx, mesh.posy, '.k',
# marker='.', markersize=3,
# color='black', linestyle='None')
#fmt = ticker.LogFormatterMathtext()
#fmt.create_dummy_axis()
#cs = ax.contour(mesh.posx, mesh.posy, bf,
# locator=ticker.LogLocator(subs=range(1,6)),
# cmap=cm.plasma)
# Alternatively, you can manually set the levels
# and the norm:
# lev_exp = np.arange(np.floor(np.log10(bf.min())),
# np.ceil(np.log10(bf.max())), 0.1)
# levs = np.power(10, lev_exp)
# cs = ax.contour(mesh.posx, mesh.posy, bf, levs, norm=colors.LogNorm())
#ax.clabel(cs, cs.levels)
# fig.colorbar(cs)
# ax.quiver(mesh.posx, mesh.posy, vecx, vecy)
# ax.plot(position[0], position[1],
# color='red', marker='o', markersize=15)
return bf, vecx, vecy
pos1, pos2 = (-1.5, 0.0), (1.5, 0.0)
bf1, vx1, vy1 = calc_bf(pos1, I)
bf2, vx2, vy2 = calc_bf(pos2, -I)
vx = np.multiply(bf1, vx1) + np.multiply(bf2, vx2)
vy = np.multiply(bf1, vy1) + np.multiply(bf2, vy2)
bf = np.sqrt(np.power(vx, 2) + np.power(vy, 2))
print('B field min = %.2e max = %.2e' % (bf[np.nonzero(bf)].min(),
bf.max()))
vx, vy = np.divide(vx, bf), np.divide(vy, bf)
fig, ax = plt.subplots(figsize=(3,3))
ax.plot(mesh.posx, mesh.posy, '.k',
marker='.', markersize=3,
color='black', linestyle='None')
# Alternatively, you can manually set the levels
# and the norm:
lev_exp = np.arange(np.floor(np.log10(bf[np.nonzero(bf)].min())),
np.ceil(np.log10(bf.max())), 0.05)
levs = np.power(10, lev_exp)
#levs = np.linspace(bf.min(), bf.max(), 50)
cs = ax.contour(mesh.posx, mesh.posy, bf, levs, norm=colors.LogNorm())
#ax.clabel(cs, cs.levels)
#fig.colorbar(cs)
#ax.quiver(mesh.posx, mesh.posy, vx, vy)
#ax.plot(pos1[0], pos1[1],
# color='red', marker='o', markersize=15)
#ax.plot(pos2[0], pos2[1],
# color='red', marker='o', markersize=15)
fig, ax = plt.subplots(2, 1, figsize=(6,6))
ax[0].plot(mesh.posx[int(nx/2), :], bf[int(nx/2), :])
ax[1].plot(mesh.posy[:, int(ny/2)], bf[:, int(ny/2)])

/src/tests/ftest/container/query_attribute.py | minmingzhu/daos | BSD-2-Clause-Patent, BSD-2-Clause (permissive) | Python | 7,045 bytes

#!/usr/bin/python
"""
(C) Copyright 2020-2022 Intel Corporation.
SPDX-License-Identifier: BSD-2-Clause-Patent
"""
from apricot import TestWithServers
import base64
class ContainerQueryAttributeTest(TestWithServers):
# pylint: disable=anomalous-backslash-in-string
"""Test class for daos container query and attribute tests.
Test Class Description:
Query test: Create a pool, create a container, and call daos container
query. From the output, verify the pool/container UUID matches the one
that was returned when creating the pool/container.
Attribute test:
1. Prepare 7 types of strings; alphabets, numbers, special characters,
etc.
2. Create attributes with each of these 7 types in attr and value;
i.e., 14 total attributes are created.
3. Call get-attr for each of the 14 attrs and verify the returned
values.
4. Call list-attrs and verify the returned attrs.
:avocado: recursive
"""
def __init__(self, *args, **kwargs):
"""Initialize a ContainerQueryAttribute object."""
super().__init__(*args, **kwargs)
self.expected_cont_uuid = None
self.daos_cmd = None
def test_container_query_attr(self):
"""JIRA ID: DAOS-4640
Test Description:
Test daos container query and attribute commands as described
above.
Use Cases:
Test container query, set-attr, get-attr, and list-attrs.
:avocado: tags=all,full_regression
:avocado: tags=small
:avocado: tags=container,cont_query_attr
"""
# Create a pool and a container.
self.add_pool()
self.add_container(pool=self.pool)
self.daos_cmd = self.get_daos_command()
# Call daos container query, obtain pool and container UUID, and
# compare against those used when creating the pool and the container.
kwargs = {
"pool": self.pool.uuid,
"cont": self.container.uuid
}
data = self.daos_cmd.container_query(**kwargs)['response']
actual_pool_uuid = data['pool_uuid']
actual_cont_uuid = data['container_uuid']
self.assertEqual(actual_pool_uuid, self.pool.uuid.lower())
self.assertEqual(actual_cont_uuid, self.container.uuid.lower())
# Test container set-attr, get-attr, and list-attrs with different
# types of characters.
test_strings = [
"abcd",
"1234",
"abc123",
"abcdefghijabcdefghijabcdefghijabcdefghijabcdefghijabcdefghij",
# Characters that don't require backslash. The backslashes in here
# are required for the code to work, but not by daos.
"~@#$%^*-=_+[]\{\}:/?,.", # noqa: W605
# Characters that require backslash.
"\`\&\(\)\\\;\\'\\\"\!\<\>", # noqa: W605
# Characters that include space.
"\"aa bb\""]
# We added backslashes for the code to work, but get-attr output
# does not contain them, so prepare the expected output that does not
# include backslashes.
escape_to_not = {}
escape_to_not[test_strings[-3]] = "~@#$%^*-=_+[]{}:/?,."
# We still need a backslash before the double quote for the code to
# work.
escape_to_not[test_strings[-2]] = "`&()\;'\"!<>" # noqa: W605
escape_to_not[test_strings[-1]] = "aa bb"
        # Prepare attr-value pairs. Use the test_strings in value for the first
# 7 and in attr for the next 7.
attr_values = []
j = 0
for i in range(2):
for test_string in test_strings:
if i == 0:
attr_values.append(["attr" + str(j), test_string])
else:
attr_values.append([test_string, "attr" + str(j)])
j += 1
# Set and verify get-attr.
errors = []
expected_attrs = []
for attr_value in attr_values:
self.daos_cmd.container_set_attr(
pool=actual_pool_uuid, cont=actual_cont_uuid,
attr=attr_value[0], val=attr_value[1])
kwargs["attr"] = attr_value[0]
data = self.daos_cmd.container_get_attr(**kwargs)['response']
actual_val = base64.b64decode(data["value"]).decode()
if attr_value[1] in escape_to_not:
# Special character string.
if actual_val != escape_to_not[attr_value[1]]:
errors.append(
"Unexpected output for get_attr: {} != {}\n".format(
actual_val, escape_to_not[attr_value[1]]))
else:
# Standard character string.
if actual_val != attr_value[1]:
errors.append(
"Unexpected output for get_attr: {} != {}\n".format(
actual_val, attr_value[1]))
# Collect comparable attr as a preparation of list-attrs test.
if attr_value[0] in escape_to_not:
expected_attrs.append(escape_to_not[attr_value[0]])
else:
expected_attrs.append(attr_value[0])
self.assertEqual(len(errors), 0, "; ".join(errors))
# Verify that attr-lists works with test_strings.
expected_attrs.sort()
kwargs = {
"pool": actual_pool_uuid,
"cont": actual_cont_uuid
}
data = self.daos_cmd.container_list_attrs(**kwargs)['response']
actual_attrs = list(data)
actual_attrs.sort()
self.log.debug(str(actual_attrs))
self.assertEqual(actual_attrs, expected_attrs)
def test_list_attrs_long(self):
"""JIRA ID: DAOS-4640
Test Description:
Set many attributes and verify list-attrs works.
Use Cases:
Test daos container list-attrs with 50 attributes.
:avocado: tags=all,full_regression
:avocado: tags=small
:avocado: tags=container,cont_list_attrs
"""
# Create a pool and a container.
self.add_pool()
self.add_container(pool=self.pool)
self.daos_cmd = self.get_daos_command()
expected_attrs = []
vals = []
for i in range(50):
expected_attrs.append("attr" + str(i))
vals.append("val" + str(i))
for expected_attr, val in zip(expected_attrs, vals):
_ = self.daos_cmd.container_set_attr(
pool=self.pool.uuid, cont=self.container.uuid,
attr=expected_attr, val=val)
expected_attrs.sort()
kwargs = {
"pool": self.pool.uuid,
"cont": self.container.uuid
}
data = self.daos_cmd.container_list_attrs(**kwargs)['response']
actual_attrs = list(data)
actual_attrs.sort()
self.assertEqual(
expected_attrs, actual_attrs, "Unexpected output from list_attrs")

/stock/forms.py | siuols/Inventory | no_license | Python | 2,963 bytes

from django import forms
from .models import Brand,Category,Course,Customer,Release,Office,Item
from django.contrib.auth import get_user_model
from django.core.validators import RegexValidator
from django.utils.translation import ugettext, ugettext_lazy as _
User = get_user_model()
class ItemForm(forms.ModelForm):
class Meta:
model = Item
fields = [
'brand',
'category',
'number',
'name',
'description',
'quantity',
'unit_cost',
]
class BrandForm(forms.ModelForm):
class Meta:
model = Brand
fields = [
'name'
]
class CategoryForm(forms.ModelForm):
class Meta:
model = Category
fields = [
'name'
]
class OfficeForm(forms.ModelForm):
class Meta:
model = Office
fields = [
'name'
]
class CourseForm(forms.ModelForm):
class Meta:
model = Course
fields = [
'code'
]
class CustomerForm(forms.ModelForm):
class Meta:
model = Customer
fields = [
'id_number',
'last_name',
'first_name',
'middle_name',
'course',
'year',
'status'
]
class ReleaseForm(forms.ModelForm):
class Meta:
model = Release
fields = [
'id_number',
'number',
'quantity',
'office'
]
class RegistrationForm(forms.ModelForm):
password1 = forms.CharField(label='Password', min_length=8, widget=forms.PasswordInput, validators=[RegexValidator('^[-a-zA-Z0-9_]+$', message="Password should be a combination of Alphabets and Numbers")])
password2 = forms.CharField(label='Password confirmation', widget=forms.PasswordInput)
class Meta:
model = User
fields = (
'username',
'email'
)
def clean_email(self):
email = self.cleaned_data.get("email")
qs = User.objects.filter(email__iexact=email)
if qs.exists():
raise forms.ValidationError("Cannot use this email. It's already register")
return email
def clean_username(self):
username = self.cleaned_data.get("username")
qs = User.objects.filter(username__iexact=username)
if qs.exists():
raise forms.ValidationError("Username is already register")
return username
def clean_password2(self):
# Check that the two password entries match
password1 = self.cleaned_data.get("password1")
password2 = self.cleaned_data.get("password2")
if password1 and password2 and password1 != password2:
raise forms.ValidationError("Passwords don't match")
return password2
def save(self, commit=True):
#Save the provided password in hashed format
user = super(RegistrationForm, self).save(commit=False)
user.set_password(self.cleaned_data["password1"])
user.is_active = True
if commit:
user.save()
        return user

/lib/clientProcessing.py | schollz/splitthework | MIT (permissive) | Python | 411 bytes

import os
import json  # json.load() is used below
from lib.compress import *
def processWork(data):
strData = []
for a in data:
strData.append(str(a))
os.system("sudo python3 downloadPages.py " + " ".join(strData))
results = json.load(open('downloadedPages.json','r'))
# print(sys.getsizeof(json.dumps(results)))
dataCompressed = compress(results)
# print(sys.getsizeof(dataCompressed))
return dataCompressed

/player.py | ErickMwazonga/tic-tac-toe | no_license | Python | 1,027 bytes

import math
import random
class Player:
def __init__(self, letter):
self.letter = letter
def get_move(self, game):
pass
class RandomComputerPlayer(Player):
def __init__(self, letter):
super().__init__(letter)
def get_move(self, game):
square = random.choice(game.available_moves())
return square
class HumanPlayer(Player):
def __init__(self, letter):
super().__init__(letter)
def get_move(self, game):
valid_spot = False
val = None
while not valid_spot:
spot = input(f"{self.letter} turn. Choose spot(1-9): ")
if spot == 'Q' or spot == 'q':
quit()
try:
spot = int(spot)
val = game.board_mapping.get(spot)
if val not in game.available_moves():
raise ValueError
valid_spot = True
except ValueError:
print('Invalid spot. Try Again.')
return val

/examples/django/urlディスパッチャ/pathconverterの使い方/project/project/urls.py | FujitaHirotaka/djangoruler3 | no_license | Python | 369 bytes

from django.contrib import admin
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
path('admin/', admin.site.urls),
path('app/', include('app.urls')),
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL,
                          document_root=settings.MEDIA_ROOT)

/Python_codes/p02571/s951856680.py | Aasthaengg/IBMdataset | no_license | Python | 265 bytes

from sys import stdin
input = stdin.readline
s = input().strip()
t = input().strip()
ns = len(s)
nt = len(t)
def cnt(a,b):
return sum(1 for aa,bb in zip(a,b) if aa != bb)
res = nt
for i in range(ns - nt + 1):
res = min(res,cnt(s[i:i+nt],t))
print(res)

/tests_python/test_tracing_on_top_level.py | fabioz/PyDev.Debugger | Apache-2.0, EPL-1.0 (permissive) | Python | 11,578 bytes

from pydevd import PyDB
import pytest
from tests_python.debugger_unittest import IS_CPYTHON
import threading
DEBUG = False
class DummyTopLevelFrame(object):
__slots__ = ['f_code', 'f_back', 'f_lineno', 'f_trace']
def __init__(self, method):
self.f_code = method.__code__
self.f_back = None
self.f_lineno = method.__code__.co_firstlineno
class DummyWriter(object):
__slots__ = ['commands', 'command_meanings']
def __init__(self):
self.commands = []
self.command_meanings = []
def add_command(self, cmd):
from _pydevd_bundle.pydevd_comm import ID_TO_MEANING
meaning = ID_TO_MEANING[str(cmd.id)]
if DEBUG:
print(meaning)
self.command_meanings.append(meaning)
if DEBUG:
print(cmd._as_bytes.decode('utf-8'))
self.commands.append(cmd)
class DummyPyDb(PyDB):
def __init__(self):
PyDB.__init__(self, set_as_global=False)
def do_wait_suspend(
self, thread, frame, event, arg, *args, **kwargs):
from _pydevd_bundle.pydevd_constants import STATE_RUN
info = thread.additional_info
info.pydev_original_step_cmd = -1
info.pydev_step_cmd = -1
info.pydev_step_stop = None
info.pydev_state = STATE_RUN
return PyDB.do_wait_suspend(self, thread, frame, event, arg, *args, **kwargs)
class _TraceTopLevel(object):
def __init__(self):
self.py_db = DummyPyDb()
self.py_db.writer = DummyWriter()
def set_target_func(self, target_func):
self.frame = DummyTopLevelFrame(target_func)
self.target_func = target_func
def get_exception_arg(self):
import sys
try:
raise AssertionError()
except:
arg = sys.exc_info()
return arg
def create_add_exception_breakpoint_with_policy(
self, exception, notify_on_handled_exceptions, notify_on_unhandled_exceptions, ignore_libraries):
return '\t'.join(str(x) for x in [
exception, notify_on_handled_exceptions, notify_on_unhandled_exceptions, ignore_libraries])
def add_unhandled_exception_breakpoint(self):
from _pydevd_bundle.pydevd_process_net_command import process_net_command
from tests_python.debugger_unittest import CMD_ADD_EXCEPTION_BREAK
for exc_name in ('AssertionError', 'RuntimeError'):
process_net_command(
self.py_db,
CMD_ADD_EXCEPTION_BREAK,
1,
self.create_add_exception_breakpoint_with_policy(exc_name, '0', '1', '0'),
)
def assert_last_commands(self, *commands):
assert self.py_db.writer.command_meanings[-len(commands):] == list(commands)
def assert_no_commands(self, *commands):
for command in commands:
assert command not in self.py_db.writer.command_meanings
def trace_dispatch(self, event, arg):
from _pydevd_bundle import pydevd_trace_dispatch_regular
self.new_trace_func = pydevd_trace_dispatch_regular.trace_dispatch(self.py_db, self.frame, event, arg)
return self.new_trace_func
def call_trace_dispatch(self, line):
self.frame.f_lineno = line
return self.trace_dispatch('call', None)
def exception_trace_dispatch(self, line, arg):
self.frame.f_lineno = line
self.new_trace_func = self.new_trace_func(self.frame, 'exception', arg)
def return_trace_dispatch(self, line):
self.frame.f_lineno = line
self.new_trace_func = self.new_trace_func(self.frame, 'return', None)
def assert_paused(self):
self.assert_last_commands('CMD_THREAD_SUSPEND', 'CMD_THREAD_RUN')
def assert_not_paused(self):
self.assert_no_commands('CMD_THREAD_SUSPEND', 'CMD_THREAD_RUN')
@pytest.yield_fixture
def trace_top_level():
# Note: we trace with a dummy frame with no f_back to simulate the issue in a remote attach.
yield _TraceTopLevel()
threading.current_thread().additional_info = None
@pytest.fixture
def trace_top_level_unhandled(trace_top_level):
trace_top_level.add_unhandled_exception_breakpoint()
return trace_top_level
_expected_functions_to_test = 0
def mark_handled(func):
global _expected_functions_to_test
_expected_functions_to_test += 1
func.__handled__ = True
return func
def mark_unhandled(func):
global _expected_functions_to_test
_expected_functions_to_test += 1
func.__handled__ = False
return func
#------------------------------------------------------------------------------------------- Handled
@mark_handled
def raise_handled_exception():
try:
raise AssertionError()
except:
pass
@mark_handled
def raise_handled_exception2():
try:
raise AssertionError()
except AssertionError:
pass
@mark_handled
def raise_handled_exception3():
try:
try:
raise AssertionError()
except RuntimeError:
pass
except AssertionError:
pass
@mark_handled
def raise_handled_exception3a():
try:
try:
raise AssertionError()
except AssertionError:
pass
except RuntimeError:
pass
@mark_handled
def raise_handled_exception4():
try:
try:
raise AssertionError()
except RuntimeError:
pass
except (
RuntimeError,
AssertionError):
pass
@mark_handled
def raise_handled():
try:
try:
raise AssertionError()
except RuntimeError:
pass
except (
RuntimeError,
AssertionError):
pass
@mark_handled
def raise_handled2():
try:
raise AssertionError()
except (
RuntimeError,
AssertionError):
pass
try:
raise RuntimeError()
except (
RuntimeError,
AssertionError):
pass
@mark_handled
def raise_handled9():
for i in range(2):
try:
raise AssertionError()
except AssertionError:
if i == 1:
try:
raise
except:
pass
@mark_handled
def raise_handled10():
for i in range(2):
try:
raise AssertionError()
except AssertionError:
if i == 1:
try:
raise
except:
pass
_foo = 10
#----------------------------------------------------------------------------------------- Unhandled
@mark_unhandled
def raise_unhandled_exception():
raise AssertionError()
@mark_unhandled
def raise_unhandled_exception_not_in_except_clause():
try:
raise AssertionError()
except RuntimeError:
pass
@mark_unhandled
def raise_unhandled():
try:
try:
raise AssertionError()
except RuntimeError:
pass
except (
RuntimeError,
AssertionError):
raise
@mark_unhandled
def raise_unhandled2():
try:
raise AssertionError()
except AssertionError:
pass
raise AssertionError()
@mark_unhandled
def raise_unhandled3():
try:
raise AssertionError()
except AssertionError:
raise AssertionError()
@mark_unhandled
def raise_unhandled4():
try:
raise AssertionError()
finally:
_a = 10
@mark_unhandled
def raise_unhandled5():
try:
raise AssertionError()
finally:
raise RuntimeError()
@mark_unhandled
def raise_unhandled6():
try:
raise AssertionError()
finally:
raise RuntimeError(
'in another'
'line'
)
@mark_unhandled
def raise_unhandled7():
try:
raise AssertionError()
except AssertionError:
try:
raise AssertionError()
except RuntimeError:
pass
@mark_unhandled
def raise_unhandled8():
for i in range(2):
def get_exc_to_treat():
if i == 0:
return AssertionError
return RuntimeError
try:
raise AssertionError()
except get_exc_to_treat():
pass
@mark_unhandled
def raise_unhandled9():
for i in range(2):
def get_exc_to_treat():
if i == 0:
return AssertionError
return RuntimeError
try:
raise AssertionError()
except get_exc_to_treat():
try:
raise
except:
pass
@mark_unhandled
def raise_unhandled10():
for i in range(2):
try:
raise AssertionError()
except AssertionError:
if i == 1:
try:
raise
except RuntimeError:
pass
@mark_unhandled
def raise_unhandled11():
try:
raise_unhandled10()
finally:
if True:
pass
@mark_unhandled
def raise_unhandled12():
try:
raise AssertionError()
except:
pass
try:
raise AssertionError()
finally:
if True:
pass
@mark_unhandled
def reraise_handled_exception():
try:
raise AssertionError() # Should be considered unhandled (because it's reraised).
except:
raise
def _collect_events(func):
collected = []
def events_collector(frame, event, arg):
if frame.f_code.co_name == func.__name__:
collected.append((event, frame.f_lineno, arg))
return events_collector
import sys
sys.settrace(events_collector)
try:
func()
except:
import traceback;traceback.print_exc()
finally:
sys.settrace(None)
return collected
def _replay_events(collected, trace_top_level_unhandled):
for event, lineno, arg in collected:
if event == 'call':
# Notify only unhandled
new_trace_func = trace_top_level_unhandled.call_trace_dispatch(lineno)
# Check that it's dealing with the top-level event.
if hasattr(new_trace_func, 'get_method_object'):
new_trace_func = new_trace_func.get_method_object()
assert new_trace_func.__name__ == 'trace_dispatch_and_unhandled_exceptions'
elif event == 'exception':
trace_top_level_unhandled.exception_trace_dispatch(lineno, arg)
elif event == 'return':
trace_top_level_unhandled.return_trace_dispatch(lineno)
elif event == 'line':
pass
else:
raise AssertionError('Unexpected: %s' % (event,))
def _collect_target_functions():
# return [raise_unhandled10]
ret = []
for _key, method in sorted(dict(globals()).items()):
if hasattr(method, '__handled__'):
ret.append(method)
assert len(ret) == _expected_functions_to_test
return ret
@pytest.mark.skipif(not IS_CPYTHON, reason='try..except info only available on CPython')
@pytest.mark.parametrize("func", _collect_target_functions())
def test_tracing_on_top_level_unhandled(trace_top_level_unhandled, func):
trace_top_level_unhandled.set_target_func(func)
collected_events = _collect_events(func)
# print([(x[0], x[1], x[2].__class__.__name__) for x in collected_events])
_replay_events(collected_events, trace_top_level_unhandled)
if func.__handled__:
trace_top_level_unhandled.assert_not_paused() # handled exception
else:
trace_top_level_unhandled.assert_paused()

/python/clips_pattern/pattern-master/pattern/text/tree.py | LiuFang816/SALSTM_py_data | no_license | Python | 72,048 bytes

#### PATTERN | EN | PARSE TREE #####################################################################
# Copyright (c) 2010 University of Antwerp, Belgium
# Author: Tom De Smedt <[email protected]>
# License: BSD (see LICENSE.txt for details).
# http://www.clips.ua.ac.be/pages/pattern
####################################################################################################
# Text and Sentence objects to traverse words and chunks in parsed text.
# from pattern.en import parsetree
# for sentence in parsetree("The cat sat on the mat."):
# for chunk in sentence.chunks:
# for word in chunk.words:
# print(word.string, word.tag, word.lemma)
# Terminology:
# - part-of-speech: the role that a word plays in a sentence: noun (NN), verb (VB), adjective, ...
# - sentence: a unit of language, with a subject (e.g., "the cat") and a predicate ("jumped").
# - token: a word in a sentence with a part-of-speech tag (e.g., "jump/VB" or "jump/NN").
# - word: a string of characters that expresses a meaningful concept (e.g., "cat").
# - lemma: the canonical word form ("jumped" => "jump").
# - lexeme: the set of word forms ("jump", "jumps", "jumping", ...)
# - chunk: a phrase, group of words that express a single thought (e.g., "the cat").
# - subject: the phrase that the sentence is about, usually a noun phrase.
# - predicate: the remainder of the sentence tells us what the subject does (jump).
# - object: the phrase that is affected by the action (the cat jumped [the mouse]").
# - preposition: temporal, spatial or logical relationship ("the cat jumped [on the table]").
# - anchor: the chunk to which the preposition is attached:
# "the cat eats its snackerel with vigor" => eat with vigor?
# OR => vigorous snackerel?
# The Text and Sentence classes are containers:
# no parsing functionality should be added to them.
try:
from itertools import chain
from itertools import izip
except:
izip = zip # Python 3
try:
from config import SLASH
from config import WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA
MBSP = True # Memory-Based Shallow Parser for Python.
except:
SLASH, WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA = \
"&slash;", "word", "part-of-speech", "chunk", "preposition", "relation", "anchor", "lemma"
MBSP = False
# B- marks the start of a chunk: the/DT/B-NP cat/NN/I-NP
# I- words are inside a chunk.
# O- words are outside a chunk (punctuation etc.).
IOB, BEGIN, INSIDE, OUTSIDE = "IOB", "B", "I", "O"
# -SBJ marks subjects: the/DT/B-NP-SBJ cat/NN/I-NP-SBJ
# -OBJ marks objects.
ROLE = "role"
SLASH0 = SLASH[0]
### LIST FUNCTIONS #################################################################################
def find(function, iterable):
""" Returns the first item in the list for which function(item) is True, None otherwise.
"""
for x in iterable:
if function(x) == True:
return x
def intersects(iterable1, iterable2):
""" Returns True if the given lists have at least one item in common.
"""
return find(lambda x: x in iterable1, iterable2) is not None
def unique(iterable):
""" Returns a list copy in which each item occurs only once (in-order).
"""
seen = set()
return [x for x in iterable if x not in seen and not seen.add(x)]
_zip = zip
def zip(*args, **kwargs):
""" Returns a list of tuples, where the i-th tuple contains the i-th element
from each of the argument sequences or iterables (or default if too short).
"""
args = [list(iterable) for iterable in args]
n = max(map(len, args))
v = kwargs.get("default", None)
return _zip(*[i + [v] * (n - len(i)) for i in args])
def unzip(i, iterable):
""" Returns the item at the given index from inside each tuple in the list.
"""
return [x[i] for x in iterable]
class Map(list):
""" A stored imap() on a list.
The list is referenced instead of copied, and the items are mapped on-the-fly.
"""
def __init__(self, function=lambda x: x, items=[]):
self._f = function
self._a = items
@property
def items(self):
return self._a
def __repr__(self):
return repr(list(iter(self)))
def __getitem__(self, i):
return self._f(self._a[i])
def __len__(self):
return len(self._a)
def __iter__(self):
i = 0
while i < len(self._a):
yield self._f(self._a[i])
i += 1
### SENTENCE #######################################################################################
# The output of parse() is a slash-formatted string (e.g., "the/DT cat/NN"),
# so slashes in words themselves are encoded as &slash;
encode_entities = lambda string: string.replace("/", SLASH)
decode_entities = lambda string: string.replace(SLASH, "/")
#--- WORD ------------------------------------------------------------------------------------------
class Word(object):
def __init__(self, sentence, string, lemma=None, type=None, index=0):
""" A word in the sentence.
- lemma: base form of the word; "was" => "be".
- type: the part-of-speech tag; "NN" => a noun.
- chunk: the chunk (or phrase) this word belongs to.
- index: the index in the sentence.
"""
if not isinstance(string, unicode):
try: string = string.decode("utf-8") # ensure Unicode
except:
pass
self.sentence = sentence
self.index = index
self.string = string # "was"
self.lemma = lemma # "be"
self.type = type # VB
self.chunk = None # Chunk object this word belongs to (i.e., a VP).
self.pnp = None # PNP chunk object this word belongs to.
# word.chunk and word.pnp are set in chunk.append().
self._custom_tags = None # Tags object, created on request.
def copy(self, chunk=None, pnp=None):
w = Word(
self.sentence,
self.string,
self.lemma,
self.type,
self.index
)
w.chunk = chunk
w.pnp = pnp
if self._custom_tags:
w._custom_tags = Tags(w, items=self._custom_tags)
return w
def _get_tag(self):
return self.type
def _set_tag(self, v):
self.type = v
tag = pos = part_of_speech = property(_get_tag, _set_tag)
@property
def phrase(self):
return self.chunk
@property
def prepositional_phrase(self):
return self.pnp
prepositional_noun_phrase = prepositional_phrase
@property
def tags(self):
""" Yields a list of all the token tags as they appeared when the word was parsed.
For example: ["was", "VBD", "B-VP", "O", "VP-1", "A1", "be"]
"""
# See also. Sentence.__repr__().
ch, I,O,B = self.chunk, INSIDE+"-", OUTSIDE, BEGIN+"-"
tags = [OUTSIDE for i in range(len(self.sentence.token))]
for i, tag in enumerate(self.sentence.token): # Default: [WORD, POS, CHUNK, PNP, RELATION, ANCHOR, LEMMA]
if tag == WORD:
tags[i] = encode_entities(self.string)
elif tag == POS or tag == "pos" and self.type:
tags[i] = self.type
elif tag == CHUNK and ch and ch.type:
tags[i] = (self == ch[0] and B or I) + ch.type
elif tag == PNP and self.pnp:
tags[i] = (self == self.pnp[0] and B or I) + "PNP"
elif tag == REL and ch and len(ch.relations) > 0:
tags[i] = ["-".join([str(x) for x in [ch.type]+list(reversed(r)) if x]) for r in ch.relations]
tags[i] = "*".join(tags[i])
elif tag == ANCHOR and ch:
tags[i] = ch.anchor_id or OUTSIDE
elif tag == LEMMA:
tags[i] = encode_entities(self.lemma or "")
elif tag in self.custom_tags:
tags[i] = self.custom_tags.get(tag) or OUTSIDE
return tags
@property
def custom_tags(self):
if not self._custom_tags: self._custom_tags = Tags(self)
return self._custom_tags
def next(self, type=None):
""" Returns the next word in the sentence with the given type.
"""
i = self.index + 1
s = self.sentence
while i < len(s):
if type in (s[i].type, None):
return s[i]
i += 1
def previous(self, type=None):
""" Returns the next previous word in the sentence with the given type.
"""
i = self.index - 1
s = self.sentence
while i > 0:
if type in (s[i].type, None):
return s[i]
i -= 1
# User-defined tags are available as Word.[tag] attributes.
def __getattr__(self, tag):
d = self.__dict__.get("_custom_tags", None)
if d and tag in d:
return d[tag]
raise AttributeError("Word instance has no attribute '%s'" % tag)
# Word.string and unicode(Word) are Unicode strings.
# repr(Word) is a Python string (with Unicode characters encoded).
def __unicode__(self):
return self.string
def __repr__(self):
return "Word(%s)" % repr("%s/%s" % (
encode_entities(self.string),
self.type is not None and self.type or OUTSIDE))
def __eq__(self, word):
return id(self) == id(word)
def __ne__(self, word):
return id(self) != id(word)
class Tags(dict):
def __init__(self, word, items=[]):
""" A dictionary of custom word tags.
A word may be annotated with its part-of-speech tag (e.g., "cat/NN"),
phrase tag (e.g., "cat/NN/NP"), the prepositional noun phrase it is part of etc.
An example of an extra custom slot is its semantic type,
e.g., gene type, topic, and so on: "cat/NN/NP/genus_felis"
"""
if items:
dict.__init__(self, items)
self.word = word
def __setitem__(self, k, v):
# Ensure that the custom tag is also in Word.sentence.token,
# so that it is not forgotten when exporting or importing XML.
dict.__setitem__(self, k, v)
if k not in reversed(self.word.sentence.token):
self.word.sentence.token.append(k)
def setdefault(self, k, v):
if k not in self:
self.__setitem__(k, v); return self[k]
#--- CHUNK -----------------------------------------------------------------------------------------
class Chunk(object):
def __init__(self, sentence, words=[], type=None, role=None, relation=None):
""" A list of words that make up a phrase in the sentence.
- type: the phrase tag; "NP" => a noun phrase (e.g., "the black cat").
- role: the function of the phrase; "SBJ" => sentence subject.
- relation: an id shared with other phrases, linking subject to object in the sentence.
"""
# A chunk can have multiple roles or relations in the sentence,
# so role and relation can also be given as lists.
b1 = isinstance(relation, (list, tuple))
b2 = isinstance(role, (list, tuple))
if not b1 and not b2:
r = [(relation, role)]
elif b1 and b2:
r = zip(relation, role)
elif b1:
r = zip(relation, [role] * len(relation))
elif b2:
r = zip([relation] * len(role), role)
r = [(a, b) for a, b in r if a is not None or b is not None]
self.sentence = sentence
self.words = []
self.type = type # NP, VP, ADJP ...
self.relations = r # NP-SBJ-1 => [(1, SBJ)]
self.pnp = None # PNP chunk object this chunk belongs to.
self.anchor = None # PNP chunk's anchor.
self.attachments = [] # PNP chunks attached to this anchor.
self._conjunctions = None # Conjunctions object, created on request.
self._modifiers = None
self.extend(words)
def extend(self, words):
for w in words:
self.append(w)
def append(self, word):
self.words.append(word)
word.chunk = self
def __getitem__(self, index):
return self.words[index]
def __len__(self):
return len(self.words)
def __iter__(self):
return self.words.__iter__()
def _get_tag(self):
return self.type
def _set_tag(self, v):
self.type = v
tag = pos = part_of_speech = property(_get_tag, _set_tag)
@property
def start(self):
return self.words[0].index
@property
def stop(self):
return self.words[-1].index + 1
@property
def range(self):
return range(self.start, self.stop)
@property
def span(self):
return (self.start, self.stop)
@property
def lemmata(self):
return [word.lemma for word in self.words]
@property
def tagged(self):
return [(word.string, word.type) for word in self.words]
@property
def head(self):
""" Yields the head of the chunk (usually, the last word in the chunk).
"""
if self.type == "NP" and any(w.type.startswith("NNP") for w in self):
w = find(lambda w: w.type.startswith("NNP"), reversed(self))
elif self.type == "NP": # "the cat" => "cat"
w = find(lambda w: w.type.startswith("NN"), reversed(self))
elif self.type == "VP": # "is watching" => "watching"
w = find(lambda w: w.type.startswith("VB"), reversed(self))
elif self.type == "PP": # "from up on" => "from"
w = find(lambda w: w.type.startswith(("IN", "PP")), self)
elif self.type == "PNP": # "from up on the roof" => "roof"
w = find(lambda w: w.type.startswith("NN"), reversed(self))
else:
w = None
if w is None:
w = self[-1]
return w
@property
def relation(self):
""" Yields the first relation id of the chunk.
"""
# [(2,OBJ), (3,OBJ)])] => 2
return len(self.relations) > 0 and self.relations[0][0] or None
@property
def role(self):
""" Yields the first role of the chunk (SBJ, OBJ, ...).
"""
# [(1,SBJ), (1,OBJ)])] => SBJ
return len(self.relations) > 0 and self.relations[0][1] or None
@property
def subject(self):
ch = self.sentence.relations["SBJ"].get(self.relation, None)
if ch != self:
return ch
@property
def object(self):
ch = self.sentence.relations["OBJ"].get(self.relation, None)
if ch != self:
return ch
@property
def verb(self):
ch = self.sentence.relations["VP"].get(self.relation, None)
if ch != self:
return ch
@property
def related(self):
""" Yields a list of all chunks in the sentence with the same relation id.
"""
return [ch for ch in self.sentence.chunks
if ch != self and intersects(unzip(0, ch.relations), unzip(0, self.relations))]
@property
def prepositional_phrase(self):
return self.pnp
prepositional_noun_phrase = prepositional_phrase
@property
def anchor_id(self):
""" Yields the anchor tag as parsed from the original token.
Chunks that are anchors have a tag with an "A" prefix (e.g., "A1").
Chunks that are PNP attachmens (or chunks inside a PNP) have "P" (e.g., "P1").
Chunks inside a PNP can be both anchor and attachment (e.g., "P1-A2"),
as in: "clawed/A1 at/P1 mice/P1-A2 in/P2 the/P2 wall/P2"
"""
id = ""
f = lambda ch: filter(lambda k: self.sentence._anchors[k] == ch, self.sentence._anchors)
if self.pnp and self.pnp.anchor:
id += "-" + "-".join(f(self.pnp))
if self.anchor:
id += "-" + "-".join(f(self))
if self.attachments:
id += "-" + "-".join(f(self))
return id.strip("-") or None
@property
def conjunctions(self):
if not self._conjunctions: self._conjunctions = Conjunctions(self)
return self._conjunctions
@property
def modifiers(self):
""" For verb phrases (VP), yields a list of the nearest adjectives and adverbs.
"""
if self._modifiers is None:
# Iterate over all the chunks and attach modifiers to their VP-anchor.
is_modifier = lambda ch: ch.type in ("ADJP", "ADVP") and ch.relation is None
for chunk in self.sentence.chunks:
chunk._modifiers = []
for chunk in filter(is_modifier, self.sentence.chunks):
anchor = chunk.nearest("VP")
if anchor: anchor._modifiers.append(chunk)
return self._modifiers
def nearest(self, type="VP"):
""" Returns the nearest chunk in the sentence with the given type.
This can be used (for example) to find adverbs and adjectives related to verbs,
as in: "the cat is ravenous" => is what? => "ravenous".
"""
candidate, d = None, len(self.sentence.chunks)
if isinstance(self, PNPChunk):
i = self.sentence.chunks.index(self.chunks[0])
else:
i = self.sentence.chunks.index(self)
for j, chunk in enumerate(self.sentence.chunks):
if chunk.type.startswith(type) and abs(i-j) < d:
candidate, d = chunk, abs(i-j)
return candidate
def next(self, type=None):
""" Returns the next chunk in the sentence with the given type.
"""
i = self.stop
s = self.sentence
while i < len(s):
if s[i].chunk is not None and type in (s[i].chunk.type, None):
return s[i].chunk
i += 1
def previous(self, type=None):
""" Returns the next previous chunk in the sentence with the given type.
"""
i = self.start - 1
s = self.sentence
while i > 0:
if s[i].chunk is not None and type in (s[i].chunk.type, None):
return s[i].chunk
i -= 1
# Chunk.string and unicode(Chunk) are Unicode strings.
# repr(Chunk) is a Python string (with Unicode characters encoded).
@property
def string(self):
return u" ".join(word.string for word in self.words)
def __unicode__(self):
return self.string
def __repr__(self):
return "Chunk(%s)" % repr("%s/%s%s%s") % (
self.string,
self.type is not None and self.type or OUTSIDE,
self.role is not None and ("-" + self.role) or "",
self.relation is not None and ("-" + str(self.relation)) or "")
def __eq__(self, chunk):
return id(self) == id(chunk)
def __ne__(self, chunk):
return id(self) != id(chunk)
# Chinks are non-chunks,
# see also the chunked() function:
class Chink(Chunk):
def __repr__(self):
return Chunk.__repr__(self).replace("Chunk(", "Chink(", 1)
#--- PNP CHUNK -------------------------------------------------------------------------------------
class PNPChunk(Chunk):
def __init__(self, *args, **kwargs):
""" A chunk of chunks that make up a prepositional noun phrase (i.e., PP + NP).
When the output of the parser includes PP-attachment,
            PNPChunk.anchor will yield the chunk that is clarified by the preposition.
For example: "the cat went [for the mouse] [with its claws]":
- [went] what? => for the mouse,
- [went] how? => with its claws.
"""
self.anchor = None # The anchor chunk (e.g., "for the mouse" => "went").
self.chunks = [] # List of chunks in the prepositional noun phrase.
Chunk.__init__(self, *args, **kwargs)
def append(self, word):
self.words.append(word)
word.pnp = self
if word.chunk is not None:
word.chunk.pnp = self
if word.chunk not in self.chunks:
self.chunks.append(word.chunk)
@property
def preposition(self):
""" Yields the first chunk in the prepositional noun phrase, usually a PP-chunk.
PP-chunks contain words such as "for", "with", "in", ...
"""
return self.chunks[0]
pp = preposition
@property
def phrases(self):
return self.chunks
def guess_anchor(self):
""" Returns an anchor chunk for this prepositional noun phrase (without a PP-attacher).
Often, the nearest verb phrase is a good candidate.
"""
return self.nearest("VP")
#--- CONJUNCTION -----------------------------------------------------------------------------------
CONJUNCT = AND = "AND"
DISJUNCT = OR = "OR"
class Conjunctions(list):
def __init__(self, chunk):
""" Chunk.conjunctions is a list of other chunks participating in a conjunction.
Each item in the list is a (chunk, conjunction)-tuple, with conjunction either AND or OR.
"""
self.anchor = chunk
def append(self, chunk, type=CONJUNCT):
list.append(self, (chunk, type))
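    # Usage sketch, for the "black cats and white dogs" example at the bottom of this module:
    # s.chunks[0].conjunctions  # => [(Chunk('white dogs/NP'), 'AND')]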
#--- SENTENCE --------------------------------------------------------------------------------------
_UID = 0
def _uid():
global _UID; _UID+=1; return _UID
def _is_tokenstring(string):
# The class mbsp.TokenString stores the format of tags for each token.
# Since it comes directly from MBSP.parse(), this format is always correct,
# regardless of the given token format parameter for Sentence() or Text().
return isinstance(string, unicode) and hasattr(string, "tags")
class Sentence(object):
def __init__(self, string="", token=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA], language="en"):
""" A nested tree of sentence words, chunks and prepositions.
The input is a tagged string from parse().
The order in which token tags appear can be specified.
"""
# Extract token format from TokenString or TaggedString if possible.
if _is_tokenstring(string):
token, language = string.tags, getattr(string, "language", language)
# Convert to Unicode.
if not isinstance(string, unicode):
for encoding in (("utf-8",), ("windows-1252",), ("utf-8", "ignore")):
try: string = string.decode(*encoding)
except:
pass
self.parent = None # A Slice refers to the Sentence it is part of.
self.text = None # A Sentence refers to the Text it is part of.
self.language = language
self.id = _uid()
self.token = list(token)
self.words = []
self.chunks = [] # Words grouped into chunks.
self.pnp = [] # Words grouped into PNP chunks.
self._anchors = {} # Anchor tags related to anchor chunks or attached PNP's.
self._relation = None # Helper variable: the last chunk's relation and role.
self._attachment = None # Helper variable: the last attachment tag (e.g., "P1") parsed in _do_pnp().
self._previous = None # Helper variable: the last token parsed in parse_token().
self.relations = {"SBJ":{}, "OBJ":{}, "VP":{}}
# Split the slash-formatted token into the separate tags in the given order.
# Append Word and Chunk objects according to the token's tags.
for chars in string.split(" "):
if chars:
self.append(*self.parse_token(chars, token))
@property
def word(self):
return self.words
@property
def lemmata(self):
return Map(lambda w: w.lemma, self.words)
#return [word.lemma for word in self.words]
lemma = lemmata
@property
def parts_of_speech(self):
return Map(lambda w: w.type, self.words)
#return [word.type for word in self.words]
pos = parts_of_speech
@property
def tagged(self):
return [(word.string, word.type) for word in self]
@property
def phrases(self):
return self.chunks
chunk = phrases
@property
def prepositional_phrases(self):
return self.pnp
prepositional_noun_phrases = prepositional_phrases
@property
def start(self):
return 0
@property
def stop(self):
return self.start + len(self.words)
@property
def nouns(self):
return [word for word in self if word.type.startswith("NN")]
@property
def verbs(self):
return [word for word in self if word.type.startswith("VB")]
@property
def adjectives(self):
return [word for word in self if word.type.startswith("JJ")]
@property
def subjects(self):
return self.relations["SBJ"].values()
@property
def objects(self):
return self.relations["OBJ"].values()
@property
def verbs(self):
return self.relations["VP"].values()
@property
def anchors(self):
return [chunk for chunk in self.chunks if len(chunk.attachments) > 0]
@property
def is_question(self):
return len(self) > 0 and str(self[-1]) == "?"
@property
def is_exclamation(self):
return len(self) > 0 and str(self[-1]) == "!"
def __getitem__(self, index):
return self.words[index]
def __len__(self):
return len(self.words)
def __iter__(self):
return self.words.__iter__()
def append(self, word, lemma=None, type=None, chunk=None, role=None, relation=None, pnp=None, anchor=None, iob=None, custom={}):
""" Appends the next word to the sentence / chunk / preposition.
For example: Sentence.append("clawed", "claw", "VB", "VP", role=None, relation=1)
- word : the current word,
- lemma : the canonical form of the word,
- type : part-of-speech tag for the word (NN, JJ, ...),
- chunk : part-of-speech tag for the chunk this word is part of (NP, VP, ...),
- role : the chunk's grammatical role (SBJ, OBJ, ...),
- relation : an id shared by other related chunks (e.g., SBJ-1 <=> VP-1),
- pnp : PNP if this word is in a prepositional noun phrase (B- prefix optional),
- iob : BEGIN if the word marks the start of a new chunk,
INSIDE (optional) if the word is part of the previous chunk,
- custom : a dictionary of (tag, value)-items for user-defined word tags.
"""
self._do_word(word, lemma, type) # Append Word object.
self._do_chunk(chunk, role, relation, iob) # Append Chunk, or add last word to last chunk.
self._do_conjunction()
self._do_relation()
self._do_pnp(pnp, anchor)
self._do_anchor(anchor)
self._do_custom(custom)
def parse_token(self, token, tags=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]):
""" Returns the arguments for Sentence.append() from a tagged token representation.
The order in which token tags appear can be specified.
The default order is (separated by slashes):
- word,
- part-of-speech,
- (IOB-)chunk,
- (IOB-)preposition,
- chunk(-relation)(-role),
- anchor,
- lemma.
Examples:
The/DT/B-NP/O/NP-SBJ-1/O/the
cats/NNS/I-NP/O/NP-SBJ-1/O/cat
clawed/VBD/B-VP/O/VP-1/A1/claw
at/IN/B-PP/B-PNP/PP/P1/at
the/DT/B-NP/I-PNP/NP/P1/the
sofa/NN/I-NP/I-PNP/NP/P1/sofa
././O/O/O/O/.
Returns a (word, lemma, type, chunk, role, relation, preposition, anchor, iob, custom)-tuple,
which can be passed to Sentence.append(): Sentence.append(*Sentence.parse_token("cats/NNS/NP"))
The custom value is a dictionary of (tag, value)-items of unrecognized tags in the token.
"""
p = { WORD: "",
POS: None,
IOB: None,
CHUNK: None,
PNP: None,
REL: None,
ROLE: None,
ANCHOR: None,
LEMMA: None }
# Split the slash-formatted token into separate tags in the given order.
# Decode &slash; characters (usually in words and lemmata).
# Assume None for missing tags (except the word itself, which defaults to an empty string).
custom = {}
for k, v in izip(tags, token.split("/")):
if SLASH0 in v:
v = v.replace(SLASH, "/")
if k == "pos":
k = POS
if k not in p:
custom[k] = None
if v != OUTSIDE or k == WORD or k == LEMMA: # "type O negative" => "O" != OUTSIDE.
(p if k not in custom else custom)[k] = v
# Split IOB-prefix from the chunk tag:
# B- marks the start of a new chunk,
# I- marks inside of a chunk.
ch = p[CHUNK]
if ch is not None and ch.startswith(("B-", "I-")):
p[IOB], p[CHUNK] = ch[:1], ch[2:] # B-NP
# Split the role from the relation:
# NP-SBJ-1 => relation id is 1 and role is SBJ,
# VP-1 => relation id is 1 with no role.
# Tokens may be tagged with multiple relations (e.g., NP-OBJ-1*NP-OBJ-3).
if p[REL] is not None:
ch, p[REL], p[ROLE] = self._parse_relation(p[REL])
# Infer a missing chunk tag from the relation tag (e.g., NP-SBJ-1 => NP).
# For PP relation tags (e.g., PP-CLR-1), the first chunk is PP, the following chunks NP.
if ch == "PP" \
and self._previous \
and self._previous[REL] == p[REL] \
and self._previous[ROLE] == p[ROLE]:
ch = "NP"
if p[CHUNK] is None and ch != OUTSIDE:
p[CHUNK] = ch
self._previous = p
# Return the tags in the right order for Sentence.append().
return p[WORD], p[LEMMA], p[POS], p[CHUNK], p[ROLE], p[REL], p[PNP], p[ANCHOR], p[IOB], custom
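    # Usage sketch of the round trip described above (tags in the default order):
    # s = Sentence()
    # s.append(*s.parse_token("cats/NNS/B-NP"))
    # s.chunks[-1].type  # => "NP"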
def _parse_relation(self, tag):
""" Parses the chunk tag, role and relation id from the token relation tag.
- VP => VP, [], []
- VP-1 => VP, [1], [None]
- ADJP-PRD => ADJP, [None], [PRD]
- NP-SBJ-1 => NP, [1], [SBJ]
- NP-OBJ-1*NP-OBJ-2 => NP, [1,2], [OBJ,OBJ]
- NP-SBJ;NP-OBJ-1 => NP, [1,1], [SBJ,OBJ]
"""
chunk, relation, role = None, [], []
if ";" in tag:
# NP-SBJ;NP-OBJ-1 => 1 relates to both SBJ and OBJ.
id = tag.split("*")[0][-2:]
id = id if id.startswith("-") else ""
tag = tag.replace(";", id + "*")
if "*" in tag:
tag = tag.split("*")
else:
tag = [tag]
for s in tag:
s = s.split("-")
n = len(s)
if n == 1:
chunk = s[0]
if n == 2:
chunk = s[0]; relation.append(s[1]); role.append(None)
if n >= 3:
chunk = s[0]; relation.append(s[2]); role.append(s[1])
if n > 1:
id = relation[-1]
if id.isdigit():
relation[-1] = int(id)
else:
# Correct "ADJP-PRD":
# (ADJP, [PRD], [None]) => (ADJP, [None], [PRD])
relation[-1], role[-1] = None, id
return chunk, relation, role
def _do_word(self, word, lemma=None, type=None):
""" Adds a new Word to the sentence.
Other Sentence._do_[tag] functions assume a new word has just been appended.
"""
# Improve 3rd person singular "'s" lemma to "be", e.g., as in "he's fine".
if lemma == "'s" and type in ("VB", "VBZ"):
lemma = "be"
self.words.append(Word(self, word, lemma, type, index=len(self.words)))
def _do_chunk(self, type, role=None, relation=None, iob=None):
""" Adds a new Chunk to the sentence, or adds the last word to the previous chunk.
The word is attached to the previous chunk if both type and relation match,
and if the word's chunk tag does not start with "B-" (i.e., iob != BEGIN).
Punctuation marks (or other "O" chunk tags) are not chunked.
"""
if (type is None or type == OUTSIDE) and \
(role is None or role == OUTSIDE) and (relation is None or relation == OUTSIDE):
return
if iob != BEGIN \
and self.chunks \
and self.chunks[-1].type == type \
and self._relation == (relation, role) \
and self.words[-2].chunk is not None: # "one, two" => "one" & "two" different chunks.
self.chunks[-1].append(self.words[-1])
else:
ch = Chunk(self, [self.words[-1]], type, role, relation)
self.chunks.append(ch)
self._relation = (relation, role)
def _do_relation(self):
""" Attaches subjects, objects and verbs.
If the previous chunk is a subject/object/verb, it is stored in Sentence.relations{}.
"""
if self.chunks:
ch = self.chunks[-1]
for relation, role in ch.relations:
if role == "SBJ" or role == "OBJ":
self.relations[role][relation] = ch
if ch.type in ("VP",):
self.relations[ch.type][ch.relation] = ch
def _do_pnp(self, pnp, anchor=None):
""" Attaches prepositional noun phrases.
Identifies PNP's from either the PNP tag or the P-attachment tag.
This does not determine the PP-anchor, it only groups words in a PNP chunk.
"""
        if anchor or (pnp and pnp.endswith("PNP")):
if anchor is not None:
m = find(lambda x: x.startswith("P"), anchor)
else:
m = None
if self.pnp \
and pnp \
and pnp != OUTSIDE \
and pnp.startswith("B-") is False \
and self.words[-2].pnp is not None:
self.pnp[-1].append(self.words[-1])
elif m is not None and m == self._attachment:
self.pnp[-1].append(self.words[-1])
else:
ch = PNPChunk(self, [self.words[-1]], type="PNP")
self.pnp.append(ch)
self._attachment = m
def _do_anchor(self, anchor):
""" Collects preposition anchors and attachments in a dictionary.
Once the dictionary has an entry for both the anchor and the attachment, they are linked.
"""
if anchor:
for x in anchor.split("-"):
A, P = None, None
if x.startswith("A") and len(self.chunks) > 0: # anchor
A, P = x, x.replace("A","P")
self._anchors[A] = self.chunks[-1]
if x.startswith("P") and len(self.pnp) > 0: # attachment (PNP)
A, P = x.replace("P","A"), x
self._anchors[P] = self.pnp[-1]
if A in self._anchors and P in self._anchors and not self._anchors[P].anchor:
pnp = self._anchors[P]
pnp.anchor = self._anchors[A]
pnp.anchor.attachments.append(pnp)
def _do_custom(self, custom):
""" Adds the user-defined tags to the last word.
Custom tags can be used to add extra semantical meaning or metadata to words.
"""
if custom:
self.words[-1].custom_tags.update(custom)
def _do_conjunction(self, _and=("and", "e", "en", "et", "und", "y")):
""" Attach conjunctions.
CC-words like "and" and "or" between two chunks indicate a conjunction.
"""
w = self.words
if len(w) > 2 and w[-2].type == "CC" and w[-2].chunk is None:
cc = w[-2].string.lower() in _and and AND or OR
ch1 = w[-3].chunk
ch2 = w[-1].chunk
if ch1 is not None and \
ch2 is not None:
ch1.conjunctions.append(ch2, cc)
ch2.conjunctions.append(ch1, cc)
def get(self, index, tag=LEMMA):
""" Returns a tag for the word at the given index.
The tag can be WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag.
"""
if tag == WORD:
return self.words[index]
if tag == LEMMA:
return self.words[index].lemma
if tag == POS or tag == "pos":
return self.words[index].type
if tag == CHUNK:
return self.words[index].chunk
if tag == PNP:
return self.words[index].pnp
if tag == REL:
ch = self.words[index].chunk; return ch and ch.relation
if tag == ROLE:
ch = self.words[index].chunk; return ch and ch.role
if tag == ANCHOR:
ch = self.words[index].pnp; return ch and ch.anchor
if tag in self.words[index].custom_tags:
return self.words[index].custom_tags[tag]
return None
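    # Usage sketch, for a parsed "I eat pizza with a fork." as in the XML example further down:
    # s.get(0, tag=WORD)   # => Word('I/PRP')
    # s.get(2, tag=LEMMA)  # => u'pizza'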
def loop(self, *tags):
""" Iterates over the tags in the entire Sentence,
For example, Sentence.loop(POS, LEMMA) yields tuples of the part-of-speech tags and lemmata.
Possible tags: WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag.
Any order or combination of tags can be supplied.
"""
for i in range(len(self.words)):
yield tuple([self.get(i, tag=tag) for tag in tags])
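    # Usage sketch: iterate selected tags in parallel, e.g.,
    # for word, pos in s.loop(WORD, POS):
    #     print word, pos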
def indexof(self, value, tag=WORD):
""" Returns the indices of tokens in the sentence where the given token tag equals the string.
The string can contain a wildcard "*" at the end (this way "NN*" will match "NN" and "NNS").
The tag can be WORD, LEMMA, POS, CHUNK, PNP, RELATION, ROLE, ANCHOR or a custom word tag.
For example: Sentence.indexof("VP", tag=CHUNK)
returns the indices of all the words that are part of a VP chunk.
"""
match = lambda a, b: a.endswith("*") and b.startswith(a[:-1]) or a==b
indices = []
for i in range(len(self.words)):
if match(value, unicode(self.get(i, tag))):
indices.append(i)
return indices
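    # Usage sketch:
    # s.indexof("NN*", tag=POS)  # => indices of all words tagged NN, NNS, NNP, ...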
def slice(self, start, stop):
""" Returns a portion of the sentence from word start index to word stop index.
The returned slice is a subclass of Sentence and a deep copy.
"""
s = Slice(token=self.token, language=self.language)
for i, word in enumerate(self.words[start:stop]):
# The easiest way to copy (part of) a sentence
# is by unpacking all of the token tags and passing them to Sentence.append().
p0 = word.string # WORD
p1 = word.lemma # LEMMA
p2 = word.type # POS
p3 = word.chunk is not None and word.chunk.type or None # CHUNK
p4 = word.pnp is not None and "PNP" or None # PNP
p5 = word.chunk is not None and unzip(0, word.chunk.relations) or None # REL
p6 = word.chunk is not None and unzip(1, word.chunk.relations) or None # ROLE
p7 = word.chunk and word.chunk.anchor_id or None # ANCHOR
p8 = word.chunk and word.chunk.start == start+i and BEGIN or None # IOB
p9 = word.custom_tags # User-defined tags.
# If the given range does not contain the chunk head, remove the chunk tags.
if word.chunk is not None and (word.chunk.stop > stop):
p3, p4, p5, p6, p7, p8 = None, None, None, None, None, None
# If the word starts the preposition, add the IOB B-prefix (i.e., B-PNP).
if word.pnp is not None and word.pnp.start == start+i:
p4 = BEGIN+"-"+"PNP"
# If the given range does not contain the entire PNP, remove the PNP tags.
# The range must contain the entire PNP,
# since it starts with the PP and ends with the chunk head (and is meaningless without these).
            if word.pnp is not None and (word.pnp.start < start or word.pnp.stop > stop):
p4, p7 = None, None
s.append(word=p0, lemma=p1, type=p2, chunk=p3, pnp=p4, relation=p5, role=p6, anchor=p7, iob=p8, custom=p9)
s.parent = self
s._start = start
return s
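    # Usage sketch:
    # s2 = s.slice(0, 3)   # deep copy of the first three words, returned as a Slice
    # s2.parent is s       # => True
    # s2.start, s2.stop    # => (0, 3)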
def copy(self):
return self.slice(0, len(self))
def chunked(self):
return chunked(self)
def constituents(self, pnp=False):
""" Returns an in-order list of mixed Chunk and Word objects.
With pnp=True, also contains PNPChunk objects whenever possible.
"""
a = []
for word in self.words:
if pnp and word.pnp is not None:
if len(a) == 0 or a[-1] != word.pnp:
a.append(word.pnp)
elif word.chunk is not None:
if len(a) == 0 or a[-1] != word.chunk:
a.append(word.chunk)
else:
a.append(word)
return a
# Sentence.string and unicode(Sentence) are Unicode strings.
    # repr(Sentence) is a Python string (with Unicode characters encoded).
@property
def string(self):
return u" ".join(word.string for word in self)
def __unicode__(self):
return self.string
def __repr__(self):
return "Sentence(%s)" % repr(" ".join(["/".join(word.tags) for word in self.words]).encode("utf-8"))
def __eq__(self, other):
if not isinstance(other, Sentence):
return False
return len(self) == len(other) and repr(self) == repr(other)
@property
def xml(self):
""" Yields the sentence as an XML-formatted string (plain bytestring, UTF-8 encoded).
"""
return parse_xml(self, tab="\t", id=self.id or "")
@classmethod
def from_xml(cls, xml):
""" Returns a new Text from the given XML string.
"""
s = parse_string(xml)
return Sentence(s.split("\n")[0], token=s.tags, language=s.language)
fromxml = from_xml
def nltk_tree(self):
""" The sentence as an nltk.tree object.
"""
return nltk_tree(self)
class Slice(Sentence):
def __init__(self, *args, **kwargs):
""" A portion of the sentence returned by Sentence.slice().
"""
self._start = kwargs.pop("start", 0)
Sentence.__init__(self, *args, **kwargs)
@property
def start(self):
return self._start
@property
def stop(self):
return self._start + len(self.words)
#---------------------------------------------------------------------------------------------------
# s = Sentence(parse("black cats and white dogs"))
# s.words => [Word('black/JJ'), Word('cats/NNS'), Word('and/CC'), Word('white/JJ'), Word('dogs/NNS')]
# s.chunks => [Chunk('black cats/NP'), Chunk('white dogs/NP')]
# s.constituents() => [Chunk('black cats/NP'), Word('and/CC'), Chunk('white dogs/NP')]
    # s.chunked()         => [Chunk('black cats/NP'), Chink('and/O'), Chunk('white dogs/NP')]
def chunked(sentence):
""" Returns a list of Chunk and Chink objects from the given sentence.
Chink is a subclass of Chunk used for words that have Word.chunk == None
(e.g., punctuation marks, conjunctions).
"""
# For example, to construct a training vector with the head of previous chunks as a feature.
# Doing this with Sentence.chunks would discard the punctuation marks and conjunctions
    # (Sentence.chunks only yields Chunk objects), which may be useful features.
chunks = []
for word in sentence:
if word.chunk is not None:
if len(chunks) == 0 or chunks[-1] != word.chunk:
chunks.append(word.chunk)
else:
ch = Chink(sentence)
ch.append(word.copy(ch))
chunks.append(ch)
return chunks
#--- TEXT ------------------------------------------------------------------------------------------
class Text(list):
def __init__(self, string, token=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA], language="en", encoding="utf-8"):
""" A list of Sentence objects parsed from the given string.
The string is the Unicode return value from parse().
"""
self.encoding = encoding
# Extract token format from TokenString if possible.
if _is_tokenstring(string):
token, language = string.tags, getattr(string, "language", language)
if string:
# From a string.
if isinstance(string, basestring):
string = string.splitlines()
# From an iterable (e.g., string.splitlines(), open('parsed.txt')).
self.extend(Sentence(s, token, language) for s in string)
def insert(self, index, sentence):
list.insert(self, index, sentence)
sentence.text = self
def append(self, sentence):
list.append(self, sentence)
sentence.text = self
def extend(self, sentences):
list.extend(self, sentences)
for s in sentences:
s.text = self
def remove(self, sentence):
list.remove(self, sentence)
sentence.text = None
def pop(self, index):
sentence = list.pop(self, index)
sentence.text = None
return sentence
@property
def sentences(self):
return list(self)
@property
def words(self):
return list(chain(*self))
def copy(self):
t = Text("", encoding=self.encoding)
for sentence in self:
t.append(sentence.copy())
return t
# Text.string and unicode(Text) are Unicode strings.
@property
def string(self):
return u"\n".join(sentence.string for sentence in self)
def __unicode__(self):
return self.string
#def __repr__(self):
# return "\n".join([repr(sentence) for sentence in self])
@property
def xml(self):
""" Yields the sentence as an XML-formatted string (plain bytestring, UTF-8 encoded).
All the sentences in the XML are wrapped in a <text> element.
"""
xml = []
xml.append('<?xml version="1.0" encoding="%s"?>' % XML_ENCODING.get(self.encoding, self.encoding))
xml.append("<%s>" % XML_TEXT)
xml.extend([sentence.xml for sentence in self])
xml.append("</%s>" % XML_TEXT)
return "\n".join(xml)
@classmethod
def from_xml(cls, xml):
""" Returns a new Text from the given XML string.
"""
return Text(parse_string(xml))
fromxml = from_xml
Tree = Text
def tree(string, token=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]):
""" Transforms the output of parse() into a Text object.
The token parameter lists the order of tags in each token in the input string.
"""
return Text(string, token)
split = tree # Backwards compatibility.
def xml(string, token=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]):
""" Transforms the output of parse() into XML.
The token parameter lists the order of tags in each token in the input string.
"""
return Text(string, token).xml
### XML ############################################################################################
# Elements:
XML_TEXT = "text" # <text>, corresponds to Text object.
XML_SENTENCE = "sentence" # <sentence>, corresponds to Sentence object.
XML_CHINK = "chink" # <chink>, where word.chunk.type=None.
XML_CHUNK = "chunk" # <chunk>, corresponds to Chunk object.
XML_PNP = "chunk" # <chunk type="PNP">, corresponds to PNP chunk object.
XML_WORD = "word" # <word>, corresponds to Word object
# Attributes:
XML_LANGUAGE = "language" # <sentence language="">, defines the language used.
XML_TOKEN = "token" # <sentence token="">, defines the order of tags in a token.
XML_TYPE = "type" # <word type="">, <chunk type="">
XML_RELATION = "relation" # <chunk relation="">
XML_ID = "id" # <chunk id="">
XML_OF = "of" # <chunk of=""> corresponds to id-attribute.
XML_ANCHOR = "anchor" # <chunk anchor=""> corresponds to id-attribute.
XML_LEMMA = "lemma" # <word lemma="">
XML_ENCODING = {
'utf8' : 'UTF-8',
'utf-8' : 'UTF-8',
'utf16' : 'UTF-16',
'utf-16' : 'UTF-16',
'latin' : 'ISO-8859-1',
'latin1' : 'ISO-8859-1',
'latin-1' : 'ISO-8859-1',
'cp1252' : 'windows-1252',
'windows-1252' : 'windows-1252'
}
def xml_encode(string):
""" Returns the string with XML-safe special characters.
"""
string = string.replace("&", "&")
string = string.replace("<", "<")
string = string.replace(">", ">")
string = string.replace("\"",""")
string = string.replace(SLASH, "/")
return string
def xml_decode(string):
""" Returns the string with special characters decoded.
"""
string = string.replace("&", "&")
string = string.replace("<", "<")
string = string.replace(">", ">")
string = string.replace(""","\"")
string = string.replace("/", SLASH)
return string
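# Usage sketch:
# xml_encode(u'"cats" & <dogs>')  # => u'&quot;cats&quot; &amp; &lt;dogs&gt;'
# xml_decode(xml_encode(u'"cats" & <dogs>')) == u'"cats" & <dogs>'  # => True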
#--- SENTENCE TO XML -------------------------------------------------------------------------------
# Relation id's in the XML output are relative to the sentence id,
# so relation 1 in sentence 2 = "2.1".
_UID_SEPARATOR = "."
def parse_xml(sentence, tab="\t", id=""):
""" Returns the given Sentence object as an XML-string (plain bytestring, UTF-8 encoded).
        The tab delimiter is used as indentation for nested elements.
The id can be used as a unique identifier per sentence for chunk id's and anchors.
For example: "I eat pizza with a fork." =>
<sentence token="word, part-of-speech, chunk, preposition, relation, anchor, lemma" language="en">
<chunk type="NP" relation="SBJ" of="1">
<word type="PRP" lemma="i">I</word>
</chunk>
<chunk type="VP" relation="VP" id="1" anchor="A1">
<word type="VBP" lemma="eat">eat</word>
</chunk>
<chunk type="NP" relation="OBJ" of="1">
<word type="NN" lemma="pizza">pizza</word>
</chunk>
<chunk type="PNP" of="A1">
<chunk type="PP">
<word type="IN" lemma="with">with</word>
</chunk>
<chunk type="NP">
<word type="DT" lemma="a">a</word>
<word type="NN" lemma="fork">fork</word>
</chunk>
</chunk>
<chink>
<word type="." lemma=".">.</word>
</chink>
</sentence>
"""
uid = lambda *parts: "".join([str(id), _UID_SEPARATOR ]+[str(x) for x in parts]).lstrip(_UID_SEPARATOR)
push = lambda indent: indent+tab # push() increases the indentation.
pop = lambda indent: indent[:-len(tab)] # pop() decreases the indentation.
indent = tab
xml = []
# Start the sentence element:
# <sentence token="word, part-of-speech, chunk, preposition, relation, anchor, lemma">
xml.append('<%s%s %s="%s" %s="%s">' % (
XML_SENTENCE,
XML_ID and " %s=\"%s\"" % (XML_ID, str(id)) or "",
XML_TOKEN, ", ".join(sentence.token),
XML_LANGUAGE, sentence.language
))
# Collect chunks that are PNP anchors and assign id.
anchors = {}
for chunk in sentence.chunks:
if chunk.attachments:
anchors[chunk.start] = len(anchors) + 1
# Traverse all words in the sentence.
for word in sentence.words:
chunk = word.chunk
pnp = word.chunk and word.chunk.pnp or None
# Start the PNP element if the chunk is the first chunk in PNP:
# <chunk type="PNP" of="A1">
if pnp and pnp.start == chunk.start and pnp.start == word.index:
a = pnp.anchor and ' %s="%s"' % (XML_OF, uid("A", anchors.get(pnp.anchor.start, ""))) or ""
xml.append(indent + '<%s %s="PNP"%s>' % (XML_CHUNK, XML_TYPE, a))
indent = push(indent)
# Start the chunk element if the word is the first word in the chunk:
# <chunk type="VP" relation="VP" id="1" anchor="A1">
if chunk and chunk.start == word.index:
if chunk.relations:
# Create the shortest possible attribute values for multiple relations,
# e.g., [(1,"OBJ"),(2,"OBJ")]) => relation="OBJ" id="1|2"
r1 = unzip(0, chunk.relations) # Relation id's.
r2 = unzip(1, chunk.relations) # Relation roles.
r1 = [x is None and "-" or uid(x) for x in r1]
r2 = [x is None and "-" or x for x in r2]
r1 = not len(unique(r1)) == 1 and "|".join(r1) or (r1+[None])[0]
r2 = not len(unique(r2)) == 1 and "|".join(r2) or (r2+[None])[0]
xml.append(indent + '<%s%s%s%s%s%s>' % (
XML_CHUNK,
chunk.type and ' %s="%s"' % (XML_TYPE, chunk.type) or "",
chunk.relations and chunk.role != None and ' %s="%s"' % (XML_RELATION, r2) or "",
chunk.relation and chunk.type == "VP" and ' %s="%s"' % (XML_ID, uid(chunk.relation)) or "",
chunk.relation and chunk.type != "VP" and ' %s="%s"' % (XML_OF, r1) or "",
chunk.attachments and ' %s="%s"' % (XML_ANCHOR, uid("A",anchors[chunk.start])) or ""
))
indent = push(indent)
# Words outside of a chunk are wrapped in a <chink> tag:
# <chink>
if not chunk:
xml.append(indent + '<%s>' % XML_CHINK)
indent = push(indent)
# Add the word element:
# <word type="VBP" lemma="eat">eat</word>
xml.append(indent + '<%s%s%s%s>%s</%s>' % (
XML_WORD,
word.type and ' %s="%s"' % (XML_TYPE, xml_encode(word.type)) or '',
word.lemma and ' %s="%s"' % (XML_LEMMA, xml_encode(word.lemma)) or '',
(" "+" ".join(['%s="%s"' % (k,v) for k,v in word.custom_tags.items() if v != None])).rstrip(),
xml_encode(unicode(word)),
XML_WORD
))
if not chunk:
# Close the <chink> element if outside of a chunk.
indent = pop(indent); xml.append(indent + "</%s>" % XML_CHINK)
if chunk and chunk.stop-1 == word.index:
# Close the <chunk> element if this is the last word in the chunk.
indent = pop(indent); xml.append(indent + "</%s>" % XML_CHUNK)
if pnp and pnp.stop-1 == word.index:
# Close the PNP element if this is the last word in the PNP.
indent = pop(indent); xml.append(indent + "</%s>" % XML_CHUNK)
xml.append("</%s>" % XML_SENTENCE)
# Return as a plain str.
return "\n".join(xml).encode("utf-8")
#--- XML TO SENTENCE(S) ----------------------------------------------------------------------------
# Classes XML and XMLNode provide an abstract interface to cElementTree.
# The advantage is that we can switch to a faster parser in the future
# (as we did when switching from xml.dom.minidom to xml.etree).
# cElementTree is fast, but the fastest way is to simply store and reload the parsed Unicode string.
# The disadvantage is that we need to remember the token format, see (1) below:
# s = "..."
# s = parse(s, lemmata=True)
# open("parsed.txt", "w", encoding="utf-8").write(s)
# s = open("parsed.txt", encoding="utf-8")
# s = Text(s, token=[WORD, POS, CHUNK, PNP, LEMMA]) # (1)
class XML(object):
def __init__(self, string):
from xml.etree import cElementTree
self.root = cElementTree.fromstring(string)
def __call__(self, tag):
return self.root.tag == tag \
and [XMLNode(self.root)] \
or [XMLNode(e) for e in self.root.findall(tag)]
class XMLNode(object):
def __init__(self, element):
self.element = element
@property
def tag(self):
return self.element.tag
@property
def value(self):
return self.element.text
def __iter__(self):
return iter(XMLNode(e) for e in self.element)
def __getitem__(self, k):
return self.element.attrib[k]
def get(self, k, default=""):
return self.element.attrib.get(k, default)
# The structure of linked anchor chunks and PNP attachments
# is collected from _parse_tokens() calls.
_anchors = {} # {u'A1': [[u'eat', u'VBP', u'B-VP', 'O', u'VP-1', 'O', u'eat', 'O']]}
_attachments = {} # {u'A1': [[[u'with', u'IN', u'B-PP', 'B-PNP', u'PP', 'O', u'with', 'O'],
# [u'a', u'DT', u'B-NP', 'I-PNP', u'NP', 'O', u'a', 'O'],
# [u'fork', u'NN', u'I-NP', 'I-PNP', u'NP', 'O', u'fork', 'O']]]}
# This is a fallback if for some reason we fail to import MBSP.TokenString,
# e.g., when tree.py is part of another project.
class TaggedString(unicode):
def __new__(cls, string, tags=["word"], language="en"):
if isinstance(string, unicode) and hasattr(string, "tags"):
tags, language = string.tags, getattr(string, "language", language)
s = unicode.__new__(cls, string)
s.tags = list(tags)
s.language = language
return s
def parse_string(xml):
""" Returns a slash-formatted string from the given XML representation.
The return value is a TokenString (for MBSP) or TaggedString (for Pattern).
"""
string = ""
# Traverse all the <sentence> elements in the XML.
dom = XML(xml)
for sentence in dom(XML_SENTENCE):
_anchors.clear() # Populated by calling _parse_tokens().
_attachments.clear() # Populated by calling _parse_tokens().
# Parse the language from <sentence language="">.
language = sentence.get(XML_LANGUAGE, "en")
# Parse the token tag format from <sentence token="">.
# This information is returned in TokenString.tags,
# so the format and order of the token tags is retained when exporting/importing as XML.
format = sentence.get(XML_TOKEN, [WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA])
format = not isinstance(format, basestring) and format or format.replace(" ","").split(",")
# Traverse all <chunk> and <chink> elements in the sentence.
# Find the <word> elements inside and create tokens.
tokens = []
for chunk in sentence:
tokens.extend(_parse_tokens(chunk, format))
# Attach PNP's to their anchors.
# Keys in _anchors have linked anchor chunks (each chunk is a list of tokens).
# The keys correspond to the keys in _attachments, which have linked PNP chunks.
if ANCHOR in format:
A, P, a, i = _anchors, _attachments, 1, format.index(ANCHOR)
for id in sorted(A.keys()):
for token in A[id]:
token[i] += "-"+"-".join(["A"+str(a+p) for p in range(len(P[id]))])
token[i] = token[i].strip("O-")
for p, pnp in enumerate(P[id]):
for token in pnp:
token[i] += "-"+"P"+str(a+p)
token[i] = token[i].strip("O-")
a += len(P[id])
# Collapse the tokens to string.
# Separate multiple sentences with a new line.
tokens = ["/".join([tag for tag in token]) for token in tokens]
tokens = " ".join(tokens)
string += tokens + "\n"
# Return a TokenString, which is a unicode string that transforms easily
# into a plain str, a list of tokens, or a Sentence.
try:
if MBSP: from mbsp import TokenString
return TokenString(string.strip(), tags=format, language=language)
except:
return TaggedString(string.strip(), tags=format, language=language)
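# Round-trip sketch (hedged): XML produced by Sentence.xml can be fed back through parse_string(),
# so Sentence(parse_string(s.xml)).string should equal s.string for the examples above.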
def _parse_tokens(chunk, format=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]):
""" Parses tokens from <word> elements in the given XML <chunk> element.
Returns a flat list of tokens, in which each token is [WORD, POS, CHUNK, PNP, RELATION, ANCHOR, LEMMA].
If a <chunk type="PNP"> is encountered, traverses all of the chunks in the PNP.
"""
tokens = []
# Only process <chunk> and <chink> elements,
# text nodes in between return an empty list.
if not (chunk.tag == XML_CHUNK or chunk.tag == XML_CHINK):
return []
type = chunk.get(XML_TYPE, "O")
if type == "PNP":
# For, <chunk type="PNP">, recurse all the child chunks inside the PNP.
for ch in chunk:
tokens.extend(_parse_tokens(ch, format))
# Tag each of them as part of the PNP.
if PNP in format:
i = format.index(PNP)
for j, token in enumerate(tokens):
token[i] = (j==0 and "B-" or "I-") + "PNP"
# Store attachments so we can construct anchor id's in parse_string().
# This has to be done at the end, when all the chunks have been found.
a = chunk.get(XML_OF).split(_UID_SEPARATOR)[-1]
if a:
_attachments.setdefault(a, [])
_attachments[a].append(tokens)
return tokens
# For <chunk type-"VP" id="1">, the relation is VP-1.
# For <chunk type="NP" relation="OBJ" of="1">, the relation is NP-OBJ-1.
relation = _parse_relation(chunk, type)
# Process all of the <word> elements in the chunk, for example:
# <word type="NN" lemma="pizza">pizza</word> => [pizza, NN, I-NP, O, NP-OBJ-1, O, pizza]
for word in filter(lambda n: n.tag == XML_WORD, chunk):
tokens.append(_parse_token(word, chunk=type, relation=relation, format=format))
# Add the IOB chunk tags:
# words at the start of a chunk are marked with B-, words inside with I-.
if CHUNK in format:
i = format.index(CHUNK)
for j, token in enumerate(tokens):
token[i] = token[i] != "O" and ((j==0 and "B-" or "I-") + token[i]) or "O"
# The chunk can be the anchor of one or more PNP chunks.
# Store anchors so we can construct anchor id's in parse_string().
a = chunk.get(XML_ANCHOR, "").split(_UID_SEPARATOR)[-1]
if a:
_anchors[a] = tokens
return tokens
def _parse_relation(chunk, type="O"):
""" Returns a string of the roles and relations parsed from the given <chunk> element.
The chunk type (which is part of the relation string) can be given as parameter.
"""
r1 = chunk.get(XML_RELATION)
r2 = chunk.get(XML_ID, chunk.get(XML_OF))
r1 = [x != "-" and x or None for x in r1.split("|")] or [None]
r2 = [x != "-" and x or None for x in r2.split("|")] or [None]
r2 = [x is not None and x.split(_UID_SEPARATOR )[-1] or x for x in r2]
if len(r1) < len(r2): r1 = r1 + r1 * (len(r2)-len(r1)) # [1] ["SBJ", "OBJ"] => "SBJ-1;OBJ-1"
if len(r2) < len(r1): r2 = r2 + r2 * (len(r1)-len(r2)) # [2,4] ["OBJ"] => "OBJ-2;OBJ-4"
return ";".join(["-".join([x for x in (type, r1, r2) if x]) for r1, r2 in zip(r1, r2)])
def _parse_token(word, chunk="O", pnp="O", relation="O", anchor="O",
format=[WORD, POS, CHUNK, PNP, REL, ANCHOR, LEMMA]):
""" Returns a list of token tags parsed from the given <word> element.
Tags that are not attributes in a <word> (e.g., relation) can be given as parameters.
"""
tags = []
for tag in format:
if tag == WORD : tags.append(xml_decode(word.value))
elif tag == POS : tags.append(xml_decode(word.get(XML_TYPE, "O")))
elif tag == CHUNK : tags.append(chunk)
elif tag == PNP : tags.append(pnp)
elif tag == REL : tags.append(relation)
elif tag == ANCHOR : tags.append(anchor)
elif tag == LEMMA : tags.append(xml_decode(word.get(XML_LEMMA, "")))
else:
# Custom tags when the parser has been extended, see also Word.custom_tags{}.
tags.append(xml_decode(word.get(tag, "O")))
return tags
### NLTK TREE ######################################################################################
def nltk_tree(sentence):
""" Returns an NLTK nltk.tree.Tree object from the given Sentence.
The NLTK module should be on the search path somewhere.
"""
from nltk import tree
def do_pnp(pnp):
# Returns the PNPChunk (and the contained Chunk objects) in NLTK bracket format.
s = ' '.join([do_chunk(ch) for ch in pnp.chunks])
return '(PNP %s)' % s
def do_chunk(ch):
# Returns the Chunk in NLTK bracket format. Recurse attached PNP's.
s = ' '.join(['(%s %s)' % (w.pos, w.string) for w in ch.words])
s+= ' '.join([do_pnp(pnp) for pnp in ch.attachments])
return '(%s %s)' % (ch.type, s)
T = ['(S']
v = [] # PNP's already visited.
for ch in sentence.chunked():
if not ch.pnp and isinstance(ch, Chink):
T.append('(%s %s)' % (ch.words[0].pos, ch.words[0].string))
elif not ch.pnp:
T.append(do_chunk(ch))
#elif ch.pnp not in v:
elif ch.pnp.anchor is None and ch.pnp not in v:
# The chunk is part of a PNP without an anchor.
T.append(do_pnp(ch.pnp))
v.append(ch.pnp)
T.append(')')
return tree.bracket_parse(' '.join(T))
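# Usage sketch (requires NLTK on the search path; note that recent NLTK versions replaced
# tree.bracket_parse() with nltk.tree.Tree.fromstring()):
# t = nltk_tree(Sentence(parse("I eat pizza with a fork.")))
# t.draw()  # opens NLTK's graphical tree viewer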
### GRAPHVIZ DOT ###################################################################################
BLUE = {
'' : ("#f0f5ff", "#000000"),
'VP' : ("#e6f0ff", "#000000"),
'SBJ' : ("#64788c", "#ffffff"),
'OBJ' : ("#64788c", "#ffffff"),
}
def _colorize(x, colors):
s = ''
if isinstance(x, Word):
x = x.chunk
if isinstance(x, Chunk):
s = ',style=filled, fillcolor="%s", fontcolor="%s"' % ( \
colors.get(x.role) or \
colors.get(x.type) or \
colors.get('') or ("none", "black"))
return s
def graphviz_dot(sentence, font="Arial", colors=BLUE):
""" Returns a dot-formatted string that can be visualized as a graph in GraphViz.
"""
s = 'digraph sentence {\n'
s += '\tranksep=0.75;\n'
s += '\tnodesep=0.15;\n'
s += '\tnode [penwidth=1, fontname="%s", shape=record, margin=0.1, height=0.35];\n' % font
s += '\tedge [penwidth=1];\n'
s += '\t{ rank=same;\n'
# Create node groups for words, chunks and PNP chunks.
for w in sentence.words:
s += '\t\tword%s [label="<f0>%s|<f1>%s"%s];\n' % (w.index, w.string, w.type, _colorize(w, colors))
for w in sentence.words[:-1]:
        # Invisible edges force the words into the right order:
s += '\t\tword%s -> word%s [color=none];\n' % (w.index, w.index+1)
s += '\t}\n'
s += '\t{ rank=same;\n'
for i, ch in enumerate(sentence.chunks):
s += '\t\tchunk%s [label="<f0>%s"%s];\n' % (i+1, "-".join([x for x in (
ch.type, ch.role, str(ch.relation or '')) if x]) or '-', _colorize(ch, colors))
for i, ch in enumerate(sentence.chunks[:-1]):
        # Invisible edges force the chunks into the right order:
s += '\t\tchunk%s -> chunk%s [color=none];\n' % (i+1, i+2)
s += '}\n'
s += '\t{ rank=same;\n'
for i, ch in enumerate(sentence.pnp):
s += '\t\tpnp%s [label="<f0>PNP"%s];\n' % (i+1, _colorize(ch, colors))
s += '\t}\n'
s += '\t{ rank=same;\n S [shape=circle, margin=0.25, penwidth=2]; }\n'
# Connect words to chunks.
# Connect chunks to PNP or S.
for i, ch in enumerate(sentence.chunks):
for w in ch:
s += '\tword%s -> chunk%s;\n' % (w.index, i+1)
if ch.pnp:
s += '\tchunk%s -> pnp%s;\n' % (i+1, sentence.pnp.index(ch.pnp)+1)
else:
s += '\tchunk%s -> S;\n' % (i+1)
if ch.type == 'VP':
            # Indicate related chunks with a dotted edge.
for r in ch.related:
s += '\tchunk%s -> chunk%s [style=dotted, arrowhead=none];\n' % (
i+1, sentence.chunks.index(r)+1)
# Connect PNP to anchor chunk or S.
for i, ch in enumerate(sentence.pnp):
if ch.anchor:
s += '\tpnp%s -> chunk%s;\n' % (i+1, sentence.chunks.index(ch.anchor)+1)
s += '\tpnp%s -> S [color=none];\n' % (i+1)
else:
s += '\tpnp%s -> S;\n' % (i+1)
s += "}"
return s
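# Usage sketch: write the DOT source to a file and render it with the Graphviz toolchain
# (assuming the command-line "dot" tool is installed):
# open("sentence.dot", "w").write(graphviz_dot(Sentence(parse("I eat pizza with a fork."))))
# Then, from the shell: dot -Tpng sentence.dot -o sentence.png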
### STDOUT TABLE ###################################################################################
def table(sentence, fill=1, placeholder="-"):
""" Returns a string where the tags of tokens in the sentence are organized in outlined columns.
"""
tags = [WORD, POS, IOB, CHUNK, ROLE, REL, PNP, ANCHOR, LEMMA]
tags += [tag for tag in sentence.token if tag not in tags]
def format(token, tag):
# Returns the token tag as a string.
if tag == WORD : s = token.string
elif tag == POS : s = token.type
elif tag == IOB : s = token.chunk and (token.index == token.chunk.start and "B" or "I")
elif tag == CHUNK : s = token.chunk and token.chunk.type
elif tag == ROLE : s = token.chunk and token.chunk.role
elif tag == REL : s = token.chunk and token.chunk.relation and str(token.chunk.relation)
elif tag == PNP : s = token.chunk and token.chunk.pnp and token.chunk.pnp.type
elif tag == ANCHOR : s = token.chunk and token.chunk.anchor_id
elif tag == LEMMA : s = token.lemma
else : s = token.custom_tags.get(tag)
return s or placeholder
def outline(column, fill=1, padding=3, align="left"):
# Add spaces to each string in the column so they line out to the highest width.
n = max([len(x) for x in column]+[fill])
if align == "left" : return [x+" "*(n-len(x))+" "*padding for x in column]
if align == "right" : return [" "*(n-len(x))+x+" "*padding for x in column]
    # Gather the tags of the tokens in the sentence per column.
# If the IOB-tag is I-, mark the chunk tag with "^".
# Add the tag names as headers in each column.
columns = [[format(token, tag) for token in sentence] for tag in tags]
columns[3] = [columns[3][i]+(iob == "I" and " ^" or "") for i, iob in enumerate(columns[2])]
del columns[2]
for i, header in enumerate(['word', 'tag', 'chunk', 'role', 'id', 'pnp', 'anchor', 'lemma']+tags[9:]):
columns[i].insert(0, "")
columns[i].insert(0, header.upper())
# The left column (the word itself) is outlined to the right,
# and has extra spacing so that words across sentences line out nicely below each other.
for i, column in enumerate(columns):
columns[i] = outline(column, fill+10*(i==0), align=("left","right")[i==0])
# Anchor column is useful in MBSP but not in pattern.en.
if not MBSP:
del columns[6]
# Create a string with one row (i.e., one token) per line.
return "\n".join(["".join([x[i] for x in columns]) for i in range(len(columns[0]))])
# --- path: Python/Algorithms/String/高级字符串算法/最长子串和子序列问题/1143 M_最长公共子序列.py | repo: RuiWu-yes/leetcode ---
# -*- coding: utf-8 -*-
# @Author : ruiwu
# @Email : [email protected]
# @Title    : 1143 Longest Common Subsequence
# @Content  : Given two strings text1 and text2, return the length of their longest common subsequence.
#             If the two strings have no common subsequence, return 0.
class Solution:
def longestCommonSubsequence1(self, text1: str, text2: str) -> int:
        # Brute-force solution; it could be optimized with memoization.
def dp(i, j):
            # Base case: an empty string.
if i == -1 or j == -1:
return 0
if text1[i] == text2[j]:
                # Found one element of the LCS here; keep searching towards the front.
return dp(i-1, j-1) + 1
else:
                # Take whichever choice yields the longer LCS.
return max(dp(i-1, j), dp(i, j-1))
return dp(len(text1)-1, len(text2)-1)
def longestCommonSubsequence2(self, text1: str, text2: str) -> int:
        # Dynamic programming: use a DP table to optimize the time complexity.
        # Definition of dp[i][j]: for s1[1..i] and s2[1..j], the length of their LCS is dp[i][j].
        # State transition:
        # Walk two pointers i and j over s1 and s2 from back to front; if s1[i] == s2[j], that character is certainly in the LCS;
        # otherwise, at least one of s1[i] and s2[j] is not in the LCS, and one of them has to be discarded.
m, n = len(text1), len(text2)
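        # Worked example for text1 = "abcde", text2 = "ace" (rows = "" a b c d e, cols = "" a c e):
        #   dp = [[0,0,0,0],
        #         [0,1,1,1],
        #         [0,1,1,1],
        #         [0,1,2,2],
        #         [0,1,2,2],
        #         [0,1,2,3]]   ->  dp[-1][-1] == 3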
        # Build the DP table and the base case.
dp = [[0] * (n + 1) for _ in range(m + 1)]
for i in range(1, m + 1):
for j in range(1, n + 1):
if text1[i-1] == text2[j-1]:
                    # Found a character that belongs to the LCS.
dp[i][j] = 1 + dp[i-1][j-1]
else:
dp[i][j] = max(dp[i-1][j], dp[i][j-1])
return dp[-1][-1]
if __name__ == '__main__':
# case1 res = 3
# 最长公共子序列是 "ace",它的长度为 3.
text1_1 = "abcde"
text2_1 = "ace"
# case2 res = 3
# 最长公共子序列是 "abc",它的长度为 3.
text1_2 = "abc"
text2_2 = "abc"
# case3 res = 0
    # The two strings have no common subsequence, so 0 is returned.
text1_3 = "abc"
text2_3 = "def"
sol = Solution()
res1 = sol.longestCommonSubsequence1(text1_1, text2_1), sol.longestCommonSubsequence2(text1_1, text2_1)
res2 = sol.longestCommonSubsequence1(text1_2, text2_2), sol.longestCommonSubsequence2(text1_2, text2_2)
res3 = sol.longestCommonSubsequence1(text1_3, text2_3), sol.longestCommonSubsequence2(text1_3, text2_3)
print('case1:', res1)
print('case2:', res2)
    print('case3:', res3)
# --- path: app/migrations/0009_auto_20160605_2237.py | repo: ssssergey/DjangoPauk ---
# -*- coding: utf-8 -*-
# Generated by Django 1.9.6 on 2016-06-05 19:37
from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('app', '0008_auto_20160605_0023'),
]
operations = [
migrations.CreateModel(
name='UserCountry',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('last_time', models.DateTimeField()),
('checked', models.BooleanField(default=True)),
('country', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='app.Countries')),
('user', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)),
],
),
migrations.AlterField(
model_name='news',
name='download_time',
field=models.DateTimeField(),
),
migrations.AddField(
model_name='countries',
name='users',
field=models.ManyToManyField(related_name='countries', through='app.UserCountry', to=settings.AUTH_USER_MODEL),
),
]
# --- path: ScrapeNASAPicDayWebsite/.history/scraper_20190701155658.py | repo: web3-qa/intermediatePython ---
import urllib.request
from bs4 import BeautifulSoup
# Download the NASA Astronomy Picture of the Day archive page and parse the HTML.
content = urllib.request.urlopen("http://apod.nasa.gov/apod/archivepix.html").read()
soup = BeautifulSoup(content, "html.parser")
"[email protected]"
] | |
# --- path: day_5/dictAndSet.py | repo: dydy061951/SeleniumProject ---
# dict is short for dictionary and set means a collection; both are similar to arrays.
# In Python, a tuple is written with parentheses and a list with square brackets.
# Tuples use parentheses, lists use square brackets, dictionaries and sets use curly braces.
# For example, the same student record can be described in each form:
stu=("001","小明","男",23) # Tuple: read-only; elements can only be looked at, not added, removed or changed.
# Difference between a tuple and an array:
# An array lets you modify its elements, but not add or remove them, and all of its elements share one type.
# A tuple cannot be added to, removed from or changed, but its element types are not fixed (it may mix numbers and strings).
stu1=["001","小明","男",23] # List: supports add, remove, update and lookup; the most commonly used data format.
# find_elements() returns exactly this kind of list.
stu2={"001","小明","男",23} # Set: unordered, so elements cannot be found by subscript index; no duplicates - repeated elements are removed automatically.
stu3={"id":"001","姓名":"小明","性别":"男","年龄":"23"} # Dict: key:value pairs are self-describing - the key tells you what the value means. A dict is also unordered; keys cannot repeat, but values can.
print(stu3['姓名'])
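# Illustrative sketch of the rules above (the concrete values are arbitrary):
# stu1.append(25)        # a list can grow
# stu3["年龄"] = "24"     # a dict value can be reassigned through its key
# stu[0] = "002"         # would raise TypeError: a tuple does not support item assignment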
# --- path: sdk/compute/azure-mgmt-compute/azure/mgmt/compute/v2021_07_01/operations/_shared_galleries_operations.py | repo: adriananeci/azure-sdk-for-python | license: MIT ---
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from typing import TYPE_CHECKING
import warnings
from azure.core.exceptions import ClientAuthenticationError, HttpResponseError, ResourceExistsError, ResourceNotFoundError, map_error
from azure.core.paging import ItemPaged
from azure.core.pipeline import PipelineResponse
from azure.core.pipeline.transport import HttpRequest, HttpResponse
from azure.mgmt.core.exceptions import ARMErrorFormat
from .. import models as _models
if TYPE_CHECKING:
# pylint: disable=unused-import,ungrouped-imports
from typing import Any, Callable, Dict, Generic, Iterable, Optional, TypeVar, Union
T = TypeVar('T')
ClsType = Optional[Callable[[PipelineResponse[HttpRequest, HttpResponse], T, Dict[str, Any]], Any]]
class SharedGalleriesOperations(object):
"""SharedGalleriesOperations operations.
You should not instantiate this class directly. Instead, you should create a Client instance that
instantiates it for you and attaches it as an attribute.
:ivar models: Alias to model classes used in this operation group.
:type models: ~azure.mgmt.compute.v2021_07_01.models
:param client: Client for service requests.
:param config: Configuration of service client.
:param serializer: An object model serializer.
:param deserializer: An object model deserializer.
"""
models = _models
def __init__(self, client, config, serializer, deserializer):
self._client = client
self._serialize = serializer
self._deserialize = deserializer
self._config = config
def list(
self,
location, # type: str
shared_to=None, # type: Optional[Union[str, "_models.SharedToValues"]]
**kwargs # type: Any
):
# type: (...) -> Iterable["_models.SharedGalleryList"]
"""List shared galleries by subscription id or tenant id.
:param location: Resource location.
:type location: str
:param shared_to: The query parameter to decide what shared galleries to fetch when doing
listing operations.
:type shared_to: str or ~azure.mgmt.compute.v2021_07_01.models.SharedToValues
:keyword callable cls: A custom type or function that will be passed the direct response
:return: An iterator like instance of either SharedGalleryList or the result of cls(response)
:rtype: ~azure.core.paging.ItemPaged[~azure.mgmt.compute.v2021_07_01.models.SharedGalleryList]
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SharedGalleryList"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2021-07-01"
accept = "application/json"
def prepare_request(next_link=None):
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
if not next_link:
# Construct URL
url = self.list.metadata['url'] # type: ignore
path_format_arguments = {
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'location': self._serialize.url("location", location, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
if shared_to is not None:
query_parameters['sharedTo'] = self._serialize.query("shared_to", shared_to, 'str')
request = self._client.get(url, query_parameters, header_parameters)
else:
url = next_link
query_parameters = {} # type: Dict[str, Any]
request = self._client.get(url, query_parameters, header_parameters)
return request
def extract_data(pipeline_response):
deserialized = self._deserialize('SharedGalleryList', pipeline_response)
list_of_elem = deserialized.value
if cls:
list_of_elem = cls(list_of_elem)
return deserialized.next_link or None, iter(list_of_elem)
def get_next(next_link=None):
request = prepare_request(next_link)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
return pipeline_response
return ItemPaged(
get_next, extract_data
)
list.metadata = {'url': '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/sharedGalleries'} # type: ignore
def get(
self,
location, # type: str
gallery_unique_name, # type: str
**kwargs # type: Any
):
# type: (...) -> "_models.SharedGallery"
"""Get a shared gallery by subscription id or tenant id.
:param location: Resource location.
:type location: str
:param gallery_unique_name: The unique name of the Shared Gallery.
:type gallery_unique_name: str
:keyword callable cls: A custom type or function that will be passed the direct response
:return: SharedGallery, or the result of cls(response)
:rtype: ~azure.mgmt.compute.v2021_07_01.models.SharedGallery
:raises: ~azure.core.exceptions.HttpResponseError
"""
cls = kwargs.pop('cls', None) # type: ClsType["_models.SharedGallery"]
error_map = {
401: ClientAuthenticationError, 404: ResourceNotFoundError, 409: ResourceExistsError
}
error_map.update(kwargs.pop('error_map', {}))
api_version = "2021-07-01"
accept = "application/json"
# Construct URL
url = self.get.metadata['url'] # type: ignore
path_format_arguments = {
'subscriptionId': self._serialize.url("self._config.subscription_id", self._config.subscription_id, 'str'),
'location': self._serialize.url("location", location, 'str'),
'galleryUniqueName': self._serialize.url("gallery_unique_name", gallery_unique_name, 'str'),
}
url = self._client.format_url(url, **path_format_arguments)
# Construct parameters
query_parameters = {} # type: Dict[str, Any]
query_parameters['api-version'] = self._serialize.query("api_version", api_version, 'str')
# Construct headers
header_parameters = {} # type: Dict[str, Any]
header_parameters['Accept'] = self._serialize.header("accept", accept, 'str')
request = self._client.get(url, query_parameters, header_parameters)
pipeline_response = self._client._pipeline.run(request, stream=False, **kwargs)
response = pipeline_response.http_response
if response.status_code not in [200]:
map_error(status_code=response.status_code, response=response, error_map=error_map)
raise HttpResponseError(response=response, error_format=ARMErrorFormat)
deserialized = self._deserialize('SharedGallery', pipeline_response)
if cls:
return cls(pipeline_response, deserialized, {})
return deserialized
get.metadata = {'url': '/subscriptions/{subscriptionId}/providers/Microsoft.Compute/locations/{location}/sharedGalleries/{galleryUniqueName}'} # type: ignore
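    # Usage sketch (hedged; client construction is elided and the resource names below are illustrative only):
    #   client = ComputeManagementClient(credential, subscription_id)
    #   for gallery in client.shared_galleries.list(location="westus"):
    #       print(gallery.name)
    #   client.shared_galleries.get(location="westus", gallery_unique_name="<unique-name>")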
# --- path: JTTest/SL7/CMSSW_10_2_15/src/dataRunA/nano3815.py | repo: rsk146/CMS ---
# Auto generated configuration file
# using:
# Revision: 1.19
# Source: /local/reps/CMSSW/CMSSW/Configuration/Applications/python/ConfigBuilder.py,v
# with command line options: nanoAOD_jetToolbox_cff -s NANO --data --eventcontent NANOAOD --datatier NANOAOD --no_exec --conditions 102X_dataRun2_Sep2018Rereco_v1 --era Run2_2018,run2_nanoAOD_102Xv1 --customise_commands=process.add_(cms.Service('InitRootHandlers', EnableIMT = cms.untracked.bool(False))) --customise JMEAnalysis/JetToolbox/nanoAOD_jetToolbox_cff.nanoJTB_customizeMC --filein /users/h2/rsk146/JTTest/SL7/CMSSW_10_6_12/src/ttbarCutTest/dataReprocessing/0004A5E9-9F18-6B42-B31D-4206406CE423.root --fileout file:jetToolbox_nano_datatest.root
import FWCore.ParameterSet.Config as cms
from Configuration.StandardSequences.Eras import eras
process = cms.Process('NANO',eras.Run2_2018,eras.run2_nanoAOD_102Xv1)
# import of standard configurations
process.load('Configuration.StandardSequences.Services_cff')
process.load('SimGeneral.HepPDTESSource.pythiapdt_cfi')
process.load('FWCore.MessageService.MessageLogger_cfi')
process.load('Configuration.EventContent.EventContent_cff')
process.load('Configuration.StandardSequences.GeometryRecoDB_cff')
process.load('Configuration.StandardSequences.MagneticField_AutoFromDBCurrent_cff')
process.load('PhysicsTools.NanoAOD.nano_cff')
process.load('Configuration.StandardSequences.EndOfProcess_cff')
process.load('Configuration.StandardSequences.FrontierConditions_GlobalTag_cff')
process.maxEvents = cms.untracked.PSet(
input = cms.untracked.int32(-1)
)
# Input source
process.source = cms.Source("PoolSource",
fileNames = cms.untracked.vstring('file:root://cms-xrd-global.cern.ch//store/data/Run2018A/EGamma/MINIAOD/17Sep2018-v2/270000/DE440E48-3552-214C-B236-AF780B81C8C4.root'),
secondaryFileNames = cms.untracked.vstring()
)
process.options = cms.untracked.PSet(
)
# Production Info
process.configurationMetadata = cms.untracked.PSet(
annotation = cms.untracked.string('nanoAOD_jetToolbox_cff nevts:1'),
name = cms.untracked.string('Applications'),
version = cms.untracked.string('$Revision: 1.19 $')
)
# Output definition
process.NANOAODoutput = cms.OutputModule("NanoAODOutputModule",
compressionAlgorithm = cms.untracked.string('LZMA'),
compressionLevel = cms.untracked.int32(9),
dataset = cms.untracked.PSet(
dataTier = cms.untracked.string('NANOAOD'),
filterName = cms.untracked.string('')
),
fileName = cms.untracked.string('file:jetToolbox_nano_datatest3815.root'),
outputCommands = process.NANOAODEventContent.outputCommands
)
# Additional output definition
# Other statements
from Configuration.AlCa.GlobalTag import GlobalTag
process.GlobalTag = GlobalTag(process.GlobalTag, '102X_dataRun2_Sep2018Rereco_v1', '')
# Path and EndPath definitions
process.nanoAOD_step = cms.Path(process.nanoSequence)
process.endjob_step = cms.EndPath(process.endOfProcess)
process.NANOAODoutput_step = cms.EndPath(process.NANOAODoutput)
# Schedule definition
process.schedule = cms.Schedule(process.nanoAOD_step,process.endjob_step,process.NANOAODoutput_step)
from PhysicsTools.PatAlgos.tools.helpers import associatePatAlgosToolsTask
associatePatAlgosToolsTask(process)
# customisation of the process.
# Automatic addition of the customisation function from PhysicsTools.NanoAOD.nano_cff
from PhysicsTools.NanoAOD.nano_cff import nanoAOD_customizeData
#call to customisation function nanoAOD_customizeData imported from PhysicsTools.NanoAOD.nano_cff
process = nanoAOD_customizeData(process)
# Automatic addition of the customisation function from JMEAnalysis.JetToolbox.nanoAOD_jetToolbox_cff
from JMEAnalysis.JetToolbox.nanoAOD_jetToolbox_cff import nanoJTB_customizeMC
#call to customisation function nanoJTB_customizeMC imported from JMEAnalysis.JetToolbox.nanoAOD_jetToolbox_cff
process = nanoJTB_customizeMC(process)
# End of customisation functions
# Customisation from command line
process.add_(cms.Service('InitRootHandlers', EnableIMT = cms.untracked.bool(False)))
# Add early deletion of temporary data products to reduce peak memory need
from Configuration.StandardSequences.earlyDeleteSettings_cff import customiseEarlyDelete
process = customiseEarlyDelete(process)
# End adding early deletion | [
"[email protected]"
] | |
674ef2ade2f77f65c921287874e2e1a94c29f507 | 27da9fb329a867a6035ecefb77c3a591eefa1e17 | /tools/data_faker/data_faker/__main__.py | 63faaed86541ebc70adffb12911f915503996460 | [
"BSD-3-Clause"
] | permissive | ngoctrantl/rotki | ceef5d3c11ff987889997b3ef1939ef71daaa2ce | c30b2d0084c215b72e061e04d9f8391f8106b874 | refs/heads/develop | 2020-12-21T06:49:47.819538 | 2020-01-26T10:35:45 | 2020-01-26T15:29:56 | 236,344,817 | 0 | 0 | BSD-3-Clause | 2020-01-26T17:04:32 | 2020-01-26T17:03:27 | null | UTF-8 | Python | false | false | 815 | py | from gevent import monkey # isort:skip # noqa
monkey.patch_all() # isort:skip # noqa
import logging
from data_faker.args import data_faker_args
from data_faker.faker import DataFaker
from data_faker.mock_apis.api import APIServer, RestAPI
logger = logging.getLogger(__name__)
def main() -> None:
arg_parser = data_faker_args()
args = arg_parser.parse_args()
faker = DataFaker(args)
rest_api = RestAPI(
fake_kraken=faker.fake_kraken,
fake_binance=faker.fake_binance,
)
server = APIServer(rest_api)
print('SERVER IS NOW RUNNING')
# For some reason debug=True throws an exception:
# ModuleNotFoundError: No module named 'data_faker
# server.run(debug=True)
server.run()
print('SERVER IS NOW SHUTTING DOWN')
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
19a0bd540f1464267b32189c9380ffdd67d3eb3f | a1fc57c6a3e3101d53729ad11df22adb058f1060 | /instagram/posts/models/posts.py | 65ff67af1e6b4c9e356e4d7d992c72aaa9674904 | [] | no_license | morwen1/curso_instagram | 7f5742256a1eacf38a78b06a62e3f21bcf8b10a9 | d201ff1f35f5f682242e4f49867fe6cad144d5c8 | refs/heads/master | 2020-07-17T23:00:47.259139 | 2019-09-03T02:37:21 | 2019-09-03T02:37:21 | 206,119,167 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 857 | py | from django.db import models
from utils.abstract_model import AbstractModel
class Post(AbstractModel):
"""
model for posts
"""
profile = models.ForeignKey('users.Profile',
        help_text = 'this is a foreign key: each post belongs to a profile, and a profile can have many posts'
, on_delete=models.CASCADE)
    photo = models.ImageField(
        upload_to='static/posts',
        blank=True)
    photobase64 = models.TextField(blank=True)
description = models.CharField(max_length=255)
    # like and reply counters are stored as plain integers to make life easier for the front end
likes = models.IntegerField(default=0)
reply = models.IntegerField(default=0)
    # the comments relation may be empty, i.e. a post can have no comments
    comments = models.ManyToManyField(
        to='posts.Comment',
        blank=True)
| [
"[email protected]"
] | |
57e440f9fd98a0afa4340c24c19adfdec78fcf41 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03011/s231502042.py | 7a3be73ad896f44c8d92be6dc0b51274a08bf211 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 150 | py | # -*- coding: utf-8 -*-
from itertools import combinations
print(min(map(lambda x: sum(x), list(combinations(list(map(int, input().split())), 2)))))
| [
"[email protected]"
] | |
3f053d139e2165e119e91310782c2ab28c379bfa | 50914176887f9f21a3489a9407195ba14831354c | /three_sum.py | 47d8ce947ca63a31a4d8a149ad33f21e2a0c41bf | [] | no_license | nkukarl/leetcode | e8cfc2a31e64b68222ad7af631277f1f66d277bc | b1dbe37e8ca1c88714f91643085625ccced76e07 | refs/heads/master | 2021-01-10T05:42:04.022807 | 2018-02-24T03:55:24 | 2018-02-24T03:55:24 | 43,725,072 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 856 | py | class Solution:
def three_sum(self, numbers, target):
numbers.sort()
res = []
for i in range(len(numbers) - 2):
if i > 0 and numbers[i] == numbers[i - 1]:
continue
j = i + 1
k = len(numbers) - 1
while j < k:
triplet = [numbers[i], numbers[j], numbers[k]]
total = sum(triplet)
if total == target:
res.append(triplet)
while j < k and numbers[j] == numbers[j + 1]:
j += 1
while j < k and numbers[k] == numbers[k - 1]:
k -= 1
j += 1
k -= 1
elif total < target:
j += 1
else:
k -= 1
return res
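# Quick usage sketch (not part of the original file): the method sorts the input and then,
# for each fixed first element, walks a two-pointer window over the remainder.
if __name__ == "__main__":
    solver = Solution()
    print(solver.three_sum([-1, 0, 1, 2, -1, -4], 0))  # expected: [[-1, -1, 2], [-1, 0, 1]]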
| [
"[email protected]"
] | |
16a5e5311dd5aa7dcfe895249e7d901d2b518249 | 397e125e94f4f139f2bf5055824d81f24b8b1757 | /ABC/061/C.py | 47ac392a04a86831bd757380bb2584b700a9cb8c | [] | no_license | tails1434/Atcoder | ecbab6ee238e3f225551297db961b1b502841fa4 | e7c7fed36be46bbaaf020a70997842240ba98d62 | refs/heads/master | 2021-07-07T00:31:49.235625 | 2020-09-30T01:42:01 | 2020-09-30T01:42:01 | 189,009,622 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 316 | py | def main():
N, K = map(int, input().split())
ans = [0] * (10 ** 5 + 1)
for i in range(N):
a, b = map(int, input().split())
ans[a] += b
for i in range(10 ** 5 + 1):
if K <= ans[i]:
print(i)
exit()
else:
K -= ans[i]
main()
| [
"[email protected]"
] | |
4e4e7e019f8f77c6f1c5dfdc25e15ce358740c13 | adf428caea488bfbc22917b8d340dde3293fc306 | /gan/cloud/trainer/mytask.py | ea4de8f20e131a8cc074ecd3edd431b3a661f397 | [] | no_license | tingleshao/riviera | 3269a0a0cb30da96bfd33ba3d950a873fdfa24e3 | f44f43bc2b08d50d6bbc6d0b61fcb91146da5d9f | refs/heads/master | 2021-09-11T20:29:35.615539 | 2018-04-11T23:23:32 | 2018-04-11T23:23:32 | 115,686,329 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,803 | py | import argparse
import model
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import (
saved_model_export_utils)
def generate_experiment_fn(train_files,
eval_files,
num_epochs=None,
train_batch_size=40,
eval_batch_size=40,
embedding_size=8,
first_layer_size=100,
num_layers=4,
scale_factor=0.7,
**experiment_args):
"""Create an experiment function given hyperparameters.
See command line help text for description of args.
Returns:
A function (output_dir) -> Experiment where output_dir is a string
representing the location of summaries, checkpoints, and exports.
this function is used by learn_runner to create an Experiment which
executes model code provided in the form of an Estimator and
input functions.
All listed arguments in the outer function are used to create an
Estimator, and input functions (training, evaluation, serving).
Unlisted args are passed through to Experiment.
"""
def _experiment_fn(output_dir):
# num_epochs can control duration if train_steps isn't
# passed to Experiment
train_input = model.generate_input_fn(
train_files,
num_epochs=num_epochs,
batch_size=train_batch_size,
)
# Don't shuffle evaluation data
eval_input = model.generate_input_fn(
eval_files,
batch_size=eval_batch_size,
shuffle=False
)
return tf.contrib.learn.Experiment(
model.build_estimator(
output_dir,
embedding_size=embedding_size,
            # Construct layer sizes with exponential decay
hidden_units=[
max(2, int(first_layer_size * scale_factor**i))
for i in range(num_layers)
]
),
train_input_fn=train_input,
eval_input_fn=eval_input,
# export strategies control the prediction graph structure
# of exported binaries.
export_strategies=[saved_model_export_utils.make_export_strategy(
model.serving_input_fn,
default_output_alternative_key=None,
exports_to_keep=1
)],
**experiment_args
)
return _experiment_fn
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Input Arguments
parser.add_argument(
'--train-files',
help='GCS or local paths to training data',
nargs='+',
required=True
)
parser.add_argument(
'--num-epochs',
help="""\
Maximum number of training data epochs on which to train.
If both --max-steps and --num-epochs are specified,
the training job will run for --max-steps or --num-epochs,
whichever occurs first. If unspecified will run for --max-steps.\
""",
type=int,
)
parser.add_argument(
'--train-batch-size',
help='Batch size for training steps',
type=int,
default=40
)
parser.add_argument(
'--eval-batch-size',
help='Batch size for evaluation steps',
type=int,
default=40
)
parser.add_argument(
'--train-steps',
help="""\
Steps to run the training job for. If --num-epochs is not specified,
this must be. Otherwise the training job will run indefinitely.\
""",
type=int
)
parser.add_argument(
'--eval-steps',
      help='Number of steps to run evaluation for at each checkpoint',
default=100,
type=int
)
parser.add_argument(
'--eval-files',
help='GCS or local paths to evaluation data',
nargs='+',
required=True
)
# Training arguments
parser.add_argument(
'--embedding-size',
help='Number of embedding dimensions for categorical columns',
default=8,
type=int
)
parser.add_argument(
'--first-layer-size',
help='Number of nodes in the first layer of the DNN',
default=100,
type=int
)
parser.add_argument(
'--num-layers',
help='Number of layers in the DNN',
default=4,
type=int
)
parser.add_argument(
'--scale-factor',
help='How quickly should the size of the layers in the DNN decay',
default=0.7,
type=float
)
parser.add_argument(
'--job-dir',
help='GCS location to write checkpoints and export models',
required=True
)
# Argument to turn on all logging
parser.add_argument(
'--verbosity',
choices=[
'DEBUG',
'ERROR',
'FATAL',
'INFO',
'WARN'
],
default=tf.logging.FATAL,
help='Set logging verbosity'
)
# Experiment arguments
parser.add_argument(
'--eval-delay-secs',
help='How long to wait before running first evaluation',
default=10,
type=int
)
parser.add_argument(
'--min-eval-frequency',
help='Minimum number of training steps between evaluations',
default=1,
type=int
)
args = parser.parse_args()
arguments = args.__dict__
tf.logging.set_verbosity(arguments.pop('verbosity'))
job_dir = arguments.pop('job_dir')
  print('Starting Census: Please launch tensorboard to see results:\n'
'tensorboard --logdir=$MODEL_DIR')
# Run the training job
# learn_runner pulls configuration information from environment
# variables using tf.learn.RunConfig and uses this configuration
# to conditionally execute Experiment, or param server code
#(c) job_dir is the MODEL_DIR, where the trained model is saved
learn_runner.run(generate_experiment_fn(**arguments), job_dir)
| [
"[email protected]"
] | |
ead3852d5b3896dc4a4817d88070380e31e7f65c | e210c28eeed9d38eb78c14b3a6388eca1e0e85d8 | /nvflare/app_opt/pt/file_model_locator.py | 2caab1f304cf3f0c974eb142e0c6083e1d68b582 | [
"Apache-2.0"
] | permissive | NVIDIA/NVFlare | 5a2d2e4c85a3fd0948e25f1ba510449727529a15 | 1433290c203bd23f34c29e11795ce592bc067888 | refs/heads/main | 2023-08-03T09:21:32.779763 | 2023-07-05T21:17:16 | 2023-07-05T21:17:16 | 388,876,833 | 442 | 140 | Apache-2.0 | 2023-09-14T19:12:35 | 2021-07-23T17:26:12 | Python | UTF-8 | Python | false | false | 2,996 | py | # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from nvflare.apis.dxo import DXO
from nvflare.apis.event_type import EventType
from nvflare.apis.fl_context import FLContext
from nvflare.app_common.abstract.model import model_learnable_to_dxo
from nvflare.app_common.abstract.model_locator import ModelLocator
from nvflare.app_opt.pt.file_model_persistor import PTFileModelPersistor
class PTFileModelLocator(ModelLocator):
def __init__(self, pt_persistor_id: str):
"""The ModelLocator's job is to find and locate the models inventory saved during training.
Args:
pt_persistor_id (str): ModelPersistor component ID
"""
super().__init__()
self.pt_persistor_id = pt_persistor_id
self.model_persistor = None
self.model_inventory = {}
def handle_event(self, event_type: str, fl_ctx: FLContext):
if event_type == EventType.START_RUN:
self._initialize(fl_ctx)
def _initialize(self, fl_ctx: FLContext):
engine = fl_ctx.get_engine()
self.model_persistor: PTFileModelPersistor = engine.get_component(self.pt_persistor_id)
if self.model_persistor is None or not isinstance(self.model_persistor, PTFileModelPersistor):
raise ValueError(
f"pt_persistor_id component must be PTFileModelPersistor. " f"But got: {type(self.model_persistor)}"
)
def get_model_names(self, fl_ctx: FLContext) -> List[str]:
"""Returns the list of model names that should be included from server in cross site validation.add().
Args:
fl_ctx (FLContext): FL Context object.
Returns:
List[str]: List of model names.
"""
self.model_inventory: dict = self.model_persistor.get_model_inventory(fl_ctx)
return list(self.model_inventory.keys())
def locate_model(self, model_name, fl_ctx: FLContext) -> DXO:
"""Call to locate and load the model weights of model_name.
Args:
model_name: name of the model
fl_ctx: FLContext
Returns: model_weight DXO
"""
if model_name not in list(self.model_inventory.keys()):
raise ValueError(f"model inventory does not contain: {model_name}")
model_learnable = self.model_persistor.get(model_name, fl_ctx)
dxo = model_learnable_to_dxo(model_learnable)
return dxo
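# Hypothetical wiring sketch (an assumption, not from this file): in a server-side job
# configuration this locator is typically declared as a component entry, with the ids
# below being placeholders.
# {
#     "id": "model_locator",
#     "path": "nvflare.app_opt.pt.file_model_locator.PTFileModelLocator",
#     "args": {"pt_persistor_id": "persistor"}
# }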
| [
"[email protected]"
] | |
295adab3b33c2f876c9b28ae186e1deafbe3fdfd | c4e97f2eb1081d8fad5e64872c3d6acf9a89d445 | /Solutions/0140_wordBreak.py | 04f7ecd57c34aa3b8fb6e9be01cefe48758ef41d | [] | no_license | YoupengLi/leetcode-sorting | 0efb3f4d7269c76a3ed11caa3ab48c8ab65fea25 | 3d9e0ad2f6ed92ec969556f75d97c51ea4854719 | refs/heads/master | 2020-05-18T23:28:51.363862 | 2019-09-12T00:42:14 | 2019-09-12T00:42:14 | 184,712,501 | 3 | 1 | null | null | null | null | UTF-8 | Python | false | false | 5,485 | py | # -*- coding: utf-8 -*-
# @Time : 2019/7/23 15:47
# @Author : Youpeng Li
# @Site :
# @File : 0140_wordBreak.py
# @Software: PyCharm
'''
140. Word Break II
Given a non-empty string s and a dictionary wordDict containing a list of non-empty words,
add spaces in s to construct a sentence where each word is a valid dictionary word.
Return all such possible sentences.
Note:
The same word in the dictionary may be reused multiple times in the segmentation.
You may assume the dictionary does not contain duplicate words.
Example 1:
Input:
s = "catsanddog"
wordDict = ["cat", "cats", "and", "sand", "dog"]
Output:
[
"cats and dog",
"cat sand dog"
]
Example 2:
Input:
s = "pineapplepenapple"
wordDict = ["apple", "pen", "applepen", "pine", "pineapple"]
Output:
[
"pine apple pen apple",
"pineapple pen apple",
"pine applepen apple"
]
Explanation: Note that you are allowed to reuse a dictionary word.
Example 3:
Input:
s = "catsandog"
wordDict = ["cats", "dog", "sand", "and", "cat"]
Output:
[]
'''
class Solution:
def wordBreak(self, s: 'str', wordDict: 'List[str]') -> 'List[str]':
if not s or not wordDict:
return []
res = []
self.dfs(s, wordDict, "", res)
return res
# Before we do dfs, we check whether the remaining string
# can be splitted by using the dictionary,
# in this way we can decrease unnecessary computation greatly.
def dfs(self, s: 'str', wordDict: 'List[str]', path: 'str', res: 'List[str]'):
if not s:
res.append(path[:-1])
return
if self.check(s, wordDict): # prunning
for i in range(1, len(s) + 1):
if s[:i] in wordDict:
# dic.remove(s[:i])
self.dfs(s[i:], wordDict, path + s[:i] + " ", res)
# DP code to check whether a string can be splitted by using the
# dic, this is the same as word break I.
def check(self, s: 'str', wordDict: 'List[str]') -> 'bool':
if not s or not wordDict:
return False
dp = [False] * (len(s) + 1) # dp[i] means s[:i+1] can be segmented into words in the wordDicts
dp[0] = True
for i in range(len(s)):
for j in range(i, len(s)):
if dp[i] and s[i: j + 1] in wordDict:
dp[j + 1] = True
return dp[-1]
def wordBreak_1(self, s: 'str', wordDict: 'List[str]') -> 'List[str]':
if not s or not wordDict:
return []
if not self.check_1(s, wordDict):
return []
n = len(s)
word_dict = set(wordDict)
max_len = max(len(word) for word in word_dict)
min_len = min(len(word) for word in word_dict)
def dp(i):
if i >= n:
return [""]
res = []
ed_left = i + min_len
ed_right = min(i + max_len, n)
for ed in range(ed_left, ed_right + 1):
if s[i:ed] in word_dict and dp(ed):
res += [s[i:ed] + ' ' + rest if rest else s[i:ed] for rest in dp(ed)]
return res
return dp(0)
def check_1(self, s: 'str', wordDict: 'List[str]') -> 'bool':
if not s or not wordDict:
return False
dp = [False] * (len(s) + 1) # dp[i] means s[:i+1] can be segmented into words in the wordDicts
dp[0] = True
for i in range(len(s)):
for j in range(i, len(s)):
if dp[i] and s[i: j + 1] in wordDict:
dp[j + 1] = True
return dp[-1]
def wordBreak_2(self, s: 'str', wordDict: 'List[str]') -> 'List[str]':
if not s or not wordDict:
return []
wordDict = set(wordDict)
backup = {}
self.res = []
def dfs_2(s: 'str') -> 'List[str]':
if not s:
return ['']
if s not in backup:
backup[s] = []
for i in range(1, len(s) + 1):
word = s[:i]
if word in wordDict:
sentences = dfs_2(s[i:])
for ss in sentences:
backup[s].append(word + ' ' + ss)
return backup[s]
dfs_2(s)
return [bu[:-1] for bu in backup[s]]
if __name__ == "__main__":
a = Solution()
s = "catsanddog"
wordDict = ["cat", "cats", "and", "sand", "dog"]
print(a.wordBreak(s, wordDict))
print(a.wordBreak_1(s, wordDict))
print(a.wordBreak_2(s, wordDict))
s = "pineapplepenapple"
wordDict = ["apple", "pen", "applepen", "pine", "pineapple"]
print(a.wordBreak(s, wordDict))
print(a.wordBreak_1(s, wordDict))
print(a.wordBreak_2(s, wordDict))
s = "catsandog"
wordDict = ["cats", "dog", "sand", "and", "cat"]
print(a.wordBreak(s, wordDict))
print(a.wordBreak_1(s, wordDict))
print(a.wordBreak_2(s, wordDict))
s = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" \
"baaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
wordDict = ["a", "aa", "aaa", "aaaa", "aaaaa", "aaaaaa", "aaaaaaa", "aaaaaaaa", "aaaaaaaaa", "aaaaaaaaaa"]
print(a.wordBreak(s, wordDict))
print(a.wordBreak_1(s, wordDict))
print(a.wordBreak_2(s, wordDict)) | [
"[email protected]"
] | |
484aec251ff1c5e25208e3ebcacfbfdcfa821b7b | e4045e99ae5395ce5369a1374a20eae38fd5179b | /files/read_names.py | 4020137ff8b621ccfc423a9891205d6ca36c0eba | [] | no_license | srikanthpragada/09_MAR_2018_PYTHON_DEMO | 74fdb54004ab82b62f68c9190fe868f3c2961ec0 | 8684137c77d04701f226e1e2741a7faf9eeef086 | refs/heads/master | 2021-09-11T15:52:17.715078 | 2018-04-09T15:29:16 | 2018-04-09T15:29:16 | 124,910,054 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 207 | py | # open file for writing in text mode
with open(r"e:\classroom\python\mar9\names.txt", "rt") as f:
for lineno, name in enumerate(f.readlines()):
print("{:03} {}".format(lineno + 1, name), end='')
| [
"[email protected]"
] | |
6685570716f0c046013d2b6a6abc428738b35399 | 85f5dff291acf1fe7ab59ca574ea9f4f45c33e3b | /api/tacticalrmm/checks/migrations/0018_auto_20210205_1647.py | cce78b61d2167ba2adcec7ff771ee85901adcc0b | [
"LicenseRef-scancode-proprietary-license",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | sadnub/tacticalrmm | a4ecaf994abe39244a6d75ed2166222abb00d4f4 | 0af95aa9b1084973642da80e9b01a18dcacec74a | refs/heads/develop | 2023-08-30T16:48:33.504137 | 2023-04-10T22:57:44 | 2023-04-10T22:57:44 | 243,405,684 | 0 | 2 | MIT | 2020-09-08T13:03:30 | 2020-02-27T01:43:56 | Python | UTF-8 | Python | false | false | 518 | py | # Generated by Django 3.1.4 on 2021-02-05 16:47
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('checks', '0017_check_dashboard_alert'),
]
operations = [
migrations.AlterField(
model_name='check',
name='alert_severity',
field=models.CharField(blank=True, choices=[('info', 'Informational'), ('warning', 'Warning'), ('error', 'Error')], default='warning', max_length=15, null=True),
),
]
| [
"[email protected]"
] | |
f751e7a8acf0536d699fa16c80f652308a74ce43 | 9318b1885946f639f1446431abc6ec4fa33fc9ac | /typeData.py | 1b85d8810b009197b06a00e1557aefa5546fc95d | [] | no_license | mcewenar/PYTHON_INFO_I_BASIC | 1d365bcd3d0186c8955e3cde2605831717d0a412 | e5c3278969b420e7ce03bf7903cf57e63865aaca | refs/heads/master | 2023-06-04T02:26:42.124304 | 2021-06-22T02:48:08 | 2021-06-22T02:48:08 | 326,510,259 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 404 | py | #IMPORTANTE:
def isInt(data):
    if type(data) == int:
        print("is an integer")
        return True
    elif type(data) == float:
        print("is a float")
        return False
    elif type(data) == str:
        print("is a string")
        return None
#print(isInt(5))
#print(isInt(5.0))
#print(isInt("5"))
#print(isInt("Hello world"))
x = int(input("Enter any value: "))  # note: int() here means isInt(x) will always see an integer
print(isInt(x)) | [
"[email protected]"
] | |
c6aaa99c6e382ba9cb455550f3944c94fc5935df | 490a934f36fdb97827934220eeff71f89f7c3e5d | /config.py | 026520f65ada648e65573c49c6af390f1694a9cf | [
"MIT"
] | permissive | qq453388937/Tornado_home_Git | 9c34a198be8737bbb49a28732cfbe899c0f86828 | 65b36a2816b6648c9bad136249552c8276d4584e | refs/heads/master | 2021-04-06T06:33:46.631606 | 2018-03-29T08:55:43 | 2018-03-29T08:55:43 | 124,759,358 | 0 | 0 | MIT | 2018-03-20T13:07:43 | 2018-03-11T13:30:04 | JavaScript | UTF-8 | Python | false | false | 905 | py | # -*- coding:utf-8 -*-
import os
# redis configuration pulled out separately
torndb_settings = dict(
host="127.0.0.1",
database="ihome",
user="root",
password="123", # 看源码得知默认3306端口
)
redis_settings = dict(
host='127.0.0.1',
port=6379,
)
settings = {
'debug': True,
'static_path': os.path.join(os.path.dirname(__file__), 'static'),
'template_path': os.path.join(os.path.dirname(__file__), 'template'),
    # 'static_url_prefix': "/ChinaNumber1",  # defaults to /static; this parameter changes the URL prefix used for static requests
    'cookie_secret': '0Q1AKOKTQHqaa+N80XhYW7KCGskOUE2snCW06UIxXgI=',  # works together with secure cookies; generated from base64 + uuid (see the sketch below the settings dict)
'xsrf_cookies': False
}
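# A minimal sketch of how a cookie_secret like the one above can be generated
# (one common Tornado recipe; run once and paste the result into settings):
# import base64, uuid
# print(base64.b64encode(uuid.uuid4().bytes + uuid.uuid4().bytes))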
log_file = os.path.join(os.path.dirname(__file__), 'logs/log.txt')
log_leve = 'debug'
session_expire = 86400
# password hashing key
passwd_hash_key = "nlgCjaTXQX2jpupQFQLoQo5N4OkEmkeHsHD9+BBx2WQ="
| [
"[email protected]"
] | |
7bda8d156f687a4d69a597afd6dacbe903332568 | c0f69bf01d09718b81814bb8bf274c931801e9c8 | /codebase/manager_component/monitoring_subcomponent/history_subcomponent/graphing_subcomponent/graphing_class.py | 522e0767a94a2f96d6d3b706d99124d234954be8 | [] | no_license | johnpcole/Download-Manager | 369ec1232f35ec3ab8d653c03f4ea12bbb57207c | fd9b287cbfb6b813a6d23877f25423079b063c46 | refs/heads/master | 2021-07-19T17:33:40.473368 | 2019-11-03T23:43:19 | 2019-11-03T23:43:19 | 85,001,326 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,795 | py | from .....common_components.datetime_datatypes import datetime_module as DateTime
from .graph_subcomponent import graph_module as Graph
class DefineGraphing:
def __init__(self, smallera, largeera):
# Define graphset size
self.graphsetsize = 6
# Defines the granularity of display of monitor data
self.shorterasize = smallera
self.longerasize = largeera
# Screen metrics
self.widegraphcolumnwidth = 3
self.narrowgraphcolumnwidth = 2
self.graphhorizontaloffset = 9
self.graphverticaloffset = -28
self.graphverticalspacing = 177
self.widegraphwidth = 974 - 18
self.narrowgraphwidth = 480 - 18
self.graphheight = 125
self.graphblockheight = 5
self.wideshortoriginoffset = 39 # Hours from origin to now
self.wideshortoriginoffsetminutes = 40 # Minutes from origin to now
self.narrowshortoriginoffset = 25 # Hours from origin to now
self.narrowshortoriginoffsetminutes = 20 # Minutes from origin to now
self.widelongoriginoffset = 23 + (9 * 24) # Hours from origin to now
self.narrowlongoriginoffset = 9 + (6 * 24) # Hours from origin to now
# =========================================================================================
def drawgraphs(self, longhistorymode, historydataset):
currentdatetime = DateTime.getnow()
graphset = Graph.creategraphset(self.graphsetsize)
for graphindex in [1, 2, 3, 4, 5, 6]:
# Axes
graphset.addto(graphindex, Graph.creategraphaxes(
self.determineorigintimedate(currentdatetime, graphindex, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
self.determinegraphwidth(graphindex),
self.graphheight))
# Graph Headings
graphset.addto(graphindex, Graph.createtitles(
longhistorymode,
self.graphhorizontaloffset,
self.graphverticaloffset,
self.graphverticalspacing,
graphindex))
for graphindex in [1, 4]:
# VPN Bars for graphs 1 & 4
graphset.addto(graphindex, Graph.createvpnbars(
self.determineorigintimedate(currentdatetime, graphindex, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
self.graphheight,
historydataset))
# Upload Bars for graphs 2 & 5
graphset.addto(graphindex + 1, Graph.createuploadedbars(
self.determineorigintimedate(currentdatetime, graphindex + 1, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex + 1),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
self.graphheight,
historydataset))
# Legends
graphset.addto(graphindex, Graph.createstatuslegend(
self.determinegraphwidth(graphindex + 1),
self.determinegraphbottom(1)))
if longhistorymode == True:
# Status bars for graphs 1 & 4
graphset.addto(graphindex, Graph.createstatusbars(
self.determineorigintimedate(currentdatetime, graphindex, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
self.graphheight,
historydataset))
else:
# Status blocks for graphs 1 & 4
graphset.addto(graphindex, Graph.createstatusblocks(
self.determineorigintimedate(currentdatetime, graphindex, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
historydataset,
self.graphblockheight))
# Temp bars for graphs 3 & 6
graphset.addto(graphindex + 2, Graph.createtempbars(
self.determineorigintimedate(currentdatetime, graphindex + 2, longhistorymode),
self.determinecorrecterasize(longhistorymode),
self.determinecolumnwidth(graphindex + 2),
self.graphhorizontaloffset,
self.determinegraphbottom(1),
self.graphheight,
historydataset))
return graphset.printout()
def determinegraphbottom(self, graphindex):
return self.graphverticaloffset + (self.graphverticalspacing * graphindex)
def determinecorrecterasize(self, longhistorymode):
if longhistorymode == False:
graph = self.shorterasize
else:
graph = self.longerasize
return graph
def determineorigintimedate(self, currenttimedate, graphindex, longhistorymode):
graph = DateTime.createfromobject(currenttimedate)
if graphindex > 3:
if longhistorymode == True:
graph.adjusthours(0 - self.narrowlongoriginoffset)
else:
graph.adjusthours(0 - self.narrowshortoriginoffset)
graph.adjustminutes(0 - self.narrowshortoriginoffsetminutes)
else:
if longhistorymode == True:
graph.adjusthours(0 - self.widelongoriginoffset)
else:
graph.adjusthours(0 - self.wideshortoriginoffset)
graph.adjustminutes(0 - self.wideshortoriginoffsetminutes)
return graph
def determinegraphwidth(self, index):
if index < 4:
outcome = self.widegraphwidth
else:
outcome = self.narrowgraphwidth
return outcome
def determinecolumnwidth(self, index):
if index < 4:
outcome = self.widegraphcolumnwidth
else:
outcome = self.narrowgraphcolumnwidth
return outcome
| [
"[email protected]"
] | |
af638ccb4cbe7e382ee5237fc60ac8cb90f021ab | 0eb8bde44f28866596b9612835b4c0bb37c3a30f | /morsels/20200622_instance_tracker/problem_text.py | 47ba7e437c23ffedc2efa80754b80f954149c6b3 | [] | no_license | gtcooke94/snippets | 609ebc85b40453a79845e28113bd545579796379 | 4792e10cf9f056487e992219cfb088529a53e897 | refs/heads/master | 2021-06-25T13:01:55.282635 | 2020-11-13T21:01:18 | 2020-11-13T21:01:18 | 170,204,644 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,712 | py | Greetings,
This week I'd like you to make a "class factory" which will allow classes to track instances of themselves.
This instance_tracker class factory will return a class when called and can be used like this:
class Account(instance_tracker()):
def __init__(self, number):
self.number = number
super().__init__()
def __repr__(self):
return 'Account({!r})'.format(self.number)
Now the Account class will have an instances attribute which will keep track of all instances of the Account class.
>>> a1 = Account('4056')
>>> a2 = Account('8156')
>>> print(*Account.instances, sep='\n')
Account('4056')
Account('8156')
At first you can assume that subclasses of instance_tracker never override __init__ without calling super().__init__(...).
Bonus 1
For the first bonus, allow your instance_tracker class factory to optionally accept an attribute name to use for storing the instances (instead of the default instances).
class Person:
def __init__(self, name):
self.name = name
def __repr__(self):
return "Person({!r})".format(self.name)
class TrackedPerson(instance_tracker('registry'), Person):
"""Example of inheritance and renaming 'instances' to 'registry'."""
That class should have a registry attribute instead of an instances attribute:
>>> brett = TrackedPerson("Brett Cannon")
>>> guido = TrackedPerson("Guido van Rossum")
>>> carol = TrackedPerson("Carol Willing")
>>> list(TrackedPerson.registry)
[Person('Brett Cannon'), Person('Guido van Rossum'), Person('Carol Willing')]
Bonus 2
For the second bonus, make sure your instance_tracker factory works even for subclasses that don't call super().__init__(...).
For example this class:
class Person(instance_tracker()):
def __init__(self, name):
self.name = name
def __repr__(self):
return "Person({!r})".format(self.name)
Should work as expected:
>>> nick = Person("Nick Coghlan")
>>> brett = Person("Brett Cannon")
>>> list(Person.instances)
[Person('Nick Coghlan'), Person('Brett Cannon')]
Bonus 3
For the third bonus, I'd like you to make sure that objects which are not referenced anywhere else will be deleted from memory as usual.
Take this class for example:
class Account(instance_tracker()):
def __init__(self, number):
self.number = number
def __repr__(self):
return 'Account({!r})'.format(self.number)
Making three instances where one is no longer referenced (we're using a1 twice below) and one has had its last reference removed (using del a2) should result in just one reference:
>>> a1 = Account('4056')
>>> a2 = Account('8156')
>>> a1 = Account('3168')
>>> del a2
>>> list(Account.instances)
[Account('3168')]
.
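One possible sketch (an assumption, not the official solution): allocate instances in __new__ so that subclasses do not need to call super().__init__, and keep them in a WeakSet so objects with no other references can still be garbage collected (note a WeakSet does not guarantee insertion order).
    from weakref import WeakSet
    def instance_tracker(attr_name='instances'):
        class Tracker:
            def __new__(cls, *args, **kwargs):
                # create the instance without forwarding extra args to object.__new__
                instance = super().__new__(cls)
                # record it on the factory-made base class under the requested attribute name
                getattr(cls, attr_name).add(instance)
                return instance
        setattr(Tracker, attr_name, WeakSet())
        return Tracker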
| [
"[email protected]"
] | |
bd5f261eaa813c6baee78561351043cf93204240 | 61aa319732d3fa7912e28f5ff7768498f8dda005 | /tests/configs/memcheck.py | 669c71b30a1d9f09e065a72010191df495b1cc72 | [
"BSD-3-Clause",
"LicenseRef-scancode-proprietary-license",
"LGPL-2.0-or-later",
"MIT"
] | permissive | TeCSAR-UNCC/gem5-SALAM | 37f2f7198c93b4c18452550df48c1a2ab14b14fb | c14c39235f4e376e64dc68b81bd2447e8a47ff65 | refs/heads/main | 2023-06-08T22:16:25.260792 | 2023-05-31T16:43:46 | 2023-05-31T16:43:46 | 154,335,724 | 62 | 22 | BSD-3-Clause | 2023-05-31T16:43:48 | 2018-10-23T13:45:44 | C++ | UTF-8 | Python | false | false | 2,695 | py | # Copyright (c) 2016 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder. You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Copyright (c) 2015 Jason Lowe-Power
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import m5
from m5.objects import *
# the traffic generator is only available if we have protobuf support,
# so potentially skip this test
require_sim_object("TrafficGen")
# A wrapper around configs/example/memcheck.py
# For some reason, this is implicitly needed by run.py
root = None
def run_test(root):
# Called from tests/run.py
import sys
argv = [
sys.argv[0],
'-m %d' % maxtick,
]
# Execute the script we are wrapping
run_config('configs/example/memcheck.py', argv=argv)
| [
"[email protected]"
] | |
09394ec926883c40727a02a56b5b4e0447abecb3 | b15ccd04d3edfb4d6278a055422610be09c3916c | /4861_회문/sol1.py | 150a37727e44769ce062900a4cbe6fe7238ab4b5 | [] | no_license | hksoftcorn/Algorithm | d0f3a1a6009f47e4f391e568b29a3b51d6095d33 | 81b067b8105ba305172dd8271787c19f04d170ba | refs/heads/master | 2023-05-12T21:15:34.668580 | 2021-06-08T07:57:04 | 2021-06-08T07:57:04 | 337,121,489 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 714 | py | import sys
sys.stdin = open('sample_input.txt')
def solution(N, M, arr):
    # horizontal: scan each row for a palindrome of length M
for i in range(N):
for j in range(N-M+1):
palindrome = arr[i][j:j+M]
if palindrome == palindrome[::-1]:
return palindrome
    # vertical: scan each column for a palindrome of length M
for j in range(N):
for i in range(N-M+1):
palindrome =''
for m in range(M):
palindrome += arr[i+m][j]
#print(palindrome)
if palindrome == palindrome[::-1]:
return palindrome
T = int(input())
for tc in range(1, T+1):
N, M = map(int, input().split())
arr = [input() for i in range(N)]
print('#{} {}'.format(tc, solution(N, M, arr)))
| [
"[email protected]"
] | |
c21f0f7ddfb24849fcae721146b7c813fd8bbd6b | a86ca34e23afaf67fdf858df9e47847606b23e0c | /lib/temboo/Library/Amazon/SNS/ListSubscriptionsByTopic.py | d3fabd538f4ac9b5184e976b8af2beefa94acba4 | [] | no_license | miriammelnick/dont-get-mugged | 6026ad93c910baaecbc3f5477629b0322e116fa8 | 1613ee636c027ccc49c3f84a5f186e27de7f0f9d | refs/heads/master | 2021-01-13T02:18:39.599323 | 2012-08-12T23:25:47 | 2012-08-12T23:25:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,212 | py |
###############################################################################
#
# ListSubscriptionsByTopic
# Returns a list of the subscriptions for a specified topic.
#
# Python version 2.6
#
###############################################################################
from temboo.core.choreography import Choreography
from temboo.core.choreography import InputSet
from temboo.core.choreography import ResultSet
from temboo.core.choreography import ChoreographyExecution
class ListSubscriptionsByTopic(Choreography):
"""
Create a new instance of the ListSubscriptionsByTopic Choreography. A TembooSession object, containing a valid
set of Temboo credentials, must be supplied.
"""
def __init__(self, temboo_session):
Choreography.__init__(self, temboo_session, '/Library/Amazon/SNS/ListSubscriptionsByTopic')
def new_input_set(self):
return ListSubscriptionsByTopicInputSet()
def _make_result_set(self, result, path):
return ListSubscriptionsByTopicResultSet(result, path)
def _make_execution(self, session, exec_id, path):
return ListSubscriptionsByTopicChoreographyExecution(session, exec_id, path)
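# Hypothetical usage sketch (assumes a valid TembooSession and real AWS credentials; the
# execute_with_results call follows the usual Temboo SDK pattern and is an assumption here):
# choreo = ListSubscriptionsByTopic(session)
# inputs = choreo.new_input_set()
# inputs.set_AWSAccessKeyId('MY_ACCESS_KEY')
# inputs.set_AWSSecretKeyId('MY_SECRET_KEY')
# inputs.set_TopicArn('arn:aws:sns:us-east-1:123456789012:MyTopic')
# results = choreo.execute_with_results(inputs)
# print(results.get_Response())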
"""
An InputSet with methods appropriate for specifying the inputs to the ListSubscriptionsByTopic
choreography. The InputSet object is used to specify input parameters when executing this choreo.
"""
class ListSubscriptionsByTopicInputSet(InputSet):
"""
Set the value of the AWSAccessKeyId input for this choreography. ((required, string) The Access Key ID provided by Amazon Web Services.)
"""
def set_AWSAccessKeyId(self, value):
InputSet._set_input(self, 'AWSAccessKeyId', value)
"""
Set the value of the AWSSecretKeyId input for this choreography. ((required, string) The Secret Key ID provided by Amazon Web Services.)
"""
def set_AWSSecretKeyId(self, value):
InputSet._set_input(self, 'AWSSecretKeyId', value)
"""
Set the value of the NextToken input for this choreography. ((optional, string) The token returned from a previous LIstSubscriptionsByTopic request.)
"""
def set_NextToken(self, value):
InputSet._set_input(self, 'NextToken', value)
"""
Set the value of the TopicArn input for this choreography. ((required, string) The ARN of the topic that you want to find subscriptions for.)
"""
def set_TopicArn(self, value):
InputSet._set_input(self, 'TopicArn', value)
"""
A ResultSet with methods tailored to the values returned by the ListSubscriptionsByTopic choreography.
The ResultSet object is used to retrieve the results of a choreography execution.
"""
class ListSubscriptionsByTopicResultSet(ResultSet):
"""
Retrieve the value for the "Response" output from this choreography execution. ((xml) The response from Amazon.)
"""
def get_Response(self):
return self._output.get('Response', None)
class ListSubscriptionsByTopicChoreographyExecution(ChoreographyExecution):
def _make_result_set(self, response, path):
return ListSubscriptionsByTopicResultSet(response, path)
| [
"miriam@famulus"
] | miriam@famulus |
81ef572b2720d4856f76b694186f6bcfb53baa0f | 1c560f8035793e75fb9fda0ff6807cd67a2370ec | /ABC214/C.py | bbaccfd8e49a455c79e986c8d9cfa5c7fe3e2701 | [] | no_license | pumbaacave/atcoder | fa4c488a30388e3d8b4928a570c730c29df7ac0c | 61923f8714f21e8dd5ebafa89b2c3929cff3adf1 | refs/heads/master | 2023-08-17T02:27:03.091792 | 2023-08-05T13:10:58 | 2023-08-05T13:10:58 | 155,023,403 | 1 | 0 | null | 2022-11-12T02:36:11 | 2018-10-28T01:01:52 | Python | UTF-8 | Python | false | false | 1,000 | py | import sys
import collections
stdin = sys.stdin
# sys.setrecursionlimit(10**5)
def ii(): return int(stdin.readline())
def li(): return map(int, stdin.readline().split())
def li_(): return map(lambda x: int(x)-1, stdin.readline().split())
def lf(): return map(float, stdin.readline().split())
def ls(): return stdin.readline().split()
def ns(): return stdin.readline().rstrip()
def lc(): return list(ns())
def ni(): return int(stdin.readline())
def nf(): return float(stdin.readline())
def run():
N = ii()
S = list(li())
T = list(li())
ret = [0] * N
min_T = min(T)
start = T.index(min_T)
time_to_pass = min_T
for i in range(start, N):
received = T[i]
ret[i] = min(received, time_to_pass)
time_to_pass = ret[i] + S[i]
for i in range(start):
received = T[i]
ret[i] = min(received, time_to_pass)
time_to_pass = ret[i] + S[i]
for n in ret:
print(n)
if __name__ == '__main__':
run()
| [
"[email protected]"
] | |
7c402d715018475f125d7ed7546a3819242a9451 | b1aa3c599c5d831444e0ae4e434f35f57b4c6c45 | /month1/week3/class7/operator.py | 4fc87cc493b996cd295e5b5d82bda5b92cd31cde | [] | no_license | yunyusha/xunxibiji | 2346d7f2406312363216c5bddbf97f35c1e2c238 | f6c3ffb4df2387b8359b67d5e15e5e33e81e3f7d | refs/heads/master | 2020-03-28T12:31:17.429159 | 2018-09-11T11:35:19 | 2018-09-11T11:35:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 164 | py | from month1.week3.class7.test_Class import jisuan,Fenshu
# First create two fraction objects
fel = Fenshu(4,9)
fe2 = Fenshu(5,8)
mf = jisuan()
mf.adjust(fel, fe2, ' +')
| [
"[email protected]"
] | |
f663f67d3b155d0b30a1b47e75008bf5eba2ffbc | 338062cc2bb422f1364fd18ad5e721f6f713907a | /39. ООП. Определение операторов/Дополнительные задачи/Сложение многочленов.py | 51105a443ee66d24ae7e477f720aaf022755bbce | [] | no_license | rady1337/FirstYandexLyceumCourse | f3421d5eac7e7fbea4f5e266ebeb6479b89941cf | 0d27e452eda046ddd487d6471eeb7d9eb475bd39 | refs/heads/master | 2022-06-17T03:07:51.017888 | 2020-05-12T22:17:34 | 2020-05-12T22:17:34 | 263,459,364 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 621 | py | class Polynomial: def __init__(self, koef): self.koef = koef def __call__(self, value): s = 0 for i in range(len(self.koef)): s += self.koef[i] * pow(value, i) return s def __add__(self, other): st = [] k = Polynomial(st) if len(self.koef) < len(other.koef): m = len(self.koef) else: m = len(other.koef) for i in range(m): st.append(self.koef[i] + other.koef[i]) if len(self.koef) > m: st += self.koef[m::] else: st += other.koef[m::] k.koef = st return k | [
"[email protected]"
] | |
69e0d566f725250eeb6a4df86ade0a78bb6ecaa6 | 4266e9b1c59ddef83eede23e0fcbd6e09e0fa5cb | /vs/gyp/test/win/gyptest-rc-build.py | c6ee4492d87a3fe06f0d61154a719d5e3350e1c0 | [
"BSD-3-Clause"
] | permissive | barrystudy/study | b3ba6ed652d1a0bcf8c2e88a2a693fa5f6bf2115 | 96f6bb98966d3633b47aaf8e533cd36af253989f | refs/heads/master | 2020-12-24T14:53:06.219236 | 2017-10-23T02:22:28 | 2017-10-23T02:22:28 | 41,944,841 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 629 | py | #!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""
Make sure we build and include .rc files.
"""
import TestGyp
import sys
if sys.platform == 'win32':
test = TestGyp.TestGyp(formats=['msvs', 'ninja'])
CHDIR = 'rc-build'
test.run_gyp('hello.gyp', chdir=CHDIR)
test.build('hello.gyp', test.ALL, chdir=CHDIR)
test.up_to_date('hello.gyp', 'resource_only_dll', chdir=CHDIR)
test.run_built_executable('with_resources', chdir=CHDIR, status=4)
test.pass_test()
| [
"[email protected]"
] | |
12934916e6b0d3c94d1a4fee1d88ccb21c46b386 | 7fd1406b7e94d4b82a158ce5be87b5ae821e16b6 | /pro4_2.py | 4f291fb3e1e6b3638c2ed6eb70f86a2232d3f486 | [] | no_license | THABUULAGANATHAN/guvi-programs | c1c4d314c7ce43d6c3996fdac85616248c69e4fd | fb004f6916776ca9fbe07b8d507f9725cc55248f | refs/heads/master | 2022-01-15T09:08:32.904234 | 2019-07-19T06:45:04 | 2019-07-19T06:45:04 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 308 | py | nu1,nu2=map(int,input().split())
if nu1<=nu2:
u=nu1
else:
u=nu2
m=[]
for i in range(0,u):
m.append(sorted(list(map(int,input().split()))))
m=sorted(m)
for i in range(0,len(m[0])):
for j in range(0,len(m)-1):
if m[j][i]>m[j+1][i]:
m[j][i],m[j+1][i]=m[j+1][i],m[j][i]
for i in m:
print(*i)
| [
"[email protected]"
] | |
60a6329221eaa31ed4c8bdfa2540f68b65f5a2cc | 8883b85daf1a07f6ff4945055356134dc2889522 | /Telstra_Messaging/models/provision_number_request.py | a64cd44bba136c0c9a8a48a893dcf33450e00f70 | [
"Apache-2.0"
] | permissive | yashints/MessagingAPI-SDK-python | 968d86a4efb7220bfec4f2add18e6abef7ea721e | 6cb41ed90fd237e57a6ce4ca383fa035cd842a7d | refs/heads/master | 2020-12-10T05:20:15.915247 | 2018-08-10T06:40:11 | 2018-08-10T06:40:11 | 233,511,705 | 0 | 0 | Apache-2.0 | 2020-01-13T04:34:27 | 2020-01-13T04:34:26 | null | UTF-8 | Python | false | false | 15,001 | py | # coding: utf-8
"""
Telstra Messaging API
# Introduction <table><tbody><tr><td class = 'into_api' style='border:none;padding:0 0 0 0'><p>Send and receive SMS and MMS messages globally using Telstra's enterprise grade Messaging API. It also allows your application to track the delivery status of both sent and received messages. Get your dedicated Australian number, and start sending and receiving messages today.</p></td><td class = 'into_api_logo' style='width: 20%;border:none'><img class = 'api_logo' style='margin: -26px 0 0 0' src = 'https://test-telstra-retail-tdev.devportal.apigee.io/sites/default/files/messagingapi-icon.png'></td></tr></tbody></table> # Features The Telstra Messaging API provides the features below. | Feature | Description | | --- | --- | | `Dedicated Number` | Provision a mobile number for your account to be used as `from` address in the API | | `Send Messages` | Sending SMS or MMS messages | | `Receive Messages` | Telstra will deliver messages sent to a dedicated number or to the `notifyURL` defined by you | | `Broadcast Messages` | Invoke a single API call to send a message to a list of numbers provided in `to` | | `Delivery Status` | Query the delivery status of your messages | | `Callbacks` | Provide a notification URL and Telstra will notify your app when a message status changes | | `Alphanumeric Identifier` | Differentiate yourself by providing an alphanumeric string in `from`. This feature is only available on paid plans | | `Concatenation` | Send messages up to 1900 characters long and Telstra will automaticaly segment and reassemble them | | `Reply Request` | Create a chat session by associating `messageId` and `to` number to track responses received from a mobile number. We will store this association for 8 days | | `Character set` | Accepts all Unicode characters as part of UTF-8 | | `Bounce-back response` | See if your SMS hits an unreachable or unallocated number (Australia Only) | | `Queuing` | Messaging API will automatically queue and deliver each message at a compliant rate. | | `Emoji Encoding` | The API supports the encoding of the full range of emojis. Emojis in the reply messages will be in their UTF-8 format. | ## Delivery Notification or Callbacks The API provides several methods for notifying when a message has been delivered to the destination. 1. When you send a message there is an opportunity to specify a `notifyURL`. Once the message has been delivered the API will make a call to this URL to advise of the message status. 2. If you do not specify a URL you can always call the `GET /status` API to get the status of the message. # Getting Access to the API 1. Register at [https://dev.telstra.com](https://dev.telstra.com). 2. After registration, login to [https://dev.telstra.com](https://dev.telstra.com) and navigate to the **My apps** page. 3. Create your application by clicking the **Add new app** button 4. Select **API Free Trial** Product when configuring your application. This Product includes the Telstra Messaging API as well as other free trial APIs. Your application will be approved automatically. 5. There is a maximum of 1000 free messages per developer. Additional messages and features can be purchased from [https://dev.telstra.com](https://dev.telstra.com). 6. Note your `Client key` and `Client secret` as these will be needed to provision a number for your application and for authentication. Now head over to **Getting Started** where you can find a postman collection as well as some links to sample apps and SDKs to get you started. Happy Messaging! 
# Frequently Asked Questions **Q: Is creating a subscription via the Provisioning call a required step?** A. Yes. You will only be able to start sending messages if you have a provisioned dedicated number. Use Provisioning to create a dedicated number subscription, or renew your dedicated number if it has expired. **Q: When trying to send an SMS I receive a `400 Bad Request` response. How can I fix this?** A. You need to make sure you have a provisioned dedicated number before you can send an SMS. If you do not have a provisioned dedicated number and you try to send a message via the API, you will get the error below in the response: <pre><code class=\"language-sh\">{ \"status\":\"400\", \"code\":\"DELIVERY-IMPOSSIBLE\", \"message\":\"Invalid \\'from\\' address specified\" }</code></pre> Use Provisioning to create a dedicated number subscription, or renew your dedicated number if it has expired. **Q: How long does my dedicated number stay active for?** A. When you provision a dedicated number, by default it will be active for 30 days. You can use the `activeDays` parameter during the provisioning call to increment or decrement the number of days your dedicated number will remain active. Note that Free Trial apps will have 30 days as the maximum `activeDays` they can add to their provisioned number. If the Provisioning call is made several times within that 30-Day period, it will return the `expiryDate` in the Unix format and will not add any activeDays until after that `expiryDate`. **Q: Can I send a broadcast message using the Telstra Messaging API?** A. Yes. Recipient numbers can be in the form of an array of strings if a broadcast message needs to be sent, allowing you to send to multiple mobile numbers in one API call. A sample request body for this will be: `{\"to\":[\"+61412345678\",\"+61487654321\"],\"body\":\"Test Message\"}` **Q: Can I send SMS and MMS to all countries?** A. You can send SMS and MMS to all countries EXCEPT to countries which are subject to global sanctions namely: Burma, Côte d'Ivoire, Cuba, Iran, North Korea, Syria. **Q: Can I use `Alphanumeric Identifier` from my paid plan via credit card?** A. `Alphanumeric Identifier` is only available on Telstra Account paid plans, not through credit card paid plans. **Q: What is the maximum sized MMS that I can send?** A. This will depend on the carrier that will receive the MMS. For Telstra it's up to 2MB, Optus up to 1.5MB and Vodafone only allows up to 500kB. You will need to check with international carriers for thier MMS size limits. **Q: How is the size of an MMS calculated?** A. Images are scaled up to approximately 4/3 when base64 encoded. Additionally, there is approximately 200 bytes of overhead on each MMS. Assuming the maximum MMS that can be sent on Telstra’s network is 2MB, then the maximum image size that can be sent will be approximately 1.378MB (1.378 x 1.34 + 200, without SOAP encapsulation). **Q: How is an MMS classified as Small or Large?** A. MMSes with size below 600kB are classed as Small whereas those that are bigger than 600kB are classed as Large. They will be charged accordingly. **Q: Are SMILs supported by the Messaging API?** A. While there will be no error if you send an MMS with a SMIL presentation, the actual layout or sequence defined in the SMIL may not display as expected because most of the new smartphone devices ignore the SMIL presentation layer. SMIL was used in feature phones which had limited capability and SMIL allowed a *powerpoint type* presentation to be provided. 
Smartphones now have the capability to display video which is the better option for presentations. It is recommended that MMS messages should just drop the SMIL. **Q: How do I assign a delivery notification or callback URL?** A. You can assign a delivery notification or callback URL by adding the `notifyURL` parameter in the body of the request when you send a message. Once the message has been delivered, a notification will then be posted to this callback URL. **Q: What is the difference between the `notifyURL` parameter in the Provisoning call versus the `notifyURL` parameter in the Send Message call?** A. The `notifyURL` in the Provisoning call will be the URL where replies to the provisioned number will be posted. On the other hand, the `notifyURL` in the Send Message call will be the URL where the delivery notification will be posted, e.g. when an SMS has already been delivered to the recipient. # Getting Started Below are the steps to get started with the Telstra Messaging API. 1. Generate an OAuth2 token using your `Client key` and `Client secret`. 2. Use the Provisioning call to create a subscription and receive a dedicated number. 3. Send a message to a specific mobile number. ## Run in Postman <a href=\"https://app.getpostman.com/run-collection/ded00578f69a9deba256#?env%5BMessaging%20API%20Environments%5D=W3siZW5hYmxlZCI6dHJ1ZSwia2V5IjoiY2xpZW50X2lkIiwidmFsdWUiOiIiLCJ0eXBlIjoidGV4dCJ9LHsiZW5hYmxlZCI6dHJ1ZSwia2V5IjoiY2xpZW50X3NlY3JldCIsInZhbHVlIjoiIiwidHlwZSI6InRleHQifSx7ImVuYWJsZWQiOnRydWUsImtleSI6ImFjY2Vzc190b2tlbiIsInZhbHVlIjoiIiwidHlwZSI6InRleHQifSx7ImVuYWJsZWQiOnRydWUsImtleSI6Imhvc3QiLCJ2YWx1ZSI6InRhcGkudGVsc3RyYS5jb20iLCJ0eXBlIjoidGV4dCJ9LHsiZW5hYmxlZCI6dHJ1ZSwia2V5IjoiQXV0aG9yaXphdGlvbiIsInZhbHVlIjoiIiwidHlwZSI6InRleHQifSx7ImVuYWJsZWQiOnRydWUsImtleSI6Im9hdXRoX2hvc3QiLCJ2YWx1ZSI6InNhcGkudGVsc3RyYS5jb20iLCJ0eXBlIjoidGV4dCJ9LHsiZW5hYmxlZCI6dHJ1ZSwia2V5IjoibWVzc2FnZV9pZCIsInZhbHVlIjoiIiwidHlwZSI6InRleHQifV0=\"><img src=\"https://run.pstmn.io/button.svg\" alt=\"Run in Postman\"/></a> ## Sample Apps - [Perl Sample App](https://github.com/telstra/MessagingAPI-perl-sample-app) - [Happy Chat App](https://github.com/telstra/messaging-sample-code-happy-chat) - [PHP Sample App](https://github.com/developersteve/telstra-messaging-php) ## SDK Repos - [Messaging API - PHP SDK](https://github.com/telstra/MessagingAPI-SDK-php) - [Messaging API - Python SDK](https://github.com/telstra/MessagingAPI-SDK-python) - [Messaging API - Ruby SDK](https://github.com/telstra/MessagingAPI-SDK-ruby) - [Messaging API - NodeJS SDK](https://github.com/telstra/MessagingAPI-SDK-node) - [Messaging API - .Net2 SDK](https://github.com/telstra/MessagingAPI-SDK-dotnet) - [Messaging API - Java SDK](https://github.com/telstra/MessagingAPI-SDK-Java) ## Blog Posts For more information on the Messaging API, you can read these blog posts: - [Callbacks Part 1](https://dev.telstra.com/content/understanding-messaging-api-callbacks-part-1) - [Callbacks Part 2](https://dev.telstra.com/content/understanding-messaging-api-callbacks-part-2) # noqa: E501
OpenAPI spec version: 2.2.9
Generated by: https://openapi-generator.tech
"""
import pprint
import re # noqa: F401
import six
class ProvisionNumberRequest(object):
"""NOTE: This class is auto generated by OpenAPI Generator.
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
openapi_types = {
'active_days': 'int',
'notify_url': 'str'
}
attribute_map = {
'active_days': 'activeDays',
'notify_url': 'notifyURL'
}
def __init__(self, active_days=None, notify_url=None): # noqa: E501
"""ProvisionNumberRequest - a model defined in OpenAPI""" # noqa: E501
self._active_days = None
self._notify_url = None
self.discriminator = None
if active_days is not None:
self.active_days = active_days
if notify_url is not None:
self.notify_url = notify_url
@property
def active_days(self):
"""Gets the active_days of this ProvisionNumberRequest. # noqa: E501
The number of days for which this number is provisioned. # noqa: E501
:return: The active_days of this ProvisionNumberRequest. # noqa: E501
:rtype: int
"""
return self._active_days
@active_days.setter
def active_days(self, active_days):
"""Sets the active_days of this ProvisionNumberRequest.
The number of days for which this number is provisioned. # noqa: E501
:param active_days: The active_days of this ProvisionNumberRequest. # noqa: E501
:type: int
"""
self._active_days = active_days
@property
def notify_url(self):
"""Gets the notify_url of this ProvisionNumberRequest. # noqa: E501
A notification URL that will be POSTed to whenever a new message (e.g. a reply to a message sent) arrives at this destination address. If this is not provided then you can use the Get /sms or /mms API to poll for reply messages. *Please note that the notification URLs and the Get /sms or /mms call are exclusive. If a notification URL has been set then the GET call will not provide any useful information.* # noqa: E501
:return: The notify_url of this ProvisionNumberRequest. # noqa: E501
:rtype: str
"""
return self._notify_url
@notify_url.setter
def notify_url(self, notify_url):
"""Sets the notify_url of this ProvisionNumberRequest.
A notification URL that will be POSTed to whenever a new message (e.g. a reply to a message sent) arrives at this destination address. If this is not provided then you can use the Get /sms or /mms API to poll for reply messages. *Please note that the notification URLs and the Get /sms or /mms call are exclusive. If a notification URL has been set then the GET call will not provide any useful information.* # noqa: E501
:param notify_url: The notify_url of this ProvisionNumberRequest. # noqa: E501
:type: str
"""
self._notify_url = notify_url
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ProvisionNumberRequest):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
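# Hedged usage sketch (not part of the generated client; the values below are
# illustrative placeholders, not Telstra defaults). Shows the model being
# constructed and serialised with to_dict().
if __name__ == "__main__":
    _example = ProvisionNumberRequest(active_days=30,
                                      notify_url='https://example.com/callback')
    # Prints {'active_days': 30, 'notify_url': 'https://example.com/callback'}
    print(_example.to_dict())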
| [
"[email protected]"
] | |
d224e4bc048889e5860384746a106809df71fbd6 | 4b169d970dc9390ab53281d4a4a1cb32f79f9317 | /subject.py | 7e9214220b4ef27ce31b7bce1366a7b8d0c816f6 | [] | no_license | marloverket/crosstask | 96a710946f2db1cda18c9f9cb9da3cc8aaa3455f | 21ba7ea1c5a0f48be252acbea23e916d49bbaebb | refs/heads/master | 2021-01-19T15:40:32.553961 | 2012-11-26T04:04:09 | 2012-11-26T04:04:09 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,747 | py | import viz,vizinfo,viztask,vizact
from datetime import datetime
global subj
"""
Adds a panel to the screen where the subject info gets
filled out.
"""
info_box = vizinfo.add('')
info_box.scale(2,2)
info_box.translate(0.85,0.8)
info_box.title('Participant Info')
#Add the GUI elements to the box
id_box = info_box.add(viz.TEXTBOX,'Participant ID')
day_box = info_box.add(viz.TEXTBOX, 'Day')
run_box = info_box.add(viz.TEXTBOX, 'Run Number')
scan_box = info_box.add(viz.CHECKBOX,'Scanner')
training_box = info_box.add(viz.CHECKBOX,'Training?')
run_button = info_box.add(viz.BUTTON,'Run')
info_box.visible(viz.OFF)
class Subject(object):
def __init__(self):
self.init_time = datetime.now().strftime("%Y.%m.%d.%H.%M")
self.time_offset = -1
def set_time_offset(self, timestamp):
"""
If the experiment relies on an external trigger
to begin, set the timing offset, so that when we
dump this subject's behavioral data, event times can be reported relative to that trigger.
"""
self.time_offset = timestamp
def grab_info(self):
""" Reads the information from the vizinfo
widgets and fills in details about this
subject.
"""
self.show_gui()
yield viztask.waitButtonUp(run_button)
self.subject_id = "S%03i"%int(id_box.get())
self.run_num = "R%02i"%int(run_box.get())
self.day_num = "D%02i"%int(day_box.get())
self.is_scanning = bool(scan_box.get())
self.is_training = bool(training_box.get())
info_box.remove()
def show_gui(self):
info_box.visible(viz.ON)
def get_experiment(self):
"""
Experiment files should be named such that it is unambiguous
which file should go with this subject/day/run.
"""
raise NotImplementedError(
"Must be overwritten in subclass")
if __name__=="__main__":
viz.go()
subj = Subject()
viztask.schedule(subj.grab_info()) | [
"[email protected]"
] | |
09ad4a8a300cc289665cb238bd3bdbbaf5769d75 | f06d9cd5fb86885a73ee997c687f3294840dd199 | /services/flickr.py | c21a01e2fd33883bb08bcd8d4e89cbe4ed018d9d | [] | no_license | bu2/oauth-proxy | aaff16a07d5c2c07c8243293c9ed41205b251a74 | dbed492f8a806c36177a56ca626f005acec904b1 | refs/heads/master | 2020-12-26T15:53:40.618570 | 2013-07-09T05:06:16 | 2013-07-09T05:06:16 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,290 | py | import foauth.providers
class Flickr(foauth.providers.OAuth1):
# General info about the provider
provider_url = 'http://www.flickr.com/'
docs_url = 'http://www.flickr.com/services/api/'
category = 'Pictures'
# URLs to interact with the API
request_token_url = 'http://www.flickr.com/services/oauth/request_token'
authorize_url = 'http://www.flickr.com/services/oauth/authorize'
access_token_url = 'http://www.flickr.com/services/oauth/access_token'
api_domain = 'secure.flickr.com'
available_permissions = [
(None, 'access your public and private photos'),
('write', 'upload, edit and replace your photos'),
('delete', 'upload, edit, replace and delete your photos'),
]
permissions_widget = 'radio'
def get_authorize_params(self, redirect_uri, scopes):
params = super(Flickr, self).get_authorize_params(redirect_uri, scopes)
if any(scopes):
params['perms'] = scopes[0]
else:
params['perms'] = 'read'
return params
def get_user_id(self, key):
url = u'/services/rest/?method=flickr.people.getLimits'
url += u'&format=json&nojsoncallback=1'
r = self.api(key, self.api_domain, url)
return r.json()[u'person'][u'nsid']
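# Hedged sketch (assumes the foauth.providers.OAuth1 base class, not shown here,
# supplies the constructor and the base get_authorize_params implementation).
# It illustrates how 'perms' is derived from the requested scope above:
# provider = Flickr(...)  # constructor arguments come from the foauth framework
# provider.get_authorize_params('https://example.com/cb', ['write'])['perms']  # -> 'write'
# provider.get_authorize_params('https://example.com/cb', [])['perms']         # -> 'read'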
| [
"[email protected]"
] | |
21703522e9344dd45bae154f7468fa13d918ed67 | 600df3590cce1fe49b9a96e9ca5b5242884a2a70 | /build/android/pylib/junit/test_dispatcher.py | 51253d4cc07f90be1bf883c29ac92bd70b12bc0c | [
"BSD-3-Clause"
] | permissive | metux/chromium-suckless | efd087ba4f4070a6caac5bfbfb0f7a4e2f3c438a | 72a05af97787001756bae2511b7985e61498c965 | refs/heads/orig | 2022-12-04T23:53:58.681218 | 2017-04-30T10:59:06 | 2017-04-30T23:35:58 | 89,884,931 | 5 | 3 | BSD-3-Clause | 2022-11-23T20:52:53 | 2017-05-01T00:09:08 | null | UTF-8 | Python | false | false | 843 | py | # Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from pylib import constants
from pylib.base import base_test_result
def RunTests(tests, runner_factory):
"""Runs a set of java tests on the host.
Return:
A tuple containing the results & the exit code.
"""
def run(t):
runner = runner_factory(None, None)
runner.SetUp()
results_list, return_code = runner.RunTest(t)
runner.TearDown()
return (results_list, return_code == 0)
test_run_results = base_test_result.TestRunResults()
exit_code = 0
for t in tests:
results_list, passed = run(t)
test_run_results.AddResults(results_list)
if not passed:
exit_code = constants.ERROR_EXIT_CODE
return (test_run_results, exit_code)
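# Hedged usage sketch (not part of the Chromium tree): RunTests only relies on
# the contract used above -- runner_factory(None, None) must return an object
# with SetUp(), RunTest(test) -> (results_list, return_code) and TearDown() --
# so a stub such as the one below is enough to drive it in isolation.
# class _StubRunner(object):
#   def SetUp(self): pass
#   def RunTest(self, test): return ([], 0)  # no results, return code 0 == pass
#   def TearDown(self): pass
# results, exit_code = RunTests(['org.example.FooTest'], lambda *_: _StubRunner())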
| [
"[email protected]"
] | |
b8aa82a8b82c5da5dc36d563d5cbd1447a6552cb | 1a1b7f607c5e0783fd1c98c8bcff6460e933f09a | /core/ras/ras_loader.py | b66f88e4e3755f436f6ce986949291f6a9faf8f8 | [] | no_license | smrmohammadi/freeIBS | 14fb736fcadfaea24f0acdafeafd2425de893a2d | 7f612a559141622d5042614a62a2580a72a9479b | refs/heads/master | 2021-01-17T21:05:19.200916 | 2014-03-17T03:07:15 | 2014-03-17T03:07:15 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,124 | py | from core.db import db_main
from core.ras import ras_main
from core.ibs_exceptions import *
from core.errors import errorText
from radius_server.pyrad.server import RemoteHost
class RasLoader:
def __init__(self):
self.rases_ip={}
self.rases_id={}
self.radius_remote_hosts={}
def __getitem__(self,key):
return self.getRasByID(key)
def getRasByIP(self,ras_ip):
try:
return self.rases_ip[ras_ip]
except KeyError:
raise GeneralException(errorText("RAS","INVALID_RAS_IP")%ras_ip)
def getRasByID(self,ras_id):
try:
return self.rases_id[ras_id]
except KeyError:
raise GeneralException(errorText("RAS","INVALID_RAS_ID")%ras_id)
def checkRasIP(self,ras_ip):
"""
check if ras with ip "ras_ip" is loaded
raise a GeneralException on Error
"""
if not self.rasIPExists(ras_ip):
raise GeneralException(errorText("RAS","INVALID_RAS_IP")%ras_ip)
def checkRasID(self,ras_id):
"""
check if ras with id "ras_id" is loaded
raise a GeneralException on Error
"""
if not self.rases_id.has_key(ras_id):
raise GeneralException(errorText("RAS","INVALID_RAS_ID")%ras_ip)
def rasIPExists(self,ras_ip):
"""
return True if ras with ip "ras_ip" already exists and False if it doesn't exist
"""
return self.rases_ip.has_key(ras_ip)
def getAllRasIPs(self):
"""
return a list of all ras_ips that are loaded into the object
"""
return self.rases_ip.keys()
def runOnAllRases(self,method):
"""
run "method" multiple times with each ras_obj as argument
method should accept one argument (ras_obj)
"""
return map(method,self.rases_id.values())
def loadAllRases(self):
ras_ids=self.__getAllActiveRasIDs()
map(self.loadRas,ras_ids)
def loadRas(self,ras_id):
"""
load ras with id "ras_id" and keep it in the loader object
"""
ras_obj=self.loadRasObj(ras_id)
self.keepObj(ras_obj)
def loadRasObj(self,ras_id):
"""
load ras with id "ras_id" and return the object
"""
(ras_info,ras_attrs,ports,ippools)=self.getRasInfo(ras_id)
ras_obj=self.__createRasObj(ras_info,ras_attrs,ports,ippools)
return ras_obj
def getRasInfo(self,ras_id):
ras_info=self.__getRasInfoDB(ras_id)
ras_attrs=self.__getRasAttrs(ras_id)
ports=self.__getRasPorts(ras_id)
ippools=self.__getRasIPpools(ras_id)
return (ras_info,ras_attrs,ports,ippools)
def unloadRas(self,ras_id):
"""
unload ras, with id "ras_id" from object
useful when the ras is deleted
"""
ras_obj=self.getRasByID(ras_id)
ras_obj.unloaded()
self.unKeepObj(ras_obj)
def getRadiusRemoteHosts(self):
return self.radius_remote_hosts
def __getAllActiveRasIDs(self):
"""
return a list of all ras_id s from table "ras"
"""
ras_ids=db_main.getHandle().get("ras","active='t'",0,-1,"",["ras_id"])
return [m["ras_id"] for m in ras_ids]
def __getRasIPpools(self,ras_id):
"""
return a list of ras ippool ids in format [pool_id1,pool_id2,..]
"""
ras_ippools_db=self.__getRasIPpoolsDB(ras_id)
return [m["ippool_id"] for m in ras_ippools_db]
def __getRasIPpoolsDB(self,ras_id):
"""
return a list of ras ippool names from table ras_ippools
"""
return db_main.getHandle().get("ras_ippools","ras_id=%s"%ras_id)
def __getRasPorts(self,ras_id):
"""
return a dic of ports of ras with id "ras_id" in format
{port_name:{"phone":phone_no,"type":type,"comment":comment}}
"""
ports={}
db_ports=self.__getPortsDB(ras_id)
for _dic in db_ports:
ports[_dic["port_name"]]=_dic
return ports
def __getPortsDB(self,ras_id):
"""
return a list of dicts returned from a db query on table "ras_ports"
"""
return db_main.getHandle().get("ras_ports","ras_id=%s"%ras_id)
def __getRasInfoDB(self,ras_id):
"""
return a dictionary of ras basic info from table "ras"
"""
return db_main.getHandle().get("ras","ras_id=%s"%ras_id)[0]
def __getRasAttrs(self,ras_id):
"""
return ras attributes in a dic with format {attr_name:attr_value}
"""
attrs={}
attrs_db=self.__getRasAttrsDB(ras_id)
for _dic in attrs_db:
attrs[_dic["attr_name"]]=_dic["attr_value"]
return attrs
def __getRasAttrsDB(self,ras_id):
"""
return a dic of ras_attributes returned from "ras_attrs" table
"""
return db_main.getHandle().get("ras_attrs","ras_id=%s"%ras_id)
def __createRasObj(self,ras_info,ras_attrs,ports,ippools):
"""
create a ras object, using ras_info and ras_attrs
"""
return ras_main.getFactory().getClassFor(ras_info["ras_type"])(ras_info["ras_ip"],ras_info["ras_id"],
ras_info["ras_type"],ras_info["radius_secret"],ports,ippools,ras_attrs)
def keepObj(self,ras_obj):
"""
keep "ras_obj" into self, by adding them to internal dics
"""
self.rases_ip[ras_obj.getRasIP()]=ras_obj
self.rases_id[ras_obj.getRasID()]=ras_obj
self.radius_remote_hosts[ras_obj.getRasIP()]=RemoteHost(ras_obj.getRasIP(),ras_obj.getRadiusSecret(),ras_obj.getRasIP())
def unKeepObj(self,ras_obj):
del(self.rases_id[ras_obj.getRasID()])
del(self.rases_ip[ras_obj.getRasIP()])
del(self.radius_remote_hosts[ras_obj.getRasIP()])
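# Hedged usage sketch (assumes a configured IBS database; the IP below is a
# placeholder). Typical lifecycle of the loader:
# loader = RasLoader()
# loader.loadAllRases()                        # load every active ras from table "ras"
# ras_obj = loader.getRasByIP("10.0.0.1")      # raises GeneralException if unknown
# remote_hosts = loader.getRadiusRemoteHosts() # {ras_ip: RemoteHost} for the radius server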
| [
"farshad_kh"
] | farshad_kh |
7df9cc92dfba37d4d64f5ac42303e6293ec477df | 487ce91881032c1de16e35ed8bc187d6034205f7 | /codes/CodeJamCrawler/16_0_1_neat/16_0_1_Kaster_count_numbers.py | a8f821f2452a7b5e684a7c06e79b1215ca4b622a | [] | no_license | DaHuO/Supergraph | 9cd26d8c5a081803015d93cf5f2674009e92ef7e | c88059dc66297af577ad2b8afa4e0ac0ad622915 | refs/heads/master | 2021-06-14T16:07:52.405091 | 2016-08-21T13:39:13 | 2016-08-21T13:39:13 | 49,829,508 | 2 | 0 | null | 2021-03-19T21:55:46 | 2016-01-17T18:23:00 | Python | UTF-8 | Python | false | false | 1,156 | py | import sys
def counting_numbers(n, case, i = 1, already_have = set()):
if (n == 0):
message = 'Case #%d: %s' % (case, 'INSOMNIA')
else:
N = str(i * n)
unique_nums = set(list(N))
combined = unique_nums | already_have
if len(combined) < 10:
message = counting_numbers(n, case, i + 1, combined)
else:
message = 'Case #%d: %s' % (case, n*i)
return message
def check_answer(n, i = 1, already_have = set()):
if (n == 0):
print 'INSOMNIA'
else:
N = str(i * n)
unique_nums = set(list(N))
print 'number: %d' % (n * i), 'unique digits: ' + str(unique_nums), 'seen before: ' + str(already_have)
sys.stdout.flush()
combined = unique_nums | already_have
if len(combined) < 10:
raw_input()
check_answer(n, i + 1, combined)
# open the file
with open('A-large.in', 'r') as f:
small = [int(a) for a in f.read().split('\n')[:-1]]
T = small[0]
out = ''
for i, number in enumerate(small[1:]):
out += counting_numbers(number, i+1) + '\n'
open('output2.txt', 'w').write(out)
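# Small self-check of counting_numbers on hypothetical cases (not taken from the
# contest input file): n = 0 never covers all ten digits, n = 1 needs multiples 1..10.
# assert counting_numbers(0, 1) == 'Case #1: INSOMNIA'
# assert counting_numbers(1, 2) == 'Case #2: 10'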
# check_answer(11) | [
"[[email protected]]"
] | |
295395e9c4c9fb7d25a1f433a3626ce141121cb9 | 9db82d0fc7819b11ebcae4c3904dde1a75bd1054 | /setup.py | a06c2273e17b0d0b51aa6c03bf54007730d3d415 | [] | no_license | rblack42/PyLit4 | f49278ff3417ad4a3348657f1f199f7afc589a1f | 352f6e962f2265a585de274372ab678a9f3ccddb | refs/heads/master | 2021-01-10T20:47:03.897946 | 2014-08-17T05:37:46 | 2014-08-17T05:37:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 964 | py | import os
from setuptools import setup
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(
name='PyLit4',
version='0.1dev',
url='https://github.com/rblack42/PyLit4',
license='BSD3',
author='Roie Black',
author_email='[email protected]',
description='Literate programming with reStructuredText',
long_description=read('README.rst'),
packages=['pylit'],
zip_safe=False,
include_package_data=True,
platforms='any',
install_requires=(
'Flask>=0.10.1',
'nose>=1.3.3'
),
classifiers=[
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules'
]
)
| [
"[email protected]"
] | |
d192513223a3c78eeb653b57ba0afc6b50e163eb | becfe7904e7e17bcd23b891021292542a7968b60 | /basic_elements/cross_and_circle.py | f08aee209c76448d5c7e1d35aea0620fa814d597 | [
"Apache-2.0"
] | permissive | ppinko/python_knowledge_library | 5ef482ddc36b1e4968f11b295a72589be268af99 | 089348c80e3f49a4a56839bfb921033e5386f07e | refs/heads/master | 2023-03-21T04:15:15.947396 | 2021-03-07T12:26:00 | 2021-03-07T12:26:00 | 256,592,705 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,350 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Nov 2 10:32:18 2019
@author: lx
"""
"""
My first interactive game - cross and circle
"""
import sys
from recursion import test
board = [['-', '-', '-'], ['-', '-', '-'], ['-', '-', '-']]
def update_board(a, b, c, current_board):
""" Update board """
current_board[a][b] = c
return current_board
#test(update_board(0, 0, 'x', board) == [['x', '-', '-'], ['-', '-', '-'],
# ['-', '-', '-']])
#test(update_board(1, 2, 'o', board) == [['x', '-', '-'], ['-', '-', 'o'],
# ['-', '-', '-']])
#test(update_board(2, 0, 'x', board) == [['x', '-', '-'], ['-', '-', 'o'],
# ['x', '-', '-']])
#print(board)
def check_win(current_board, symbol):
""" Check if someone wins """
for i in range(3):
if current_board[i].count(symbol) == 3:
return True
if current_board[0][i] == current_board[1][i] == current_board[2][i] == symbol:
return True
if current_board[0][0] == current_board[1][1] == current_board[2][2] == symbol:
return True
if current_board[0][2] == current_board[1][1] == current_board[2][0] == symbol:
return True
return False
#test(check_win([['x', '-', '-'], ['-', '-', '-'],
# ['-', '-', '-']], 'x') == False)
#test(check_win([['x', 'x', 'x'], ['-', '-', '-'],
# ['-', '-', '-']], 'x') == True)
#test(check_win([['x', '-', '-'], ['x', '-', '-'],
# ['x', '-', '-']], 'x') == True)
#test(check_win([['x', '-', '-'], ['-', 'x', '-'],
# ['-', '-', 'x']], 'x') == True)
def printed_board(current_board):
""" Show current board """
print(current_board[0][0], current_board[1][0], current_board[2][0])
print(current_board[0][1], current_board[1][1], current_board[2][1])
print(current_board[0][2], current_board[1][2], current_board[2][2])
def check_move(x, y, possible_moves):
""" Check possibility of the movement """
move = (x-1, y-1)
if move in possible_moves:
return True
else:
return False
#possibles = [(0,0), (0,1), (1,1), (1,0)]
#test(check_move(1,1, possibles) == [(0,1), (1,1), (1,0)])
def two_player_game():
""" Interactive game for two players """
symbol = 'x'
possible_moves = [(0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2)]
current_board = [['-', '-', '-'], ['-', '-', '-'], ['-', '-', '-']]
n = 1
print("Welcome in the game!\nHave a nice game!\n")
printed_board(current_board)
while n <= 9:
x = int(input('Choose a column: '))
y = int(input('Choose a row: '))
z = (x-1, y-1)
if check_move(x, y, possible_moves) == False:
print('Invalid move')
continue
possible_moves.remove(z)
print('\n')
updated_board = update_board(x-1, y-1, symbol, current_board)
current_board = updated_board[:]
printed_board(current_board)
if n >= 3:
if check_win(updated_board, symbol) == True:
return print('WINNER')
n += 1
if symbol == 'x':
symbol = 'o'
else:
symbol = 'x'
return print('END OF THE GAME' + '\n' + 'NO WINNER')
""" Initilizing the game """
two_player_game() # START OF THE GAME | [
"[email protected]"
] | |
e413a8d1df5d879d723ed8cfe9d8ae4391e1a9ba | e4c4255f89f7e2ebac11d9d14fde6b7556dea483 | /src/firestudio/interpolate/scene_interpolate.py | 42d080b4648a153f5055893d22689e1eb652598f | [
"MIT"
] | permissive | agurvich/FIRE_studio | 9bb10d6c3756a83d5db1967f8f50056536187421 | 8b027f9a28416a5f7a163008afcb6283b7ceb907 | refs/heads/main | 2023-05-31T19:14:09.341688 | 2022-09-12T02:57:18 | 2022-09-12T02:57:18 | 108,601,747 | 9 | 7 | MIT | 2022-09-15T17:45:11 | 2017-10-27T22:35:12 | Jupyter Notebook | UTF-8 | Python | false | false | 12,646 | py | import gc
import multiprocessing
import itertools
import numpy as np
from abg_python.parallel.multiproc_utils import copySnapshotNamesToMPSharedMemory
from abg_python.galaxy.gal_utils import Galaxy
from .time_interpolate import TimeInterpolationHandler
from .time_helper import get_single_snap,get_load_flags,multi_worker_function,render_ffmpeg_frames
studio_kwargs = {
'quaternion':(1,0,0,0),
'camera_pos':(0,0,15),
'camera_focus':(0,0,0),
'frame_half_thickness':15, ## half-thickness of image in z direction
'aspect_ratio':1, ## shape of image, y/x TODO figure out if this is necessary to pass?
'pixels':1200, ## pixels in x direction, resolution of image
'scale_line_length':5, ## length of the scale line in kpc
'fontsize':12, ## font size of scale bar and figure label
'font_color':(1,1,1,1), ## font color of scale bar and figure label
#'figure_label':'', ## string to be put in upper right corner
#'scale_bar':True, ## flag to plot length scale bar in lower left corner
#'figure_label_side':'right', ## corner to put label in
#'noaxis':True, ## turns off axis ticks
#'savefig':None, ## save the image as a png if passed a string
#'snapdir':None,
#'snapnum':None,
#'sim_name':None
}
star_kwargs = {
'maxden' : None, ##
'dynrange' : None} ## controls the saturation of the image in a non-obvious way
## set_ImageParams only controls basic aspects of the colorbar which
## don't make sense to try and interpolate
gas_kwargs = {}
default_kwargs ={}
for kwargs in [studio_kwargs,star_kwargs,gas_kwargs]: default_kwargs.update(kwargs)
class SceneInterpolationHandler(TimeInterpolationHandler):
snap_pairs = None
def __repr__(self):
return (
"SceneInterpolationHandler(%d/%d frames (%d keyframes) - %s)"%(
len(self.scene_kwargss),
self.nframes,
len(self.keyframes),
repr(list(self.scene_kwargss[0].keys()))))
def __getitem__(self,key):
return self.kwargs[key]
def __init__(
self,
total_duration_sec,
fps=15,
**kwargs):
self.fps = fps
self.total_duration_sec = total_duration_sec
self.nframes = int(self.total_duration_sec*self.fps)
kwargs = self.parse_kwargs(**kwargs)
## initialize the first keyframe. we'll interpolate from it on the first call to
## self.add_keyframe and we'll copy it if, for whatever reason, we make it to
## interpolater.render
self.scene_kwargss = [kwargs]
self.keyframes = [0]
def parse_kwargs(self,**kwargs):
if 'camera' in kwargs:
camera = kwargs.pop('camera')
for key in ['quaternion','camera_pos','camera_focus']:
kwargs[key] = getattr(camera,key)
for kwarg in list(kwargs.keys()):
if (kwarg in default_kwargs): pass
else: raise KeyError(
'Invalid key: %s - try one of:\n%s'%(
kwarg,
repr(list(default_kwargs.keys()))))
return kwargs
def add_keyframe(
self,
time_since_last_keyframe_sec,
time_clip=True,
loud=True,
nsteps=None,
**kwargs):
new_kwargs = self.parse_kwargs(**kwargs)
prev_kwargs = self.scene_kwargss[-1]
if nsteps is None: nsteps = int(np.ceil(time_since_last_keyframe_sec*self.fps))
## handle case when we are asked to interpolate past
## the total duration of the movie
if len(self.scene_kwargss) + nsteps > self.nframes:
message = ("time since last keyframe too large,"+
" this segment (%d + %d frames) would exceed total duration: %d frames (%.1f sec)"%(
len(self.scene_kwargss),
nsteps,
self.nframes,
self.total_duration_sec))
if not time_clip: raise ValueError(message+". Use time_clip=True to avoid this error message.")
else:
nsteps = self.nframes - len(self.scene_kwargss)
if loud: print(message+'... clipping to %d frames instead'%nsteps)
## handle case where we are not changing a previously specified kwarg
for prev_kwarg in prev_kwargs:
if prev_kwarg not in new_kwargs:
## just copy the old value over
new_kwargs[prev_kwarg] = prev_kwargs[prev_kwarg]
## make sure each dictionary has one-to-one
## corresponding keys:
for new_kwarg in new_kwargs:
## handle invalid kwarg
if new_kwarg not in default_kwargs: raise KeyError(
'kwarg %s must be one of:\n%s'%(new_kwarg,repr(default_kwargs.keys())))
## handle case where a new kwarg is not in the previous
if new_kwarg not in prev_kwargs:
## *explicitly* set the previous values for each frame to be the default
for sub_prev_kwargs in self.scene_kwargss:
sub_prev_kwargs[new_kwarg] = default_kwargs[new_kwarg]
if nsteps == 1: self.scene_kwargss.append(new_kwargs)
else:
## start at i = 1 to avoid repeating frames
for i in range(1,nsteps+1):
this_kwargs = {}
for kwarg in new_kwargs:
pval = prev_kwargs[kwarg]
nval = new_kwargs[kwarg]
## convert args that are lists/tuples to arrays
if kwarg in ['quaternion','camera_pos','camera_focus','font_color']:
pval = np.array(pval)
nval = np.array(nval)
## TODO should have some kind of interpolation function
## so we don't have to do just linear
## then again we can always string together keyframes
## to get complex interpolations
this_kwargs[kwarg] = pval + i*(nval-pval)/(nsteps)
self.scene_kwargss.append(this_kwargs)
## note the index of this keyframe
self.keyframes.append(len(self.scene_kwargss)-1)
if loud: print(self)
def interpolateAndRenderMultiprocessing(
self,
multi_threads,
galaxy_kwargs,
scene_kwargss=None,
studio_kwargss=None,
render_kwargss=None,
which_studios=None,
fixed_star_hsml=0.028
):
if 'keys_to_extract' in galaxy_kwargs.keys(): keys_to_extract = galaxy_kwargs.pop('keys_to_extract')
else: keys_to_extract = []
load_gas,load_star = get_load_flags(which_studios,render_kwargss)
gas_snapdict,star_snapdict = get_single_snap(
load_gas,
load_star,
keys_to_extract=keys_to_extract,
**galaxy_kwargs)
global_snapdict_name = 'gas_snapshot_%03d'%galaxy_kwargs['snapnum']
global_star_snapdict_name = 'star_snapshot_%03d'%galaxy_kwargs['snapnum']
## if we were bold enough to extract everything, copy nothing to the child processes.
## that'll teach us!
#if keys_to_extract is None:
### todo, why not just use all the keys if they're going to go to a shared memory buffer?
#raise KeyError("Use keys_to_extract to specify field keys you need for rendering,"+
#" they're going to be put into a shared memory buffer so we will *not* pass all keys by default.")
if multi_threads is None: multi_threads = multiprocessing.cpu_count()-1
## collect positional arguments for worker_function
argss = zip(
itertools.repeat(which_studios),
itertools.repeat(global_snapdict_name),
itertools.repeat(global_star_snapdict_name),
scene_kwargss,
itertools.repeat(studio_kwargss),
itertools.repeat(render_kwargss))
## initialize dictionary that will point to shared memory buffers
gas_wrapper_dict = {}
star_wrapper_dict = {}
try:
if load_gas:
## use as few references so i have to clean up fewer below lol
gas_wrapper_dict,gas_shm_buffers = copySnapshotNamesToMPSharedMemory(
['Coordinates',
'Masses',
'SmoothingLength']+keys_to_extract,
gas_snapdict,
finally_flag=True,
loud=True)
else: gas_shm_buffers = [None]
if load_star:
if 'SmoothingLength' not in star_snapdict:
star_snapdict['SmoothingLength'] = np.repeat(fixed_star_hsml,star_snapdict['Coordinates'].shape[0])
## NOTE the lack of smoothing lengths might mess this up if a bunch of processes all
## try and compute smoothing lengths and write to the same file :\
star_wrapper_dict,star_shm_buffers = copySnapshotNamesToMPSharedMemory(
['Coordinates',
'Masses',
'SmoothingLength',
'AgeGyr']+keys_to_extract,
star_snapdict,
finally_flag=True,
loud=True)
else: star_shm_buffers = [None]
for key in ['name','datadir','snapnum']: gas_wrapper_dict[key] = gas_snapdict[key]
for key in ['name','datadir','snapnum']: star_wrapper_dict[key] = star_snapdict[key]
del gas_snapdict,star_snapdict
globals()[global_snapdict_name] = gas_wrapper_dict
globals()[global_star_snapdict_name] = star_wrapper_dict
## don't remove these lines, they perform some form of dark arts
## that helps the garbage collector its due
## attempt to wrangle shared memory buffer and avoid memory leak
locals().keys()
globals().keys()
gc.collect()
## attempt to wrangle shared memory buffer and avoid memory leak
for obj in gc.get_objects():
if isinstance(obj,Galaxy):
print(obj,'will be copied to child processes and is probably large.')
with multiprocessing.Pool(multi_threads) as my_pool:
these_figs = my_pool.starmap(multi_worker_function_wrapper,argss)
## attempt to wrangle shared memory buffer and avoid memory leak
del my_pool
locals().keys()
globals().keys()
gc.collect()
except: raise
finally:
## TODO clean up anything that contains a reference to a shared
## memory object. globals() must be purged before the shm_buffers
## are unlinked or python will crash.
globals().pop(global_snapdict_name)
globals().pop(global_star_snapdict_name)
del gas_wrapper_dict
del star_wrapper_dict
for shm_buffer in gas_shm_buffers:
## handle case where multiprocessing isn't used
if shm_buffer is not None:
shm_buffer.close()
try: shm_buffer.unlink()
except FileNotFoundError: pass
del gas_shm_buffers
for shm_buffer in star_shm_buffers:
## handle case where multiprocessing isn't used
if shm_buffer is not None:
shm_buffer.close()
try: shm_buffer.unlink()
except FileNotFoundError: pass
del star_shm_buffers
## use ffmpeg to produce an mp4 of the frames
render_ffmpeg_frames(studio_kwargss,galaxy_kwargs,self.nframes,self.fps)
return these_figs
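# Hedged usage sketch (values are placeholders, not FIRE studio defaults): build a
# 10 second, 15 fps scene that holds the opening shot for 5 s and then zooms the
# camera out for 5 s. Only keys listed in studio_kwargs above are accepted, and
# segments that would overrun total_duration_sec are clipped (time_clip=True).
# handler = SceneInterpolationHandler(total_duration_sec=10, fps=15,
#                                     camera_pos=(0, 0, 15))
# handler.add_keyframe(5, camera_pos=(0, 0, 15))   # hold
# handler.add_keyframe(5, camera_pos=(0, 0, 50))   # linear zoom out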
def multi_worker_function_wrapper(
which_studios,
global_snapdict_name, ## important to access shared memory :\
global_star_snapdict_name,
scene_kwargss,
studio_kwargss,
render_kwargss):
## read the unique global name for the relevant snapshot dictionary
## TODO: could I handle time interpolation right here by checking if
## if I was passed multiple snapdict names... then I could compute
## current_time_gyr and make a new combination snapshotdictionary
## that was interpolated.
multi_worker_function(
which_studios,
globals()[global_snapdict_name],
globals()[global_star_snapdict_name],
scene_kwargss,
studio_kwargss,
render_kwargss) | [
"[email protected]"
] | |
ad0163094ee7e3c39d856c2a8d32a28d55661207 | 7bead245354e233f76fff4608938bf956abb84cf | /cloudmersive_convert_api_client/models/remove_whitespace_from_text_request.py | 98ee877987d0fcd6f673c4f2bc3c1052c4a3d3c5 | [
"Apache-2.0"
] | permissive | Cloudmersive/Cloudmersive.APIClient.Python.Convert | 5ba499937b9664f37cb2700509a4ba93952e9d6c | dba2fe7257229ebdacd266531b3724552c651009 | refs/heads/master | 2021-10-28T23:12:42.698951 | 2021-10-18T03:44:49 | 2021-10-18T03:44:49 | 138,449,321 | 3 | 2 | null | null | null | null | UTF-8 | Python | false | false | 3,711 | py | # coding: utf-8
"""
convertapi
Convert API lets you effortlessly convert file formats and types. # noqa: E501
OpenAPI spec version: v1
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
import pprint
import re # noqa: F401
import six
class RemoveWhitespaceFromTextRequest(object):
"""NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'text_containing_whitespace': 'str'
}
attribute_map = {
'text_containing_whitespace': 'TextContainingWhitespace'
}
def __init__(self, text_containing_whitespace=None): # noqa: E501
"""RemoveWhitespaceFromTextRequest - a model defined in Swagger""" # noqa: E501
self._text_containing_whitespace = None
self.discriminator = None
if text_containing_whitespace is not None:
self.text_containing_whitespace = text_containing_whitespace
@property
def text_containing_whitespace(self):
"""Gets the text_containing_whitespace of this RemoveWhitespaceFromTextRequest. # noqa: E501
Input text string to remove the whitespace from # noqa: E501
:return: The text_containing_whitespace of this RemoveWhitespaceFromTextRequest. # noqa: E501
:rtype: str
"""
return self._text_containing_whitespace
@text_containing_whitespace.setter
def text_containing_whitespace(self, text_containing_whitespace):
"""Sets the text_containing_whitespace of this RemoveWhitespaceFromTextRequest.
Input text string to remove the whitespace from # noqa: E501
:param text_containing_whitespace: The text_containing_whitespace of this RemoveWhitespaceFromTextRequest. # noqa: E501
:type: str
"""
self._text_containing_whitespace = text_containing_whitespace
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
if issubclass(RemoveWhitespaceFromTextRequest, dict):
for key, value in self.items():
result[key] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
return pprint.pformat(self.to_dict())
def __repr__(self):
"""For `print` and `pprint`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, RemoveWhitespaceFromTextRequest):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"[email protected]"
] | |
fdd7630f165b381094274f0ee6ad3caae9b8abeb | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /KWoj7kWiHRqJtG6S2_10.py | 65551bedd3a2aa334347384e82cd7b4ef7a2f2ef | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 593 | py | """
There is a single operator in Python, capable of providing the remainder of a
division operation. Two numbers are passed as parameters. The first parameter
divided by the second parameter will have a remainder, possibly zero. Return
that value.
### Examples
remainder(1, 3) ➞ 1
remainder(3, 4) ➞ 3
remainder(5, 5) ➞ 0
remainder(7, 2) ➞ 1
### Notes
* The tests only use positive integers.
* Don't forget to `return` the result.
* If you get stuck on a challenge, find help in the **Resources** tab.
"""
remainder = lambda a, b: a % b
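# A couple of checks mirroring the examples in the docstring above:
if __name__ == "__main__":
    assert remainder(1, 3) == 1
    assert remainder(3, 4) == 3
    assert remainder(5, 5) == 0
    assert remainder(7, 2) == 1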
| [
"[email protected]"
] | |
7da4675809919b8ee509fd2815c35901a2f3e54b | 5f2f3743e0f8054d62042fc6c05bf994995bfdee | /tests/test_dlthx.py | 14d0cffd033391ce59baf17813d95cad08d92c2d | [
"MIT"
] | permissive | li7300198125/itmlogic | 1374a295278af1b818377049c6e0720386c50195 | b7297a595b6ab8ec36d3ac5f81755171beed4407 | refs/heads/master | 2022-11-15T01:59:57.574625 | 2020-06-27T22:51:23 | 2020-06-27T22:51:23 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 541 | py | import pytest
from itmlogic.dlthx import dlthx
def test_dlthx(setup_pfl1):
"""
Tests the delta h value, which is the interdecile range of elevations between point x1
and point x2, generated from the terrain profile pfl1.
The terrain profile (pfl1) is imported from tests/conftest.py via the fixture
setup_pfl1.
The test is derived from the original Longley-Rice test between Crystal
Palace (South London) and Mursley, England.
"""
assert round(dlthx(setup_pfl1, 2158.5, 77672.5), 4) == 89.2126
| [
"[email protected]"
] | |
06e6aaa2ca2b7d9bd08666139ce3cf28ff269e0e | 7b102f9c8f2e3f9240090d1d67af50333a2ba98d | /gbd_2019/shared_code/central_comp/cod/codem/hybridizer/joblaunch/HybridTask.py | 6d838cef584d3c08aa703402a2922aba19274e34 | [] | no_license | Nermin-Ghith/ihme-modeling | 9c8ec56b249cb0c417361102724fef1e6e0bcebd | 746ea5fb76a9c049c37a8c15aa089c041a90a6d5 | refs/heads/main | 2023-04-13T00:26:55.363986 | 2020-10-28T19:51:51 | 2020-10-28T19:51:51 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,972 | py | from hybridizer.database import gbd_round_from_id
import hybridizer.metadata as metadata
from codem.joblaunch.CODEmTask import CODEmBaseTask
import logging
logger = logging.getLogger(__name__)
class HybridTask(CODEmBaseTask):
def __init__(self, user, developed_model_version_id,
global_model_version_id,
conn_def,
upstream_tasks=None,
parameter_dict=None,
max_attempts=15,
cores=1,
gb=20,
runtime_min=60*2):
"""
Creates a hybrid task for a CODEm Workflow.
"""
gbd_round_id = gbd_round_from_id(global_model_version_id, conn_def)
model_version_id = metadata.hybrid_metadata(user, global_model_version_id,
developed_model_version_id, conn_def,
gbd_round_id)
self.gbd_round_id = gbd_round_id
self.model_version_id = model_version_id
self.user = user
self.global_model_version_id = global_model_version_id
self.developed_model_version_id = developed_model_version_id
logger.info("New Hybrid Model Version ID: {}".format(model_version_id))
super().__init__(model_version_id=model_version_id,
parameter_dict=parameter_dict, max_attempts=max_attempts,
upstream_tasks=upstream_tasks,
conn_def=conn_def, hybridizer=True,
cores=cores,
gb=gb, minutes=runtime_min
)
command = 'FILEPATH {} {} {} {} {}'. \
format(user, model_version_id, global_model_version_id, developed_model_version_id, conn_def)
self.setup_task(
command=command,
resource_scales={'m_mem_free': 0.5,
'max_runtime_seconds': 0.5}
)
| [
"[email protected]"
] | |
3781bf374bfe5f8826bd54fb515b1163b4b53ce4 | 4749d3cf395522d90cb74d1842087d2f5671fa87 | /alice/LC022.py | d5ca5a07462cfcc708bf1982ffc93009853c53ea | [] | no_license | AliceTTXu/LeetCode | c1ad763c3fa229362350ce3227498dfb1f022ab0 | ed15eb27936b39980d4cb5fb61cd937ec7ddcb6a | refs/heads/master | 2021-01-23T11:49:49.903285 | 2018-08-03T06:00:16 | 2018-08-03T06:00:16 | 33,470,003 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,107 | py | class Solution(object):
def generateParenthesis(self, n):
"""
:type n: int
:rtype: List[str]
"""
if not n:
return ['']
else:
return self.generateParenthesisCore(n - 1, 1, ['('])
def generateParenthesisCore(self, n, leftCount, out):
if n > 0:
if leftCount > 0:
part1 = self.generateParenthesisCore(n, leftCount - 1, [x + ')' for x in out])
else:
part1 = []
part2 = self.generateParenthesisCore(n - 1, leftCount + 1, [x + '(' for x in out])
return part1 + part2
else:
if leftCount > 0:
return [x + ')' * leftCount for x in out]
def generateParenthesis2(self, n):
if not n:
return ['']
out = []
for i in xrange(n):
for left in self.generateParenthesis2(i):
for right in self.generateParenthesis2(n - i - 1):
out.append('({}){}'.format(left, right))
return out
s = Solution()
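# For n = 3 both methods should return the same 5 strings (Catalan number C_3 = 5),
# though possibly in a different order:
# ((())), (()()), (())(), ()(()), ()()()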
print s.generateParenthesis2(3) | [
"[email protected]"
] | |
a1dc58d81bc25723ec4c8842e7e14cdd086fbf88 | 9c2b322b36564327cf15e75ff7ad6ef2461643af | /code/analysis/delayedfeedback/scaled_noise_test.py | aaf8d931f31f9d48cc2dc73af5a06e3f2d446a2a | [
"LicenseRef-scancode-warranty-disclaimer",
"MIT"
] | permissive | dmytrov/stochasticcontrol | 3951c0fd555cdcf38bcf6812b1758ed41fd28cf9 | a289d5c0953c4a328b2177f51168588248c00f2c | refs/heads/master | 2022-12-15T13:19:32.295905 | 2020-09-14T19:57:04 | 2020-09-14T19:57:04 | 295,521,166 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,897 | py | """ Test for the control-scaled noise covariance matrix
Reference: "Christopher M. Harris and Daniel M. Wolpert - 1998 - Signal-dependent
noise determines motor planning":
"We assume that neural commands have signal-dependent noise
whose standard deviation increases linearly with the absolute value
of the neural control signal."
"""
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import torch
import analysis.delayedfeedback.targetswitching.model as tsm
torch.set_default_dtype(torch.float64)
vcontrol = 1.0* torch.tensor([3.0, 0.0]) # max covariance direction
scale = torch.tensor([4.0, 1.0]) # covariance scale [max, min]
covariance_matrix, m_control_globalscaled = tsm.signal_dependent_noise_covar_torch(vcontrol, scale)
#vcontrol = 1.0* torch.tensor([3.0, 0.0]) # max covariance direction
#covariance_matrix, m_control_globalscaled = tsm.signal_dependent_noise_covar_xaligned_torch(vcontrol, scale)
#covariance_matrix = torch.diag(covariance_matrix)
u, sigma, v = covariance_matrix.svd()
print("u:", u)
print("sigma^2:", sigma)
std = torch.sqrt(sigma)
assert torch.abs(std[0] / std[1] - scale[0] / scale[1]) < 1.0e-3
print(u @ torch.diag(sigma) @ v)
loc = torch.tensor([0.0, 0.0])
matplotlib.rcParams['xtick.direction'] = 'out'
matplotlib.rcParams['ytick.direction'] = 'out'
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-3.0, 3.0, delta)
X, Y = np.meshgrid(x, y)
X = torch.tensor(X)
Y = torch.tensor(Y)
gaussian = torch.distributions.MultivariateNormal(loc=loc, covariance_matrix=covariance_matrix)
XY = torch.stack([X, Y], dim=-1)
Z = torch.exp(gaussian.log_prob(XY))
plt.figure()
m = 0.2 * m_control_globalscaled
plt.arrow(0, 0, m[0, 0], m[1, 0])
plt.arrow(0, 0, m[0, 1], m[1, 1])
CS = plt.contour(X, Y, Z)
plt.clabel(CS, inline=1, fontsize=10)
plt.title('Control scaled noise')
plt.axis("equal")
plt.grid(True)
plt.show() | [
"[email protected]"
] | |
68dd6b3d405e733518f0e2777a7fa632c3b710af | 865ee6eb8ee52c8056fbf406059c5481f365de6e | /openresty-win32-build/thirdparty/x86/pgsql/pgAdmin 4/web/pgadmin/utils/driver/psycopg2/typecast.py | f1366049c72a54f28a94858ceb1a5e5958dd7673 | [
"LicenseRef-scancode-ssleay",
"MIT",
"BSD-3-Clause",
"LicenseRef-scancode-openssl",
"LicenseRef-scancode-ssleay-windows",
"LicenseRef-scancode-pcre",
"LicenseRef-scancode-public-domain",
"Zlib",
"BSD-2-Clause"
] | permissive | nneesshh/openresty-oss | 76e119081ea06bc82b184f96d531cc756b716c9d | bfbb9d7526020eda1788a0ed24f2be3c8be5c1c3 | refs/heads/master | 2022-12-12T21:39:48.917622 | 2019-05-31T03:14:18 | 2019-05-31T03:14:18 | 184,213,410 | 1 | 0 | MIT | 2022-12-06T17:28:59 | 2019-04-30T07:28:45 | C | UTF-8 | Python | false | false | 8,221 | py | ##########################################################################
#
# pgAdmin 4 - PostgreSQL Tools
#
# Copyright (C) 2013 - 2018, The pgAdmin Development Team
# This software is released under the PostgreSQL Licence
#
##########################################################################
"""
Typecast various data types so that they can be compatible with Javascript
data types.
"""
import sys
from psycopg2 import STRING as _STRING
import psycopg2
from psycopg2.extensions import encodings
# OIDs of data types which need to typecast as string to avoid JavaScript
# compatibility issues.
# e.g JavaScript does not support 64 bit integers. It has 64-bit double
# giving only 53 bits of integer range (IEEE 754)
# So to avoid loss of remaining 11 bits (64-53) we need to typecast bigint to
# string.
TO_STRING_DATATYPES = (
# To cast bytea, interval type
17, 1186,
# date, timestamp, timestamptz, bigint, double precision
1700, 1082, 1114, 1184, 20, 701,
# real, time without time zone
700, 1083
)
# OIDs of array data types which need to typecast to array of string.
# This list may contain:
# OIDs of data types from PSYCOPG_SUPPORTED_ARRAY_DATATYPES as they need to be
# typecast to array of string.
# Also OIDs of data types which psycopg2 does not typecast array of that
# data type. e.g: uuid, bit, varbit, etc.
TO_ARRAY_OF_STRING_DATATYPES = (
# To cast bytea[] type
1001,
# bigint[]
1016,
# double precision[], real[]
1022, 1021,
# bit[], varbit[]
1561, 1563,
)
# OID of record array data type
RECORD_ARRAY = (2287,)
# OIDs of builtin array datatypes supported by psycopg2
# OID reference psycopg2/psycopg/typecast_builtins.c
#
# For these array data types psycopg2 returns result in list.
# For all other array data types psycopg2 returns result as string (string
# representing array literal)
# e.g:
#
# For below two sql psycopg2 returns result in different formats.
# SELECT '{foo,bar}'::text[];
# print('type of {} ==> {}'.format(res[0], type(res[0])))
# SELECT '{<a>foo</a>,<b>bar</b>}'::xml[];
# print('type of {} ==> {}'.format(res[0], type(res[0])))
#
# Output:
# type of ['foo', 'bar'] ==> <type 'list'>
# type of {<a>foo</a>,<b>bar</b>} ==> <type 'str'>
PSYCOPG_SUPPORTED_BUILTIN_ARRAY_DATATYPES = (
1016, 1005, 1006, 1007, 1021, 1022, 1231,
1002, 1003, 1009, 1014, 1015, 1009, 1014,
1015, 1000, 1115, 1185, 1183, 1270, 1182,
1187, 1001, 1028, 1013, 1041, 651, 1040
)
# json, jsonb
# OID reference psycopg2/lib/_json.py
PSYCOPG_SUPPORTED_JSON_TYPES = (114, 3802)
# json[], jsonb[]
PSYCOPG_SUPPORTED_JSON_ARRAY_TYPES = (199, 3807)
ALL_JSON_TYPES = PSYCOPG_SUPPORTED_JSON_TYPES +\
PSYCOPG_SUPPORTED_JSON_ARRAY_TYPES
# INET[], CIDR[]
# OID reference psycopg2/lib/_ipaddress.py
PSYCOPG_SUPPORTED_IPADDRESS_ARRAY_TYPES = (1041, 651)
# uuid[]
# OID reference psycopg2/lib/extras.py
PSYCOPG_SUPPORTED_UUID_ARRAY_TYPES = (2951,)
# int4range, int8range, numrange, daterange tsrange, tstzrange[]
# OID reference psycopg2/lib/_range.py
PSYCOPG_SUPPORTED_RANGE_TYPES = (3904, 3926, 3906, 3912, 3908, 3910)
# int4range[], int8range[], numrange[], daterange[] tsrange[], tstzrange[]
# OID reference psycopg2/lib/_range.py
PSYCOPG_SUPPORTED_RANGE_ARRAY_TYPES = (3905, 3927, 3907, 3913, 3909, 3911)
def register_global_typecasters():
if sys.version_info < (3,):
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
unicode_type_for_record = psycopg2.extensions.new_type(
(2249,),
"RECORD",
psycopg2.extensions.UNICODE
)
unicode_array_type_for_record_array = psycopg2.extensions.new_array_type(
RECORD_ARRAY,
"ARRAY_RECORD",
unicode_type_for_record
)
# This registers a unicode type caster for datatype 'RECORD'.
psycopg2.extensions.register_type(unicode_type_for_record)
# This registers a array unicode type caster for datatype 'ARRAY_RECORD'.
psycopg2.extensions.register_type(unicode_array_type_for_record_array)
# define type caster to convert various pg types into string type
pg_types_to_string_type = psycopg2.extensions.new_type(
TO_STRING_DATATYPES + PSYCOPG_SUPPORTED_RANGE_TYPES,
'TYPECAST_TO_STRING', _STRING
)
# define type caster to convert pg array types of above types into
# array of string type
pg_array_types_to_array_of_string_type = \
psycopg2.extensions.new_array_type(
TO_ARRAY_OF_STRING_DATATYPES,
'TYPECAST_TO_ARRAY_OF_STRING', pg_types_to_string_type
)
# This registers a type caster to convert various pg types into string type
psycopg2.extensions.register_type(pg_types_to_string_type)
# This registers a type caster to convert various pg array types into
# array of string type
psycopg2.extensions.register_type(pg_array_types_to_array_of_string_type)
def register_string_typecasters(connection):
if connection.encoding != 'UTF8':
# In python3 when database encoding is other than utf-8 and client
# encoding is set to UNICODE then we need to map data from database
# encoding to utf-8.
# This is required because when client encoding is set to UNICODE then
# psycopg assumes database encoding utf-8 and not the actual encoding.
# Not sure whether it's bug or feature in psycopg for python3.
if sys.version_info >= (3,):
def return_as_unicode(value, cursor):
if value is None:
return None
# Treat value as byte sequence of database encoding and then
# decode it as utf-8 to get correct unicode value.
return bytes(
value, encodings[cursor.connection.encoding]
).decode('utf-8')
unicode_type = psycopg2.extensions.new_type(
# "char", name, text, character, character varying
(19, 18, 25, 1042, 1043, 0),
'UNICODE', return_as_unicode)
else:
def return_as_unicode(value, cursor):
if value is None:
return None
# Decode it as utf-8 to get correct unicode value.
return value.decode('utf-8')
unicode_type = psycopg2.extensions.new_type(
# "char", name, text, character, character varying
(19, 18, 25, 1042, 1043, 0),
'UNICODE', return_as_unicode)
unicode_array_type = psycopg2.extensions.new_array_type(
# "char"[], name[], text[], character[], character varying[]
(1002, 1003, 1009, 1014, 1015, 0
), 'UNICODEARRAY', unicode_type)
psycopg2.extensions.register_type(unicode_type)
psycopg2.extensions.register_type(unicode_array_type)
def register_binary_typecasters(connection):
psycopg2.extensions.register_type(
psycopg2.extensions.new_type(
(
# To cast bytea type
17,
),
'BYTEA_PLACEHOLDER',
# Only show placeholder if data actually exists.
lambda value, cursor: 'binary data'
if value is not None else None),
connection
)
psycopg2.extensions.register_type(
psycopg2.extensions.new_type(
(
# To cast bytea[] type
1001,
),
'BYTEA_ARRAY_PLACEHOLDER',
# Only show placeholder if data actually exists.
lambda value, cursor: 'binary data[]'
if value is not None else None),
connection
)
def register_array_to_string_typecasters(connection):
psycopg2.extensions.register_type(
psycopg2.extensions.new_type(
PSYCOPG_SUPPORTED_BUILTIN_ARRAY_DATATYPES +
PSYCOPG_SUPPORTED_JSON_ARRAY_TYPES +
PSYCOPG_SUPPORTED_IPADDRESS_ARRAY_TYPES +
PSYCOPG_SUPPORTED_UUID_ARRAY_TYPES +
PSYCOPG_SUPPORTED_RANGE_ARRAY_TYPES +
TO_ARRAY_OF_STRING_DATATYPES,
'ARRAY_TO_STRING',
_STRING),
connection
)
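# Hedged usage sketch (the DSN is a placeholder): the global casters are meant to
# be registered once per process, the others once per psycopg2 connection, e.g.:
# register_global_typecasters()
# conn = psycopg2.connect(dsn)
# register_string_typecasters(conn)
# register_array_to_string_typecasters(conn)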
| [
"[email protected]"
] | |
206f6341e76afb3a6029ba678b56ef8a35aa2ff9 | 83bbca19a1a24a6b73d9b56bd3d76609ff321325 | /bard/providers/download/__init__.py | b2f5fccd3085ca64f2b68c752b7284a583c5807e | [] | no_license | b1naryth1ef/bard | 837547a0cbf5c196f5cb9b0dfbb944703fa993e0 | a6325a4684080a7a3f61f6f361bd2e0a78986ab9 | refs/heads/master | 2021-04-30T22:16:17.347526 | 2020-01-10T18:57:53 | 2020-01-10T18:57:53 | 172,892,673 | 16 | 0 | null | 2021-03-25T22:35:25 | 2019-02-27T10:18:01 | Python | UTF-8 | Python | false | false | 219 | py | from .iptorrents import IPTorrentsDownloadProvider
from .horriblesubs import HorribleSubsDownloadProvider
PROVIDERS = {
"iptorrents": IPTorrentsDownloadProvider,
"horriblesubs": HorribleSubsDownloadProvider,
}
| [
"[email protected]"
] | |
caedcb717831f3959e5cb1d6e58f728a5038387b | 702ad30ea1de11f109a5207919bddb5381bd206e | /toy_data.py | c3921fdb841e6d73e0f8bc8a2f4009eb6fbe2202 | [] | no_license | glouppe/flowing-with-jax | 84e2dfc1a81073328518ca95806f031fefe30287 | f58b1772d08e235e71f10d4b28e2a39b771b17cf | refs/heads/master | 2023-02-09T00:52:32.157418 | 2020-12-28T22:30:41 | 2020-12-28T22:30:41 | 323,332,121 | 17 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,601 | py | # Source: https://raw.githubusercontent.com/rtqichen/ffjord/master/lib/toy_data.py
import numpy as np
import sklearn
import sklearn.datasets
from sklearn.utils import shuffle as util_shuffle
# Dataset iterator
def inf_train_gen(data, rng=None, batch_size=200):
if rng is None:
rng = np.random.RandomState()
if data == "swissroll":
data = sklearn.datasets.make_swiss_roll(n_samples=batch_size, noise=1.0)[0]
data = data.astype("float32")[:, [0, 2]]
data /= 5
return data
elif data == "circles":
data = sklearn.datasets.make_circles(n_samples=batch_size, factor=.5, noise=0.08)[0]
data = data.astype("float32")
data *= 3
return data
elif data == "rings":
n_samples4 = n_samples3 = n_samples2 = batch_size // 4
n_samples1 = batch_size - n_samples4 - n_samples3 - n_samples2
# so as not to have the first point = last point, we set endpoint=False
linspace4 = np.linspace(0, 2 * np.pi, n_samples4, endpoint=False)
linspace3 = np.linspace(0, 2 * np.pi, n_samples3, endpoint=False)
linspace2 = np.linspace(0, 2 * np.pi, n_samples2, endpoint=False)
linspace1 = np.linspace(0, 2 * np.pi, n_samples1, endpoint=False)
circ4_x = np.cos(linspace4)
circ4_y = np.sin(linspace4)
circ3_x = np.cos(linspace3) * 0.75
circ3_y = np.sin(linspace3) * 0.75
circ2_x = np.cos(linspace2) * 0.5
circ2_y = np.sin(linspace2) * 0.5
circ1_x = np.cos(linspace1) * 0.25
circ1_y = np.sin(linspace1) * 0.25
X = np.vstack([
np.hstack([circ4_x, circ3_x, circ2_x, circ1_x]),
np.hstack([circ4_y, circ3_y, circ2_y, circ1_y])
]).T * 3.0
X = util_shuffle(X, random_state=rng)
# Add noise
X = X + rng.normal(scale=0.08, size=X.shape)
return X.astype("float32")
elif data == "moons":
data = sklearn.datasets.make_moons(n_samples=batch_size, noise=0.1)[0]
data = data.astype("float32")
data = data * 2 + np.array([-1, -0.2])
return data
elif data == "8gaussians":
scale = 4.
centers = [(1, 0), (-1, 0), (0, 1), (0, -1), (1. / np.sqrt(2), 1. / np.sqrt(2)),
(1. / np.sqrt(2), -1. / np.sqrt(2)), (-1. / np.sqrt(2),
1. / np.sqrt(2)), (-1. / np.sqrt(2), -1. / np.sqrt(2))]
centers = [(scale * x, scale * y) for x, y in centers]
dataset = []
for i in range(batch_size):
point = rng.randn(2) * 0.5
idx = rng.randint(8)
center = centers[idx]
point[0] += center[0]
point[1] += center[1]
dataset.append(point)
dataset = np.array(dataset, dtype="float32")
dataset /= 1.414
return dataset
elif data == "pinwheel":
radial_std = 0.3
tangential_std = 0.1
num_classes = 5
num_per_class = batch_size // 5
rate = 0.25
rads = np.linspace(0, 2 * np.pi, num_classes, endpoint=False)
features = rng.randn(num_classes*num_per_class, 2) \
* np.array([radial_std, tangential_std])
features[:, 0] += 1.
labels = np.repeat(np.arange(num_classes), num_per_class)
angles = rads[labels] + rate * np.exp(features[:, 0])
rotations = np.stack([np.cos(angles), -np.sin(angles), np.sin(angles), np.cos(angles)])
rotations = np.reshape(rotations.T, (-1, 2, 2))
return 2 * rng.permutation(np.einsum("ti,tij->tj", features, rotations))
elif data == "2spirals":
n = np.sqrt(np.random.rand(batch_size // 2, 1)) * 540 * (2 * np.pi) / 360
d1x = -np.cos(n) * n + np.random.rand(batch_size // 2, 1) * 0.5
d1y = np.sin(n) * n + np.random.rand(batch_size // 2, 1) * 0.5
x = np.vstack((np.hstack((d1x, d1y)), np.hstack((-d1x, -d1y)))) / 3
x += np.random.randn(*x.shape) * 0.1
return x
elif data == "checkerboard":
x1 = np.random.rand(batch_size) * 4 - 2
x2_ = np.random.rand(batch_size) - np.random.randint(0, 2, batch_size) * 2
x2 = x2_ + (np.floor(x1) % 2)
return np.concatenate([x1[:, None], x2[:, None]], 1) * 2
elif data == "line":
x = rng.rand(batch_size) * 5 - 2.5
y = x
return np.stack((x, y), 1)
elif data == "cos":
x = rng.rand(batch_size) * 5 - 2.5
y = np.sin(x) * 2.5
return np.stack((x, y), 1)
else:
return inf_train_gen("8gaussians", rng, batch_size)
| [
"[email protected]"
] | |
379c1fa52e66c607912bbf3aec01ca28e4d2b81b | 377dc973a58d30154cf485de141223d7ca5424dd | /havok_classes/hclStorageSetupMeshSectionSectionEdgeSelectionChannel.py | 2a0b1f269d753d07efa1c18f0755cf5070e00a60 | [
"MIT"
] | permissive | sawich/havok-reflection | d6a5552f2881bb4070ad824fb7180ad296edf4c4 | 1d5b768fb533b3eb36fc9e42793088abeffbad59 | refs/heads/master | 2021-10-11T12:56:44.506674 | 2019-01-25T22:37:31 | 2019-01-25T22:37:31 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 530 | py | from .hkReferencedObject import hkReferencedObject
from typing import List
from .common import get_array
class hclStorageSetupMeshSectionSectionEdgeSelectionChannel(hkReferencedObject):
edgeIndices: List[int]
def __init__(self, infile):
self.edgeIndices = get_array(infile, int, 4) # TYPE_ARRAY:TYPE_UINT32
def __repr__(self):
return "<{class_name} edgeIndices=[{edgeIndices}]>".format(**{
"class_name": self.__class__.__name__,
"edgeIndices": self.edgeIndices,
})
| [
"[email protected]"
] | |
eff804ac1d48782d19505cda1ee199107169edf8 | 354b26a5d854bd044286047d4aef1a0aa54961f1 | /lock.py | 1bddd56489c297a1cedc7b6e3ede027691018df2 | [
"MIT"
] | permissive | liondani/pytshares | 3796ca8705de409825ef8631604f49319887b510 | 45458a026a23c53ad2bacd56abb9e930e8268bb4 | refs/heads/master | 2020-12-31T01:36:56.896329 | 2014-12-14T00:08:22 | 2014-12-14T00:08:22 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 193 | py | #!/usr/bin/python
from btsrpcapi import *
import config
if __name__ == "__main__":
rpc = btsrpcapi(config.url, config.user, config.passwd)
print rpc.walletopen("delegate")
print rpc.lock()
| [
"[email protected]"
] | |
65ca126a7c2f6266a87cf7cd0e1e349388f0bba1 | 6928f8ee318edeb4a3bdbdce587c3b0b8d63057b | /generate_small.py | ec20bb2e670b8cb56ea64ef0181e06f4a6107608 | [
"CC0-1.0",
"MIT"
] | permissive | tonny2v/brand | 9940aa4ffb245fe5b8ac956e76c02f8d3d133dbe | 422d34acf78757073c0b186b9420a6e7bdf504ff | refs/heads/master | 2021-01-15T16:04:46.009461 | 2016-06-04T01:18:19 | 2016-06-04T01:18:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 9,819 | py | import random
import svgwrite
from svgwrite.container import Hyperlink
phodal_width = 176
secondary_text_x = 200
basic_text_y = 35
def generate_idea():
y_text_split = phodal_width + 1
height = 50
rect_length = 2
width = 290
max_rect_length = 10
dwg = svgwrite.Drawing('shields/idea-small.svg', profile='full', size=(u'290', u'50'))
rect_with_radius_mask = dwg.mask((0, 0), (width, height), id='a')
rect_with_radius_mask.add(dwg.rect((0, 0), (width, height), fill='#eee', rx=3))
dwg.add(rect_with_radius_mask)
g = dwg.add(dwg.g(id='g', fill='none', mask='url(#a)'))
g.add(dwg.rect((0, 0), (phodal_width, height), fill='#5E6772'))
g.add(dwg.rect((phodal_width, 0), (width - phodal_width, height), fill='#2196F3'))
shapes = dwg.add(dwg.g(id='shapes', fill='none'))
slogan_link = Hyperlink('https://www.phodal.com/', target='_blank')
shapes.add(dwg.text('phodal', insert=(28, basic_text_y + 1), fill='#000', fill_opacity=0.3, font_size=40,
font_family='Helvetica'))
slogan_link.add(dwg.text('phodal', insert=(27, basic_text_y), fill='#FFFFFF', font_size=40, font_family='Helvetica'))
dwg.add(slogan_link)
def draw_for_bg_plus():
for x in range(y_text_split + rect_length, width, rect_length):
shapes.add(dwg.line((x, 0), (x, height), stroke='#EEEEEE', stroke_width='0.5', stroke_opacity=0.1))
for y in range(rect_length, height, rect_length):
shapes.add(
dwg.line((y_text_split, y), (width, y), stroke='#EEEEEE', stroke_width='0.5', stroke_opacity=0.1))
for x in range(y_text_split + max_rect_length, width, max_rect_length):
for y in range(0, height, max_rect_length):
shapes.add(dwg.line((x, y - 2), (x, y + 2), stroke='#EEEEEE', stroke_width='0.8', stroke_opacity=0.15))
for y in range(0, height, max_rect_length):
for x in range(y_text_split + max_rect_length, width, max_rect_length):
shapes.add(dwg.line((x - 2, y), (x + 2, y), stroke='#EEEEEE', stroke_width='0.8', stroke_opacity=0.15))
draw_for_bg_plus()
shapes.add(
dwg.text('idea', insert=(secondary_text_x + 1, basic_text_y + 1), fill='#000', font_size=40, fill_opacity=0.3,
font_family='Helvetica'))
shapes.add(dwg.text('idea', insert=(secondary_text_x, basic_text_y), fill='#FFFFFF', font_size=40,
font_family='Helvetica'))
dwg.save()
def generate_article():
dwg = svgwrite.Drawing('shields/article-small.svg', size=(u'323', u'50'))
height = 50
width = 323
rect_with_radius_mask = dwg.mask((0, 0), (width, height), id='a')
rect_with_radius_mask.add(dwg.rect((0, 0), (width, height), fill='#eee', rx=3))
dwg.add(rect_with_radius_mask)
g = dwg.add(dwg.g(id='g', fill='none', mask='url(#a)'))
g.add(dwg.rect((0, 0), (phodal_width, height), fill='#5E6772'))
g.add(dwg.rect((phodal_width, 0), (width - phodal_width, height), fill='#ffeb3b'))
shapes = dwg.add(dwg.g(id='shapes', fill='none'))
slogan_link = Hyperlink('https://www.phodal.com/', target='_blank')
shapes.add(dwg.text('phodal', insert=(28, basic_text_y + 1), fill='#000', fill_opacity=0.3, font_size=40,
font_family='Helvetica'))
slogan_link.add(
dwg.text('phodal', insert=(27, basic_text_y), fill='#FFFFFF', font_size=40, font_family='Helvetica'))
dwg.add(slogan_link)
def create_text():
g.add(dwg.text(insert=(phodal_width, 6), fill='#34495e', opacity=0.2, font_size=4,
text='Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, fe-'))
g.add(dwg.text(insert=(phodal_width, 12), fill='#34495e', opacity=0.2, font_size=4,
text='ugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi'))
g.add(dwg.text(insert=(phodal_width, 18), fill='#34495e', opacity=0.2, font_size=4,
text='vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, '))
g.add(dwg.text(insert=(phodal_width, 24), fill='#34495e', opacity=0.2, font_size=4,
text='condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum '))
g.add(dwg.text(insert=(phodal_width, 30), fill='#34495e', opacity=0.2, font_size=4,
text='rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus,'))
g.add(dwg.text(insert=(phodal_width, 36), fill='#34495e', opacity=0.2, font_size=4,
text=' neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi,'))
g.add(dwg.text(insert=(phodal_width, 42), fill='#34495e', opacity=0.2, font_size=4,
text=' tincidunt quis, accumsan porttitor, facilisis luctus, metus'))
g.add(dwg.text(insert=(phodal_width, 48), fill='#34495e', opacity=0.2, font_size=4,
text='Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vivamus magna. Cras in mi at felis aliquet congue. Ut a est eget '))
g.add(dwg.text(insert=(phodal_width, 54), fill='#34495e', opacity=0.2, font_size=4,
text='ligula molestie gravida. Curabitur massa. Donec eleifend, libero at sagittis mollis, tellus est malesuada tellus, at luctus '))
g.add(dwg.text(insert=(phodal_width, 60), fill='#34495e', opacity=0.2, font_size=4,
text='turpis elit sit amet quam. Vivamus pretium ornare est.'))
create_text()
g.add(dwg.text('article', insert=(secondary_text_x + 1, basic_text_y + 1), fill='#000', fill_opacity=0.3,
font_size=40, font_family='Helvetica'))
g.add(dwg.text('article', insert=(secondary_text_x, basic_text_y), fill='#34495e', font_size=40,
font_family='Helvetica'))
dwg.save()
def get_some_random10(num):
results = ''
for x in range(1, num):
results += str(random.getrandbits(1))
return results
def generate_works():
width = 316
height = 50
dwg = svgwrite.Drawing('shields/works-small.svg', size=(u'316', u'50'))
rect_with_radius_mask = dwg.mask((0, 0), (width, height), id='a')
rect_with_radius_mask.add(dwg.rect((0, 0), (width, height), fill='#eee', rx=3))
dwg.add(rect_with_radius_mask)
g = dwg.add(dwg.g(id='g', fill='none', mask='url(#a)'))
g.add(dwg.rect((phodal_width, 0), (width - phodal_width, height), fill='#2c3e50'))
shapes = dwg.add(dwg.g(id='shapes', fill='none'))
for x in range(0, 100, 5):
text = get_some_random10(100)
g.add(
dwg.text(text, insert=(phodal_width + 1, x), fill='#27ae60', font_size=4,
font_family='Inconsolata for Powerline',
opacity=0.3, transform="rotate(15 300, 0)"))
g.add(dwg.rect((0, 0), (phodal_width, height), fill='#5E6772'))
slogan_link = Hyperlink('https://www.phodal.com/', target='_blank')
shapes.add(dwg.text('phodal', insert=(28, basic_text_y + 1), fill='#000', fill_opacity=0.3, font_size=40,
font_family='Helvetica'))
slogan_link.add(
dwg.text('phodal', insert=(27, basic_text_y), fill='#FFFFFF', font_size=40, font_family='Helvetica'))
dwg.add(slogan_link)
shapes.add(
dwg.text('works', insert=(secondary_text_x + 1, basic_text_y + 1), fill='#000', fill_opacity=0.3, font_size=40,
font_family='Helvetica'))
shapes.add(dwg.text('works', insert=(secondary_text_x, basic_text_y), fill='#FFFFFF', font_size=40,
font_family='Helvetica'))
dwg.save()
def generate_design():
# for D Rect
red_point = 272
design_width = 162
width = 338
height = 50
dwg = svgwrite.Drawing('shields/design-small.svg', size=(u'338', u'50'))
rect_with_radius_mask = dwg.mask((0, 0), (width, height), id='a')
rect_with_radius_mask.add(dwg.rect((0, 0), (width, height), fill='#eee', rx=3))
dwg.add(rect_with_radius_mask)
g = dwg.add(dwg.g(id='g', fill='none', mask='url(#a)'))
shapes = dwg.add(dwg.g(id='shapes', fill='none'))
g.add(dwg.rect((0, 0), (phodal_width, 50), fill='#5E6772'))
shapes.add(dwg.rect((phodal_width, 25.6), (design_width, 30), fill='#2196f3'))
shapes.add(dwg.text('design', insert=(secondary_text_x + 5, 36), fill='#000', stroke_width=4, font_size=40,
font_family='Helvetica'))
shapes.add(dwg.rect((phodal_width, 0), (design_width, 26), fill='#03a9f4'))
shapes.add(dwg.rect((phodal_width, 25.6), (design_width, 0.6), fill='#000'))
shapes.add(dwg.text('design', insert=(secondary_text_x + 4, basic_text_y), fill='#FFFFFF', font_size=40,
font_family='Helvetica'))
def draw_red_point():
shapes.add(dwg.ellipse((red_point, 8), (3, 3), fill='#000'))
shapes.add(dwg.ellipse((red_point + 1, 8), (3, 3), fill='#f44336'))
draw_red_point()
slogan_link = Hyperlink('https://www.phodal.com/', target='_blank')
shapes.add(dwg.text('phodal', insert=(28, basic_text_y + 1), fill='#000', fill_opacity=0.3, font_size=40,
font_family='Helvetica'))
slogan_link.add(
dwg.text('phodal', insert=(27, basic_text_y), fill='#FFFFFF', font_size=40, font_family='Helvetica'))
dwg.add(slogan_link)
dwg.save()
generate_idea()
generate_article()
# generate_works()
generate_design()
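# Running this script writes shields/idea-small.svg, shields/article-small.svg and
# shields/design-small.svg; generate_works() is left commented out above, so
# shields/works-small.svg is not regenerated.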
| [
"[email protected]"
] | |
67a21a65d9789ec1615b82c1d99d08c87c6c9089 | da1f49aa0ee3cbbd0b7add4a8ee4210c50fc81b7 | /demo/funs/passing_by_value_ref.py | 37a1dea123297fec3d8947fe4dabb9f97c94cefb | [] | no_license | srikanthpragada/PYTHON_30_AUG_2021 | a1cde290072e152440dcd07dce377154a9e3052e | f84f272718b483fbf67ca8f950e6e4f933307e63 | refs/heads/master | 2023-08-25T14:11:12.826321 | 2021-10-11T14:14:36 | 2021-10-11T14:14:36 | 402,412,522 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 252 | py | # Pass an immutable object by reference
def increment(v):
print(id(v))
v += 1
print(id(v))
print(v)
def prepend(lst, value):
lst.insert(0, value)
a = 100
print(id(a))
increment(a)
print(a)
l = [1, 2, 3]
prepend(l, 10)
print(l)
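# Behaviour of the demo above: the two id() values printed inside increment() differ
# because `v += 1` rebinds the local name to a new int object, so `a` is still 100
# after the call, while prepend() mutates the list object in place and l becomes [10, 1, 2, 3].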
| [
"[email protected]"
] | |
e2d67dc32cb8b6f526bcf44f7fb21a804805dfc6 | d5daa769d289c6e230982aa403cfd5dccc852d7f | /digipal/patches.py | 94affe83ca3ae42c39c7ef5ae224f24ff32a6a3d | [] | no_license | kcl-ddh/digipal | 912171fbe6c5046d72278cdba8613a8450f19e84 | 99f3d9d8d99d3bb9d074ccf147b6a9f06fb60ff1 | refs/heads/master | 2022-12-11T01:48:18.255582 | 2022-01-20T11:51:16 | 2022-01-20T11:51:16 | 6,492,508 | 54 | 19 | null | 2022-11-22T00:34:41 | 2012-11-01T16:04:53 | JavaScript | UTF-8 | Python | false | false | 9,411 | py | '''
Patches for other apps or libraries.
Patches should always be avoided if possible because they are likely to break when the
patched app gets upgraded. However we don't have direct control over them and it's
sometimes the most efficient way to quickly include missing features.
Patches are applied at the end of model.py (b/c/ this module is always loaded)
'''
import re
from digipal.compressor_filters import compressor_patch
def mezzanine_patches():
# Patch 4:
# Fix Mezzanine case-insensitive keyword issue
# See https://github.com/stephenmcd/mezzanine/issues/647
from mezzanine.conf import settings
if 'mezzanine.blog' in settings.INSTALLED_APPS:
keyword_exists = True
try:
from mezzanine.generic.models import Keyword
except Exception, e:
keyword_exists = False
if getattr(settings, 'DEV_SERVER', False):
print 'WARNING: import failed (%s) and Mezzanine Blog case-sensitive keywords patch cannot be applied.' % e
if keyword_exists:
# patch integrated into the latest mezzanine version
# from django.contrib.admin.views.decorators import staff_member_required
# @staff_member_required
# def admin_keywords_submit(request):
# """
# Adds any new given keywords from the custom keywords field in the
# admin, and returns their IDs for use when saving a model with a
# keywords field.
# """
# ids, titles = [], []
# for title in request.POST.get("text_keywords", "").split(","):
# title = "".join([c for c in title if c.isalnum() or c in "- "])
# title = title.strip()
# if title:
# keywords = Keyword.objects.filter(title__iexact=title)
#
# # pick a case-sensitive match if it exists.
# # otherwise pick any other match.
# for keyword in keywords:
# if keyword.title == title:
# break
#
# # no match at all, create a new keyword.
# if not keywords.count():
# keyword = Keyword(title=title)
# keyword.save()
#
# id = str(keyword.id)
# if id not in ids:
# ids.append(id)
# titles.append(title)
# from django.http import HttpResponse
# return HttpResponse("%s|%s" % (",".join(ids), ", ".join(titles)))
#
# import mezzanine.generic.views
# mezzanine.generic.views.admin_keywords_submit = admin_keywords_submit
# TODO: move this code to a new view that extends Mezzanine blog_post_detail.
# Not documented way of doing it so we stick with this temporary
# solution for the moment.
def blogPost_get_related_posts_by_tag(self):
# returns a list of BlogPosts with common tags to the current post
# the list is reverse chronological order
ret = []
from django.contrib.contenttypes.models import ContentType
content_type_id = ContentType.objects.get_for_model(self).id
select = r'''
select p.*
from blog_blogpost p
join generic_assignedkeyword ak on (p.id = ak.object_pk)
where
ak.content_type_id = %s
AND
ak.keyword_id in (select distinct ak2.keyword_id from generic_assignedkeyword ak2 where ak2.object_pk = %s and ak.content_type_id = %s)
AND
p.id <> %s
order by p.publish_date;
'''
params = [content_type_id, self.id, content_type_id, self.id]
# run the query and remove duplicates
posts = {}
for post in BlogPost.objects.raw(select, params):
posts[post.publish_date] = post
keys = posts.keys()
keys.sort()
for key in keys[::-1]:
ret.append(posts[key])
return ret
from mezzanine.blog.models import BlogPost
BlogPost.get_related_posts_by_tag = blogPost_get_related_posts_by_tag
# see https://github.com/stephenmcd/mezzanine/issues/1060
patch_thumbnail = True
if patch_thumbnail:
from mezzanine.core.templatetags import mezzanine_tags
thumbnail = mezzanine_tags.thumbnail
def thumbnail_2(*args, **kwargs):
ret = ''
try:
ret = thumbnail(*args, **kwargs)
except:
pass
return ret
mezzanine_tags.thumbnail = thumbnail_2
def admin_patches():
# Patch 5: bar permissions to some application models in the admin
# Why not doing it with django permissions and groups?
# Because we want to keep the data migration scripts simple and
# therefore we copy all the django records from STG to the other
# servers. This means that we can't have different permissions and
# user groups across our servers.
#
# setings.HIDDEN_ADMIN_APPS = ('APP_LABEL_1', )
#
from mezzanine.conf import settings
import django.contrib.auth.models
_user_has_module_perms_old = django.contrib.auth.models._user_has_module_perms
def _user_has_module_perms(user, app_label):
if user and not user.is_superuser and user.is_active and \
app_label in getattr(settings, 'HIDDEN_ADMIN_APPS', ()):
return False
return _user_has_module_perms_old(user, app_label)
django.contrib.auth.models._user_has_module_perms = _user_has_module_perms
_user_has_perm_old = django.contrib.auth.models._user_has_perm
def _user_has_perm(user, perm, obj):
# perm = 'digipal.add_allograph'
if user and not user.is_superuser and user.is_active and perm and \
re.sub(ur'\..*$', '', '%s' % perm) in getattr(settings, 'HIDDEN_ADMIN_APPS', ()):
return False
return _user_has_perm_old(user, perm, obj)
django.contrib.auth.models._user_has_perm = _user_has_perm
# Whoosh 2.6 patch for the race condition during clearing of the cache
# See JIRA DIGIPAL-480
def whoosh_patches():
import functools
from heapq import nsmallest
from whoosh.compat import iteritems, xrange
from operator import itemgetter
from time import time
from threading import Lock
def lru_cache(maxsize=100):
"""A simple cache that, when the cache is full, deletes the least recently
used 10% of the cached values.
This function duplicates (more-or-less) the protocol of the
``functools.lru_cache`` decorator in the Python 3.2 standard library.
Arguments to the cached function must be hashable.
View the cache statistics tuple ``(hits, misses, maxsize, currsize)``
with f.cache_info(). Clear the cache and statistics with f.cache_clear().
Access the underlying function with f.__wrapped__.
"""
def decorating_function(user_function):
stats = [0, 0] # Hits, misses
data = {}
lastused = {}
lock = Lock()
@functools.wraps(user_function)
def wrapper(*args):
with lock:
try:
result = data[args]
stats[0] += 1 # Hit
except KeyError:
stats[1] += 1 # Miss
if len(data) == maxsize:
for k, _ in nsmallest(maxsize // 10 or 1,
iteritems(lastused),
key=itemgetter(1)):
del data[k]
del lastused[k]
data[args] = user_function(*args)
result = data[args]
finally:
lastused[args] = time()
return result
def cache_info():
with lock:
return stats[0], stats[1], maxsize, len(data)
def cache_clear():
with lock:
data.clear()
lastused.clear()
stats[0] = stats[1] = 0
wrapper.cache_info = cache_info
wrapper.cache_clear = cache_clear
return wrapper
return decorating_function
from whoosh.util import cache
cache.lru_cache = lru_cache
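# Illustrative sketch (not part of the original module) of how the patched decorator
# above is used; `score_term` and `expensive_scoring` are hypothetical names for this
# example only.
#
#     @lru_cache(maxsize=200)
#     def score_term(term):
#         return expensive_scoring(term)
#
#     score_term("alpha")                # miss: computed and stored
#     score_term("alpha")                # hit: served from the cache
#     print(score_term.cache_info())     # (hits, misses, maxsize, currsize)
#     score_term.cache_clear()           # empty the cache and reset the statistics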
| [
"[email protected]"
] | |
63af416133ae408431705293a3634711f5cf8416 | 7bcc7f36743694a2b0a8aa1b496ceca1371b130e | /rltools/rltools/distributions.py | 8759f25a04a5caad0764865ddf21a86f17d5cb22 | [] | no_license | parachutel/MADRL | 7cfa32ab0e9a6bee2b6e31434a8e2835b9e9265d | f03b8009ede6d3324f6e2091dcfa3911b5968fe0 | refs/heads/master | 2020-04-25T07:23:21.004116 | 2019-10-09T21:21:16 | 2019-10-09T21:21:16 | 172,612,159 | 0 | 1 | null | 2019-02-26T01:10:00 | 2019-02-26T01:10:00 | null | UTF-8 | Python | false | false | 5,032 | py | import numpy as np
import tensorflow as tf
from rltools import tfutil
from rltools import util
TINY = 1e-10
class Distribution(object):
@property
def dim(self):
raise NotImplementedError()
def kl(self, old, new):
raise NotImplementedError()
def log_density(self, dist_params, x):
raise NotImplementedError()
def entropy(self, logprobs_N_K):
raise NotImplementedError()
def sample(self, logprobs_N_K):
raise NotImplementedError()
def kl_expr(self, logprobs1, logprobs2):
raise NotImplementedError()
def log_density_expr(self, dist_params, x):
raise NotImplementedError()
class Categorical(Distribution):
def __init__(self, dim):
self._dim = dim
@property
def dim(self):
return self._dim
def log_density(self, dist_params_B_A, x_B_A):
return util.lookup_last_idx(dist_params_B_A, x_B_A)
def entropy(self, probs_N_K):
tmp = -probs_N_K * np.log(probs_N_K + TINY)
tmp[~np.isfinite(tmp)] = 0
return tmp.sum(axis=1)
def sample(self, probs_N_K):
"""Sample from N categorical distributions, each over K outcomes"""
N, K = probs_N_K.shape
return np.array([np.random.choice(K, p=probs_N_K[i, :]) for i in range(N)])
def kl_expr(self, logprobs1_B_A, logprobs2_B_A, name=None):
"""KL divergence between categorical distributions, specified as log probabilities"""
with tf.op_scope([logprobs1_B_A, logprobs2_B_A], name, 'categorical_kl') as scope:
kl_B = tf.reduce_sum(
tf.exp(logprobs1_B_A) * (logprobs1_B_A - logprobs2_B_A), 1, name=scope)
return kl_B
def log_density_expr(self, dist_params_B_A, x_B_A):
"""Log density from categorical distribution params"""
return tfutil.lookup_last_idx(dist_params_B_A, x_B_A)
class RecurrentCategorical(Distribution):
def __init__(self, dim):
self._dim = dim
self._cat = Categorical(dim)
@property
def dim(self):
return self._dim
def log_density(self, dist_params_B_H_A, x_B_H_A):
adim = dist_params_B_H_A.shape[-1]
flat_logd = self._cat.log_density(
dist_params_B_H_A.reshape((-1, adim)), x_B_H_A.reshape((-1, adim)))
return flat_logd.reshape(dist_params_B_H_A.shape)
def entropy(self, probs_N_H_K):
tmp = -probs_N_H_K * np.log(probs_N_H_K + TINY)
tmp[~np.isfinite(tmp)] = 0
return tmp.sum(axis=-1)
def sample(self, probs_N_K):
"""Sample from N categorical distributions, each over K outcomes"""
return self._cat.sample(probs_N_K)
def kl_expr(self, logprobs1_B_H_A, logprobs2_B_H_A, name=None):
"""KL divergence between categorical distributions, specified as log probabilities"""
with tf.op_scope([logprobs1_B_H_A, logprobs2_B_H_A], name, 'categorical_kl') as scope:
kl_B_H = tf.reduce_sum(
tf.exp(logprobs1_B_H_A) * (logprobs1_B_H_A - logprobs2_B_H_A), 2, name=scope)
return kl_B_H
def log_density_expr(self, dist_params_B_H_A, x_B_H_A):
adim = tf.shape(dist_params_B_H_A)[len(dist_params_B_H_A.get_shape()) - 1]
flat_logd = self._cat.log_density_expr(
tf.reshape(dist_params_B_H_A, tf.pack([-1, adim])),
tf.reshape(x_B_H_A, tf.pack([-1, adim])))
return tf.reshape(flat_logd, tf.shape(dist_params_B_H_A)[:2])
class Gaussian(Distribution):
def __init__(self, dim):
self._dim = dim
@property
def dim(self):
return self._dim
def entropy(self, stdevs):
d = stdevs.shape[-1]
return .5 * d * (1. + np.log(2. * np.pi)) + np.log(stdevs).sum(axis=-1)
def kl_expr(self, means1_stdevs1, means2_stdevs2, name=None):
"""KL divergence wbw diagonal covariant gaussians"""
means1, stdevs1 = means1_stdevs1
means2, stdevs2 = means2_stdevs2
with tf.op_scope([means1, stdevs1, means2, stdevs2], name, 'gaussian_kl') as scope:
D = tf.shape(means1)[len(means1.get_shape()) - 1]
kl = tf.mul(.5, (tf.reduce_sum(tf.square(stdevs1 / stdevs2), -1) + tf.reduce_sum(
tf.square((means2 - means1) / stdevs2), -1) + 2. * (tf.reduce_sum(
tf.log(stdevs2), -1) - tf.reduce_sum(tf.log(stdevs1), -1)) - tf.to_float(D)),
name=scope)
return kl
def log_density_expr(self, means, stdevs, x, name=None):
"""Log density of diagonal gauss"""
with tf.op_scope([means, stdevs, x], name, 'gauss_log_density') as scope:
D = tf.shape(means)[len(means.get_shape()) - 1]
            # log normalisation constant of a diagonal Gaussian: -D/2 * log(2*pi) - sum(log(stdevs))
            lognormconsts = -.5 * tf.to_float(D) * np.log(2. * np.pi) - tf.reduce_sum(
                tf.log(stdevs), -1)
logprobs = tf.add(-.5 * tf.reduce_sum(tf.square((x - means) / stdevs), -1),
lognormconsts, name=scope)
return logprobs
RecurrentGaussian = Gaussian
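# Illustrative sketch (not part of the original module): using the Categorical
# helper above with a batch of N=2 distributions over K=3 outcomes.
#
#     cat = Categorical(dim=3)
#     probs_N_K = np.array([[0.2, 0.5, 0.3],
#                           [0.9, 0.05, 0.05]])
#     actions_N = cat.sample(probs_N_K)    # one sampled outcome index per row
#     entropy_N = cat.entropy(probs_N_K)   # per-row entropy in nats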
| [
"[email protected]"
] | |
be1820b5330c658d7e0361946656285991cd853c | 7a4f61d55378fec4df952bd574cb1db57e795084 | /analyze/simulation/Trader.py | 273c29a7e0b47bf9d142acc039fb3a9db5d5fd64 | [] | no_license | 569593913/enoch | 9da83638b99aa6e6a649914e01bef3abdc236895 | 8c8c7ff5a527a5287894b6522fe4e87974cc5e01 | refs/heads/master | 2021-05-05T15:48:45.931858 | 2019-03-03T12:07:47 | 2019-03-03T12:07:47 | 117,324,651 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,497 | py | # -*- coding: utf-8 -*-
from .HoldStock import *
from .TradeRecord import *
import threading
from datetime import *
class Trader:
"""
    Simulated trader.
    attributes:
        original      initial capital
        cash          current cash balance
        own           currently held stocks; dict keyed by stock code, value is a list of HoldStock
        record        list of trade records
        taxRate       tax / commission rate
        earningsLine  earnings curve; list of [time, earnings at that time] entries
"""
def __init__(self, original=1000000):
"""
        :param original: initial capital
"""
self.original = original
self.cash = original
self.own = {}
self.record = []
self.earningsLine = []
self.taxRate = 0.002
self.lock = threading.Lock()
def buy(self, code, buyTime, buyPrice, buyAmount=None,condition=None):
"""
        Buy one stock.
        :param code: stock code
        :param buyTime: purchase time (long timestamp)
        :param buyPrice: purchase price
        :param buyAmount: amount of cash to spend on this purchase
:return:
"""
with self.lock:
if buyPrice <= 0:
print("buyPrice=0 return")
return
            # cash available for this purchase
if buyAmount == None or buyAmount > self.cash:
buyAmount = self.cash
            # amount that can actually be spent after tax
afterTaxAmount = 0
tax = 0
if buyAmount*(1+self.taxRate) < self.cash:
afterTaxAmount = buyAmount
tax = buyAmount*self.taxRate
else:
tax = self.cash * self.taxRate
afterTaxAmount = self.cash - tax
            # number of shares that can be bought
canBuyQuantity = afterTaxAmount / buyPrice
if canBuyQuantity < 100:
# print('buy code:%s quantity:%s less 100,price:%s,cash:%s,original:%s,isHold:%s' \
# % (code,quantity,buyPrice,self.cash,self.original,self.isHold(code)))
return
holdStock = HoldStock(code, buyTime, buyPrice, buyPrice, canBuyQuantity)
if code not in self.own:
self.own[code] = []
self.own[code].append(holdStock)
tradeRecord = TradeRecord(code, buyTime, "buy", buyPrice, canBuyQuantity,condition)
self.record.append(tradeRecord)
self.cash -= (afterTaxAmount+tax)
def isHold(self, code):
if (code not in self.own) or (len(self.own[code]) < 1):
return False
return True
def sell(self, code, sellTime, sellPrice, quantity=None,condition=None):
"""
        Sell one stock.
        :param code: stock code
        :param sellTime: sell time (long timestamp)
        :param sellPrice: sell price
        :param quantity: number of shares to sell
:return:
"""
with self.lock:
if self.isHold(code) == False:
# print("%s 没有持有,不可卖!" % code)
return
if sellPrice <= 0:
print('price:%s' % sellPrice)
return
if None == quantity:
quantity = 0
for hs in self.own[code]:
quantity += hs.quantity
if quantity < 100:
                print('sell quantity:%s is less than 100' % quantity)
return
            # work out how many shares can actually be sold
actualSellQuantity = 0
holdStocks = []
            for hs in self.own[code]:  # list order guarantees first bought, first sold (FIFO)
if quantity > 0:
                    if (sellTime - hs.buyTime) > 1000:  # can only sell after more than one day; this simple check works only for day-granularity data
if hs.quantity >= quantity:
actualSellQuantity += quantity
hs.quantity -= quantity
quantity = 0
else:
actualSellQuantity += hs.quantity
quantity -= hs.quantity
hs.quantity = 0
                if hs.quantity != 0:  # drop entries whose quantity has reached 0
holdStocks.append(hs)
self.own[code] = holdStocks
self.cash += sellPrice * actualSellQuantity
tradeRecord = TradeRecord(code, sellTime, "sell", sellPrice, actualSellQuantity,condition)
self.record.append(tradeRecord)
def asset(self):
"""
        Get the current total assets (cash plus market value of holdings).
:return:
"""
with self.lock:
asset = self.cash
for hsl in self.own.values():
for hs in hsl:
asset += hs.capitalisation()
return asset
def earnings(self):
"""
        Current rate of return relative to the initial capital.
:return:
"""
return (self.asset() - self.original) / self.original
def fresh(self,date, dic):
"""
        Refresh the prices of the held stocks and record the current earnings.
        :param date: date of this price update
        :param dic: mapping of stock code -> latest price
:return:
"""
with self.lock:
for code, price in dic.items():
for hs in self.own.get(code, []):
hs.currentPice = price
self.earningsLine.append([date,self.earnings()])
def __repr__(self):
return "Trader{earnings=%s,asset=%s,\noriginal=%s,\ncash=%s,\nown=%s,\nrecord=%s,\nearningsLine=%s}" % \
               (self.earnings(), self.asset(), self.original, self.cash, self.own, self.record, self.earningsLine) | [
"[email protected]"
] | |
2ed228386348b55ff136f6ca805e70a088d5c9d1 | 2f8190a41c90b5fdc44610358e5222d0193b1ae9 | /geomodelgrids/apps/__init__.py | f9a17349b10e2cb487df056c5d1107496c1f2c29 | [
"LicenseRef-scancode-public-domain",
"CC0-1.0",
"MIT",
"LicenseRef-scancode-warranty-disclaimer"
] | permissive | ehirakawa/geomodelgrids | 26e296be37ade0d2f877d9e6263a41663835b08a | 827668ea5b2d03621a750f1a068dc81f58316399 | refs/heads/master | 2020-08-20T22:32:30.093818 | 2019-11-27T04:35:51 | 2019-11-27T04:35:51 | 216,073,345 | 0 | 0 | NOASSERTION | 2019-10-18T17:17:31 | 2019-10-18T17:17:31 | null | UTF-8 | Python | false | false | 77 | py | """Initialize geomodelgrids applications module."""
from . import rasterize
| [
"[email protected]"
] | |
f957d70afd2e84447f77b11cfb6357e00c3b3251 | 09cd370cdae12eb45090033a00e9aae45ee26638 | /BOJ/[S1]2110 공유기 설치.py | f30a43696c21b9a44ec824111b7989ae7af87ed9 | [] | no_license | KWONILCHEOL/Python | ee340f6328945651eb29d2b23c425a92c84a4adb | 1ea5f5f74894a5929e0e894c5c12f049b8eb9fb4 | refs/heads/main | 2023-04-11T09:36:54.874638 | 2021-04-24T04:29:12 | 2021-04-24T04:29:12 | 328,658,511 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 666 | py | # [S1]2110 공유기 설치
# https://www.acmicpc.net/problem/2110
# Parametric search: binary search on the answer
def check(k):
value = arr[0]
cnt = 1
for i in range(1,n):
if arr[i] >= value + k:
value = arr[i]
cnt += 1
if cnt == c:
return True
return False
import sys
input = sys.stdin.readline
n,c = list(map(int, input().split()))
arr = []
for _ in range(n):
arr.append(int(input()))
arr.sort()
lo = 1  # minimum gap; always feasible because the house coordinates are distinct
hi = arr[-1] - arr[0] + 1  # exclusive upper bound; a gap larger than the whole span is impossible
# invariant: check(lo) is True and check(hi) is False, so the answer is the final lo
while lo + 1 < hi: #O(logN)
    mid = (lo + hi) // 2
    if check(mid) == False:
        hi = mid
    else:
        lo = mid
print(lo) | [
"[email protected]"
] | |
bf376078d8387fe06b0ee8b09d2774dda4c6d84a | 78c4f0d5cfcdcec678ff78e259b64550692120fa | /ormar/models/__init__.py | eb6bdd7c5c5224e69246a5b734a225109dc2cd42 | [
"MIT"
] | permissive | dudil/ormar | 3c31d592f6a23cacff2ac0e3a6163f4778ab286f | 7c0f8e976a651c40dbc669f1caba1361f0af2ead | refs/heads/master | 2023-07-07T22:26:20.165473 | 2021-03-05T11:37:28 | 2021-03-05T11:37:28 | 344,250,603 | 0 | 0 | MIT | 2021-09-07T03:44:08 | 2021-03-03T20:08:35 | Python | UTF-8 | Python | false | false | 555 | py | """
Definition of Model, its parents NewBaseModel and mixins used by models.
Also defines a Metaclass that handles all constructions and relations registration,
as well as a vast number of helper functions for pydantic, sqlalchemy and relations.
"""
from ormar.models.newbasemodel import NewBaseModel # noqa I100
from ormar.models.model_row import ModelRow # noqa I100
from ormar.models.model import Model # noqa I100
from ormar.models.excludable import ExcludableItems # noqa I100
__all__ = ["NewBaseModel", "Model", "ModelRow", "ExcludableItems"]
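# Illustrative sketch (not part of the original module): a minimal model definition
# using the classes re-exported above. The databases/sqlalchemy setup shown here is
# an assumption about the surrounding application, not something this package provides.
#
#     import databases
#     import sqlalchemy
#     import ormar
#
#     database = databases.Database("sqlite:///db.sqlite")
#     metadata = sqlalchemy.MetaData()
#
#     class Book(ormar.Model):
#         class Meta:
#             database = database
#             metadata = metadata
#
#         id: int = ormar.Integer(primary_key=True)
#         title: str = ormar.String(max_length=200)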
| [
"[email protected]"
] | |
f459e40a8809ff26b3ca6c6860a1ad7d2bc37866 | eb2ebaf1c53cfeb5a1abbc05e6423c5fdc27ca8a | /.history/game_functions_20200129213418.py | 3fdbdaa6f774bd7226f7e217a22bd777294d0e99 | [] | no_license | HeperW/VSCO | 8120bde36c73601e5b956955c0b68b599c9a663d | 4c3ee4aa45424313e335ff05934fbb6df7950304 | refs/heads/master | 2022-05-08T00:51:44.618385 | 2022-03-16T03:18:34 | 2022-03-16T03:18:34 | 233,367,724 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 429 | py | import sys
import pygame
def check_events():
"""响应按键与鼠标事件"""
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
def update_screen(ai_settings,screen,ship):
"""更新屏幕上的图像,并切换到新屏幕"""
#每次循环时都重绘屏幕
screen.fill(ai_settings,bg_color)
ship.blitme()
    # Make the most recently drawn screen visible
    pygame.display.flip() | [
"[email protected]"
] | |
3d5b02ff98f9c65ed8a040f582968c14c0bc5442 | 53fa34a5ecfbeea84c960afc3ba088c3a7a41587 | /subunit2sql/migrations/versions/10a2b6d4b06e_add_even_more_indexes.py | 76c52efc0511d66674a0a73f6f3796157a16d32f | [
"Apache-2.0"
] | permissive | mguiney/subunit2sql | dd7259110363416c7892fdab63e2391884a5179b | 6f95e43478ba91027e07af0d9c7e1305f0829c2e | refs/heads/master | 2021-09-06T23:09:21.095681 | 2018-02-07T19:42:46 | 2018-02-07T19:42:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,503 | py | # Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Add even more indexes
Revision ID: 10a2b6d4b06e
Revises: 35cd45895e56
Create Date: 2015-12-01 18:19:11.328298
"""
# revision identifiers, used by Alembic.
revision = '10a2b6d4b06e'
down_revision = '35cd45895e56'
from alembic import op
def upgrade():
with op.batch_alter_table('run_metadata') as batch_op:
batch_op.create_unique_constraint('uq_run_metadata',
['run_id', 'key', 'value'])
with op.batch_alter_table('test_metadata') as batch_op:
batch_op.create_unique_constraint('uq_test_metadata',
['test_id', 'key', 'value'])
with op.batch_alter_table('test_run_metadata') as batch_op:
batch_op.create_unique_constraint('uq_test_run_metadata',
['test_run_id', 'key', 'value'])
def downgrade():
    raise NotImplementedError()
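# Illustrative usage (an assumption about the surrounding project setup, not part of
# this migration): with a configured alembic.ini the revision is applied with
#
#     alembic upgrade 10a2b6d4b06e
#
# Rolling back past it fails by design, since downgrade() raises NotImplementedError.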
| [
"[email protected]"
] | |
61ef95b1f797f2f0ce56bf7014743057452773e3 | 6e601105760f09d3c9f5306e18e4cf085f0bb4a2 | /10000-99999/10875.py | 2f6f2663592474c300ad791f26143b1e12197241 | [] | no_license | WSJI0/BOJ | 6412f69fddd46c4bcc96377e2b6e013f3bb1b524 | 160d8c13f72d7da835d938686f433e7b245be682 | refs/heads/master | 2023-07-06T15:35:50.815021 | 2023-07-04T01:39:48 | 2023-07-04T01:39:48 | 199,650,520 | 2 | 0 | null | 2020-04-20T09:03:03 | 2019-07-30T12:48:37 | Python | UTF-8 | Python | false | false | 633 | py | '''
Problem 10875
Snake (뱀)
Incomplete solution (미완성)
'''
import sys
input=sys.stdin.readline
l=int(input())
n=int(input())
a=[]
for _ in range(n):
a.append(list(input().rstrip().split()))
visited=[]
now=[0, 0]
d='r'
change={
'r':['u', 'd'],
'd':['r', 'l'],
'l':['d', 'u'],
'u':['l', 'r']
}
ans=0
for i in range(n):
if d=='r':
if now[0]+a[i][0]<=l:
col=-1
for j in range(len(visited)):
if
else:
ans+=l-(now[0]+a[i][0])
break
visited.append([now[0], now[1], now[0]+a[i][0], now[1]])
if a[i][1]=='L': d=change[d][0]
else: d=change[d][1] | [
"[email protected]"
] | |
de60657dc3cdcffb3495e5a43f9c653e3ec535ad | 81d7c62c357c086a8990105d4179c9a2cda0f89c | /Requests_module_old_project_cDVR_reference/aurora_cdvr_sanity_tests/scripts/sanity_scripts/sanity_dataplane_MOSrecorder.py | b8be0ae710863f822af8a34ae23dcbfa6aa0c8e1 | [] | no_license | JAGASABARIVEL/Python_reference | 57f49a3f8d894f02f8003657a914395f4b55d844 | f2438289f189fc364dbe9dff0421c3df9be366b2 | refs/heads/master | 2020-04-08T19:23:23.406783 | 2018-11-29T12:25:43 | 2018-11-29T12:25:43 | 159,653,055 | 0 | 1 | null | 2020-01-25T19:07:42 | 2018-11-29T11:06:42 | Python | UTF-8 | Python | false | false | 3,606 | py | #!/usr/bin/python
import os
import sys
import time
import requests
import json
import ast
import mypaths
from readYamlConfig import readYAMLConfigs
from L1commonFunctions import *
from L2commonFunctions import *
##########################################################################
# Get MOS recorder
##########################################################################
def doit(cfg,printflg=False):
disable_warning() #Method Called to suppress all warnings
# announce
abspath = os.path.abspath(__file__)
scriptName = os.path.basename(__file__)
(test, ext) = os.path.splitext(scriptName)
name = (__file__.split('/'))[-1]
I = 'Core DVR Functionality' #rally initiatives
US = ' Get MOS recorder'
TIMS_testlog = []
TIMS_testlog = [name,I,US]
print "Starting test " + test
recording_api = cfg['recording_api']
if recording_api != 'mos':
TIMS_testlog.append(2)
        msg = 'Testcase warning :Test bypassed since recording_api is not mos'
print msg
TIMS_testlog.append(msg)
return TIMS_testlog
# set values based on config
hosts = get_hosts_by_config_type(cfg,'rm',printflg)
print hosts
if hosts == None:
msg = 'Testcase failed :unable to get the host ip'
print msg
TIMS_testlog.append(1)
TIMS_testlog.append(msg)
return TIMS_testlog
protocol = cfg['protocol']
throttle_milliseconds = cfg['sanity']['throttle_milliseconds']
if throttle_milliseconds < 1:
throttle_milliseconds = 25
headers = {
'Content-Type': 'application/json; charset=utf-8',
}
timeout=5
any_host_pass = 0
for index, host in enumerate(hosts):
if index > 1:
time.sleep(throttle_milliseconds / 1000.0 )
url = protocol +"://" + host + "/emdata/MosRecorder"
print "Get MOS recorder via ", url
r = sendURL("get",url,timeout,headers)
if r is not None :
if ( r.status_code != 200):
print "Problem accessing: " + url
print r.status_code
print r.headers
print r.content
msg= 'Testcase failed :Problem accessing url'
print msg
TIMS_testlog.append(1)
TIMS_testlog.append(msg)
return TIMS_testlog
else:
                if r.content is not None :
print "\n" + "#"*20 + " DEBUG STARTED "+ "#"*20+ "\n"
print "Get MOS recorder \n" + json.dumps(json.loads(r.content),indent = 4, sort_keys=False)
print "\n" + "#"*20 + " DEBUG ENDED "+ "#"*20+ "\n"
any_host_pass = any_host_pass + 1
printLog("Get MOS recorder \n" + json.dumps(json.loads(r.content),indent = 4, sort_keys=False),printflg)
if any_host_pass:
msg = 'Testcase passed :MOS recorder is successfully retrieved'
print msg
TIMS_testlog.append(0)
TIMS_testlog.append(msg)
return TIMS_testlog
else:
msg = 'Testcase failed :MOS recorder is not successfully retrieved '
print msg
TIMS_testlog.append(1)
TIMS_testlog.append(msg)
return TIMS_testlog
if __name__ == '__main__':
scriptName = os.path.basename(__file__)
#read config file
sa = sys.argv
cfg = relative_config_file(sa,scriptName)
if cfg['sanity']['print_cfg']:
print "\nThe following configuration is being used:\n"
pprint(cfg)
print
L = doit(cfg, True)
exit(L[3] )
| [
"[email protected]"
] | |
6bca10149c8caba082f89061bea81bdf7681bbb6 | 23dfbb1bab2e8a8a8d85b30fb9edfdbf0a2cbdaa | /python/test/feature_extractor_test.py | f632a7868ad93854f69ef642494231f3aa031000 | [
"Apache-2.0"
] | permissive | zucksong/vmaf | ff13854bd06ddf3554bcdcea8bffb8c8daa767ac | 90aff1d5464e05ed752dd39a285de6636b9fa897 | refs/heads/master | 2021-05-31T23:25:03.229557 | 2016-06-28T20:43:05 | 2016-06-28T20:43:05 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 37,542 | py | __copyright__ = "Copyright 2016, Netflix, Inc."
__license__ = "Apache, Version 2.0"
import os
import unittest
import re
import config
from core.feature_extractor import VmafFeatureExtractor, MomentFeatureExtractor, \
PsnrFeatureExtractor, SsimFeatureExtractor, MsSsimFeatureExtractor
from core.asset import Asset
from core.executor import run_executors_in_parallel
from core.result_store import FileSystemResultStore
class FeatureExtractorTest(unittest.TestCase):
def tearDown(self):
if hasattr(self, 'fextractor'):
self.fextractor.remove_results()
pass
def test_executor_id(self):
asset = Asset(dataset="test", content_id=0, asset_id=1,
ref_path="dir/refvideo.yuv", dis_path="dir/disvideo.yuv",
asset_dict={})
fextractor = VmafFeatureExtractor([asset], None)
self.assertEquals(fextractor.executor_id, "VMAF_feature_V0.2.1")
def test_get_log_file_path(self):
import hashlib
asset = Asset(dataset="test", content_id=0, asset_id=1,
ref_path="dir/refvideo.yuv", dis_path="dir/disvideo.yuv",
asset_dict={'width':720, 'height':480,
'start_frame':2, 'end_frame':2},
workdir_root="my_workdir_root")
fextractor = VmafFeatureExtractor([asset], None)
log_file_path = fextractor._get_log_file_path(asset)
h = hashlib.sha1("test_0_1_refvideo_720x480_2to2_vs_disvideo_720x480_2to2_q_720x480").hexdigest()
self.assertTrue(re.match(r"^my_workdir_root/[a-zA-Z0-9-]+/VMAF_feature_V0.2.1_{}$".format(h), log_file_path))
def test_run_vamf_fextractor(self):
print 'test on running VMAF feature extractor...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractor = VmafFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['VMAF_feature_vif_score'], 0.44455808333333313, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm2_score'], 0.9254334398006141, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_ansnr_score'], 22.533456770833329, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_num_score'], 644527.3311971038, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_num_score'], 6899.815530270836, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_anpsnr_score'], 34.15266368750002, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale0_score'], 0.3655846219305399, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale1_score'], 0.7722301581694561, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale2_score'], 0.8681486658208089, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale3_score'], 0.9207121810522212, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_ansnr_score'], 30.030914145833322, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_num_score'], 1449635.3522745417, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_num_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_anpsnr_score'], 41.65012097916668, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale0_score'], 1.0000000132944864, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale1_score'], 0.9999998271651448, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale2_score'], 0.9999998649680067, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale3_score'], 0.9999998102499, places=4)
def test_run_vamf_fextractor_with_result_store(self):
print 'test on running VMAF feature extractor with result store...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
result_store = FileSystemResultStore(logger=None)
self.fextractor = VmafFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=result_store
)
print ' running for the first time with fresh calculation...'
self.fextractor.run()
result0, result1 = self.fextractor.results
self.assertTrue(os.path.exists(result_store._get_result_file_path(result0)))
self.assertTrue(os.path.exists(result_store._get_result_file_path(result1)))
print ' running for the second time with stored results...'
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['VMAF_feature_vif_score'], 0.44455808333333313, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm2_score'], 0.9254334398006141, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_ansnr_score'], 22.533456770833329, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_num_score'], 644527.3311971038, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_num_score'], 6899.815530270836, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_anpsnr_score'], 34.15266368750002, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale0_score'], 0.3655846219305399, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale1_score'], 0.7722301581694561, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale2_score'], 0.8681486658208089, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale3_score'], 0.9207121810522212, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_ansnr_score'], 30.030914145833322, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_num_score'], 1449635.3522745417, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_num_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_anpsnr_score'], 41.65012097916668, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale0_score'], 1.0000000132944864, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale1_score'], 0.9999998271651448, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale2_score'], 0.9999998649680067, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale3_score'], 0.9999998102499, places=4)
def test_run_vmaf_fextractor_not_unique(self):
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
with self.assertRaises(AssertionError):
self.fextractor = VmafFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True)
def test_run_moment_fextractor(self):
print 'test on running Moment feature extractor...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractor = MomentFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 61.332006624999984, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 4798.659574041666, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 1036.837184348847, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 1121.519917231203, places=4)
def test_run_psnr_fextractor(self):
print 'test on running PSNR feature extractor...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractor = PsnrFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['PSNR_feature_psnr_score'], 30.755063979166664, places=4)
self.assertAlmostEqual(results[1]['PSNR_feature_psnr_score'], 60.0, places=4)
def test_run_ssim_fextractor(self):
print 'test on running SSIM feature extractor...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractor = SsimFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_score'], 0.86325137500000004, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_l_score'], 0.99814749999999997, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_c_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_s_score'], 0.89770760416666662, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_l_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_c_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_s_score'], 1.0, places=4)
def test_run_ms_ssim_fextractor(self):
print 'test on running MS-SSIM feature extractor...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractor = MsSsimFeatureExtractor(
[asset, asset_original],
None, fifo_mode=True,
result_store=None
)
self.fextractor.run()
results = self.fextractor.results
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_score'], 0.96324620833333319, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 0.9981474999999999, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 0.8977076041666665, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 0.9989961250000002, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 0.9858215416666668, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 0.9411672708333335, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 0.9992356458333332, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 0.9970406458333333, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 0.9779967291666667, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 0.9992921041666665, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 0.9995884375000003, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 0.9938731666666668, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 0.99940356249999995, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 0.99990762500000008, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 0.99822306250000004, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 1., places=4)
class ParallelFeatureExtractorTest(unittest.TestCase):
def tearDown(self):
if hasattr(self, 'fextractors'):
for fextractor in self.fextractors:
fextractor.remove_results()
pass
def test_run_parallel_vamf_fextractor(self):
print 'test on running VMAF feature extractor in parallel...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractors, results = run_executors_in_parallel(
VmafFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=None,
)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_score'], 0.44455808333333313, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm2_score'], 0.9254334398006141, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_ansnr_score'], 22.533456770833329, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_num_score'], 644527.3311971038, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_num_score'], 6899.815530270836, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_anpsnr_score'], 34.15266368750002, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale0_score'], 0.3655846219305399, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale1_score'], 0.7722301581694561, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale2_score'], 0.8681486658208089, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale3_score'], 0.9207121810522212, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_ansnr_score'], 30.030914145833322, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_num_score'], 1449635.3522745417, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_num_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_anpsnr_score'], 41.65012097916668, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale0_score'], 1.0000000132944864, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale1_score'], 0.9999998271651448, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale2_score'], 0.9999998649680067, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale3_score'], 0.9999998102499, places=4)
def test_run_parallel_vamf_fextractor_with_result_store(self):
print 'test on running VMAF feature extractor with result store ' \
'in parallel...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
result_store = FileSystemResultStore(logger=None)
print ' running for the first time with fresh calculation...'
self.fextractors, results = run_executors_in_parallel(
VmafFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=result_store,
)
result0, result1 = results
self.assertTrue(os.path.exists(result_store._get_result_file_path(result0)))
self.assertTrue(os.path.exists(result_store._get_result_file_path(result1)))
print ' running for the second time with stored results...'
_, results = run_executors_in_parallel(
VmafFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=result_store,
)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_score'], 0.44455808333333313, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm2_score'], 0.9254334398006141, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_ansnr_score'], 22.533456770833329, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_num_score'], 644527.3311971038, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_num_score'], 6899.815530270836, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_anpsnr_score'], 34.15266368750002, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale0_score'], 0.3655846219305399, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale1_score'], 0.7722301581694561, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale2_score'], 0.8681486658208089, places=4)
self.assertAlmostEqual(results[0]['VMAF_feature_vif_scale3_score'], 0.9207121810522212, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_motion_score'], 3.5916076041666667, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm2_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_ansnr_score'], 30.030914145833322, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_num_score'], 1449635.3522745417, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_den_score'], 1449635.3812459996, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_num_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_adm_den_score'], 7535.801140312499, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_anpsnr_score'], 41.65012097916668, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale0_score'], 1.0000000132944864, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale1_score'], 0.9999998271651448, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale2_score'], 0.9999998649680067, places=4)
self.assertAlmostEqual(results[1]['VMAF_feature_vif_scale3_score'], 0.9999998102499, places=4)
def test_run_parallel_moment_fextractor(self):
print 'test on running Moment feature extractor in parallel...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractors, results = run_executors_in_parallel(
MomentFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=None,
)
self.assertAlmostEqual(results[0]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis1st_score'], 61.332006624999984, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_dis2nd_score'], 4798.659574041666, places=4)
self.assertAlmostEqual(results[0]['Moment_feature_disvar_score'], 1036.837184348847, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_ref2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_refvar_score'], 1121.519917231203, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis1st_score'], 59.788567297525134, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_dis2nd_score'], 4696.668388042269, places=4)
self.assertAlmostEqual(results[1]['Moment_feature_disvar_score'], 1121.519917231203, places=4)
def test_run_parallel_ssim_fextractor(self):
print 'test on running SSIM feature extractor in parallel...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractors, results = run_executors_in_parallel(
SsimFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=None,
)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_score'], 0.86325137500000004, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_l_score'], 0.99814749999999997, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_c_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['SSIM_feature_ssim_s_score'], 0.89770760416666662, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_l_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_c_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['SSIM_feature_ssim_s_score'], 1.0, places=4)
def test_run_parallel_ms_ssim_fextractor(self):
print 'test on running MS-SSIM feature extractor in parallel...'
ref_path = config.ROOT + "/resource/yuv/src01_hrc00_576x324.yuv"
dis_path = config.ROOT + "/resource/yuv/src01_hrc01_576x324.yuv"
asset = Asset(dataset="test", content_id=0, asset_id=0,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=dis_path,
asset_dict={'width':576, 'height':324})
asset_original = Asset(dataset="test", content_id=0, asset_id=1,
workdir_root=config.ROOT + "/workspace/workdir",
ref_path=ref_path,
dis_path=ref_path,
asset_dict={'width':576, 'height':324})
self.fextractors, results = run_executors_in_parallel(
MsSsimFeatureExtractor,
[asset, asset_original],
fifo_mode=True,
delete_workdir=True,
parallelize=True,
result_store=None,
)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_score'], 0.96324620833333319, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 0.9981474999999999, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 0.96126793750000006, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 0.8977076041666665, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 0.9989961250000002, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 0.9858215416666668, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 0.9411672708333335, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 0.9992356458333332, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 0.9970406458333333, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 0.9779967291666667, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 0.9992921041666665, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 0.9995884375000003, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 0.9938731666666668, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 0.99940356249999995, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 0.99990762500000008, places=4)
self.assertAlmostEqual(results[0]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 0.99822306250000004, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_score'], 1.0, places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale0_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale1_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale2_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale3_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_l_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_c_scale4_score'], 1., places=4)
self.assertAlmostEqual(results[1]['MS_SSIM_feature_ms_ssim_s_scale4_score'], 1., places=4)
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
907c9f9d2b9bd1d9a072348f7dbe58b4a396c314 | 877bd6d9f9f38320a82f46d6c581d6099f77597b | /dev/installers/windows/substitute_version.py | b93fc7dafa563a3d23215c2ebe705f6cdd978140 | [
"BSD-3-Clause",
"CC-BY-3.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | caltechlibrary/holdit | 04b7563722e6883ffe135398401e35da35b0af73 | 474165764e3514303dfd118d1beb0b6570fb6e13 | refs/heads/main | 2021-06-01T17:06:24.569179 | 2020-12-04T20:39:45 | 2020-12-04T20:39:45 | 141,499,163 | 2 | 0 | NOASSERTION | 2020-07-28T00:18:25 | 2018-07-18T23:13:58 | Python | UTF-8 | Python | false | false | 1,097 | py | # =============================================================================
# @file substitute_version.py
# @brief Replace version string in holdit_innosetup_script.iss.in
# @author Michael Hucka <[email protected]>
# @license Please see the file named LICENSE in the project directory
# @website https://github.com/caltechlibrary/holdit
# =============================================================================
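# Illustration (hypothetical, not taken from the repository): the Inno Setup
# template holdit_innosetup_script.iss.in is expected to contain a placeholder
# line such as
#   #define MyAppVersion "@@VERSION@@"
# which the code below rewrites with the version string parsed from
# holdit/__version__.py.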
import os
from os import path
this_version = 0
here = path.abspath(path.dirname(__file__))
with open(path.join(here, '../../../holdit/__version__.py')) as f:
lines = f.read().rstrip().splitlines()
for line in [x for x in lines if x.startswith('__') and '=' in x]:
setting = line.split('=')
name = setting[0].strip()
if name == '__version__':
this_version = setting[1].strip().replace("'", '')
with open(path.join(here, 'holdit_innosetup_script.iss.in')) as infile:
with open(path.join(here, 'holdit_innosetup_script.iss'), 'w') as outfile:
outfile.write(infile.read().replace('@@VERSION@@', this_version))
| [
"[email protected]"
] | |
532e42534f668fbcfe7009eba86ba479f485e7fc | 54937a50e74ad209f648f69b4a4509113d90b016 | /unsubscribe/views.py | d96d3bcc0ba01c063caa2d37ec5e26c29e6f0e08 | [
"MIT"
] | permissive | MHM5000/mechanical-mooc | 0a728ab45659075f6cf28bd3cafb72ac9abff259 | a0faa6d06f4cb157e4b92cc35d6d148e53096a41 | refs/heads/master | 2021-01-17T15:16:37.531668 | 2013-11-15T19:56:41 | 2013-11-15T19:56:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 789 | py | from django import http
from django.views.decorators.csrf import csrf_exempt
from mailgun import utils
import models as unsubscribe_model
import logging
log = logging.getLogger(__name__)
@csrf_exempt
def unsubscribe_webhook(request):
verified = utils.verify_webhook(
request.POST.get('token'),
request.POST.get('timestamp'),
request.POST.get('signature')
)
if not verified:
return http.HttpResponseForbidden()
address = request.POST.get('recipient')
try:
if request.POST.get('mailing-list'):
unsubscribe_model.unsubscribe_from_sequence(address)
else:
unsubscribe_model.unsubscribe_user(address)
except:
        log.error(u'Could not unsubscribe {0}'.format(address))
return http.HttpResponse('')
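# For reference only -- a minimal sketch of what a Mailgun webhook check such as
# utils.verify_webhook typically does (assumed implementation, not this project's
# actual helper): the signature is an HMAC-SHA256 digest of timestamp + token,
# keyed with the Mailgun API key.
#
#   import hashlib
#   import hmac
#
#   def verify_webhook(token, timestamp, signature, api_key='<your-mailgun-key>'):
#       digest = hmac.new(key=api_key.encode('utf-8'),
#                         msg=(timestamp + token).encode('utf-8'),
#                         digestmod=hashlib.sha256).hexdigest()
#       return hmac.compare_digest(digest, signature)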
| [
"[email protected]"
] | |
e145bb17d89f0c7fbcba3dcc46cf5bdd6350a7af | 3fcd2c184abaa9bef5f4a916fbf0e9587da06346 | /ByTags/Two_pointers/N_Sum/Two_Sum.py | 5e3fdf31bf183199ab56e2d7f281fb29c83cf6ef | [] | no_license | chinitacode/Python_Learning | 865ff42722e256776ae91d744b779fa476e23f45 | 49aa02367e3097aca107b70dab43b5f60a67ef9f | refs/heads/master | 2020-06-29T01:05:39.331297 | 2020-03-21T14:29:51 | 2020-03-21T14:29:51 | 200,393,997 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,263 | py | '''
1. Two Sum [Easy]
Given an array of integers nums and a target value target, find the two integers in the array
whose sum equals the target and return their array indices.
You may assume that each input has exactly one answer, and you may not use the same element twice.
Example:
Given nums = [2, 7, 11, 15], target = 9,
because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
[Method 1] Dictionary / hash map (trade space for time)
[Time]: O(N)
[Space]: O(N)
'''
from typing import List
class Solution:
def twoSum(self, nums: List[int], target: int) -> List[int]:
visited = {}
for i, num in enumerate(nums):
if target-num in visited:
return [visited[target-num], i]
visited[num] = i
return []
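# Worked illustration of the hash-map approach above (not part of the judged
# solution): for nums = [2, 7, 11, 15] and target = 9, the loop first stores
# visited = {2: 0}; at i = 1 it finds target - 7 = 2 already in visited and
# returns [0, 1] after a single pass.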
'''
[Method 2]: Two-Pointer
The two-pointer technique only applies when the array is already sorted, or when the array is
unsorted but we are only asked to return the two values that sum to target rather than their indices.
Sorting scrambles the original indices; even if we first record each value:index in a dictionary,
duplicates mean the dictionary keeps only the latest index for a value, so the result is wrong
whenever 2 * num == target.
'''
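# Worked illustration of the duplicate problem described above (illustration
# only): with nums = [3, 3] and target = 6, the comprehension {num: i} collapses
# to {3: 1}, so after sorting both pointers map back to index 1 and the method
# would return [1, 1], reusing one element instead of the correct [0, 1].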
# If we only need to return the two numbers whose sum equals target:
class Solution:
def twoSum(self, nums: List[int], target: int) -> List[int]:
nums.sort()
if not nums or nums [0] > target: return []
i, j = 0, len(nums)-1
while nums[j] > target:
j -= 1
while i < j:
if nums[i] + nums[j] == target:
return [nums[i], nums[j]]
elif nums[i] + nums[j] < target:
i += 1
else:
j -= 1
return []
# If indices are required and the array is unsorted but contains no duplicates:
class Solution:
def twoSum(self, nums: List[int], target: int) -> List[int]:
dic = {num:i for i, num in enumerate(nums)}
nums.sort()
if not nums or nums [0] > target: return []
i, j = 0, len(nums)-1
while nums[j] > target:
j -= 1
while i < j:
if nums[i] + nums[j] == target:
return [dic[nums[i]], dic[nums[j]]]
elif nums[i] + nums[j] < target:
i += 1
else:
j -= 1
return []
| [
"[email protected]"
] | |
f8b6383e310a57cf9e66e15cab84cf3a09b22109 | 2071cf1aec8e9762a70b8c932943d8769da7f37a | /python_source/gui/tkinter/tkcascade.py | e795b56bd5793825da24ee9c77edbe452742da7b | [] | no_license | rinkeigun/linux_module | 70581a793d1f170ad1a776fd1acf4cda1abecd52 | 94450fb2c6af0fc56f451ae9bf74f7aca248d0a6 | refs/heads/master | 2020-05-21T17:49:15.608902 | 2018-09-18T22:57:40 | 2018-09-18T22:57:40 | 60,572,658 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,149 | py | from tkinter import *
import tkinter.messagebox as tkmsg
root = Tk()
root.option_add('*font', ('FixedSys', 14))
var = StringVar()
var.set('normal')
def dummy(): pass
# Change the state of the menu entries
def change_state():
m1.entryconfigure('Menu1', state = var.get())
m1.entryconfigure('Menu2', state = var.get())
m1.entryconfigure('Menu3', state = var.get())
m0.entryconfigure('SingleMenu', state = var.get())
def tandoku():
res=tkmsg.askquestion( title = 'about', message = '単独メニューが選ばれました')
print('戻り値:',res)
# Menu setup
m0 = Menu(root)
root.configure(menu = m0)
m1 = Menu(m0, tearoff = False)
m0.add_cascade(label = 'カスケードMenu', under = 0, menu = m1)
m1.add_command(label = 'Menu1', command = dummy)
m1.add_command(label = 'Menu2', command = dummy)
m1.add_command(label = 'Menu3', command = dummy)
m0.add_command(label = 'SingleMenu', command = tandoku)
# Radio button setup
for x in ('normal', 'active', 'disabled'):
Radiobutton(root, text = x, value = x,
variable = var, command = change_state).pack(anchor = W)
root.mainloop() | [
"[email protected]"
] | |
0087bc89d3f65b6eb2c771479968d35395ed9bca | e7b7d22571fba04f333422a4d39cc24a9b6ccc18 | /btre/accounts/models.py | 5037b67096f7019f1fa95cf54cdf9d9aa17fa811 | [] | no_license | fayblash/RealEstateSite | 21ca7ef15d3e10d44e95e6d1028943f230166d64 | 49c94ccef58fd1a6bc0b022a8221f04d4163c2d6 | refs/heads/main | 2023-05-09T20:36:38.472318 | 2021-06-07T19:39:39 | 2021-06-07T19:39:39 | 374,766,507 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 185 | py | from django.db import models
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver
# Create your models here.
| [
"[email protected]"
] | |
86204860d504da65e8bee2af93e4a12db6e22975 | 0f79fd61dc47fcafe22f83151c4cf5f2f013a992 | /BOJ/20061.py | 513d669ff1856e5b91e001f0bd0c2ed3df46b2e8 | [] | no_license | sangm1n/problem-solving | 670e119f28b0f0e293dbc98fc8a1aea74ea465ab | bc03f8ea9a6a4af5d58f8c45c41e9f6923f55c62 | refs/heads/master | 2023-04-22T17:56:21.967766 | 2021-05-05T12:34:01 | 2021-05-05T12:34:01 | 282,863,638 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,676 | py | """
Monomino-domino is a game played on a board shaped as described below. The board consists of a red board, a blue board, and a green board joined together as in the figure.
In the coordinates (x, y) used by the game, x is the row and y is the column. The coordinates used by the red, blue, and green boards are written on the figure in their respective colors.
A block used in this game is a single tile, or two tiles joined horizontally or vertically. There are three kinds, whose sizes from left to right are 1×1, 1×2, and 2×1.
A row or column completely filled with tiles and a block sitting on the light-colored cells can occur at the same time.
In that case, the scoring process is repeated until no row or column is completely filled, and only afterwards is the case of blocks on the light-colored cells handled.
Once placed on the board, a block never merges with another block. Given, in order, the positions where blocks are dropped, compute the score obtained and the total number of cells on the green and blue boards that contain tiles.
"""
import sys
input = sys.stdin.readline
N = int(input())
red = [[0] * 4 for _ in range(4)]
blue = [[0] * 6 for _ in range(4)]
green = [[0] * 4 for _ in range(6)]
def green_domino(t, tile):
dx, dy = 1, 0
if t == 1:
x1, y1 = -1, tile[1]
while True:
nx, ny = x1 + dx, y1 + dy
if nx > 5 or green[nx][ny] != 0:
break
x1, y1 = nx, ny
green[nx-1][ny] = 1
else:
if t == 2:
x1, y1 = -1, tile[0][1]
x2, y2 = -1, tile[1][1]
else:
x1, y1 = -1, tile[0][1]
x2, y2 = 0, tile[1][1]
while True:
nx1, ny1 = x1 + dx, y1 + dy
nx2, ny2 = x2 + dx, y2 + dy
if nx1 > 5 or nx2 > 5 or green[nx1][ny1] != 0 or green[nx2][ny2] != 0:
break
x1, y1 = nx1, ny1
x2, y2 = nx2, ny2
green[nx1-1][ny1] = 1
green[nx2-1][ny2] = 1
def blue_domino(t, tile):
dx, dy = 0, 1
if t == 1:
x1, y1 = tile[0], -1
while True:
nx, ny = x1 + dx, y1 + dy
if ny > 5 or blue[nx][ny] != 0:
break
x1, y1 = nx, ny
blue[nx][ny-1] = 1
else:
if t == 2:
x1, y1 = tile[0][0], -1
x2, y2 = tile[1][0], 0
else:
x1, y1 = tile[0][0], -1
x2, y2 = tile[1][0], -1
while True:
nx1, ny1 = x1 + dx, y1 + dy
nx2, ny2 = x2 + dx, y2 + dy
if ny1 > 5 or ny2 > 5 or blue[nx1][ny1] != 0 or blue[nx2][ny2] != 0:
break
x1, y1 = nx1, ny1
x2, y2 = nx2, ny2
blue[nx1][ny1-1] = 1
blue[nx2][ny2-1] = 1
def green_check():
global score
for i in range(6):
if green[i].count(1) == 4:
for j in range(i, -1, -1):
green[j] = green[j-1]
green[0] = [0, 0, 0, 0]
score += 1
block_count = 0
for i in range(2):
if green[i].count(1) > 0:
block_count += 1
for i in range(block_count):
green.pop()
green.insert(0, [0, 0, 0, 0])
def blue_check():
global score
for i in range(6):
cnt = 0
for j in range(4):
if blue[j][i] == 1:
cnt += 1
if cnt == 4:
for j in range(i, -1, -1):
for k in range(4):
blue[k][j] = blue[k][j-1]
for j in range(4):
blue[j][0] = 0
score += 1
block_count = 0
for i in range(2):
for j in range(4):
if blue[j][i] == 1:
block_count += 1
break
for i in range(block_count):
for j in range(4):
blue[j].pop()
blue[j].insert(0, 0)
def green_sum():
global total
for i in range(6):
for j in range(4):
if green[i][j] == 1:
total += 1
def blue_sum():
global total
for i in range(4):
for j in range(6):
if blue[i][j] == 1:
total += 1
score, total = 0, 0
for _ in range(N):
t, x, y = map(int, input().split())
if t == 1:
tile = (x, y)
elif t == 2:
tile = [(x, y), (x, y+1)]
elif t == 3:
tile = [(x, y), (x+1, y)]
green_domino(t, tile)
blue_domino(t, tile)
green_check()
blue_check()
green_sum()
blue_sum()
print(score)
print(total)
| [
"[email protected]"
] | |
520f12149113622abf4fab89bc8f557df8cb8448 | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/nouns/_stopcocks.py | 68ca2f6991ffbd35b964e278c28a3a6940bfce02 | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 252 | py |
from xai.brain.wordbase.nouns._stopcock import _STOPCOCK
# class header
class _STOPCOCKS(_STOPCOCK, ):
def __init__(self,):
_STOPCOCK.__init__(self)
self.name = "STOPCOCKS"
self.specie = 'nouns'
self.basic = "stopcock"
self.jsondata = {}
| [
"[email protected]"
] | |
8eead63d1c460a41992cf8c79c6553c01f6c96fe | 7dced0f2325269e8a4ebfdddd24330a0d33b1a53 | /os_brick/tests/initiator/test_connector.py | 2e985feaae00282788caeb2395eeafde2c6191b6 | [
"Apache-2.0"
] | permissive | hemna/os-brick | d0fc017a4a7b62f8d8ab9cba1bb36b8191339422 | 4f4ced7d8ff1c043065091771c2a8c9d69161d96 | refs/heads/master | 2021-05-30T15:54:17.668144 | 2015-02-06T19:50:34 | 2015-02-06T19:50:34 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 46,237 | py | # (c) Copyright 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os.path
import socket
import string
import tempfile
import time
import mock
from oslo_concurrency import processutils as putils
import testtools
from os_brick import exception
from os_brick.i18n import _LE
from os_brick.initiator import connector
from os_brick.initiator import host_driver
from os_brick.initiator import linuxfc
from os_brick.openstack.common import log as logging
from os_brick.openstack.common import loopingcall
from os_brick.tests import base
LOG = logging.getLogger(__name__)
MY_IP = '10.0.0.1'
class ConnectorUtilsTestCase(base.TestCase):
@mock.patch.object(socket, 'gethostname', return_value='fakehost')
@mock.patch.object(connector.ISCSIConnector, 'get_initiator',
return_value='fakeinitiator')
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_wwpns',
return_value=None)
@mock.patch.object(linuxfc.LinuxFibreChannel, 'get_fc_wwnns',
return_value=None)
def _test_brick_get_connector_properties(self, multipath,
enforce_multipath,
multipath_result,
mock_wwnns, mock_wwpns,
mock_initiator, mock_gethostname):
props_actual = connector.get_connector_properties('sudo',
MY_IP,
multipath,
enforce_multipath)
props = {'initiator': 'fakeinitiator',
'host': 'fakehost',
'ip': MY_IP,
'multipath': multipath_result}
self.assertEqual(props, props_actual)
def test_brick_get_connector_properties(self):
self._test_brick_get_connector_properties(False, False, False)
@mock.patch.object(putils, 'execute')
def test_brick_get_connector_properties_multipath(self, mock_execute):
self._test_brick_get_connector_properties(True, True, True)
mock_execute.assert_called_once_with('multipathd', 'show', 'status',
run_as_root=True,
root_helper='sudo')
@mock.patch.object(putils, 'execute',
side_effect=putils.ProcessExecutionError)
def test_brick_get_connector_properties_fallback(self, mock_execute):
self._test_brick_get_connector_properties(True, False, False)
mock_execute.assert_called_once_with('multipathd', 'show', 'status',
run_as_root=True,
root_helper='sudo')
@mock.patch.object(putils, 'execute',
side_effect=putils.ProcessExecutionError)
def test_brick_get_connector_properties_raise(self, mock_execute):
self.assertRaises(putils.ProcessExecutionError,
self._test_brick_get_connector_properties,
True, True, None)
class ConnectorTestCase(base.TestCase):
def setUp(self):
super(ConnectorTestCase, self).setUp()
self.cmds = []
def fake_execute(self, *cmd, **kwargs):
self.cmds.append(string.join(cmd))
return "", None
def test_connect_volume(self):
self.connector = connector.InitiatorConnector(None)
self.assertRaises(NotImplementedError,
self.connector.connect_volume, None)
def test_disconnect_volume(self):
self.connector = connector.InitiatorConnector(None)
self.assertRaises(NotImplementedError,
self.connector.disconnect_volume, None, None)
def test_factory(self):
obj = connector.InitiatorConnector.factory('iscsi', None)
self.assertEqual(obj.__class__.__name__, "ISCSIConnector")
obj = connector.InitiatorConnector.factory('fibre_channel', None)
self.assertEqual(obj.__class__.__name__, "FibreChannelConnector")
obj = connector.InitiatorConnector.factory('aoe', None)
self.assertEqual(obj.__class__.__name__, "AoEConnector")
obj = connector.InitiatorConnector.factory(
'nfs', None, nfs_mount_point_base='/mnt/test')
self.assertEqual(obj.__class__.__name__, "RemoteFsConnector")
obj = connector.InitiatorConnector.factory(
'glusterfs', None, glusterfs_mount_point_base='/mnt/test')
self.assertEqual(obj.__class__.__name__, "RemoteFsConnector")
obj = connector.InitiatorConnector.factory('local', None)
self.assertEqual(obj.__class__.__name__, "LocalConnector")
self.assertRaises(ValueError,
connector.InitiatorConnector.factory,
"bogus", None)
def test_check_valid_device_with_wrong_path(self):
self.connector = connector.InitiatorConnector(None)
self.connector._execute = \
lambda *args, **kwargs: ("", None)
self.assertFalse(self.connector.check_valid_device('/d0v'))
def test_check_valid_device(self):
self.connector = connector.InitiatorConnector(None)
self.connector._execute = \
lambda *args, **kwargs: ("", "")
self.assertTrue(self.connector.check_valid_device('/dev'))
def test_check_valid_device_with_cmd_error(self):
def raise_except(*args, **kwargs):
raise putils.ProcessExecutionError
self.connector = connector.InitiatorConnector(None)
self.connector._execute = mock.Mock()
self.connector._execute.side_effect = raise_except
self.assertFalse(self.connector.check_valid_device('/dev'))
class HostDriverTestCase(base.TestCase):
def setUp(self):
super(HostDriverTestCase, self).setUp()
isdir_mock = mock.Mock()
isdir_mock.return_value = True
os.path.isdir = isdir_mock
self.devlist = ['device1', 'device2']
listdir_mock = mock.Mock()
listdir_mock.return_value = self.devlist
os.listdir = listdir_mock
def test_host_driver(self):
expected = ['/dev/disk/by-path/' + dev for dev in self.devlist]
driver = host_driver.HostDriver()
actual = driver.get_all_block_devices()
self.assertEqual(expected, actual)
class ISCSIConnectorTestCase(ConnectorTestCase):
def setUp(self):
super(ISCSIConnectorTestCase, self).setUp()
self.connector = connector.ISCSIConnector(
None, execute=self.fake_execute, use_multipath=False)
self.connector_with_multipath = connector.ISCSIConnector(
None, execute=self.fake_execute, use_multipath=True)
get_name_mock = mock.Mock()
get_name_mock.return_value = "/dev/sdb"
self.connector._linuxscsi.get_name_from_path = get_name_mock
def iscsi_connection(self, volume, location, iqn):
return {
'driver_volume_type': 'iscsi',
'data': {
'volume_id': volume['id'],
'target_portal': location,
'target_iqn': iqn,
'target_lun': 1,
}
}
def iscsi_connection_multipath(self, volume, locations, iqns, luns):
return {
'driver_volume_type': 'iscsi',
'data': {
'volume_id': volume['id'],
'target_portals': locations,
'target_iqns': iqns,
'target_luns': luns,
}
}
def test_get_initiator(self):
def initiator_no_file(*args, **kwargs):
raise putils.ProcessExecutionError('No file')
def initiator_get_text(*arg, **kwargs):
text = ('## DO NOT EDIT OR REMOVE THIS FILE!\n'
'## If you remove this file, the iSCSI daemon '
'will not start.\n'
'## If you change the InitiatorName, existing '
'access control lists\n'
'## may reject this initiator. The InitiatorName must '
'be unique\n'
'## for each iSCSI initiator. Do NOT duplicate iSCSI '
'InitiatorNames.\n'
'InitiatorName=iqn.1234-56.foo.bar:01:23456789abc')
return text, None
self.connector._execute = initiator_no_file
initiator = self.connector.get_initiator()
self.assertIsNone(initiator)
self.connector._execute = initiator_get_text
initiator = self.connector.get_initiator()
self.assertEqual(initiator, 'iqn.1234-56.foo.bar:01:23456789abc')
@testtools.skipUnless(os.path.exists('/dev/disk/by-path'),
'Test requires /dev/disk/by-path')
def test_connect_volume(self):
exists_mock = mock.Mock()
exists_mock.return_value = True
os.path.exists = exists_mock
location = '10.0.2.15:3260'
name = 'volume-00000001'
iqn = 'iqn.2010-10.org.openstack:%s' % name
vol = {'id': 1, 'name': name}
connection_info = self.iscsi_connection(vol, location, iqn)
device = self.connector.connect_volume(connection_info['data'])
dev_str = '/dev/disk/by-path/ip-%s-iscsi-%s-lun-1' % (location, iqn)
self.assertEqual(device['type'], 'block')
self.assertEqual(device['path'], dev_str)
self.connector.disconnect_volume(connection_info['data'], device)
expected_commands = [('iscsiadm -m node -T %s -p %s' %
(iqn, location)),
('iscsiadm -m session'),
('iscsiadm -m node -T %s -p %s --login' %
(iqn, location)),
('iscsiadm -m node -T %s -p %s --op update'
' -n node.startup -v automatic'
% (iqn, location)),
('iscsiadm -m node --rescan'),
('iscsiadm -m session --rescan'),
('blockdev --flushbufs /dev/sdb'),
('tee -a /sys/block/sdb/device/delete'),
('iscsiadm -m node -T %s -p %s --op update'
' -n node.startup -v manual' % (iqn, location)),
('iscsiadm -m node -T %s -p %s --logout' %
(iqn, location)),
('iscsiadm -m node -T %s -p %s --op delete' %
(iqn, location)), ]
LOG.debug("self.cmds = %s" % self.cmds)
LOG.debug("expected = %s" % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test_connect_volume_with_multipath(self):
location = '10.0.2.15:3260'
name = 'volume-00000001'
iqn = 'iqn.2010-10.org.openstack:%s' % name
vol = {'id': 1, 'name': name}
connection_properties = self.iscsi_connection(vol, location, iqn)
self.connector_with_multipath = \
connector.ISCSIConnector(None, use_multipath=True)
self.connector_with_multipath._run_iscsiadm_bare = \
lambda *args, **kwargs: "%s %s" % (location, iqn)
portals_mock = mock.Mock()
portals_mock.return_value = [[location, iqn]]
self.connector_with_multipath.\
_get_target_portals_from_iscsiadm_output = portals_mock
connect_to_mock = mock.Mock()
connect_to_mock.return_value = None
self.connector_with_multipath._connect_to_iscsi_portal = \
connect_to_mock
rescan_iscsi_mock = mock.Mock()
rescan_iscsi_mock.return_value = None
self.connector_with_multipath._rescan_iscsi = rescan_iscsi_mock
rescan_multipath_mock = mock.Mock()
rescan_multipath_mock.return_value = None
self.connector_with_multipath._rescan_multipath = \
rescan_multipath_mock
get_device_mock = mock.Mock()
get_device_mock.return_value = 'iqn.2010-10.org.openstack:%s' % name
self.connector_with_multipath._get_multipath_device_name = \
get_device_mock
exists_mock = mock.Mock()
exists_mock.return_value = True
os.path.exists = exists_mock
result = self.connector_with_multipath.connect_volume(
connection_properties['data'])
expected_result = {'path': 'iqn.2010-10.org.openstack:volume-00000001',
'type': 'block'}
self.assertEqual(result, expected_result)
@mock.patch.object(os.path, 'exists', return_value=True)
@mock.patch.object(host_driver.HostDriver, 'get_all_block_devices')
@mock.patch.object(connector.ISCSIConnector, '_rescan_multipath')
@mock.patch.object(connector.ISCSIConnector, '_run_multipath')
@mock.patch.object(connector.ISCSIConnector, '_get_multipath_device_name')
@mock.patch.object(connector.ISCSIConnector, '_get_multipath_iqn')
def test_connect_volume_with_multiple_portals(
self, mock_get_iqn, mock_device_name, mock_run_multipath,
mock_rescan_multipath, mock_devices, mock_exists):
location1 = '10.0.2.15:3260'
location2 = '10.0.3.15:3260'
name1 = 'volume-00000001-1'
name2 = 'volume-00000001-2'
iqn1 = 'iqn.2010-10.org.openstack:%s' % name1
iqn2 = 'iqn.2010-10.org.openstack:%s' % name2
fake_multipath_dev = '/dev/mapper/fake-multipath-dev'
vol = {'id': 1, 'name': name1}
connection_properties = self.iscsi_connection_multipath(
vol, [location1, location2], [iqn1, iqn2], [1, 2])
devs = ['/dev/disk/by-path/ip-%s-iscsi-%s-lun-1' % (location1, iqn1),
'/dev/disk/by-path/ip-%s-iscsi-%s-lun-2' % (location2, iqn2)]
mock_devices.return_value = devs
mock_device_name.return_value = fake_multipath_dev
mock_get_iqn.return_value = [iqn1, iqn2]
result = self.connector_with_multipath.connect_volume(
connection_properties['data'])
expected_result = {'path': fake_multipath_dev, 'type': 'block'}
cmd_format = 'iscsiadm -m node -T %s -p %s --%s'
expected_commands = [cmd_format % (iqn1, location1, 'login'),
cmd_format % (iqn2, location2, 'login')]
self.assertEqual(expected_result, result)
for command in expected_commands:
self.assertIn(command, self.cmds)
mock_device_name.assert_called_once_with(devs[0])
self.cmds = []
self.connector_with_multipath.disconnect_volume(
connection_properties['data'], result)
expected_commands = [cmd_format % (iqn1, location1, 'logout'),
cmd_format % (iqn2, location2, 'logout')]
for command in expected_commands:
self.assertIn(command, self.cmds)
@mock.patch.object(os.path, 'exists')
@mock.patch.object(host_driver.HostDriver, 'get_all_block_devices')
@mock.patch.object(connector.ISCSIConnector, '_rescan_multipath')
@mock.patch.object(connector.ISCSIConnector, '_run_multipath')
@mock.patch.object(connector.ISCSIConnector, '_get_multipath_device_name')
@mock.patch.object(connector.ISCSIConnector, '_get_multipath_iqn')
@mock.patch.object(connector.ISCSIConnector, '_run_iscsiadm')
def test_connect_volume_with_multiple_portals_primary_error(
self, mock_iscsiadm, mock_get_iqn, mock_device_name,
mock_run_multipath, mock_rescan_multipath, mock_devices,
mock_exists):
location1 = '10.0.2.15:3260'
location2 = '10.0.3.15:3260'
name1 = 'volume-00000001-1'
name2 = 'volume-00000001-2'
iqn1 = 'iqn.2010-10.org.openstack:%s' % name1
iqn2 = 'iqn.2010-10.org.openstack:%s' % name2
fake_multipath_dev = '/dev/mapper/fake-multipath-dev'
vol = {'id': 1, 'name': name1}
connection_properties = self.iscsi_connection_multipath(
vol, [location1, location2], [iqn1, iqn2], [1, 2])
dev1 = '/dev/disk/by-path/ip-%s-iscsi-%s-lun-1' % (location1, iqn1)
dev2 = '/dev/disk/by-path/ip-%s-iscsi-%s-lun-2' % (location2, iqn2)
def fake_run_iscsiadm(iscsi_properties, iscsi_command, **kwargs):
if iscsi_properties['target_portal'] == location1:
if iscsi_command == ('--login',):
raise putils.ProcessExecutionError(None, None, 21)
return mock.DEFAULT
mock_exists.side_effect = lambda x: x != dev1
mock_devices.return_value = [dev2]
mock_device_name.return_value = fake_multipath_dev
mock_get_iqn.return_value = [iqn2]
mock_iscsiadm.side_effect = fake_run_iscsiadm
props = connection_properties['data'].copy()
result = self.connector_with_multipath.connect_volume(
connection_properties['data'])
expected_result = {'path': fake_multipath_dev, 'type': 'block'}
self.assertEqual(expected_result, result)
mock_device_name.assert_called_once_with(dev2)
props['target_portal'] = location1
props['target_iqn'] = iqn1
mock_iscsiadm.assert_any_call(props, ('--login',),
check_exit_code=[0, 255])
props['target_portal'] = location2
props['target_iqn'] = iqn2
mock_iscsiadm.assert_any_call(props, ('--login',),
check_exit_code=[0, 255])
mock_iscsiadm.reset_mock()
self.connector_with_multipath.disconnect_volume(
connection_properties['data'], result)
props = connection_properties['data'].copy()
props['target_portal'] = location1
props['target_iqn'] = iqn1
mock_iscsiadm.assert_any_call(props, ('--logout',),
check_exit_code=[0, 21, 255])
props['target_portal'] = location2
props['target_iqn'] = iqn2
mock_iscsiadm.assert_any_call(props, ('--logout',),
check_exit_code=[0, 21, 255])
def test_connect_volume_with_not_found_device(self):
exists_mock = mock.Mock()
exists_mock.return_value = False
os.path.exists = exists_mock
sleep_mock = mock.Mock()
sleep_mock.return_value = False
time.sleep = sleep_mock
location = '10.0.2.15:3260'
name = 'volume-00000001'
iqn = 'iqn.2010-10.org.openstack:%s' % name
vol = {'id': 1, 'name': name}
connection_info = self.iscsi_connection(vol, location, iqn)
self.assertRaises(exception.VolumeDeviceNotFound,
self.connector.connect_volume,
connection_info['data'])
def test_get_target_portals_from_iscsiadm_output(self):
connector = self.connector
test_output = '''10.15.84.19:3260 iqn.1992-08.com.netapp:sn.33615311
10.15.85.19:3260 iqn.1992-08.com.netapp:sn.33615311'''
res = connector._get_target_portals_from_iscsiadm_output(test_output)
ip_iqn1 = ['10.15.84.19:3260', 'iqn.1992-08.com.netapp:sn.33615311']
ip_iqn2 = ['10.15.85.19:3260', 'iqn.1992-08.com.netapp:sn.33615311']
expected = [ip_iqn1, ip_iqn2]
self.assertEqual(expected, res)
def test_get_multipath_device_name(self):
realpath = mock.Mock()
realpath.return_value = None
os.path.realpath = realpath
multipath_return_string = [('mpath2 (20017380006c00036)'
'dm-7 IBM,2810XIV')]
self.connector._run_multipath = \
lambda *args, **kwargs: multipath_return_string
expected = '/dev/mapper/mpath2'
self.assertEqual(expected,
self.connector.
_get_multipath_device_name('/dev/md-1'))
def test_get_iscsi_devices(self):
paths = [('ip-10.0.0.1:3260-iscsi-iqn.2013-01.ro.'
'com.netapp:node.netapp02-lun-0')]
walk_mock = lambda x: [(['.'], ['by-path'], paths)]
os.walk = walk_mock
self.assertEqual(self.connector._get_iscsi_devices(), paths)
def test_get_iscsi_devices_with_empty_dir(self):
walk_mock = mock.Mock()
walk_mock.return_value = []
os.walk = walk_mock
self.assertEqual(self.connector._get_iscsi_devices(), [])
def test_get_multipath_iqn(self):
paths = [('ip-10.0.0.1:3260-iscsi-iqn.2013-01.ro.'
'com.netapp:node.netapp02-lun-0')]
realpath = lambda x: '/dev/disk/by-path/%s' % paths[0]
os.path.realpath = realpath
get_iscsi_mock = mock.Mock()
get_iscsi_mock.return_value = paths
self.connector._get_iscsi_devices = get_iscsi_mock
get_multipath_device_mock = mock.Mock()
get_multipath_device_mock.return_value = paths[0]
self.connector._get_multipath_device_name = get_multipath_device_mock
self.assertEqual(self.connector._get_multipath_iqn(paths[0]),
'iqn.2013-01.ro.com.netapp:node.netapp02')
def test_disconnect_volume_multipath_iscsi(self):
result = []
def fake_disconnect_from_iscsi_portal(properties):
result.append(properties)
iqn1 = 'iqn.2013-01.ro.com.netapp:node.netapp01'
iqn2 = 'iqn.2013-01.ro.com.netapp:node.netapp02'
iqns = [iqn1, iqn2]
portal = '10.0.0.1:3260'
dev = ('ip-%s-iscsi-%s-lun-0' % (portal, iqn1))
get_portals_mock = mock.Mock()
get_portals_mock.return_value = [[portal, iqn1]]
rescan_iscsi_mock = mock.Mock()
rescan_iscsi_mock.return_value = None
rescan_multipath = mock.Mock()
rescan_multipath.return_value = None
get_block_devices_mock = mock.Mock()
get_block_devices_mock.return_value = [dev, '/dev/mapper/md-1']
get_multipath_name_mock = mock.Mock()
get_multipath_name_mock.return_value = '/dev/mapper/md-3'
self.connector._get_multipath_iqn = lambda x: iqns.pop()
disconnect_mock = fake_disconnect_from_iscsi_portal
self.connector._disconnect_from_iscsi_portal = disconnect_mock
fake_property = {'target_portal': portal,
'target_iqn': iqn1}
self.connector._disconnect_volume_multipath_iscsi(fake_property,
'fake/multipath')
# Target in use by other mp devices, don't disconnect
self.assertEqual([], result)
def test_disconnect_volume_multipath_iscsi_without_other_mp_devices(self):
result = []
def fake_disconnect_from_iscsi_portal(properties):
result.append(properties)
portal = '10.0.2.15:3260'
name = 'volume-00000001'
iqn = 'iqn.2010-10.org.openstack:%s' % name
get_portals_mock = mock.Mock()
get_portals_mock.return_value = [[portal, iqn]]
self.connector._get_target_portals_from_iscsiadm_output = \
get_portals_mock
rescan_iscsi_mock = mock.Mock()
rescan_iscsi_mock.return_value = None
self.connector._rescan_iscsi = rescan_iscsi_mock
rescan_multipath_mock = mock.Mock()
rescan_multipath_mock.return_value = None
self.connector._rescan_multipath = rescan_multipath_mock
get_all_devices_mock = mock.Mock()
get_all_devices_mock.return_value = []
self.connector.driver.get_all_block_devices = get_all_devices_mock
self.connector._disconnect_from_iscsi_portal = \
fake_disconnect_from_iscsi_portal
fake_property = {'target_portal': portal,
'target_iqn': iqn}
self.connector._disconnect_volume_multipath_iscsi(fake_property,
'fake/multipath')
# Target not in use by other mp devices, disconnect
self.assertEqual([fake_property], result)
def test_disconnect_volume_multipath_iscsi_with_invalid_symlink(self):
result = []
def fake_disconnect_from_iscsi_portal(properties):
result.append(properties)
portal = '10.0.0.1:3260'
name = 'volume-00000001'
iqn = 'iqn.2010-10.org.openstack:%s' % name
dev = ('ip-%s-iscsi-%s-lun-0' % (portal, iqn))
get_portals_mock = mock.Mock()
get_portals_mock.return_value = [[portal, iqn]]
self.connector._get_target_portals_from_iscsiadm_output = \
get_portals_mock
rescan_iscsi_mock = mock.Mock()
rescan_iscsi_mock.return_value = None
self.connector._rescan_iscsi = rescan_iscsi_mock
rescan_multipath_mock = mock.Mock()
rescan_multipath_mock.return_value = None
self.connector._rescan_multipath = rescan_multipath_mock
get_all_devices_mock = mock.Mock()
get_all_devices_mock.return_value = [dev, '/dev/mapper/md-1']
self.connector.driver.get_all_block_devices = get_all_devices_mock
self.connector._disconnect_from_iscsi_portal = \
fake_disconnect_from_iscsi_portal
# Simulate a broken symlink by returning False for os.path.exists(dev)
mock_exists = mock.Mock()
mock_exists.return_value = False
os.path.exists = mock_exists
fake_property = {'target_portal': portal,
'target_iqn': iqn}
self.connector._disconnect_volume_multipath_iscsi(fake_property,
'fake/multipath')
# Target not in use by other mp devices, disconnect
self.assertEqual([fake_property], result)
class FibreChannelConnectorTestCase(ConnectorTestCase):
def setUp(self):
super(FibreChannelConnectorTestCase, self).setUp()
self.connector = connector.FibreChannelConnector(
None, execute=self.fake_execute, use_multipath=False)
self.assertIsNotNone(self.connector)
self.assertIsNotNone(self.connector._linuxfc)
self.assertIsNotNone(self.connector._linuxscsi)
def fake_get_fc_hbas(self):
return [{'ClassDevice': 'host1',
'ClassDevicePath': '/sys/devices/pci0000:00/0000:00:03.0'
'/0000:05:00.2/host1/fc_host/host1',
'dev_loss_tmo': '30',
'fabric_name': '0x1000000533f55566',
'issue_lip': '<store method only>',
'max_npiv_vports': '255',
'maxframe_size': '2048 bytes',
'node_name': '0x200010604b019419',
'npiv_vports_inuse': '0',
'port_id': '0x680409',
'port_name': '0x100010604b019419',
'port_state': 'Online',
'port_type': 'NPort (fabric via point-to-point)',
'speed': '10 Gbit',
'supported_classes': 'Class 3',
'supported_speeds': '10 Gbit',
'symbolic_name': 'Emulex 554M FV4.0.493.0 DV8.3.27',
'tgtid_bind_type': 'wwpn (World Wide Port Name)',
'uevent': None,
'vport_create': '<store method only>',
'vport_delete': '<store method only>'}]
def fake_get_fc_hbas_info(self):
hbas = self.fake_get_fc_hbas()
info = [{'port_name': hbas[0]['port_name'].replace('0x', ''),
'node_name': hbas[0]['node_name'].replace('0x', ''),
'host_device': hbas[0]['ClassDevice'],
'device_path': hbas[0]['ClassDevicePath']}]
return info
def fibrechan_connection(self, volume, location, wwn):
return {'driver_volume_type': 'fibrechan',
'data': {
'volume_id': volume['id'],
'target_portal': location,
'target_wwn': wwn,
'target_lun': 1,
}}
def test_connect_volume(self):
self.connector._linuxfc.get_fc_hbas = self.fake_get_fc_hbas
self.connector._linuxfc.get_fc_hbas_info = \
self.fake_get_fc_hbas_info
exists_mock = mock.Mock()
exists_mock.return_value = True
os.path.exists = exists_mock
realpath_mock = mock.Mock()
realpath_mock.return_value = '/dev/sdb'
os.path.realpath = realpath_mock
multipath_devname = '/dev/md-1'
devices = {"device": multipath_devname,
"id": "1234567890",
"devices": [{'device': '/dev/sdb',
'address': '1:0:0:1',
'host': 1, 'channel': 0,
'id': 0, 'lun': 1}]}
find_device_mock = mock.Mock()
find_device_mock.return_value = devices
self.connector._linuxscsi.find_multipath_device = find_device_mock
remove_device_mock = mock.Mock()
remove_device_mock.return_value = None
self.connector._linuxscsi.remove_scsi_device = remove_device_mock
get_device_info_mock = mock.Mock()
get_device_info_mock.return_value = devices['devices'][0]
self.connector._linuxscsi.get_device_info = get_device_info_mock
location = '10.0.2.15:3260'
name = 'volume-00000001'
vol = {'id': 1, 'name': name}
# Should work for string, unicode, and list
wwns = ['1234567890123456', unicode('1234567890123456'),
['1234567890123456', '1234567890123457']]
for wwn in wwns:
connection_info = self.fibrechan_connection(vol, location, wwn)
dev_info = self.connector.connect_volume(connection_info['data'])
exp_wwn = wwn[0] if isinstance(wwn, list) else wwn
dev_str = ('/dev/disk/by-path/pci-0000:05:00.2-fc-0x%s-lun-1' %
exp_wwn)
self.assertEqual(dev_info['type'], 'block')
self.assertEqual(dev_info['path'], dev_str)
self.connector.disconnect_volume(connection_info['data'], dev_info)
expected_commands = []
self.assertEqual(expected_commands, self.cmds)
# Should not work for anything other than string, unicode, and list
connection_info = self.fibrechan_connection(vol, location, 123)
self.assertRaises(exception.NoFibreChannelHostsFound,
self.connector.connect_volume,
connection_info['data'])
get_fc_hbas_mock = mock.Mock()
get_fc_hbas_mock.return_value = []
self.connector._linuxfc.get_fc_hbas = get_fc_hbas_mock
get_fc_hbas_info_mock = mock.Mock()
get_fc_hbas_info_mock.return_value = []
self.connector._linuxfc.get_fc_hbas_info = get_fc_hbas_info_mock
self.assertRaises(exception.NoFibreChannelHostsFound,
self.connector.connect_volume,
connection_info['data'])
class FakeFixedIntervalLoopingCall(object):
def __init__(self, f=None, *args, **kw):
self.args = args
self.kw = kw
self.f = f
self._stop = False
def stop(self):
self._stop = True
def wait(self):
return self
def start(self, interval, initial_delay=None):
while not self._stop:
try:
self.f(*self.args, **self.kw)
except loopingcall.LoopingCallDone:
return self
except Exception:
LOG.exception(_LE('in fixed duration looping call'))
raise
class AoEConnectorTestCase(ConnectorTestCase):
"""Test cases for AoE initiator class."""
def setUp(self):
super(AoEConnectorTestCase, self).setUp()
self.connector = connector.AoEConnector('sudo')
self.connection_properties = {'target_shelf': 'fake_shelf',
'target_lun': 'fake_lun'}
loopingcall.FixedIntervalLoopingCall = FakeFixedIntervalLoopingCall
def _mock_path_exists(self, aoe_path, mock_values=None):
exists_mock = mock.Mock()
exists_mock.return_value = mock_values
os.path.exists = exists_mock
def test_connect_volume(self):
"""Ensure that if path exist aoe-revaliadte was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
self._mock_path_exists(aoe_path, [True, True])
exec_mock = mock.Mock()
exec_mock.return_value = ["", ""]
self.connector._execute = exec_mock
self.connector.connect_volume(self.connection_properties)
def test_connect_volume_without_path(self):
"""Ensure that if path doesn't exist aoe-discovery was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
expected_info = {
'type': 'block',
'device': aoe_device,
'path': aoe_path,
}
self._mock_path_exists(aoe_path, [False, True])
exec_mock = mock.Mock()
exec_mock.return_value = ["", ""]
self.connector._execute = exec_mock
volume_info = self.connector.connect_volume(
self.connection_properties)
self.assertDictMatch(volume_info, expected_info)
def test_connect_volume_could_not_discover_path(self):
_aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
exists_mock = mock.Mock()
exists_mock.return_value = False
os.path.exists = exists_mock
exec_mock = mock.Mock()
exec_mock.return_value = ["", ""]
self.connector._execute = exec_mock
self.assertRaises(exception.VolumeDeviceNotFound,
self.connector.connect_volume,
self.connection_properties)
def test_disconnect_volume(self):
"""Ensure that if path exist aoe-revaliadte was called."""
aoe_device, aoe_path = self.connector._get_aoe_info(
self.connection_properties)
self._mock_path_exists(aoe_path, [True])
exec_mock = mock.Mock()
exec_mock.return_value = ["", ""]
self.connector._execute = exec_mock
self.connector.disconnect_volume(self.connection_properties, {})
class RemoteFsConnectorTestCase(ConnectorTestCase):
"""Test cases for Remote FS initiator class."""
TEST_DEV = '172.18.194.100:/var/nfs'
TEST_PATH = '/mnt/test/df0808229363aad55c27da50c38d6328'
def setUp(self):
super(RemoteFsConnectorTestCase, self).setUp()
self.connection_properties = {
'export': self.TEST_DEV,
'name': '9c592d52-ce47-4263-8c21-4ecf3c029cdb'}
self.connector = connector.RemoteFsConnector(
'nfs', root_helper='sudo', nfs_mount_point_base='/mnt/test',
nfs_mount_options='vers=3')
def test_connect_volume(self):
"""Test the basic connect volume case."""
client = self.connector._remotefsclient
client.mount = mock.Mock()
client.get_mount_point = mock.Mock()
client.get_mount_point.return_value = "something"
self.connector.connect_volume(self.connection_properties)
def test_disconnect_volume(self):
"""Nothing should happen here -- make sure it doesn't blow up."""
self.connector.disconnect_volume(self.connection_properties, {})
class LocalConnectorTestCase(base.TestCase):
def setUp(self):
super(LocalConnectorTestCase, self).setUp()
self.connection_properties = {'name': 'foo',
'device_path': '/tmp/bar'}
def test_connect_volume(self):
self.connector = connector.LocalConnector(None)
cprops = self.connection_properties
dev_info = self.connector.connect_volume(cprops)
self.assertEqual(dev_info['type'], 'local')
self.assertEqual(dev_info['path'], cprops['device_path'])
def test_connect_volume_with_invalid_connection_data(self):
self.connector = connector.LocalConnector(None)
cprops = {}
self.assertRaises(ValueError,
self.connector.connect_volume, cprops)
class HuaweiStorHyperConnectorTestCase(ConnectorTestCase):
"""Test cases for StorHyper initiator class."""
attached = False
def setUp(self):
super(HuaweiStorHyperConnectorTestCase, self).setUp()
self.fake_sdscli_file = tempfile.mktemp()
self.addCleanup(os.remove, self.fake_sdscli_file)
newefile = open(self.fake_sdscli_file, 'w')
newefile.write('test')
newefile.close()
self.connector = connector.HuaweiStorHyperConnector(
None, execute=self.fake_execute)
self.connector.cli_path = self.fake_sdscli_file
self.connector.iscliexist = True
self.connector_fail = connector.HuaweiStorHyperConnector(
None, execute=self.fake_execute_fail)
self.connector_fail.cli_path = self.fake_sdscli_file
self.connector_fail.iscliexist = True
self.connector_nocli = connector.HuaweiStorHyperConnector(
None, execute=self.fake_execute_fail)
self.connector_nocli.cli_path = self.fake_sdscli_file
self.connector_nocli.iscliexist = False
self.connection_properties = {
'access_mode': 'rw',
'qos_specs': None,
'volume_id': 'volume-b2911673-863c-4380-a5f2-e1729eecfe3f'
}
self.device_info = {'type': 'block',
'path': '/dev/vdxxx'}
HuaweiStorHyperConnectorTestCase.attached = False
def fake_execute(self, *cmd, **kwargs):
method = cmd[2]
self.cmds.append(string.join(cmd))
if 'attach' == method:
HuaweiStorHyperConnectorTestCase.attached = True
return 'ret_code=0', None
if 'querydev' == method:
if HuaweiStorHyperConnectorTestCase.attached:
return 'ret_code=0\ndev_addr=/dev/vdxxx', None
else:
return 'ret_code=1\ndev_addr=/dev/vdxxx', None
if 'detach' == method:
HuaweiStorHyperConnectorTestCase.attached = False
return 'ret_code=0', None
def fake_execute_fail(self, *cmd, **kwargs):
method = cmd[2]
self.cmds.append(string.join(cmd))
if 'attach' == method:
HuaweiStorHyperConnectorTestCase.attached = False
return 'ret_code=330151401', None
if 'querydev' == method:
if HuaweiStorHyperConnectorTestCase.attached:
return 'ret_code=0\ndev_addr=/dev/vdxxx', None
else:
return 'ret_code=1\ndev_addr=/dev/vdxxx', None
if 'detach' == method:
HuaweiStorHyperConnectorTestCase.attached = True
return 'ret_code=330155007', None
def test_connect_volume(self):
"""Test the basic connect volume case."""
retval = self.connector.connect_volume(self.connection_properties)
self.assertEqual(self.device_info, retval)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test_disconnect_volume(self):
"""Test the basic disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.connector.disconnect_volume(self.connection_properties,
self.device_info)
self.assertEqual(False, HuaweiStorHyperConnectorTestCase.attached)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test_is_volume_connected(self):
"""Test if volume connected to host case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
is_connected = self.connector.is_volume_connected(
'volume-b2911673-863c-4380-a5f2-e1729eecfe3f')
self.assertEqual(HuaweiStorHyperConnectorTestCase.attached,
is_connected)
self.connector.disconnect_volume(self.connection_properties,
self.device_info)
self.assertEqual(False, HuaweiStorHyperConnectorTestCase.attached)
is_connected = self.connector.is_volume_connected(
'volume-b2911673-863c-4380-a5f2-e1729eecfe3f')
self.assertEqual(HuaweiStorHyperConnectorTestCase.attached,
is_connected)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test__analyze_output(self):
cliout = 'ret_code=0\ndev_addr=/dev/vdxxx\nret_desc="success"'
analyze_result = {'dev_addr': '/dev/vdxxx',
'ret_desc': '"success"',
'ret_code': '0'}
result = self.connector._analyze_output(cliout)
self.assertEqual(analyze_result, result)
def test_connect_volume_fail(self):
"""Test the fail connect volume case."""
self.assertRaises(exception.BrickException,
self.connector_fail.connect_volume,
self.connection_properties)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test_disconnect_volume_fail(self):
"""Test the fail disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.assertRaises(exception.BrickException,
self.connector_fail.disconnect_volume,
self.connection_properties,
self.device_info)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c detach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
self.assertEqual(expected_commands, self.cmds)
def test_connect_volume_nocli(self):
"""Test the fail connect volume case."""
self.assertRaises(exception.BrickException,
self.connector_nocli.connect_volume,
self.connection_properties)
def test_disconnect_volume_nocli(self):
"""Test the fail disconnect volume case."""
self.connector.connect_volume(self.connection_properties)
self.assertEqual(True, HuaweiStorHyperConnectorTestCase.attached)
self.assertRaises(exception.BrickException,
self.connector_nocli.disconnect_volume,
self.connection_properties,
self.device_info)
expected_commands = [self.fake_sdscli_file + ' -c attach'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f',
self.fake_sdscli_file + ' -c querydev'
' -v volume-b2911673-863c-4380-a5f2-e1729eecfe3f']
LOG.debug("self.cmds = %s." % self.cmds)
LOG.debug("expected = %s." % expected_commands)
| [
"[email protected]"
] | |
0a9a17d46c80539a638273628970b034d16c45ab | cbf9f600374d7510988632d7dba145c8ff0cd1f0 | /AISing/c.py | 1bced9d46211c10b0d06d2db890caae23670859b | [] | no_license | sakakazu2468/AtCoder_py | d0945d03ad562474e40e413abcec39ded61e6855 | 34bdf39ee9647e7aee17e48c928ce5288a1bfaa5 | refs/heads/master | 2022-04-27T18:32:28.825004 | 2022-04-21T07:27:00 | 2022-04-21T07:27:00 | 225,844,364 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 404 | py | n = int(input())
xyz = [[[0 for i in range(101)] for j in range(101)] for k in range(101)]
ans = [0 for i in range(n+1)]
def gen_n(x, y, z):
return x**2 + y**2 + z**2 + x*y + y*z + z*x
for i in range(1, 101):
for j in range(1, 101):
for k in range(1, 101):
num = gen_n(i, j, k)
if n >= num:
ans[num] += 1
for i in range(1, n+1):
print(ans[i])
| [
"[email protected]"
] | |
7e9fe626518f5b85951b1e02fa7d6781d85d37d9 | 664646ccbeb6575582299e7d1c6ccc696f07ccba | /web/route/user/api.py | 3533bfbe0e367bb6cc3ade981e379c80b4e3e156 | [] | no_license | 0xss/bayonet | 3f1ce5832a06eef7e60b198c6c56cf59e4543199 | d723dbf0299ac86d9a4419741a197985558e283c | refs/heads/master | 2021-02-25T20:21:11.342592 | 2020-03-06T04:40:14 | 2020-03-06T04:40:14 | 245,462,098 | 0 | 1 | null | 2020-03-06T16:02:33 | 2020-03-06T16:02:32 | null | UTF-8 | Python | false | false | 18,514 | py | from flask_restful import reqparse, Resource
from flask import session, request, json
from werkzeug.security import check_password_hash, generate_password_hash
import datetime
from web.utils.logs import logger
from web.models import User, UserLoginLogs, UserLogs
from web import DB, APP
from web.utils.auxiliary import addlog
class UserLogin(Resource):
    '''User login class.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("username", type=str, required=True, location='json')
self.parser.add_argument("password", type=str, required=True, location='json')
self.parser.add_argument("captcha", type=str, required=True, location='json')
self.parser.add_argument("rememberMe", type=bool, location='json')
def post(self):
        '''Login endpoint.'''
args = self.parser.parse_args()
key_username = args.username
key_password = args.password
key_vercode = args.captcha
key_remember = args.rememberMe
try: # 获取客户端IP地址
login_ip = request.headers['X-Forwarded-For'].split(',')[0]
except:
login_ip = request.remote_addr
if 'code' not in session: # 判断session中是否有验证码
return {'result': {'status_code': 202}}
if session.get('code').lower() != key_vercode.lower(): # 判断验证码结果
logger.log('INFOR', f'用户[{key_username}]登录失败,原因:验证码错误,IP:{login_ip}')
session.pop('code')
return {'result': {'status_code': 202}}
session.pop('code')
user_query = User.query.filter(User.username == key_username).first() # 进行数据库查询
if not user_query: # 若不存在此用户
logger.log('INFOR', f'用户[{key_username}]登录失败,原因:用户名不存在,IP:{login_ip}')
return {'result': {'status_code': 201}}
if check_password_hash(user_query.password, key_password): # 进行密码核对
session['status'] = True # 登录成功设置session
session['username'] = key_username
session['login_ip'] = login_ip
useragent = request.user_agent.string
userlogins = UserLoginLogs(username=key_username, login_ip=login_ip, useragent=useragent)
try:
DB.session.add(userlogins)
DB.session.commit()
except Exception as e:
logger.log('ALERT', f'用户登录接口-SQL错误:{e}')
DB.session.rollback()
logger.log('INFOR', f'用户[{key_username}]登录成功,IP:{login_ip}')
addlog(key_username, login_ip, '登录系统成功')
if key_remember: # 若选择了记住密码选项
session.permanent = True
APP.permanent_session_lifetime = datetime.timedelta(weeks=7) # 设置session到期时间7天
return {'result': {'status_code': 200}}
else:
logger.log('INFOR', f'用户[{key_username}]登录失败,原因:密码错误;IP{login_ip}')
addlog(key_username, login_ip, '登录失败,原因:密码错误')
return {'result': {'status_code': 201}}
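# Illustrative request payload for UserLogin.post() (added comment, not part
# of the original module). The field names follow the reqparse arguments
# declared above; the URL the resource is mounted on is defined elsewhere in
# the project and is only assumed here:
#
#   POST /api/user/login
#   {"username": "root", "password": "secret",
#    "captcha": "AB12", "rememberMe": true}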
class UserSetting(Resource):
    '''User class: update user profile.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("xingming", type=str, required=True, location='json')
self.parser.add_argument("phone", type=str, required=True, location='json')
self.parser.add_argument("email", type=str, required=True, location='json')
self.parser.add_argument("remark", type=str, location='json')
def post(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_xingming = args.xingming
key_phone = args.phone
key_email = args.email
key_remark = args.remark
user_query = User.query.filter(User.username == session.get('username')).first()
if not user_query:
addlog(session.get('username'), session.get('login_ip'), '修改用户资料失败,原因:越权修改其他用户')
return {'result': {'status_code': 500}}
user_query.name = key_xingming
user_query.phone = key_phone
user_query.email = key_email
if key_remark:
user_query.remark = key_remark
try:
DB.session.commit()
except Exception as e:
logger.log('ALERT', f'用户修改资料接口SQL错误:{e}')
DB.session.rollback()
addlog(session.get('username'), session.get('login_ip'), '修改用户资料失败,原因:SQL错误')
return {'result': {'status_code': 500}}
addlog(session.get('username'), session.get('login_ip'), '修改用户资料成功')
logger.log('INFOR', f"[{session.get('username')}]修改用户资料成功")
return {'result': {'status_code': 200}}
class UserPassword(Resource):
    '''User class: change user password.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("old_password", type=str, required=True, location='json')
self.parser.add_argument("new_password", type=str, required=True, location='json')
self.parser.add_argument("again_password", type=str, required=True, location='json')
def post(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_old_password = args.old_password
key_new_password = args.new_password
key_again_password = args.again_password
if key_new_password != key_again_password:
return {'result': {'status_code': 203}}
if key_old_password == key_new_password:
return {'result': {'status_code': 204}}
user_query = User.query.filter(User.username == session.get('username')).first()
if not user_query:
addlog(session.get('username'), session.get('login_ip'), '修改用户密码失败,原因:不存在此账户')
return {'result': {'status_code': 500}}
if not check_password_hash(user_query.password, key_old_password): # 检测原密码
addlog(session.get('username'), session.get('login_ip'), '修改用户密码失败,原因:原密码不正确')
return {'result': {'status_code': 201}}
user_query.password = generate_password_hash(key_new_password) # 更新密码
try:
DB.session.commit()
except Exception as e:
logger.log('ALERT', f'用户修改密码接口SQL错误:{e}')
DB.session.rollback()
return {'result': {'status_code': 500}}
addlog(session.get('username'), session.get('login_ip'), '修改用户密码成功')
logger.log('INFOR', f"[{session.get('username')}]修改用户密码成功")
return {'result': {'status_code': 200}}
class UserAdd(Resource):
    '''User class: add a new user.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("username", type=str, required=True, location='json')
self.parser.add_argument("password", type=str, required=True, location='json')
self.parser.add_argument("xingming", type=str, required=True, location='json')
self.parser.add_argument("phone", type=str, required=True, location='json')
self.parser.add_argument("email", type=str, required=True, location='json')
self.parser.add_argument("remark", type=str, required=True, location='json')
def post(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_username = args.username
key_password = args.password
key_xingming = args.xingming
key_phone = args.phone
key_email = args.email
key_remark = args.remark
if session['username'] != 'root':
return {'result': {'status_code': 202}}
user_query = User.query.filter(User.username == key_username).first()
if user_query: # 用户名存在
addlog(session.get('username'), session.get('login_ip'), f'新增用户[{key_username}]失败,原因:用户已存在')
return {'result': {'status_code': 201}}
user1 = User(username=key_username,
password=key_password, name=key_xingming, phone=key_phone, email=key_email, remark=key_remark)
DB.session.add(user1)
try:
DB.session.commit()
except Exception as e:
logger.log('ALERT', f'用户新增接口SQL错误:{e}')
DB.session.rollback()
return {'result': {'status_code': 500}}
addlog(session.get('username'), session.get('login_ip'), f'新增用户[{key_username}]成功')
return {'result': {'status_code': 200}}
class UserManager(Resource):
    '''User class: user management.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("page", type=int)
self.parser.add_argument("limit", type=int)
self.parser.add_argument("searchParams", type=str)
self.parser.add_argument("username", type=str)
def get(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_page = args.page
key_limit = args.limit
key_searchParams = args.searchParams
count = User.query.count()
jsondata = {'code': 0, 'msg': '', 'count': count}
if count == 0: # 若没有数据返回空列表
jsondata.update({'data': []})
return jsondata
if not key_searchParams: # 若没有查询参数
if not key_page or not key_limit: # 判断是否有分页查询参数
paginate = User.query.limit(20).offset(0).all()
else:
paginate = User.query.limit(key_limit).offset((key_page - 1) * key_limit).all()
else:
try:
search_dict = json.loads(key_searchParams) # 解析查询参数
except:
paginate = User.query.limit(20).offset(0).all()
else:
if 'username' not in search_dict or 'name' not in search_dict: # 查询参数有误
paginate = User.query.limit(20).offset(0).all()
else:
paginate = User.query.filter(
User.username.like("%" + search_dict['username'] + "%"),
User.name.like("%" + search_dict['name'] + "%")).limit(key_limit).offset((key_page - 1) * key_limit).all()
jsondata = {'code': 0, 'msg': '', 'count': len(paginate)}
data = []
if paginate:
index = (key_page - 1) * key_limit + 1
for i in paginate:
data1 = {}
data1['id'] = index
data1['username'] = i.username
data1['name'] = i.name
data1['phone'] = i.phone
data1['email'] = i.email
data1['remark'] = i.remark
data1['login_count'] = len(i.src_user_login_logs)
data.append(data1)
index += 1
jsondata.update({'data': data})
return jsondata
else:
jsondata = {'code': 0, 'msg': '', 'count': 0}
jsondata.update({'data': []})
return jsondata
def post(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_username = args.username
if not key_username:
return {'result': {'status_code': 500}}
if session['username'] != 'root':
addlog(session.get('username'), session.get('login_ip'), f'删除用户:[{key_username}] 失败,原因:非root用户')
return {'result': {'status_code': 203}}
if 'root' == key_username: # 不能删除root用户
addlog(session.get('username'), session.get('login_ip'), f'删除用户:[{key_username}] 失败,原因:不能删除内置用户')
return {'result': {'status_code': 201}}
user_query = User.query.filter(User.username == key_username).first()
if not user_query: # 删除的用户不存在
addlog(session.get('username'), session.get('login_ip'), f'删除用户:[{key_username}] 失败,原因:该用户不存在')
return {'result': {'status_code': 202}}
DB.session.delete(user_query)
try:
DB.session.commit()
except:
DB.session.rollback()
return {'result': {'status_code': 500}}
addlog(session.get('username'), session.get('login_ip'), f'删除用户:[{key_username}] 成功')
return {'result': {'status_code': 200}}
class UserLog(Resource):
    '''User class: user operation logs.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("page", type=int)
self.parser.add_argument("limit", type=int)
self.parser.add_argument("searchParams", type=str)
def get(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_page = args.page
key_limit = args.limit
key_searchParams = args.searchParams
count = UserLogs.query.count()
jsondata = {'code': 0, 'msg': '', 'count': count}
if count == 0: # 若没有数据返回空列表
jsondata.update({'data': []})
return jsondata
if not key_searchParams: # 若没有查询参数
if not key_page or not key_limit: # 判断是否有分页查询参数
paginate = UserLogs.query.limit(20).offset(0).all()
else:
paginate = UserLogs.query.limit(key_limit).offset((key_page - 1) * key_limit).all()
else:
try:
search_dict = json.loads(key_searchParams) # 解析查询参数
except:
paginate = UserLogs.query.limit(20).offset(0).all()
else:
if 'username' not in search_dict or 'log_ip' not in search_dict: # 查询参数有误
paginate = UserLogs.query.limit(20).offset(0).all()
else:
paginate1 = UserLogs.query.filter(
UserLogs.username.like("%" + search_dict['username'] + "%"),
UserLogs.logs_ip.like("%" + search_dict['log_ip'] + "%"))
paginate = paginate1.limit(key_limit).offset((key_page - 1) * key_limit).all()
jsondata = {'code': 0, 'msg': '', 'count': len(paginate1.all())}
data = []
if paginate:
index = (key_page - 1) * key_limit + 1
for i in paginate:
data1 = {}
data1['id'] = index
data1['username'] = i.username
data1['log_ip'] = i.logs_ip
data1['log_time'] = i.logs_time
data1['log_text'] = i.logs_text
data1['name'] = i.src_user.name
data1['phone'] = i.src_user.phone
index += 1
data.append(data1)
jsondata.update({'data': data})
return jsondata
else:
jsondata = {'code': 0, 'msg': '', 'count': 0}
jsondata.update({'data': []})
return jsondata
class UserLoginLog(Resource):
    '''User class: user login logs.'''
def __init__(self):
self.parser = reqparse.RequestParser()
self.parser.add_argument("page", type=int)
self.parser.add_argument("limit", type=int)
self.parser.add_argument("searchParams", type=str)
def get(self):
if not session.get('status'):
return {'result': {'status_code': 401}}
args = self.parser.parse_args()
key_page = args.page
key_limit = args.limit
key_searchParams = args.searchParams
count = UserLoginLogs.query.count()
jsondata = {'code': 0, 'msg': '', 'count': count}
if count == 0: # 若没有数据返回空列表
jsondata.update({'data': []})
return jsondata
if not key_searchParams: # 若没有查询参数
if not key_page or not key_limit: # 判断是否有分页查询参数
paginate = UserLoginLogs.query.limit(20).offset(0).all()
else:
paginate = UserLoginLogs.query.limit(key_limit).offset((key_page - 1) * key_limit).all()
else:
try:
search_dict = json.loads(key_searchParams) # 解析查询参数
except:
paginate = UserLoginLogs.query.limit(20).offset(0).all()
else:
if 'username' not in search_dict or 'log_ip' not in search_dict: # 查询参数有误
paginate = UserLoginLogs.query.limit(20).offset(0).all()
else:
paginate1 = UserLoginLogs.query.filter(
UserLoginLogs.username.like("%" + search_dict['username'] + "%"),
UserLoginLogs.login_ip.like("%" + search_dict['log_ip'] + "%"))
paginate = paginate1.limit(key_limit).offset(
(key_page - 1) * key_limit).all()
                    jsondata = {'code': 0, 'msg': '', 'count': len(paginate1.all())}
data = []
if paginate:
index = (key_page - 1) * key_limit + 1
for i in paginate:
data1 = {}
data1['id'] = index
data1['username'] = i.username
data1['login_ip'] = i.login_ip
data1['login_time'] = i.login_time
data1['useragent'] = i.useragent
data1['name'] = i.src_user.name
data1['phone'] = i.src_user.phone
index += 1
data.append(data1)
jsondata.update({'data': data})
return jsondata
else:
jsondata = {'code': 0, 'msg': '', 'count': 0}
jsondata.update({'data': []})
return jsondata | [
"[email protected]"
] | |
d392365edbf57883fa52e225c5ec1c543754e39e | cccf0b4c6b08502dea94ac1febb49fc0f8561cc2 | /src/bind.py | e9823644dd74973261df64057956374e5c7b9592 | [] | no_license | rpm-io/rpm.io | 55e13a4795f421a2514f45d707820c4fe605dbf2 | 80cd5f0703c550628d62a7f45e213e9434a64e8d | refs/heads/master | 2022-12-15T21:54:14.040117 | 2019-08-25T16:44:28 | 2019-08-25T16:44:28 | 187,483,128 | 1 | 0 | null | 2022-12-10T15:03:44 | 2019-05-19T13:53:28 | JavaScript | UTF-8 | Python | false | false | 3,020 | py | import sys
import uuid
import json
from importlib import import_module
import os
sys.path.append(os.getcwd())
class Bind:
__VARIABLES__ = {
"END": "END"
}
def __init__(self):
self.module = import_module(sys.argv[1])
self.ok = True
def declare(self, value):
name = str(uuid.uuid1())
self.__VARIABLES__[name] = value
return name
def var(self, name):
if name:
return self.__VARIABLES__[name]
def var_from(self, name):
if name in self.message:
return self.var(self.message[name])
def val_from(self, name):
if name in self.message:
return self.message[name]
def init(self, clazz, params):
return clazz(*params)
def call(self, method, params):
return method(*params)
def command(self):
self.message = json.loads(input())
def is_primitive(self, data):
if data:
return not hasattr(self.var(data), "__dict__")
return True
def value_of(self, name):
data = self.var(name)
if data:
return str(data)
return data
def type_of(self, data):
if data:
return str(type(self.var(data)))
def show(self, data, __id__):
print(json.dumps({
"data": data,
"type": self.type_of(data),
"primitive": self.is_primitive(data),
"value": self.value_of(data),
"__id__": __id__
}), flush=True)
def run(self):
self.show(self.declare(self.module), "__self__")
while self.ok:
self.command()
COMMAND = self.val_from('com')
__id__ = self.val_from('__id__')
if COMMAND == 'attr':
variable = self.var_from('var')
name = self.val_from('attr')
if variable and name:
if hasattr(variable, name):
attr = getattr(variable, name)
self.show(self.declare(attr), __id__)
else:
self.show(None, __id__)
if COMMAND == 'new':
clazz = self.var_from('var')
params = self.val_from('params')
instance = self.init(clazz, params)
self.show(self.declare(instance), __id__)
if COMMAND == 'str':
self.show(self.var_from('var'), __id__)
if COMMAND == 'call':
method = self.var_from('var')
params = self.val_from('params')
result = self.call(method, params)
self.show(self.declare(result), __id__)
if COMMAND == 'describe':
self.show(self.var_from('var').__dict__, __id__)
if COMMAND == 'destroy':
self.ok = False
self.show("END", __id__)
if __name__ == "__main__":
Bind().run()
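# Illustrative protocol sketch (added comment, not part of the original file).
# The process is started as `python bind.py <module_name>` and then reads one
# JSON command per line on stdin; the field names below follow the keys used
# in Bind.run() above, while the concrete ids and values are made up:
#
#   {"com": "attr", "var": "<id of __self__>", "attr": "sqrt", "__id__": "1"}
#   {"com": "call", "var": "<id returned above>", "params": [9], "__id__": "2"}
#   {"com": "destroy", "__id__": "3"}
#
# Each command is answered with one JSON line containing the keys
# data / type / primitive / value / __id__ produced by Bind.show().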
| [
"[email protected]"
] | |
ae21549b8b1f5138084eb96159f1b2b83e5bc6a6 | fc772efe3eccb65e4e4a8da7f2b2897586b6a0e8 | /Controller/glance/common/location_strategy/store_type.py | 75f8c239670b96c2e5855ca2b5e1b7de69bda730 | [] | no_license | iphonestack/Openstack_Kilo | 9ae12505cf201839631a68c9ab4c041f737c1c19 | b0ac29ddcf24ea258ee893daf22879cff4d03c1f | refs/heads/master | 2021-06-10T23:16:48.372132 | 2016-04-18T07:25:40 | 2016-04-18T07:25:40 | 56,471,076 | 0 | 2 | null | 2020-07-24T02:17:46 | 2016-04-18T02:32:43 | Python | UTF-8 | Python | false | false | 4,238 | py | # Copyright 2014 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Storage preference based location strategy module"""
from oslo.config import cfg
import six
import six.moves.urllib.parse as urlparse
from glance import i18n
_ = i18n._
store_type_opts = [
cfg.ListOpt("store_type_preference",
default=[],
help=_("The store names to use to get store preference order. "
"The name must be registered by one of the stores "
"defined by the 'known_stores' config option. "
"This option will be applied when you using "
"'store_type' option as image location strategy "
"defined by the 'location_strategy' config option."))
]
CONF = cfg.CONF
CONF.register_opts(store_type_opts, group='store_type_location_strategy')
_STORE_TO_SCHEME_MAP = {}
def get_strategy_name():
"""Return strategy module name."""
return 'store_type'
def init():
"""Initialize strategy module."""
# NOTE(zhiyan): We have a plan to do a reusable glance client library for
# all clients like Nova and Cinder in near period, it would be able to
# contains common code to provide uniform image service interface for them,
# just like Brick in Cinder, this code can be moved to there and shared
# between Glance and client both side. So this implementation as far as
# possible to prevent make relationships with Glance(server)-specific code,
# for example: using functions within store module to validate
# 'store_type_preference' option.
mapping = {'filesystem': ['file', 'filesystem'],
'http': ['http', 'https'],
'rbd': ['rbd'],
's3': ['s3', 's3+http', 's3+https'],
'swift': ['swift', 'swift+https', 'swift+http'],
'gridfs': ['gridfs'],
'sheepdog': ['sheepdog'],
'cinder': ['cinder'],
'vmware_datastore': ['vsphere']}
_STORE_TO_SCHEME_MAP.clear()
_STORE_TO_SCHEME_MAP.update(mapping)
def get_ordered_locations(locations, uri_key='url', **kwargs):
"""
Order image location list.
:param locations: The original image location list.
:param uri_key: The key name for location URI in image location dictionary.
:return: The image location list with preferred store type order.
"""
def _foreach_store_type_preference():
store_types = CONF.store_type_location_strategy.store_type_preference
for preferred_store in store_types:
preferred_store = str(preferred_store).strip()
if not preferred_store:
continue
yield preferred_store
if not locations:
return locations
preferences = {}
others = []
for preferred_store in _foreach_store_type_preference():
preferences[preferred_store] = []
for location in locations:
uri = location.get(uri_key)
if not uri:
continue
pieces = urlparse.urlparse(uri.strip())
store_name = None
for store, schemes in six.iteritems(_STORE_TO_SCHEME_MAP):
if pieces.scheme.strip() in schemes:
store_name = store
break
if store_name in preferences:
preferences[store_name].append(location)
else:
others.append(location)
ret = []
# NOTE(zhiyan): While configuration again since py26 does not support
# ordereddict container.
for preferred_store in _foreach_store_type_preference():
ret.extend(preferences[preferred_store])
ret.extend(others)
return ret
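# Illustrative usage sketch appended for clarity; it is not part of the
# original module. It assumes a deployment that prefers rbd over http and
# uses made-up URLs; CONF.set_override() is only used here to emulate the
# store_type_preference option being set in glance-api.conf.
if __name__ == '__main__':
    init()
    CONF.set_override('store_type_preference', ['rbd', 'http'],
                      group='store_type_location_strategy')
    locations = [{'url': 'http://example.com/image.img'},
                 {'url': 'rbd://pool/image'}]
    # With the preference above, the rbd location is ordered first.
    print(get_ordered_locations(locations))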
| [
"[email protected]"
] | |
d54bbda96683cdd763c1b279aba965e3873a5a09 | 34599596e145555fde0d4264a1d222f951f49051 | /pcat2py/class/21ede120-5cc5-11e4-af55-00155d01fe08.py | 517fcbf9e1895c3f103154e3d831715a37a09a75 | [
"MIT"
] | permissive | phnomcobra/PCAT2PY | dc2fcbee142ce442e53da08476bfe4e68619346d | 937c3b365cdc5ac69b78f59070be0a21bdb53db0 | refs/heads/master | 2021-01-11T02:23:30.669168 | 2018-02-13T17:04:03 | 2018-02-13T17:04:03 | 70,970,520 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,497 | py | #!/usr/bin/python
################################################################################
# 21ede120-5cc5-11e4-af55-00155d01fe08
#
# Justin Dierking
# [email protected]
# [email protected]
#
# 10/24/2014 Original Construction
################################################################################
class Finding:
def __init__(self):
self.output = []
self.is_compliant = False
self.uuid = "21ede120-5cc5-11e4-af55-00155d01fe08"
def check(self, cli):
# Initialize Compliance
self.is_compliant = False
# Get Registry DWORD
dword = cli.get_reg_dword(r'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook\Security\TrustedAddins', '')
# Output Lines
self.output = [r'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook\Security\TrustedAddins', ('=' + str(dword))]
if dword == -1:
self.is_compliant = True
return self.is_compliant
def fix(self, cli):
cli.powershell(r"New-Item -path 'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook'")
cli.powershell(r"New-Item -path 'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook\Security'")
cli.powershell(r"New-Item -path 'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook\Security\TrustedAddins'")
cli.powershell(r"Set-ItemProperty -path 'HKCU:\Software\Policies\Microsoft\Office\12.0\Outlook\Security\TrustedAddins' -name '' -value -Type DWord")
| [
"[email protected]"
] | |
bc1680235109bc4da1b863579b5bf349298ae9fe | 2ae59d7f70f083fc255c29669ada5dcacbd3411f | /encasa/utils/models/common_regex.py | 7b41e0009066740932b74d66f52a278a95a23019 | [] | no_license | gonza56d/django_docker_template | aeccfec357aa9732abf869f8baf0c0b9f11f7200 | f739c71dcb01819caf4520a2169264ea1abcf4b0 | refs/heads/master | 2023-03-14T04:09:01.200633 | 2021-03-07T21:08:28 | 2021-03-07T21:08:28 | 345,263,273 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 606 | py | # Django
from django.core.validators import RegexValidator
class CommonRegex:
"""Common validation regular expressions."""
LOWERCASE_AND_NUMBERS = RegexValidator(
        regex='^[a-z0-9]+$',
message='Only lowercase letters and numbers allowed.'
)
LETTERS_AND_NUMBERS = RegexValidator(
        regex='^[a-zA-Z0-9]+$',
message='Only letters and numbers allowed.'
)
LETTERS = RegexValidator(
        regex='^[a-zA-Z]+$',
message='Only letters allowed.'
)
LOWERCASE = RegexValidator(
        regex='^[a-z]+$',
message='Only lowercase letters allowed.'
)
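    # Illustrative usage (added comment, not part of the original module).
    # These validators are meant to be attached to Django model or form
    # fields; the Profile model below is hypothetical and only sketches the
    # intended call site:
    #
    #   class Profile(models.Model):
    #       nickname = models.CharField(
    #           max_length=30,
    #           validators=[CommonRegex.LOWERCASE_AND_NUMBERS])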
| [
"[email protected]"
] | |
4bd1566537db005bd9d5471cb5b042372ba16b80 | 7ba4e38e0835cd009a078ce39a480b5bacaba21f | /sample_code/chap8/8.4.5.rectify.py | 0b0fab1f3600418d5bf6050b41150c51447f663e | [] | no_license | moguranran/computer_vision_test | fe0641987905755c733e4ab16f48c3b76d01b3f4 | 4c5b5572d01e13a42eefb2423e66e34675c305cb | refs/heads/master | 2022-04-20T17:53:37.668609 | 2020-03-31T00:13:02 | 2020-03-31T00:13:02 | 249,196,701 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 806 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
from PIL import Image
from pylab import *
from scipy import ndimage
import homography
imname = 'sudoku_images/sudokus/sudoku8.JPG'
im = array(Image.open(imname).convert('L'))
# input the four corners
figure()
imshow(im)
gray()
x = ginput(4)
# top-left, top-right, bottom-right, bottom-left
fp = array([array([p[1],p[0],1]) for p in x]).T
tp = array([[0,0,1],[0,1000,1],[1000,1000,1],[1000,0,1]]).T
# estimate the homography
H = homography.H_from_points(tp,fp)
# helper function for geometric_transform
def warpfcn(x):
x = array([x[0],x[1],1])
xt = dot(H,x)
xt = xt/xt[2]
return xt[0],xt[1]
# warp the image using the projective transformation
im_g = ndimage.geometric_transform(im,warpfcn,(1000,1000))
figure()
imshow(im_g)
axis('off')
gray()
show()
| [
"[email protected]"
] | |
15006b6d8c7f8ae7ecca9c43b3bfa34aa5e18d1c | 62ea331d8da218e65a4aee517f4473110f80c03c | /matches/migrations/0011_auto_20180601_2337.py | 5afbf4770bf95e23ffe57a987598cffa7867b8b4 | [] | no_license | maddrum/world_cup_results | 11f47a1b0f9a68a0761c7d83d25cc1efb57c2240 | 282d8f55344ba718ea371a22f34454673f23a615 | refs/heads/master | 2020-03-20T05:40:44.173185 | 2018-07-16T13:12:15 | 2018-07-16T13:12:15 | 136,724,186 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,612 | py | # Generated by Django 2.0.2 on 2018-06-01 20:37
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('matches', '0010_auto_20180601_1641'),
]
operations = [
migrations.CreateModel(
name='UserPredictions',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('prediction_match_state', models.CharField(choices=[('home', 'Победа домакин'), ('guest', 'Победа гост'), ('tie', 'Равен')], max_length=20)),
('prediction_goals_home', models.IntegerField()),
('prediction_goals_guest', models.IntegerField()),
('user_points', models.IntegerField()),
],
),
migrations.AlterUniqueTogether(
name='matches',
unique_together={('country_home', 'country_guest', 'phase')},
),
migrations.AddField(
model_name='userpredictions',
name='match',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='user_predictions', to='matches.Matches'),
),
migrations.AddField(
model_name='userpredictions',
name='user_id',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='user', to=settings.AUTH_USER_MODEL),
),
]
| [
"[email protected]"
] | |
f68dedf0d4f41d08e278bea9df10e7841f8c749b | 46c76c7ca1d9d030606f2e3e95a2a9e6bbad2789 | /workspace201406/ClassExamples/accounts.py | 77f58e4bc1ffee16226f5db5e4c1ad21430917fc | [] | no_license | KayMutale/pythoncourse | be9ff713cffc73c1b9b3c1dd2bdd6d293637ce1e | 985a747ff17133aa533b7a049f83b37fc0fed80e | refs/heads/master | 2023-04-13T07:58:00.993724 | 2021-04-16T14:19:41 | 2021-04-16T14:19:41 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,401 | py | '''
Created on 10 Jun 2014
@author: mark
'''
class Account(object):
'''
classdocs
'''
last_accno = 0
    types = ("ASSET","LIABILITY","EQUITY","INCOME","EXPENSE")
_perm_accounts = types[:3]
_credit_accounts = types[1:4]
    _debit_accounts = (types[0], types[4])
def __init__(self,balance=0):
self.balance = balance
Account.last_accno += 1
self.accno = Account.last_accno
self.type = Account.types[0]
def debit(self,amount):
self.balance += amount if self.type in Account._debit_accounts else -amount
def credit(self,amount):
self.balance += amount if self.type in Account._credit_accounts else -amount
class CreditAccount(Account):
def __init__(self,credit_limmit=0,balance=0):
Account.__init__(self,balance)
self.credit_limmit = credit_limmit
def credit(self,amount):
        assert self.balance + self.credit_limmit > amount, "not enough funds"
Account.credit(self,amount)
if __name__ == "__main__":
check = Account()
card = CreditAccount()
print isinstance(check,Account)
card.type = "LIABILITY"
print card.balance
check.balance = 100
check.debit(5)
print check.balance
card.debit(50)
print card.balance
card.debit(10) #Account.debit(card,10)
print card.balance
print Account.__dict__ | [
"[email protected]"
] | |
f4630ba76d4e6069dec7ef7512e88ed698af23d2 | 5f61724fc5cad3f82094a681c853cc9f0337f050 | /test/test_xmlpart.py | 9f1f8a02e7773b2d4bbc9354889b40ace6f30d36 | [
"Apache-2.0"
] | permissive | barseghyanartur/odfdo | 2cecbbbb33f23d5ed0ba80cb9208a8e7857b93a0 | e628a9e9daa40319a777d216ec7ebca4057b3344 | refs/heads/master | 2022-11-17T15:43:15.662484 | 2020-06-27T00:41:38 | 2020-06-28T22:53:07 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,598 | py | #!/usr/bin/env python
# Copyright 2018 Jérôme Dumonteil
# Copyright (c) 2009-2010 Ars Aperta, Itaapy, Pierlis, Talend.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# Authors (odfdo project): [email protected]
# The odfdo project is a derivative work of the lpod-python project:
# https://github.com/lpod/lpod-python
# Authors: Hervé Cauwelier <[email protected]>
# David Versmisse <[email protected]>
from unittest import TestCase, main
from lxml.etree import _ElementTree
from odfdo.const import ODF_CONTENT
from odfdo.container import Container
from odfdo.element import Element
from odfdo.xmlpart import XmlPart
from odfdo.content import Content
class XmlPartTestCase(TestCase):
def setUp(self):
self.container = Container()
self.container.open('samples/example.odt')
def tearDown(self):
del self.container
def test_get_element_list(self):
content_part = XmlPart(ODF_CONTENT, self.container)
elements = content_part.get_elements('//text:p')
# The annotation paragraph is counted
self.assertEqual(len(elements), 8)
def test_tree(self):
# Testing a private but important method
content = XmlPart(ODF_CONTENT, self.container)
tree = content._XmlPart__get_tree()
self.assertTrue(isinstance(tree, _ElementTree))
self.assertNotEqual(content._XmlPart__tree, None)
def test_root(self):
content = XmlPart(ODF_CONTENT, self.container)
root = content.root
self.assertTrue(isinstance(root, Element))
self.assertEqual(root.tag, "office:document-content")
self.assertNotEqual(content._XmlPart__root, None)
def test_serialize(self):
container = self.container
content_bytes = container.get_part(ODF_CONTENT)
content_part = XmlPart(ODF_CONTENT, container)
# differences with lxml
        serialized = content_part.serialize().replace(b"&apos;", b"'")
self.assertEqual(content_bytes, serialized)
def test_pretty_serialize(self):
# With pretty = True
element = Element.from_tag('<root><a>spam</a><b/></root>')
serialized = element.serialize(pretty=True)
expected = ('<root>\n' ' <a>spam</a>\n' ' <b/>\n' '</root>\n')
self.assertEqual(serialized, expected)
def test_clone(self):
# Testing that the clone works on subclasses too
container = self.container
content = Content(ODF_CONTENT, container)
clone = content.clone
self.assertEqual(clone.part_name, content.part_name)
self.assertNotEqual(id(container), id(clone.container))
self.assertEqual(clone._XmlPart__tree, None)
def test_delete(self):
container = self.container
content = XmlPart(ODF_CONTENT, container)
paragraphs = content.get_elements('//text:p')
for paragraph in paragraphs:
content.delete_element(paragraph)
serialized = content.serialize()
self.assertEqual(serialized.count(b'<text:p'), 0)
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
dee86a91a321348c1a9ea17593f58fb0fab6248f | a39f7413dcd87bb26319fe032d59cf12d7c69d54 | /backbones/decoder.py | 67afba19646e66388f3d87d8fefe214f1dc4ee88 | [] | no_license | liangyuandg/cross_modality_ibsr | 8ad937b5475bd5e6b00ad50351706304a962f975 | bb5cefd890f5fa0e15eae6e54d9559f5e8eb94ed | refs/heads/master | 2023-06-24T02:58:25.318170 | 2021-07-27T08:29:27 | 2021-07-27T08:29:27 | 389,904,637 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,455 | py | import torch.nn as nn
import torch
def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1):
"""3x3 convolution with padding"""
return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
padding=dilation, groups=groups, bias=False, dilation=dilation)
class ResNet_Decoder(nn.Module):
def __init__(self, inplanes, midplanes, outplanes):
super(ResNet_Decoder, self).__init__()
self.mse = nn.MSELoss(reduction='mean')
self.layer1 = self._make_layer(inplanes[0], midplanes[0], outplanes[0])
self.layer2 = self._make_layer(inplanes[1], midplanes[1], outplanes[1])
self.layer3 = self._make_layer(inplanes[2], midplanes[2], outplanes[2])
self.layer4 = self._make_layer(inplanes[3], midplanes[3], outplanes[3])
self.finallayer = conv3x3(outplanes[3], 3)
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def _make_layer(self, inplane, midplane, outplane):
layers = []
layers.append(conv3x3(inplane, midplane))
layers.append(nn.BatchNorm2d(midplane))
layers.append(nn.ReLU(inplace=True))
layers.append(conv3x3(midplane, midplane))
layers.append(nn.BatchNorm2d(midplane))
layers.append(nn.ReLU(inplace=True))
layers.append(nn.ConvTranspose2d(midplane, outplane, 2, stride=2))
layers.append(nn.BatchNorm2d(outplane))
return nn.Sequential(*layers)
def train_forward(self, directs, gt):
x = self.layer1(directs[2])
x = torch.cat((x, directs[1]), 1)
x = self.layer2(x)
x = torch.cat((x, directs[0]), 1)
x = self.layer3(x)
x = self.layer4(x)
x = self.finallayer(x)
return x, self.mse(x, gt)
def test_forward(self, directs):
x = self.layer1(directs[2])
x = torch.cat((x, directs[1]), 1)
x = self.layer2(x)
x = torch.cat((x, directs[0]), 1)
x = self.layer3(x)
x = self.layer4(x)
x = self.finallayer(x)
return x
def load(self, file_path):
checkpoint = torch.load(file_path)
# normal
self.load_state_dict(checkpoint, strict=False) | [
"[email protected]"
] | |
979ced1823c4944d573592df7133274f4f069d64 | 0c13c5b6ca20c639a94ca3b52b891d2d7cf4366f | /code/libs/EM/EM POO/CDistribution.py | e132f38111a3a657cb2acf5880608e6fca3ca123 | [] | no_license | manuwhs/EM-HMM-Directional-Statistics | 3869b7dbbe21d81868f0e7c212f97de4216b58bc | 275bf5ea3b2d84838e27f3c6fa60ded728947031 | refs/heads/master | 2021-04-28T03:09:51.787459 | 2018-04-28T08:51:01 | 2018-04-28T08:51:01 | 122,133,390 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 26,987 | py | # Change main directory to the main folder and import folders
import os
# Official libraries
import matplotlib.pyplot as plt
import pandas as pd
# Own libraries
from graph_lib import gl
import sampler_lib as sl
import EM_libfunc as EMlf
import EM_lib as EMl
import copy
import pickle_lib as pkl
import Gaussian_distribution as Gad
import Gaussian_estimators as Gae
import Watson_distribution as Wad
import Watson_estimators as Wae
import vonMisesFisher_distribution as vMFd
import vonMisesFisher_estimators as vMFe
import general_func as gf
import numpy as np
plt.close("all")
folder = "./data/test_data/"
folder_HMM = "./data/HMM_data/"
class CDistributionManager():
"""
We want to create a more open EM algorithm that allows clusters with different
    properties: different distributions, different hyperparameters, or different constraints on the distribution.
    We would also like to be able to manage constraints between the different clusters,
    like for example, the mean of two clusters being the same.
    This class aims to contain a set of distributions, D, and each distribution has a
    number of clusters K_d that are generated according to that distribution.
"""
def __init__(self, distribution_list = None):
"""
        The next dictionary holds the different distributions that the mixture has.
        They are referenced by name.
"""
"""
Dictionaty with ["Distribution_name"] = Distribution_object
This distribution object has its own hyperparameters
"""
self.Dname_to_distribution = dict()
"""
Dictionaty with ["Distribution_name"] = [3,4,5]
The index of the clusters that belong to this distribution.
"""
self.Dname_to_clusters = dict()
"""
        Dictionary with one entry per cluster, where each element tells the
        Distribution name to which the k-th cluster belongs.
It is redundant since the previous 2 are enough but it is easier
to program with this.
[3]: Distribution_name.
Making this a dictionary makes it easier.
"""
self.clusterk_to_Dname = dict()
"""
        The cluster numbers "k" in the distribution could be anything.
        As the algorithm progresses, they could be deleted or added with new k.
        The thetas are kept in a list for now, so we need to match the thetas with the
        "k"s in this distribution. This is what this dict does.
"""
self.clusterk_to_thetak = dict()
def add_distribution(self, distribution, Kd_list):
# Kd_list = [0,1,2]
"""
Function to add a distribution and the number of clusters.
We identify the distributions by the name given.
"""
self.Dname_to_distribution[distribution.name] = distribution
self.Dname_to_clusters[distribution.name] = []
for k in Kd_list:
self.add_cluster(k, distribution.name)
# print self.Dname_to_clusters
# print self.Dname_to_distribution
# print self.clusterk_to_Dname
# print self.clusterk_to_thetak
def remove_cluster(self, k):
"""
Remove the cluster from the data structures.
TODO: What if a distribution ends with no clusters ?
Will it crash somewhere ?
"""
        distribution_name = self.clusterk_to_Dname[k]
self.Dname_to_clusters[distribution_name].remove(k)
self.clusterk_to_Dname.pop(k, None)
self.clusterk_to_thetak.pop(k, None)
def add_cluster(self,k, distribution_name):
"""
        We can add a cluster to one of the associations
"""
K = len(self.clusterk_to_Dname.keys())
self.Dname_to_clusters[distribution_name].append(k)
self.clusterk_to_Dname[k] = distribution_name
self.clusterk_to_thetak[k] = K
def init_params(self, X, theta_init):
"""
        This function initializes the parameters of the distributions
        using the function provided, or directly the theta_init provided.
        We pass X so that the samples can be used for the initialization.
"""
K = len(self.clusterk_to_Dname.keys())
theta = []
for k in range(K):
theta.append(None)
# print ("K",K)
Dnames = self.Dname_to_distribution.keys()
for Dname in Dnames:
distribution = self.Dname_to_distribution[Dname]
theta_indexes = self.Dname_to_clusters[Dname]
# print ("theta_indexes",theta_indexes)
## If we are given directly the init theta, we do nothing
if (type(theta_init)!= type(None)):
# Get the theta_k corresponding to the clusters of the distribution
theta_dist = [theta_init[self.clusterk_to_thetak[i]] for i in theta_indexes]
else:
theta_dist = None
## Compute the init parameters for the Kd clusters of the distribution
Kd = len(theta_indexes) # Clusters of the distribution
theta_cluster = distribution.init_params(X, Kd, theta_dist, distribution.parameters)
for i in range(len(theta_indexes)): # Apprend them to the global theta
theta[self.clusterk_to_thetak[theta_indexes[i]]] = theta_cluster[i]
# print ("Length theta:", len(theta))
return theta
def get_Cs_log(self, theta):
"""
        This function computes the normalization constants of the clusters.
TODO: Ideally, we will not need it when we only compute the likelihoods once.
For now we will use it
"""
K = len(self.clusterk_to_thetak.keys())
Cs = []
for k in range(K):
Cs.append(None)
Dnames = self.Dname_to_distribution.keys()
for Dname in Dnames:
distribution = self.Dname_to_distribution[Dname]
theta_indexes = self.Dname_to_clusters[Dname]
for k in theta_indexes:
k_theta = self.clusterk_to_thetak[k]
try:
C_k = distribution.get_Cs_log(theta[k_theta], parameters = distribution.parameters) # Parameters of the k-th cluster
except RuntimeError as err:
# error_type = err.args[1]
# print err.args[0] % err.args[2]
print ("Cluster %i degenerated during computing normalization constant" %k) ####### HANDLE THE DEGENERATED CLUSTER #############
C_k = None;
Cs[k_theta] = C_k
return Cs
###################
def pdf_log_K(self, data, theta, Cs_logs = None):
"""
Returns the likelihood of the samples for each of the clusters. Not multiplied by pi or anything.
It is independent of the model.
        If the data is an (N,D) array then it computes it for it. It returns ll[N,K];
        if it is a list, it computes it for all of them separately. It returns ll[ll1[N1,K], ll2[N2,K],...]
"""
if (type(data) == type([])):
pass
list_f = 1
else:
data = [data]
list_f = 0
K = len(theta)
# print D,N,K
if (type(Cs_logs) == type(None)):
Cs_logs = self.get_Cs_log(theta)
ll_chains = []
for X in data:
N,D = X.shape
lls = np.zeros((N,K))
Dnames = self.Dname_to_distribution.keys()
for Dname in Dnames:
distribution = self.Dname_to_distribution[Dname]
theta_indexes = self.Dname_to_clusters[Dname]
# print (theta_indexes)
theta_dist = [theta[self.clusterk_to_thetak[i]] for i in theta_indexes]
if (type(Cs_logs) != type(None)):
Cs_logs_dist = [Cs_logs[self.clusterk_to_thetak[i]] for i in theta_indexes]
else:
Cs_logs_dist = None
lls_d = distribution.pdf_log_K(X.T,theta_dist, parameters = distribution.parameters, Cs_log = Cs_logs_dist )
# print lls_d.shape
for i in range(len(theta_indexes)):
lls[:,self.clusterk_to_thetak[theta_indexes[i]]] = lls_d[:,i]
## TODO: Variational inference ! Change in this !
## In this simple case since Gaussian is 1 dim less, we want to give it more importance !
lls[:,0] = lls[:,0] # *(D-1)/float(D) #(2/float(3)) # + np.log(2) # Multiply by 2, just to try
ll_chains.append(lls)
if (list_f == 0):
ll_chains = ll_chains[0]
return ll_chains
def get_theta(self, X, r):
N,D = X.shape
N,K = r.shape
theta = []
for k in range(K):
theta.append(None)
Dnames = self.Dname_to_distribution.keys()
for Dname in Dnames:
distribution = self.Dname_to_distribution[Dname]
theta_indexes = self.Dname_to_clusters[Dname]
for k in theta_indexes:
k_theta = self.clusterk_to_thetak[k]
rk = r[:,[k_theta]] # Responsabilities of all the samples for that cluster
try:
theta_k = distribution.theta_estimator(X, rk, parameters = distribution.parameters) # Parameters of the k-th cluster
except RuntimeError as err:
# error_type = err.args[1]
# print err.args[0] % err.args[2]
print ("Cluster %i degenerated during estimation" %k) ####### HANDLE THE DEGENERATED CLUSTER #############
theta_k = None;
theta[k_theta] = theta_k
return theta
################# Functions for managing cluster ############################
def manage_clusters(self, X, r, theta_prev, theta_new):
"""
        Here we manage the clusters that fell into singularities or whose
        pdf cannot be computed, usually because the normalization constant is
not computable.
In the process we will compute the likelihoods
"""
clusters_change = 0
K = len(self.clusterk_to_Dname.keys())
##################### Check for singularities (degenerated cluster) ##########
if (type(theta_prev) != type(None)): # Not run this part for the initialization one.
for k in self.clusterk_to_thetak:
k_theta = self.clusterk_to_thetak[k]
if(type(theta_new[k_theta]) == type(None)): # Degenerated cluster during estimation
print ("Cluster %i has degenerated during estimation (singularity) "%k_theta)
# TODO: Move only the following line inside the function below
print (" Xum responsability of samples r: %f"% (np.sum(r[:,[k_theta]])))
distribution = self.Dname_to_distribution[self.clusterk_to_Dname[k]]
theta_new[k_theta] = distribution.degenerated_estimation_handler(
X, rk = r[:,[k_theta]] , prev_theta_k = theta_prev[k], parameters = distribution.parameters )
clusters_change = 1 # We changed a cluster !
################# Check if the parameters are well defined ( we can compute pdf) ################
for k in self.clusterk_to_thetak:
k_theta = self.clusterk_to_thetak[k]
# Checking for the clusters that we are not gonna remove due to degenerated estimation.
if(type(theta_new[k_theta]) != type(None)):
# We estimate the likelihood of a sample for the cluster, if the result is None
# Then we know we cannot do it.
distribution = self.Dname_to_distribution[self.clusterk_to_Dname[k]]
if (type(distribution.pdf_log_K(X[[0],:].T,[theta_new[k_theta]],parameters = distribution.parameters)) == type(None)):
print ("Cluster %i has degenerated parameters "%k)
if (type(r) == type(None)): # We do not have rk if this is the first initialization
rk = None
else:
rk = r[:,[k_theta]]
# We give the current theta to this one !!!
theta_new[k_theta] = distribution.degenerated_params_handler(
X, rk = rk , prev_theta_k = theta_new[k_theta], parameters = distribution.parameters )
clusters_change = 1; # We changed a cluster !
return theta_new, clusters_change
class CDistribution ():
""" This is the distribution object that is given to the program in order to run the algorithm.
The template is as follows !!
Each distribution has a set of parameters, called "theta" in this case.
In the EM we will have "K" clusters, and the distributions are "D" dimensional.
    Theta is given as a list that contains the parameters.
For example for the Gaussian distribution theta = [mus, Sigma]
    Where mus would be a K x D matrix and Sigma another DxD matrix.
    The probabilities or densities have to be given in log() since they could be very small.
    TODO: I want the calling of functions in a way that when called externally,
the parameters are given automatically ?
Right now, simply:
- Store the parameters as a dict in the object
    - Externally when calling the functions, pass them the dict,
the functions should have as input the dict
"""
def __init__(self, name = "Distribution"):
self.name = name;
self.pdf_log = None;
self.pdf_log_K = None;
self.init_theta = None;
self.theta_estimator = None;
self.sampler = None;
## If we are gonna somehow set new parameters for the clusters according
## to some rule, instead of deleting them.
self.degenerated_estimation_handler = None
self.degenerated_params_handler = None
self.check_degenerated_params = None
# Function to use at the end of the iteration to modify the cluster parameters
self.use_chageOfClusters = None
########### Hyperparameters ###############
"""
We can store the hyperparameters in this distribution.
Examples are:
- Parameters for initialization of clusters
- Parameters for singularities
- Number of iterations of Newton for the parameter estimation
- Constraints in the Estimation (Only diagonal Gaussian matrix)
"""
self.parameters = dict()
def set_distribution(self,distribution, parameters = None):
""" We have a set of preconfigured distributions ready to use """
if (type(distribution) != type(None)):
if (distribution == "Watson"):
# For initialization, likelihood and weighted parameter estimamtion
self.init_params = Wad.init_params
self.pdf_log_K = Wad.Watson_K_pdf_log
self.theta_estimator = Wae.get_Watson_muKappa_ML
## For degeneration
self.degenerated_estimation_handler = Wad.degenerated_estimation_handler
self.degenerated_params_handler = Wad.degenerated_params_handler
## For optimization
self.get_Cs_log = Wad.get_Cs_log
## Optional for more complex processing
self.use_chageOfClusters = Wad.avoid_change_sign_centroids
if(type(parameters) == type(None)):
self.parameters["Num_Newton_iterations"] = 5
self.parameters["Allow_negative_kappa"] = "no"
self.parameters["Kappa_min_init"] = 0
self.parameters["Kappa_max_init"] = 100
self.parameters["Kappa_max_singularity"] =1000
self.parameters["Kappa_max_pdf"] = 1000
else:
self.parameters = parameters
elif(distribution == "Gaussian"):
self.init_params = Gad.init_params
self.pdf_log_K = Gad.Gaussian_K_pdf_log
self.theta_estimator = Gae.get_Gaussian_muSigma_ML
## For degeneration
self.degenerated_estimation_handler = Gad.degenerated_estimation_handler
self.degenerated_params_handler = Gad.degenerated_params_handler
## For optimization
self.get_Cs_log = Gad.get_Cs_log
## Optional for more complex processing
self.use_chageOfClusters = None
if(type(parameters) == type(None)):
self.parameters["mu_variance"] = 1
self.parameters["Sigma_min_init"] = 1
self.parameters["Sigma_max_init"] = 15
self.parameters["Sigma_min_singularity"] = 0.1
self.parameters["Sigma_min_pdf"] = 0.1
self.parameters["Sigma"] = "diagonal" # "full", "diagonal"
else:
self.parameters = parameters
elif (distribution == "vonMisesFisher"):
# For initialization, likelihood and weighted parameter estimamtion
self.init_params = vMFd.init_params
self.pdf_log_K = vMFd.vonMisesFisher_K_pdf_log
self.theta_estimator = vMFe.get_vonMissesFisher_muKappa_ML
## For degeneration
self.degenerated_estimation_handler = vMFd.degenerated_estimation_handler
self.degenerated_params_handler = vMFd.degenerated_params_handler
## For optimization
self.get_Cs_log = vMFd.get_Cs_log
## Optional for more complex processing
# self.use_chageOfClusters = Wad.avoid_change_sign_centroids
if(type(parameters) == type(None)):
self.parameters["Num_Newton_iterations"] = 2
self.parameters["Kappa_min_init"] = 0
self.parameters["Kappa_max_init"] = 100
self.parameters["Kappa_max_singularity"] =1000
self.parameters["Kappa_max_pdf"] = 1000
else:
self.parameters = parameters
def set_parameters(self, parameters):
self.parameters = parameters
"""
################################################################################
##################### TEMPLATE FUNCTIONS #######################################
################################################################################
"""
### Functions and shit
def pdf_log_K(X, theta, Cs_log = None):
return None
"""
This function returns the pdf in logarithmic units of the distribution.
    It accepts any number of samples N and any number of sets of parameters K
    (number of clusters) simultaneously. For optimization purposes, if the
normalization constants are precomputed, they can be given as input.
Inputs:
- X: (DxN) numpy array with the samples you want to compute the pdf for.
- theta: This is the set of parameters of all the clusters.
Theta is a list with the set of parameters of each cluster
theta = [theta_1, theta_2,..., theta_K]
The format in which the parameters of each cluster theta_i are
indicated are a design choice of the programmer.
    We recommend using another list for each of the parameters.
For example for the gaussian distribution:
theta_i = [mu_i, Sigma_i]
Where mu_i would be a Dx1 vector and Sigma_i a DxD matrix
- Cs_log: Some distributions have normalization constant that is
expensive to compute, you can optionally give it as input if you
already computed somewhere else to save computation.
Note: If you provide the computing function, this optimization is done
automatically within the implementation.
Outputs:
- log_pdf: numpy 2D array (NxK) with the log(pdf) of each of the samples
    for each of the clusters
"""
def get_Cs_log(theta_k, parameters = None):
return None
"""
This function will compute the normalization constant of a cluster.
    Usually, the normalization constant is a computational bottleneck since it
    could take quite some time to compute. In an iteration of the EM algorithm,
it should only be computed once per cluster. If given, the computation
of such constants can be minimized.
Input:
- theta_k: The parameters of the cluster in the previous iteration.
Output:
- Cs_log: Normalization constant of the cluster
"""
def init_params(X,K, theta_init = None, parameters = None):
return None
"""
    This function initializes the parameters of the K clusters if no initial
    theta has been provided. If an initial theta "theta_init" is specified when
calling the "fit()" function then that initialization will be used instead.
    The minimum parameters to be provided are the number of clusters K
and the dimensionality of the data D.
In order to add expressivity we can use parameters of the dictionary
Inputs:
- K: Number of clusters to initialize
        - X: The input samples to use. We can use them to initialize.
- theta_init: Optional parameter. If we specified externally an initialization
then this theta will bypass the function. It needs to be specified in the interface
to be used internally by the algorithm.
- parameters: Dictionary with parameters that can be useful for the initialization.
It is up to the programmers how to use it. An example of use in the Gaussian Distribution,
is setting the maximum variance that the initialized clusters can have.
Outputs:
- theta_init: This is the set of parameters of all the clusters.
Its format is the same as previously stated.
"""
def theta_estimator(X, rk = None, parameters = None):
return None
"""
This function estimates the parameters of a given cluster k, k =1,...,K
    given the data points X and the responsibility vector rk of the cluster.
Input:
- X: (DxN) numpy array with the samples.
- rk: Nx1 numpy vector with the responsibility of the cluster to each sample
        - parameters: Dictionary with parameters useful for the algorithm.
Output:
- theta_k: The parameters of the cluster. This is distribution dependent
        and its format must be coherent with the rest of the functions.
    Note: It is likely for this function to fail due to degenerate clusters. For example
trying to compute the variance of one point.
    The user should handle such exceptions, catching the possible numerical errors;
    if the computation is not possible, then this function should return None
    and the logic will be handled later.
    A recommended way of doing this is calling try - except XXXXXX
"""
def degenerated_estimation_handler(X, rk , prev_theta_k , parameters = None):
return None
"""
If during the estimation of the parameters there was a numerical error and
the computation is not possible, this function will try to solve the situation.
Some common solutions are to use the previous parameters theta_k of the cluster
    or reinitialize it using other hyperparameters.
Input:
- X: (DxN) numpy array with the samples.
- rk: Nx1 numpy vector with the responsibility of the cluster to each sample
- prev_theta_k: The parameters of the cluster in the previous iteration.
It can be used to replace the current one.
- parameters: Dictionary with hyperparameters if needed in order to
compute the new parameters of the cluster
Output:
- theta_k: The parameters of the cluster. Distribution dependent.
If no recomputation method is possible, then this function
must return "None" in which case the cluster will be later removed.
"""
def degenerated_params_handler(X, rk , prev_theta_k , parameters = None):
return None
"""
    If at some point the obtained parameters of the cluster make it non-feasible
    to compute the pdf of the data, for example because the normalization constant
    is too big or too small, it is inaccurate, or it takes too much time to compute, then
this function will attempt to recompute another set of parameters.
Input:
- X: (DxN) numpy array with the samples.
- rk: Nx1 numpy vector with the responsibility of the cluster to each sample
- prev_theta_k: The parameters of the cluster in the previous iteration.
It can be used to replace the current one.
- parameters: Dictionary with hyperparameters if needed in order to
compute the new parameters of the cluster
Output:
- theta_k: The parameters of the cluster. Distribution dependent.
If no recomputation method is possible, then this function
must return "None" in which case the cluster will be later removed.
"""
def use_chageOfClusters(theta_new, theta_prev):
"""
At the end of each iteration of the EM-algorithm we have updated the cluster parameters.
We might want to do some operation on them depending on this change so this function
will add some more capabilities to the algorithm.
Input:
- theta_new: The newly computed theta
- theta_prev: The previously computed theta
Output:
- theta_new: The modified new parameters.
"""
| [
"[email protected]"
] |