| blob_id (string, 40) | directory_id (string, 40) | path (string, 3-616) | content_id (string, 40) | detected_licenses (sequence, 0-112) | license_type (2 classes) | repo_name (string, 5-115) | snapshot_id (string, 40) | revision_id (string, 40) | branch_name (777 classes) | visit_date (timestamp[us], 2015-08-06 10:31:46 to 2023-09-06 10:44:38) | revision_date (timestamp[us], 1970-01-01 02:38:32 to 2037-05-03 13:00:00) | committer_date (timestamp[us], 1970-01-01 02:38:32 to 2023-09-06 01:08:06) | github_id (int64, 4.92k-681M, nullable) | star_events_count (int64, 0-209k) | fork_events_count (int64, 0-110k) | gha_license_id (22 classes) | gha_event_created_at (timestamp[us], 2012-06-04 01:52:49 to 2023-09-14 21:59:50, nullable) | gha_created_at (timestamp[us], 2008-05-22 07:58:19 to 2023-08-21 12:35:19, nullable) | gha_language (149 classes) | src_encoding (26 classes) | language (1 class) | is_vendor (bool) | is_generated (bool) | length_bytes (int64, 3-10.2M) | extension (188 classes) | content (string, 3-10.2M) | authors (sequence, 1) | author_id (string, 1-132) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0697cfae733c1c85df9bb03a68d69a1b583cc00a | 9ac793d32e70775bb119aaddeb832624e3cf9281 | /strkeyword3.py | 2eb32f55412a855bb78a367f3294cc7ae3f400c3 | [] | no_license | prabhatpal77/Adv-python-polymorphism | 9368311732e1bca9b54e099489c255e3498fbb9b | d68375e4816a746a1ffbffa6d179c50227267feb | refs/heads/master | 2020-07-29T00:41:08.162385 | 2019-09-19T16:35:32 | 2019-09-19T16:35:32 | 209,601,547 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 324 | py | # __init__ magic method with __str__ magic method.
class X:
    def __init__(self, msg):
        self.msg = msg

    def display(self):
        print("welcome")

    def __str__(self):
        return self.msg


x1 = X("prabhat pal")
print(x1)
x1.display()

x2 = X("python")
print(x2)
x2.display()

x3 = X("django")
print(x3)
x3.display()
| [
"[email protected]"
] | |
ce67e22340faa26b9021729066f24d2f809865a6 | 3a84f9b61a21904251236c22aa893d6ca77a6650 | /pyrosim/demos/ludobots/Demo_19_Torque.py | 8baf3731228a475e057c4f798cd2676fd3b7de5b | [] | no_license | davidmatthews1uvm/2020-ALIFE | 8fd58d59c98364ccc8f40f14c6e0c6281d4d44de | bf8321f0112974b26239710ac7f3f42afb34aec8 | refs/heads/master | 2022-11-29T17:23:58.384592 | 2020-07-18T14:21:29 | 2020-07-18T14:21:29 | 272,540,984 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,396 | py | import sys
sys.path.insert(0, '../..')
import pyrosim
import math
ARM_LENGTH = 0.75
ARM_RADIUS = ARM_LENGTH / 10.0
TORQUES = [0.1, 100.0]
SPEEDS = [0.1, 1.0]
# torque is max torque possible, not necessarily the torque used
for torque in TORQUES:
for speed in SPEEDS:
sim = pyrosim.Simulator(eval_time=100)
cyl = sim.send_cylinder(x=0, y=0, z=2.0*ARM_LENGTH,
r1=0, r2=0, r3=1,
length=ARM_LENGTH, radius=ARM_RADIUS)
box = sim.send_box(x=0, y=0, z=1.25*ARM_LENGTH, length=ARM_RADIUS *
7., width=ARM_RADIUS*7.0, height=ARM_RADIUS*7.0)
world_cyl_joint = sim.send_hinge_joint(
first_body_id=-1, second_body_id=cyl,
x=0, y=0, z=2.5*ARM_LENGTH,
n1=1, n2=0, n3=0, lo=-3.14159/2.0, hi=+3.14159/2.0,
torque=torque, speed=speed, position_control=True
)
cyl_box_joint = sim.send_hinge_joint(
first_body_id=cyl, second_body_id=box,
x=0, y=0, z=1.5*ARM_LENGTH)
fneuron = sim.send_user_input_neuron(in_values=1)
mneuron = sim.send_motor_neuron(joint_id=world_cyl_joint)
sim.send_synapse(source_neuron_id=fneuron,
target_neuron_id=mneuron, weight=1.0)
sim.film_body(box, 'track')
sim.start()
sim.wait_to_finish()
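
# The nested loops above sweep all four (torque, speed) combinations, so four
# simulations run back to back, each filming the box through the 'track' camera.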
| [
"[email protected]"
] | |
3cec6f7301e98a820bb832f1da5da84e0a9399f0 | 1e21f0939d4c46db8eeca9fa8ef034ed14b7a549 | /PhotonIDSFs/TnP_76X/test/Signal_fitfunction_Syst_withMVAcut/Signal_fit_systematic_Fit2/fitter_Medium.py | 47b96945ef7a044d056278de2cc37e8440b4f822 | [] | no_license | Ming-Yan/photonTnp | 4e46286998d4e2806e423e2e27893c0a8675494f | 5468bea3eff51b21eed2701cda4f3e5d2ad9e6bf | refs/heads/master | 2021-10-08T20:33:55.910375 | 2018-10-22T09:12:26 | 2018-10-22T09:12:26 | 162,109,988 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 11,325 | py | import FWCore.ParameterSet.Config as cms
from FWCore.ParameterSet.VarParsing import VarParsing
#import PhysicsTools.TagAndProbe.signalFitFunction.commonFitMedium_S as common
options = VarParsing('analysis')
options.register(
"isMC",
False,
VarParsing.multiplicity.singleton,
VarParsing.varType.bool,
"Compute MC efficiencies"
)
options.register(
"inputFileName",
#"/afs/cern.ch/work/i/ishvetso/public/for_Matteo/TnPTree_mc-powheg.root",
"/data2/pwang/TnP/76X_v2/TnP_Data.root",
#"TnPTree_mc.root",
VarParsing.multiplicity.singleton,
VarParsing.varType.string,
"Input filename"
)
options.register(
"outputFileName",
"Signal",
VarParsing.multiplicity.singleton,
VarParsing.varType.string,
"Output filename"
)
options.register(
"idName",
"passingMedium",
#"passingTrigWP90",
VarParsing.multiplicity.singleton,
VarParsing.varType.string,
"ID variable name as in the fitter_tree"
)
options.register(
"dirName",
"PhotonToRECO",
VarParsing.multiplicity.singleton,
VarParsing.varType.string,
"Folder name containing the fitter_tree"
)
options.register(
"doCutAndCount",
False,
VarParsing.multiplicity.singleton,
VarParsing.varType.bool,
"Perform cut and count efficiency measurement"
)
options.parseArguments()
process = cms.Process("TagProbe")
process.source = cms.Source("EmptySource")
process.maxEvents = cms.untracked.PSet( input = cms.untracked.int32(1) )
process.load("FWCore.MessageService.MessageLogger_cfi")
process.MessageLogger.destinations = ['cout', 'cerr']
process.MessageLogger.cerr.FwkReport.reportEvery = 1000
################################################
InputFileName = options.inputFileName
OutputFile = "efficiency-mc-"+options.idName
if (not options.isMC):
OutputFile = "efficiency-data-"+options.idName
if (options.outputFileName != ""):
OutputFile = OutputFile+"-"+options.outputFileName+".root"
else:
OutputFile = OutputFile+".root"
################################################
EfficiencyBins = cms.PSet(
#probe_Ele_eta = cms.vdouble( -2.5, -1.566, -1.4442, -0.8, 0.0, 0.8, 1.4442, 1.566, 2.5 ),
#probe_Ele_pt = cms.vdouble(15., 25., 35., 45., 55., 5000.),
probe_sc_eta = cms.vdouble(-2.5,-1.566,-1.4442,-1.0,0.0, 1.0, 1.4442, 1.566, 2.5),
probe_sc_et = cms.vdouble(20. ,30, 40. ,50., 200.),
)
EfficiencyBinningSpecification = cms.PSet(
#UnbinnedVariables = cms.vstring("mass", "totWeight"),
UnbinnedVariables = cms.vstring("mass"),
BinnedVariables = cms.PSet(EfficiencyBins,
mcTrue = cms.vstring("true")
),
BinToPDFmap = cms.vstring("pdfSignalPlusBackground")
)
if (not options.isMC):
EfficiencyBinningSpecification.UnbinnedVariables = cms.vstring("mass")
EfficiencyBinningSpecification.BinnedVariables = cms.PSet(EfficiencyBins)
mcTruthModules = cms.PSet()
if (options.isMC):
setattr(mcTruthModules, "MCtruth_" + options.idName, cms.PSet(EfficiencyBinningSpecification))
setattr(getattr(mcTruthModules, "MCtruth_" + options.idName), "EfficiencyCategoryAndState", cms.vstring(options.idName, "pass"))
############################################################################################
process.TnPMeasurement = cms.EDAnalyzer("TagProbeFitTreeAnalyzer",
InputFileNames = cms.vstring(InputFileName),
InputDirectoryName = cms.string(options.dirName),
InputTreeName = cms.string("fitter_tree"),
OutputFileName = cms.string(OutputFile),
NumCPU = cms.uint32(1),
SaveWorkspace = cms.bool(False), #VERY TIME CONSUMING FOR MC
doCutAndCount = cms.bool(options.doCutAndCount),
floatShapeParameters = cms.bool(True),
binnedFit = cms.bool(True),
binsForFit = cms.uint32(60),
WeightVariable = cms.string("totWeight"),
#fixVars = cms.vstring("meanP", "meanF", "sigmaP", "sigmaF", "sigmaP_2", "sigmaF_2"),
# defines all the real variables of the probes available in the input tree and intended for use in the efficiencies
Variables = cms.PSet(mass = cms.vstring("Tag-Probe Mass", "60.0", "120.0", "GeV/c^{2}"),
#probe_Ele_et = cms.vstring("Probe E_{T}", "0", "1000", "GeV/c"),
probe_sc_eta = cms.vstring("Probe #eta", "-2.5", "2.5", ""),
#totWeight = cms.vstring("totWeight", "-1000000000", "100000000", ""),
#probe_Ele_e = cms.vstring("probe_Ele_e", "0", "1000", ""),
probe_sc_et = cms.vstring("probe_Ele_et", "0", "1000", ""),
#probe_Ele_trigMVA = cms.vstring("probe_Ele_trigMVA", "-1", "1", ""),
#passingTrigWP90 = cms.vstring("passingTrigWP90", "-1", "1", ""),
),
# defines all the discrete variables of the probes available in the input tree and intended for use in the efficiency calculations
Categories = cms.PSet(),
#Expressions = cms.PSet(myeop = cms.vstring("myeop", "probe_Ele_e/probe_Ele_pt", "probe_Ele_e", "probe_Ele_pt")
# ),
Cuts = cms.PSet(mvacut = cms.vstring("tag_Pho_mva","0.90","above"),
tagEt = cms.vstring("tag_Pho_et","30","above") ###new
#fakeEoPCut = cms.vstring("myeop", "2.", "above")
),
# defines all the PDFs that will be available for the efficiency calculations;
# uses RooFit's "factory" syntax;
# each pdf needs to define "signal", "backgroundPass", "backgroundFail" pdfs, "efficiency[0.9,0,1]"
# and "signalFractionInPassing[0.9]" are used for initial values
PDFs = cms.PSet(pdfSignalPlusBackground = cms.vstring(
"RooCBExGaussShape::signalResPass(mass,meanP[-0.0,-10.000,10.000],sigmaP[0.956,0.00,20.000],alphaP[1.0, 0.9,1.5],nP[1.8,1.5,2.500],sigmaP_2[1.000,0.500,40.00])",
"RooCBExGaussShape::signalResFail(mass,meanF[-0.0,-20.000,10.000],sigmaF[1,0.00,30.000],alphaF[0.2, 0.15,1.0],nF[1.7,1.5,2.5],sigmaF_2[1.675,0.100,40.000])",
"ZGeneratorLineShape::signalPhy(mass)",
"RooCMSShape::backgroundPass(mass, alphaPass[60.,50.,70.], betaPass[0.001, 0.,0.1], gammaPass[0.1, 0, 1], peakPass[90.0])",
"RooCMSShape::backgroundFail(mass, alphaFail[60.,50.,70.], betaFail[0.001, 0.,0.1], gammaFail[0.1, 0, 1], peakFail[90.0])",
"FCONV::signalPass(mass, signalPhy, signalResPass)",
"FCONV::signalFail(mass, signalPhy, signalResFail)",
"efficiency[0.5,0,1]",
"signalFractionInPassing[1.0]"
),
),
# defines a set of efficiency calculations, what PDF to use for fitting and how to bin the data;
# there will be a separate output directory for each calculation that includes a simultaneous fit, side band subtraction and counting.
Efficiencies = cms.PSet(mcTruthModules)
)
setattr(process.TnPMeasurement.Categories, options.idName, cms.vstring(options.idName, "dummy[pass=1,fail=0]"))
setattr(process.TnPMeasurement.Categories, "mcTrue", cms.vstring("MC true", "dummy[true=1,false=0]"))
if (not options.isMC):
delattr(process.TnPMeasurement, "WeightVariable")
process.TnPMeasurement.Variables = cms.PSet(
mass = cms.vstring("Tag-Probe Mass", "60.0", "120.0", "GeV/c^{2}"),
probe_sc_et = cms.vstring("Probe E_{T}", "20", "1000", "GeV/c"),
probe_sc_eta = cms.vstring("Probe #eta", "-2.5", "2.5", ""),
event_met_pfmet = cms.vstring("event_met_pfmet", "0", "100000000", "GeV"),
#event_met_phi = cms.vstring("event_met_phi", "-10", "10", ""),
#tag_Pho_phi = cms.vstring("tag_Pho_phi", "-10", "10", ""),
###SJ
tag_Pho_et = cms.vstring("Tag E_{T}", "20", "1000", "GeV/c"),
tag_Pho_mva = cms.vstring("Tag MVA", "-1.5", "1.5", "GeV/c")
###SJ
#event_met_pfsumet = cms.vstring("event_met_pfsumet", "0", "1000", ""),
)
for pdf in process.TnPMeasurement.PDFs.__dict__:
param = process.TnPMeasurement.PDFs.getParameter(pdf)
if (type(param) is not cms.vstring):
continue
for i, l in enumerate(getattr(process.TnPMeasurement.PDFs, pdf)):
if l.find("signalFractionInPassing") != -1:
getattr(process.TnPMeasurement.PDFs, pdf)[i] = l.replace("[1.0]","[0.5,0.,1.]")
setattr(process.TnPMeasurement.Efficiencies, options.idName, EfficiencyBinningSpecification)
setattr(getattr(process.TnPMeasurement.Efficiencies, options.idName) , "EfficiencyCategoryAndState", cms.vstring(options.idName, "pass"))
else:
for pdf in process.TnPMeasurement.PDFs.__dict__:
param = process.TnPMeasurement.PDFs.getParameter(pdf)
if (type(param) is not cms.vstring):
continue
for i, l in enumerate(getattr(process.TnPMeasurement.PDFs, pdf)):
if l.find("backgroundPass") != -1:
getattr(process.TnPMeasurement.PDFs, pdf)[i] = "RooPolynomial::backgroundPass(mass, a[0.0])"
if l.find("backgroundFail") != -1:
getattr(process.TnPMeasurement.PDFs, pdf)[i] = "RooPolynomial::backgroundFail(mass, a[0.0])"
process.fit = cms.Path(
process.TnPMeasurement
)
| [
"[email protected]"
] | |
74f5cb59d17df02e4542da48b8d5020b5be8d921 | 6c3bb7feea3b3b029fe65de11954aee778ac3578 | /sorting algorithms/radix_sort_imp.py | 431a39dfefbe0d32954ecf059fd96db94d1d466e | [
"Unlicense"
] | permissive | mkoryor/Python | 72ebb2201c7f4887e023f541509da7e2c6fab5d5 | 837ec4c03130dc4cb919fb5f1eeb4d31206790e4 | refs/heads/master | 2023-05-04T13:00:09.106811 | 2021-05-11T03:06:04 | 2021-05-11T03:06:04 | 114,468,023 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,220 | py |
class RadixSort(object):

    def sort(self, array, base=10):
        """LSD (least-significant-digit) radix sort for non-negative integers.

        Note: the number of passes is taken from the decimal string length of
        the maximum element, so it assumes the default base of 10.
        """
        if array is None:
            raise TypeError('array cannot be None')
        if not array:
            return []
        max_element = max(array)
        max_digits = len(str(abs(max_element)))
        curr_array = array
        for digit in range(max_digits):
            # Distribute items into buckets keyed by the current digit, then
            # concatenate the buckets back into one stably ordered list.
            buckets = [[] for _ in range(base)]
            for item in curr_array:
                buckets[(item // (base ** digit)) % base].append(item)
            curr_array = []
            for bucket in buckets:
                curr_array.extend(bucket)
        return curr_array
import unittest
class TestRadixSort(unittest.TestCase):
def test_sort(self):
radix_sort = RadixSort()
self.assertRaises(TypeError, radix_sort.sort, None)
self.assertEqual(radix_sort.sort([]), [])
array = [128, 256, 164, 8, 2, 148, 212, 242, 244]
expected = [2, 8, 128, 148, 164, 212, 242, 244, 256]
self.assertEqual(radix_sort.sort(array), expected)
print('Success: test_sort')
def main():
test = TestRadixSort()
test.test_sort()
if __name__ == '__main__':
main()
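
# Quick illustrative check (hypothetical values, separate from the unit test):
#   RadixSort().sort([170, 45, 75, 90, 802, 24, 2, 66])
#   -> [2, 24, 45, 66, 75, 90, 170, 802]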
| [
"[email protected]"
] | |
aa87dbc1381cc510f4fd0e691ac80df795db118b | a7b592be95dc2af9fdb56725f44e98cc59166e6f | /apps/common/biz_utils/utils_dictwrapper.py | a1f43066b827d3f81de5ed1bcdb7c7bd4e4e747f | [] | no_license | cash2one/CRM-3 | bc864c462d155b5dc6a51a5edbd564574b3e2f94 | cedcaeb397ccadb36952534242bd296c5b4513bb | refs/heads/master | 2021-06-14T07:40:53.572013 | 2017-05-23T15:52:06 | 2017-05-23T15:52:06 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,712 | py | # coding=UTF-8
import re
RPT_PATTERN = re.compile(r'^([a-z]+)([1-9]{1}|1[0-5]{1})$')
class DictWrapper(dict):
def __getattr__(self, name):
try:
return super(DictWrapper, self).__getitem__(name)
except KeyError:
raise AttributeError("key %s not found" % name)
def __setattr__(self, name, value):
super(DictWrapper, self).__setitem__(name, value)
def __delattr__(self, name):
super(DictWrapper, self).__delitem__(name)
def hasattr(self, name):
return name in self
@classmethod
def load_dict(cls, org_data):
"""支持将嵌套的dict转成wrapper, e.g.:
test_dict = {'a':{'b':1,'c':[2,{'e':3}],'f':{'g':4}}}
ss = DictWrapper.load_dict(test_dict)
print ss.a.c[0].e
print ss.a.b
"""
if isinstance(org_data, dict):
dr = {}
for k,v in org_data.items():
dr.update({k:cls.load_dict(v)})
return cls(dr)
elif isinstance(org_data, (list, tuple)):
return [cls.load_dict(i) for i in org_data]
else:
return org_data
class KeywordGlobal(DictWrapper):
def __init__(self, g_pv = 0, g_click = 0, g_competition = 0, g_cpc = 0, g_coverage = 0, g_roi = 0, g_paycount = 0):
self.g_pv = g_pv
self.g_click = g_click
self.g_competition = g_competition
self.g_cpc = g_cpc
self.g_coverage = g_coverage
self.g_roi = g_roi
self.g_paycount = g_paycount
@property
def g_ctr(self):
        '''Return the network-wide click-through rate (CTR), in percent.'''
if self.g_click and self.g_pv:
return self.g_click * 100.0 / self.g_pv
return 0.00
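
# Minimal usage sketch (illustrative values, not from the original module):
#   kw = KeywordGlobal(g_pv=1000, g_click=25)
#   kw.g_ctr  -> 2.5 (percent)
#   DictWrapper.load_dict({'a': {'c': [2, {'e': 3}]}}).a.c[1].e  -> 3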
| [
"[email protected]"
] | |
81041ed0d6f75bedeba160908e717643a6fc408b | 66fc0b6f603285f32544b90d6562a7f66e341abf | /parser.py | 4638524e101de45969e96a083740910b38aabc11 | [] | no_license | openelections/openelections-data-hi | f4257cab929954218f111f9eab0d38568c41cd30 | 7b56c5ddd5448c4f62fc2ae30d0eadcad609bd19 | refs/heads/master | 2023-05-02T22:43:42.495617 | 2023-04-19T00:00:56 | 2023-04-19T00:00:56 | 96,584,468 | 0 | 5 | null | 2023-04-19T00:00:57 | 2017-07-07T23:54:10 | Python | UTF-8 | Python | false | false | 3,383 | py | # -*- coding: utf-8 -*-
import csv
import requests
OFFICES = ['President and Vice President', 'Governor', 'U.S. Representative', 'State Senator', 'State Representative', 'Lieutenant Governor', 'U.S. Senator']
precinct_file = open("precincts.txt", "rt")
csvfile = csv.DictReader(precinct_file, delimiter=',')
precincts = list(csvfile)
def general():
results = []
url = "https://elections.hawaii.gov/wp-content/results/media.txt"
r = requests.get(url)
decoded_content = r.text
reader = csv.DictReader(decoded_content.splitlines())
for row in reader:
county = next((p['COUNTY'] for p in precincts if row['Precinct_Name'] == p['PRECINCT']), None)
office = row['Contest_title']
if 'Dist' in office:
office, district = office.split(', Dist ')
if district == 'I':
district = "1"
elif district == 'I Vacancy':
district = "1 Unexpired"
elif district == 'II':
district = "2"
else:
district = None
party = row['Choice_party']
votes = int(row['Absentee_votes']) + int(row['Early_votes']) + int(row['Election_Votes'])
results.append([county, row['Precinct_Name'], office, district, party, row['Candidate_name'], row['Absentee_votes'], row['Early_votes'], row['Election_Votes'], votes])
with open('2020/20201103__hi__general__precinct.csv','wt') as csvfile:
csvwriter = csv.writer(csvfile)
csvwriter.writerow(['county','precinct', 'office', 'district', 'party', 'candidate', 'absentee', 'early_votes', 'election_day', 'votes'])
csvwriter.writerows(results)
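
# Fields consumed from each media.txt row above (as named in its CSV header):
# Precinct_Name, Contest_title, Choice_party, Candidate_name,
# Absentee_votes, Early_votes, Election_Votes.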
def primary():
results = []
url = "https://elections.hawaii.gov/wp-content/results/media.txt"
r = requests.get(url)
decoded_content = r.text
reader = csv.DictReader(decoded_content.splitlines(), delimiter=',', quotechar='"')
for row in reader:
if any(x in row['Contest_title'] for x in OFFICES):
county = next((p['COUNTY'] for p in precincts if row['Precinct_Name'] == p['PRECINCT']), None)
if row['Contest_title'] == 'SELECT A PARTY':
office = 'Straight Party'
party = None
else:
office, party = row['Contest_title'].split(' - ')
if 'Dist' in office:
office, district = office.split(', Dist ')
if district == 'I':
district = "1"
elif district == 'I Vacancy':
district = "1 Unexpired"
elif district == 'II':
district = "2"
else:
district = None
votes = int(row['Absentee_votes']) + int(row['Early_votes']) + int(row['Election_Votes'])
results.append([county, row['Precinct_Name'], office, district, party, row['Candidate_name'], row['Absentee_votes'], row['Early_votes'], row['Election_Votes'], votes])
with open('2018/20180811__hi__primary__precinct.csv','w') as csvfile:
csvwriter = csv.writer(csvfile, quoting=csv.QUOTE_NONNUMERIC)
csvwriter.writerow(['county','precinct', 'office', 'district', 'party', 'candidate', 'absentee', 'early_votes', 'election_day', 'votes'])
csvwriter.writerows(results)
if __name__ == "__main__":
# general()
primary()
| [
"[email protected]"
] | |
dba8bef202bdd565edd4a902adbacc05ed643e9a | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02609/s490665806.py | a8dbceee4a95774d3a0ccfe52e582fddc2f1fdeb | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 518 | py | def f(n):
    # n here is already the value after the first "mod popcount" operation,
    # so the count starts at 1.
    ans = 1
    while n != 0:
        ans += 1
        n %= bin(n).count("1")
    return ans

# AtCoder p02609: X is an n-bit binary string. For each bit i, flip it and
# count how many applications of "m -> m mod popcount(m)" reach 0.
n = int(input())
x = input()
o = x.count("1")  # popcount of the original X

if o == 0:
    # All zeros: flipping any bit yields popcount 1, and m mod 1 == 0 in one step.
    exit(print(*[1] * n))
if o == 1:
    # Special case: o - 1 == 0, so "mod (o - 1)" below would be undefined.
    # Flipping the single 1 gives 0 (answer 0); flipping a 0 depends on where
    # the original 1 sits (parity of the flipped value).
    if x[-1] == "1":
        ans = [2] * n
        ans[-1] = 0
    else:
        ans = [1] * n
        ans[-1] = 2
        ans[x.index("1")] = 0
    exit(print(*ans))

# Precompute X mod (o + 1) and X mod (o - 1): flipping a 0 raises the
# popcount to o + 1, flipping a 1 lowers it to o - 1.
mo = 0  # X mod (o + 1)
mz = 0  # X mod (o - 1)
for i in range(n):
    if x[n - i - 1] == "1":
        mo = (pow(2, i, o + 1) + mo) % (o + 1)
        mz = (pow(2, i, o - 1) + mz) % (o - 1)

for i in range(n):
    if x[i] == "1":
        m = (mz - pow(2, n - i - 1, o - 1)) % (o - 1)
    else:
        m = (mo + pow(2, n - i - 1, o + 1)) % (o + 1)
    print(f(m))
"[email protected]"
] | |
455a1ea7e2943449754c8994accb0933e72d7732 | bd211803ddb664c2ba937abdb14dd8a34429e999 | /kokkuvote/migrations/0001_initial.py | e128e6762fca91d41c8bbc76ae2e64598c981041 | [] | no_license | alvarantson/emartauto | f8055257966964c75363bfed881f861c411dbf9d | c81fd15e509ac85f22c7a6249cecda040bbf78ff | refs/heads/master | 2022-02-21T19:13:15.757053 | 2022-02-06T14:50:29 | 2022-02-06T14:50:29 | 218,960,558 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 491 | py | # Generated by Django 2.2.9 on 2020-04-17 11:55
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='google_link',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('link', models.CharField(max_length=999)),
],
),
]
| [
"[email protected]"
] | |
054492b4c0a901a426a9aa22ee5e584cbee95884 | 2c4763aa544344a3a615f9a65d1ded7d0f59ae50 | /playground/cfg_cache/wscript | 9303a0b4fe3444276207a25f55d371502561e5f1 | [] | no_license | afeldman/waf | 572bf95d6b11571bbb2941ba0fe463402b1e39f3 | 4c489b38fe1520ec1bc0fa7e1521f7129c20f8b6 | refs/heads/master | 2021-05-09T18:18:16.598191 | 2019-03-05T06:33:42 | 2019-03-05T06:33:42 | 58,713,085 | 0 | 0 | null | 2016-05-13T07:34:33 | 2016-05-13T07:34:33 | null | UTF-8 | Python | false | false | 401 | #! /usr/bin/env python
"""
compare the execution time of
waf configure
and
waf configure --confcache
"""
top = '.'
out = 'build'
def options(opt):
opt.load('compiler_c')
opt.add_option('--confcache', dest='confcache', default=0, action='count', help='Use a configuration cache')
def configure(conf):
conf.load('compiler_c')
conf.check(fragment='int main() { return 0; }')
| [
"[email protected]"
] | ||
ca8707be7abdfa925b8ebdeabf5485b6943a3066 | 6545714ada44ce8a3bc3a55dfc9abb3ea9282c05 | /code/figures/si/figS0X_histograms.py | cb07599ef7ab19959aacb74dddf6909977cae293 | [
"MIT",
"CC-BY-4.0",
"CC-BY-3.0"
] | permissive | RPGroup-PBoC/bursty_transcription | 88b8e30f1fa05b2a57319aa73ab22c3e51f4cb3a | cd3082c567168dfad12c08621976ea49d6706f89 | refs/heads/master | 2023-02-08T07:19:13.599192 | 2020-12-15T22:27:44 | 2020-12-15T22:27:44 | 229,149,541 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,444 | py | # %%
import enum
import re
import dill
from git import Repo #for directory convenience
import numpy as np
import pandas as pd
import emcee
import arviz as az
import matplotlib.pyplot as plt
import seaborn as sns
import bebi103.viz
import srep
srep.viz.plotting_style()
pboc_colors = srep.viz.color_selector('pboc')
# %%
fig, ax = plt.subplots(4, 3, figsize=(8.5, 10), sharex=False, sharey=False)
# # Modify tick font size
# for a in ax:
# a.tick_params(axis="both", which="major", labelsize=8)
repo = Repo("./", search_parent_directories=True)
# repo_rootdir holds the absolute path to the top-level of our repo
repo_rootdir = repo.working_tree_dir
# Select PBoC color palette
colors = srep.viz.color_selector('pboc')
# Set PBoC plotting style
srep.viz.plotting_style()
# load in the pickled samples
pklfile = open(
f"{repo_rootdir}/data/mcmc_samples/repression_pooled_expts.pkl", 'rb'
)
model, sampler, ppc_uv5, ppc_rep = dill.load(pklfile)
pklfile.close()
inf_dat = az.convert_to_inference_data(
sampler, var_names=model.var_labels
)
data_uv5, data_rep = srep.utils.condense_data(model.expts)
n_dim = np.shape(model.var_labels)
# Define operators
op_array = ["Oid", "O1", "O2"]
# Define aTc concentrations
aTc_array = ["0p5ngmL", "1ngmL", "2ngmL", "10ngmL"]
# Set global colors for aTc concentrations
aTc_colors = ('blue', 'betancourt', 'green', 'orange')
aTc_col_dict = dict(zip(aTc_array , aTc_colors))
# organize all the options upfront
all_expts = (
("Oid_2ngmL", "Oid_1ngmL"),
("O1_1ngmL", "O1_2ngmL", "O1_10ngmL"),
("O2_0p5ngmL", "O2_1ngmL", "O2_2ngmL", "O2_10ngmL")
)
# Loop through operators concentrations
for op_idx, op in enumerate(op_array):
# List experiments available for operator
op_exp = all_expts[op_idx]
# Loop through aTc concentrations
for aTc_idx, aTc in enumerate(aTc_array):
# Define aTc concentration color
col = aTc_col_dict[aTc]
color = srep.viz.bebi103_colors()[col]
# Define experiment
expt = f"{op}_{aTc}"
# Add operator top of colums
if aTc_idx == 0:
label = f"operator {op}"
ax[aTc_idx, op_idx].set_title(label, bbox=dict(facecolor="#ffedce"))
# Add aTc concentration to right plots
if op_idx == 2:
# Generate twin axis
axtwin = ax[aTc_idx, op_idx].twinx()
# Remove ticks
axtwin.get_yaxis().set_ticks([])
# Fix label
label = expt.split("_")[1]
label = label.replace("ngmL", " ng/mL")
label = label.replace("0p5", "0.5")
# Set label
axtwin.set_ylabel(
f"[aTc] {label}",
bbox=dict(facecolor="#ffedce"),
)
# Remove residual ticks from the original left axis
ax[aTc_idx, op_idx].tick_params(color="w", width=0)
# Add ylabel to left plots
# if op_idx == 0:
# ax[aTc_idx, op_idx].set_ylabel("probability")
# Check if experiment exists, if not, skip experiment
if expt not in op_exp:
ax[aTc_idx, op_idx].set_facecolor("#D3D3D3")
ax[aTc_idx, op_idx].tick_params(axis='x', colors='white')
ax[aTc_idx, op_idx].tick_params(axis='y', colors='white')
continue
# Find experiment index
expt_idx = model.expts.index(expt)
# Extract PPC samples and unpack them to raw format
ppc_samples = srep.utils.uncondense_ppc(ppc_rep[expt_idx])
# Define bins in histogram
bins = np.arange(0, ppc_samples.max() + 1)
# Initialize matrix to save histograms
hist_mat = np.zeros([ppc_samples.shape[0], len(bins) - 1])
# Loop through each ppc sample and compute histogram
for s_idx, s in enumerate(ppc_samples):
hist_mat[s_idx] = np.histogram(s, bins=bins, density=True)[0]
# Find percentiles to be plot
lower_tile = np.percentile(hist_mat, 2.5, axis=0)
upper_tile = np.percentile(hist_mat, 97.5, axis=0)
mid_tile = np.percentile(hist_mat, 50, axis=0)
# Extract data
expt_data = srep.utils.uncondense_valuescounts(data_rep[expt_idx])
# Compute histogram for data
hist_data = np.histogram(expt_data, bins=bins, density=True)[0]
# Plot predicted histogram with percentiles
# 95% percentile
ax[aTc_idx, op_idx].fill_between(
bins[:-1],
lower_tile,
upper_tile,
step="post",
edgecolor=color[0],
color=color[0]
)
# median
ax[aTc_idx, op_idx].step(
bins[:-1],
mid_tile,
where="post",
color=color[-1]
)
# add data on top
ax[aTc_idx, op_idx].step(
bins[:-1],
hist_data,
where="post",
color="black",
linewidth=1.25
)
# Set x-label
ax[aTc_idx, op_idx].set_xlabel("mRNA / cell")
ax[aTc_idx, op_idx].set_ylabel("probability")
# Set axis limit
upper_limit = np.where(hist_data > 5E-3)[0][-1]
ax[aTc_idx, op_idx].set_xlim(0, upper_limit)
# Adjust spacing between plots
plt.subplots_adjust(hspace=0.3, wspace=0.4)
plt.savefig(
f"{repo_rootdir}/figures/si/figS0X_histograms.pdf", bbox_inches='tight'
)
# %%
| [
"[email protected]"
] | |
93f6e5d8bb3b2467e9215ebface9f1f79610489f | 18c1cbda3f9f6ca9cc9a27e93ddfece583c4fe43 | /projects/DensePose/densepose/config.py | d4366b11a115d5d9673008dd2df9cdda193fda82 | [
"Apache-2.0"
] | permissive | zzzzzz0407/detectron2 | 0bd8e5def65eb72bc9477f08f8907958d9fd73a1 | 021fc5b1502bbba54e4714735736898803835ab0 | refs/heads/master | 2022-12-04T14:25:36.986566 | 2020-08-26T10:39:30 | 2020-08-26T10:39:30 | 276,800,695 | 1 | 0 | Apache-2.0 | 2020-07-03T03:42:26 | 2020-07-03T03:42:25 | null | UTF-8 | Python | false | false | 3,394 | py | # -*- coding = utf-8 -*-
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from detectron2.config import CfgNode as CN
def add_dataset_category_config(cfg: CN):
"""
Add config for additional category-related dataset options
- category whitelisting
- category mapping
"""
_C = cfg
_C.DATASETS.CATEGORY_MAPS = CN(new_allowed=True)
_C.DATASETS.WHITELISTED_CATEGORIES = CN(new_allowed=True)
def add_densepose_config(cfg: CN):
"""
Add config for densepose head.
"""
_C = cfg
_C.MODEL.DENSEPOSE_ON = True
_C.MODEL.ROI_DENSEPOSE_HEAD = CN()
_C.MODEL.ROI_DENSEPOSE_HEAD.NAME = ""
_C.MODEL.ROI_DENSEPOSE_HEAD.NUM_STACKED_CONVS = 8
# Number of parts used for point labels
_C.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES = 24
_C.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL = 4
_C.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_DIM = 512
_C.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_KERNEL = 3
_C.MODEL.ROI_DENSEPOSE_HEAD.UP_SCALE = 2
_C.MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE = 112
_C.MODEL.ROI_DENSEPOSE_HEAD.POOLER_TYPE = "ROIAlignV2"
_C.MODEL.ROI_DENSEPOSE_HEAD.POOLER_RESOLUTION = 28
_C.MODEL.ROI_DENSEPOSE_HEAD.POOLER_SAMPLING_RATIO = 2
_C.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS = 2 # 15 or 2
# Overlap threshold for an RoI to be considered foreground (if >= FG_IOU_THRESHOLD)
_C.MODEL.ROI_DENSEPOSE_HEAD.FG_IOU_THRESHOLD = 0.7
# Loss weights for annotation masks.(14 Parts)
_C.MODEL.ROI_DENSEPOSE_HEAD.INDEX_WEIGHTS = 5.0
# Loss weights for surface parts. (24 Parts)
_C.MODEL.ROI_DENSEPOSE_HEAD.PART_WEIGHTS = 1.0
# Loss weights for UV regression.
_C.MODEL.ROI_DENSEPOSE_HEAD.POINT_REGRESSION_WEIGHTS = 0.01
# Coarse segmentation is trained using instance segmentation task data
_C.MODEL.ROI_DENSEPOSE_HEAD.COARSE_SEGM_TRAINED_BY_MASKS = False
# For Decoder
_C.MODEL.ROI_DENSEPOSE_HEAD.DECODER_ON = True
_C.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NUM_CLASSES = 256
_C.MODEL.ROI_DENSEPOSE_HEAD.DECODER_CONV_DIMS = 256
_C.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NORM = ""
_C.MODEL.ROI_DENSEPOSE_HEAD.DECODER_COMMON_STRIDE = 4
# For DeepLab head
_C.MODEL.ROI_DENSEPOSE_HEAD.DEEPLAB = CN()
_C.MODEL.ROI_DENSEPOSE_HEAD.DEEPLAB.NORM = "GN"
_C.MODEL.ROI_DENSEPOSE_HEAD.DEEPLAB.NONLOCAL_ON = 0
# Confidences
# Enable learning UV confidences (variances) along with the actual values
_C.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE = CN({"ENABLED": False})
# UV confidence lower bound
_C.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE.EPSILON = 0.01
# Enable learning segmentation confidences (variances) along with the actual values
_C.MODEL.ROI_DENSEPOSE_HEAD.SEGM_CONFIDENCE = CN({"ENABLED": False})
# Segmentation confidence lower bound
_C.MODEL.ROI_DENSEPOSE_HEAD.SEGM_CONFIDENCE.EPSILON = 0.01
# Statistical model type for confidence learning, possible values:
# - "iid_iso": statistically independent identically distributed residuals
# with isotropic covariance
# - "indep_aniso": statistically independent residuals with anisotropic
# covariances
_C.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE.TYPE = "iid_iso"
# List of angles for rotation in data augmentation during training
_C.INPUT.ROTATION_ANGLES = [0]
_C.TEST.AUG.ROTATION_ANGLES = () # Rotation TTA
| [
"[email protected]"
] | |
2d208e527b45d814669dbc1dcaafd017efa974cf | 2af6a5c2d33e2046a1d25ae9dd66d349d3833940 | /res_bw/scripts/client/fx/events/setorbitorpoint.py | ce1f72d4a16ab75244a47fce7c26186c59c9120e | [] | no_license | webiumsk/WOT-0.9.12-CT | e6c8b5bb106fad71b5c3056ada59fb1aebc5f2b2 | 2506e34bd6634ad500b6501f4ed4f04af3f43fa0 | refs/heads/master | 2021-01-10T01:38:38.080814 | 2015-11-11T00:08:04 | 2015-11-11T00:08:04 | 45,803,240 | 0 | 0 | null | null | null | null | WINDOWS-1250 | Python | false | false | 1,334 | py | # 2015.11.10 21:32:17 Střední Evropa (běžný čas)
# Embedded file name: scripts/client/FX/Events/SetOrbitorPoint.py
from FX import s_sectionProcessors
from ParticleSubSystem import *
import Pixie
from bwdebug import *
class SetOrbitorPoint(ParticleSubSystem):
"""
This class implements an event that sets the world location of an orbitor
to the position of the Effect source when the effect is started.
"""
def __init__(self):
ParticleSubSystem.__init__(self)
def isInteresting(self, subSystem):
act = subSystem.action(ORBITOR_PSA)
return act != None
def setOrbitorPoint(self, actor, source, target, subSystem):
try:
act = subSystem.action(ORBITOR_PSA)
act.point = source.position
except:
ERROR_MSG('setOrbitorPoint has a problem with finding the position of the source object', source)
def go(self, effect, actor, source, target, **kargs):
self.subSystemIterate(actor, source, target, self.setOrbitorPoint)
return 0.0
s_sectionProcessors['SetOrbitorPoint'] = SetOrbitorPoint
# okay decompyling c:\Users\PC\wotsources\files\originals\res_bw\scripts\client\fx\events\setorbitorpoint.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify failed
# 2015.11.10 21:32:17 Střední Evropa (běžný čas)
| [
"[email protected]"
] | |
e13e7d92bfdb8255a0c0eccbbcacd0f29a374af3 | 999ed80db247794159be1d752bc6f0fc272bd117 | /spytest/spytest/env.py | 9daccb6af3427a3131e4466266a4203d71bb5e5e | [
"LicenseRef-scancode-generic-cla",
"Apache-2.0"
] | permissive | ramakristipati/sonic-mgmt | 7fee876412f0121da96d751f7d199690c73496f3 | a86f0e5b1742d01b8d8a28a537f79bf608955695 | refs/heads/master | 2023-08-31T07:55:38.446663 | 2023-08-31T06:34:53 | 2023-08-31T06:34:53 | 315,448,103 | 2 | 0 | NOASSERTION | 2020-11-23T21:44:07 | 2020-11-23T21:44:07 | null | UTF-8 | Python | false | false | 6,803 | py | import os
max_buckets = 32
defaults = {
"SPYTEST_ONIE_FAIL_ON_NORMAL_PROMPT": "1",
"SPYTEST_LOGS_TIME_FMT_ELAPSED": "0",
"SPYTEST_NO_CONSOLE_LOG": "0",
"SPYTEST_PROMPTS_FILENAME": None,
"SPYTEST_TEXTFSM_INDEX_FILENAME": None,
"SPYTEST_UI_POSITIVE_CASES_ONLY": "0",
"SPYTEST_REPEAT_MODULE_SUPPORT": "0",
"SPYTEST_FILE_PREFIX": "results",
"SPYTEST_RESULTS_PREFIX": None,
"SPYTEST_RESULTS_PNG": "1",
"SPYTEST_MODULE_CSV_FILENAME": "modules.csv",
"SPYTEST_MODULE_INFO_CSV_FILENAME": "module_info.csv",
"SPYTEST_FUNCTION_INFO_CSV_FILENAME": "function_info.csv",
"SPYTEST_TCMAP_CSV_FILENAME": "tcmap.csv,tcmap-ut.csv",
"SPYTEST_TESTBED_IGNORE_CONSTRAINTS": "",
"SPYTEST_FLEX_DUT": "1",
"SPYTEST_FLEX_PORT": "0",
"SPYTEST_MGMT_IFNAME": "eth0",
"SPYTEST_TOPO_SEP": None,
"SPYTEST_TESTBED_RANDOMIZE_DEVICES": "0",
"SPYTEST_TOPO_1": "D1T1:2",
"SPYTEST_TOPO_2": "D1T1:4 D1D2:6 D2T1:2",
"SPYTEST_TOPO_4": "D1T1:2 D2T1:2 D3T1:2 D4T1:2 D1D2:4 D2D3:4 D3D4:4 D4D1:4",
"SPYTEST_TOPO_6": "D1D3:4 D1D4:4 D1D5:2 D1D6:4 D2D3:4 D2D4:4 D2D5:4 D2D6:4 D3T1:2 D4T1:2 D5T1:2 D6T1:2",
"SPYTEST_EMAIL_BODY_PREFIX": "",
"SPYTEST_TECH_SUPPORT_ONERROR": "system,port_list,port_status,console_hang,on_cr_recover",
"SPYTEST_SAVE_CLI_TYPE": "1",
"SPYTEST_SAVE_CLI_CMDS": "1",
"SPYTEST_SHUTDOWN_FREE_PORTS": "0",
"SPYTEST_ABORT_ON_VERSION_MISMATCH": "2",
"SPYTEST_TOPOLOGY_STATUS_MAX_WAIT": "60",
"SPYTEST_TOPOLOGY_STATUS_ONFAIL_ABORT": "module",
"SPYTEST_LIVE_RESULTS": "1",
"SPYTEST_DEBUG_FIND_PROMPT": "0",
"SPYTEST_KDUMP_ENABLE": "0",
"SPYTEST_LOG_DUTID_FMT": "LABEL",
"SPYTEST_SYSRQ_ENABLE": "0",
"SPYTEST_SET_STATIC_IP": "1",
"SPYTEST_ONREBOOT_RENEW_MGMT_IP": "0",
"SPYTEST_DATE_SYNC": "1",
"SPYTEST_BOOT_FROM_GRUB": "0",
"SPYTEST_RECOVERY_MECHANISMS": "1",
"SPYTEST_RESET_CONSOLES": "1",
"SPYTEST_ONCONSOLE_HANG": "recover",
"SPYTEST_CONNECT_DEVICES_RETRY": "10",
"SPYTEST_OPENCONFIG_API": "GNMI",
"SPYTEST_IFA_ENABLE": "0",
"SPYTEST_ROUTING_CONFIG_MODE": None,
"SPYTEST_CLEAR_MGMT_INTERFACE": "0",
"SPYTEST_CLEAR_DEVICE_METADATA_HOSTNAME": "0",
"SPYTEST_CLEAR_DEVICE_METADATA_BGP_ASN": "0",
"SPYTEST_NTP_CONFIG_INIT": "0",
"SPYTEST_BASE_CONFIG_RETAIN_FDB_AGETIME": "0",
"SPYTEST_GENERATE_CERTIFICATE": "0",
"SPYTEST_HOOKS_SYSTEM_STATUS_UITYPE": "",
"SPYTEST_HOOKS_PORT_ADMIN_STATE_UITYPE": "click",
"SPYTEST_HOOKS_PORT_STATUS_UITYPE": "click",
"SPYTEST_HOOKS_VERSION_UITYPE": "click",
"SPYTEST_HOOKS_BREAKOUT_UITYPE": "klish",
"SPYTEST_HOOKS_SPEED_UITYPE": "",
"SPYTEST_IFNAME_MAP_UITYPE": "click",
"SPYTEST_IFNAME_TYPE_UITYPE": "klish",
"SPYTEST_API_INSTRUMENT_SUPPORT": "0",
"SPYTEST_REDIS_DB_CLI_TYPE": "1",
"SPYTEST_TOPOLOGY_SHOW_ALIAS": "0",
"SPYTEST_TOPOLOGY_STATUS_FAST": "1",
"SPYTEST_BGP_API_UITYPE": "",
"SPYTEST_BGP_CFG_API_UITYPE": "",
"SPYTEST_BGP_SHOW_API_UITYPE": "",
"SPYTEST_RECOVERY_CTRL_C": "1",
"SPYTEST_RECOVERY_CTRL_Q": "1",
"SPYTEST_SOFT_TGEN_WAIT_MULTIPLIER": "2",
"SPYTEST_SUDO_SHELL": "1",
# CSV: normal, fast, rps
"SPYTEST_SYSTEM_NREADY_RECOVERY_METHODS": "normal",
"SPYTEST_DETECT_CONCURRENT_ACCESS": "1",
"SPYTEST_SYSLOG_ANALYSIS": "1",
"SPYTEST_USE_NO_MORE": "0",
"SPYTEST_PRESERVE_GNMI_CERT": "1",
"SPYTEST_CMD_FAIL_RESULT_SUPPORT": "1",
"SPYTEST_USE_FULL_NODEID": "0",
"SPYTEST_BATCH_DEFAULT_BUCKET": "1",
"SPYTEST_BATCH_DEAD_NODE_MAX_TIME": "0",
"SPYTEST_BATCH_POLL_STATUS_TIME": "0",
"SPYTEST_BATCH_SAVE_FREE_DEVICES": "1",
"SPYTEST_BATCH_TOPO_PREF": "0",
"SPYTEST_TECH_SUPPORT_DELETE_ON_DUT": "0",
"SPYTEST_SHOWTECH_MAXTIME": "1200",
"SPYTEST_ABORT_ON_APPLY_BASE_CONFIG_FAIL": "1",
"SPYTEST_TCMAP_DEFAULT_TRYSSH": "0",
"SPYTEST_TCMAP_DEFAULT_FASTER_CLI": "0",
"SPYTEST_RECOVERY_CR_FAIL": "0",
"SPYTEST_RECOVER_FROM_ONIE_ON_REBOOT": "0",
"SPYTEST_RECOVER_FROM_ONIE_WTIHOUT_IP": "1",
}
dev_defaults = {
"SPYTEST_TOPOLOGY_SIMULATE_FAIL": "0",
"SPYTEST_REST_TEST_URL": None,
"SPYTEST_BATCH_BACKUP_NODES": None,
"SPYTEST_BATCH_RERUN_NODES": None,
"SPYTEST_BATCH_MODULE_TOPO_PREF": None,
"SPYTEST_BATCH_MATCHING_BUCKET_ORDER": "larger,largest",
"SPYTEST_BATCH_RERUN": None,
"SPYTEST_TESTBED_FILE": "testbed.yaml",
"SPYTEST_FILE_MODE": "0",
"SPYTEST_SCHEDULING": None,
"SPYTEST_BATCH_RUN": None,
"PYTEST_XDIST_WORKER": None,
"SPYTEST_BUCKETS_DEADNODE_SIMULATE": "0",
"SPYTEST_USER_ROOT": None,
"SPYTEST_CMDLINE_ARGS": "",
"SPYTEST_SUITE_ARGS": "",
"SPYTEST_TEXTFSM_DUMP_INDENT_JSON": None,
"SPYTEST_TESTBED_EXCLUDE_DEVICES": None,
"SPYTEST_TESTBED_INCLUDE_DEVICES": None,
"SPYTEST_LOGS_PATH": None,
"SPYTEST_LOGS_LEVEL": "info",
"SPYTEST_APPLY_BASE_CONFIG_AFTER_MODULE": "0",
"SPYTEST_COMMUNITY_BUILD_FEATURES": "0",
"SPYTEST_SYSTEM_READY_AFTER_PORT_SETTINGS": "0",
"SPYTEST_TCLIST_FILE": None,
"SPYTEST_MODULE_REPORT_SORTER": "CDT",
"SPYTEST_ASAN_OPTIONS": "",
"SPYTEST_RECOVER_INITIAL_SYSTEM_NOT_READY": "0",
"SPYTEST_LIVE_TRACE_OUTPUT": "0",
"SPYTEST_USE_SAMPLE_DATA": "0",
"SPYTEST_DRYRUN_CMD_DELAY": "0",
"SPYTEST_FASTER_CLI_OVERRIDE": None,
"SPYTEST_FASTER_CLI_LAST_PROMPT": "1",
"SPYTEST_NEW_FIND_PROMPT": "0",
"SPYTEST_SPLIT_COMMAND_LIST": "0",
"SPYTEST_CHECK_SKIP_ERROR": "0",
"SPYTEST_HELPER_CONFIG_DB_RELOAD": "yes",
"SPYTEST_CHECK_HELPER_SIGNATURE": "0",
"SPYTEST_CLICK_HELPER_ARGS": "",
}
def _get_logs_path():
user_root = os.getenv("SPYTEST_USER_ROOT", os.getcwd())
logs_path = os.getenv("SPYTEST_LOGS_PATH", user_root)
if not os.path.isabs(logs_path):
logs_path = os.path.join(user_root, logs_path)
if not os.path.exists(logs_path):
os.makedirs(logs_path)
return logs_path
def _get_defaults():
if "SPYTEST_TOPO_{}".format(max_buckets) not in defaults:
for i in range(1, max_buckets + 1):
name = "SPYTEST_TOPO_{}".format(i)
if name not in defaults:
value = ["D{}".format(n + 1) for n in range(i)]
defaults[name] = " ".join(value)
return defaults
def get(name, default=None):
cur_def = _get_defaults().get(name, default)
if cur_def is None and default is not None:
cur_def = default
retval = os.getenv(name, cur_def)
return retval
def getint(name, default=0):
return int(get(name) or default)
def match(name, expected, default=None):
return bool(expected == get(name, default))
def get_default_all():
return sorted(_get_defaults().items())
def set_default(name, value):
defaults[name] = value
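
# Illustrative lookups (assuming the corresponding variables are not set in
# the shell environment):
#   get("SPYTEST_FILE_PREFIX")          -> "results"
#   getint("SPYTEST_SHOWTECH_MAXTIME")  -> 1200
#   match("SPYTEST_LOGS_LEVEL", "info", "info") -> True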
| [
"[email protected]"
] | |
7329bf28f44b2f6cbded2cd24158892a1a7d480d | 36e12b65922ebbb6d95aff6cbac0777c47e24153 | /getlongest3UTR.py | 3bbdeb30cbe012570b84894898bf2fa0c6690df2 | [
"MIT"
] | permissive | NailouZhang/AnalysisScripts | d0d00174f642d6722cc907f9a392084600630780 | 3df37d2f8fca9bc402afe5ea870c42200fca1ed3 | refs/heads/master | 2023-06-06T08:14:39.064920 | 2021-06-22T16:46:26 | 2021-06-22T16:46:26 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,362 | py | import argparse
import sys
from Bio import SeqIO
from Bio.SeqUtils import GC
from numpy import mean
def nameconversion(ens_to_short):
#Given a file that has ENSG IDs and their corresponding short names, make a dictionary.
ens2short = {} # {ENSMUSG000000 : gene_short_name}
infh = open(ens_to_short, 'r')
for line in infh:
line = line.strip().split('\t')
if line[0].startswith('ENSMUSG'):
ens2short[line[0]] = line[1]
infh.close()
return ens2short
def getlongestUTRs(UTRgff, ens2short):
#Given a gff of UTRs (usualy mm9_ensGene.3putrs.gff), get the longest UTR for that gene.
UTRs = {} # {ENSGENEID : [chrm, start, stop, strand]}
infh = open(UTRgff, 'r')
for line in infh:
line = line.strip().split('\t')
chrm = line[0]
start = int(line[3])
stop = int(line[4])
strand = line[6]
gene = line[8].split(';')[-1]
if gene in ens2short and 'random' not in chrm:
#Don't deal with any gene that doesn't have a short name or is on chr_random
gene_short_name = ens2short[gene]
else:
continue
length = stop - start
if gene_short_name not in UTRs:
UTRs[gene_short_name] = [chrm, start, stop, strand]
elif gene_short_name in UTRs:
currentlength = UTRs[gene_short_name][2] - UTRs[gene_short_name][1]
if length > currentlength:
UTRs[gene_short_name] = [chrm, start, stop, strand]
infh.close()
print 'Have UTRs for {0} genes.'.format(len(UTRs))
return UTRs
def getsequences(UTRs, genomefasta, genes):
genesofinterest = []
GCs = []
lengths = []
infh = open(genes, 'r')
for line in infh:
line = line.strip()
genesofinterest.append(line)
infh.close()
seqs = {} # {genename : UTR_sequence}
sys.stderr.write('Indexing genome sequence...\n')
seq_dict = SeqIO.to_dict(SeqIO.parse(genomefasta, 'fasta'))
sys.stderr.write('{0} chromosomes indexed.\n'.format(len(seq_dict)))
for UTR in UTRs:
chrm = UTRs[UTR][0]
start = UTRs[UTR][1]
stop = UTRs[UTR][2]
strand = UTRs[UTR][3]
if strand == '+':
UTRseq = seq_dict[chrm].seq[start - 1 : stop].upper()
elif strand == '-':
UTRseq = seq_dict[chrm].seq[start - 1 : stop].upper().reverse_complement()
if UTR in genesofinterest:
seqs[UTR] = str(UTRseq)
GCs.append(GC(str(UTRseq)))
lengths.append(len(str(UTRseq)))
print 'Started with {0} genes. Found UTR sequences for {1} of them. Their average GC content is {2}%.'.format(len(genesofinterest), len(seqs), mean(GCs))
print 'Their average length is {0}.'.format(mean(lengths))
return seqs
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--ens2short', type = str, help = 'File of tab delimited ENSGENEIDs and gene short names.')
parser.add_argument('--genes', type = str, help = 'List of genes for which you want the 3\' UTRs.')
parser.add_argument('--UTRgff', type = str, help = '3\'UTR coordinates in gff format. mm9_ensGene.3putrs.gff, for example.')
parser.add_argument('--genomefasta', type = str, help = 'Genome sequence in fasta format.')
parser.add_argument('--output', type = str, help = 'Output file.')
args = parser.parse_args()
ens2short = nameconversion(args.ens2short)
UTRs = getlongestUTRs(args.UTRgff, ens2short)
seqs = getsequences(UTRs, args.genomefasta, args.genes)
outfh = open(args.output, 'w')
for UTR in seqs:
outfh.write('>' + UTR + '\n')
outfh.write(seqs[UTR] + '\n')
outfh.close()
| [
"[email protected]"
] | |
2a0bfa5609aca2dfc02a924a2145644180b42c3b | a08d85552ed0db1a906c3b31ed99f56bae857c60 | /arguments.py | fb8b8bce971e780a8a1f147fee8b65881ca343c2 | [] | no_license | MagdalenaZZ/Python_ditties | 90866e53f9aafa603f05735e2ceb094cf5518a18 | 757d8de1df0e53d38d4ba9854b092eabe6ec6570 | refs/heads/master | 2023-02-20T12:23:09.778092 | 2023-02-07T10:06:55 | 2023-02-07T10:06:55 | 136,293,051 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,753 | py | #!/usr/bin/env python
"""
Nice descriptive header
"""
import sys
from optparse import OptionParser

ver = '1.0.0'  # placeholder: `ver` was referenced but never defined in this snippet

# Make sure you are working on the right version of Python
if sys.version_info[0] == 3:
    print ('\nCAVA does not run on Python 3.\n')
    quit()

# Command line argument parsing (the original also created an unused
# argparse.ArgumentParser; optparse is used consistently here)
descr = 'OpEx (Optimised Exome) pipeline ' + ver + '.'
parser = OptionParser(usage='python opex.py <options>', version=ver, description=descr)
parser.add_option('-i', "--input", default=None, dest='fastq', action='store', help="fastq.gz files")
parser.add_option('-o', "--output", default=None, dest='name', action='store', help="Sample name (output prefix)")
parser.add_option('-b', "--bed", default=None, dest='bed', action='store', help="Bed file")
parser.add_option('-r', "--reference", default=None, dest='reference', action='store', help="Reference genome file")
parser.add_option('-t', "--threads", default=1, dest='threads', action='store', help="Number of processes to use")
parser.add_option('-f', "--full", default=False, dest='full', action='store_true', help="Output full CoverView output [default value: %default]")
parser.add_option('-c', "--config", default=None, dest='config', action='store', help="Configuration file")
parser.add_option('-k', "--keep", default=False, dest='keep', action='store_true', help="Keep temporary files")
# OptionParser supplies -h/--help automatically; redefining it would raise
# an OptionConflictError, so the explicit help option is dropped.
(options, args) = parser.parse_args()

# checkInputs() and params are assumed to be provided by the surrounding
# OpEx pipeline code; they are not defined in this snippet.
checkInputs(options)

# complain if something is missing
if 'REFERENCE' not in params.keys():
    if options.reference is None:
        print ('Error: no reference genome provided.')
        quit()
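
# Example invocation (hypothetical file names):
#   python opex.py -i sample.fastq.gz -o sample1 -b panel.bed -r ref.fa -t 4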
| [
"[email protected]"
] | |
9b11c4878c83539c204e7440be80bbd367f71458 | 09e57dd1374713f06b70d7b37a580130d9bbab0d | /benchmark/startQiskit_noisy1396.py | 97706fc4f887aef70259b2716a81166bb629ea23 | [
"BSD-3-Clause"
] | permissive | UCLA-SEAL/QDiff | ad53650034897abb5941e74539e3aee8edb600ab | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | refs/heads/main | 2023-08-05T04:52:24.961998 | 2021-09-19T02:56:16 | 2021-09-19T02:56:16 | 405,159,939 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,349 | py | # qubit number=5
# total number=54
import cirq
import qiskit
from qiskit.providers.aer import QasmSimulator
from qiskit.test.mock import FakeVigo
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import BasicAer, execute, transpile
from pprint import pprint
from qiskit.test.mock import FakeVigo
from math import log2,floor, sqrt, pi
import numpy as np
import networkx as nx
def build_oracle(n: int, f) -> QuantumCircuit:
# implement the oracle O_f^\pm
# NOTE: use U1 gate (P gate) with \lambda = 180 ==> CZ gate
# or multi_control_Z_gate (issue #127)
controls = QuantumRegister(n, "ofc")
oracle = QuantumCircuit(controls, name="Zf")
for i in range(2 ** n):
rep = np.binary_repr(i, n)
if f(rep) == "1":
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.h(controls[n])
if n >= 2:
oracle.mcu1(pi, controls[1:], controls[0])
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.barrier()
return oracle
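
# The oracle above implements Z_f for Grover search: X gates conjugate the
# controls that must be 0, and a multi-controlled phase of pi flips the sign
# of exactly the basis states with f(x) == "1".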
def make_circuit(n:int,f) -> QuantumCircuit:
# circuit begin
input_qubit = QuantumRegister(n,"qc")
classical = ClassicalRegister(n, "qm")
prog = QuantumCircuit(input_qubit, classical)
prog.h(input_qubit[0]) # number=3
prog.h(input_qubit[1]) # number=4
prog.h(input_qubit[2]) # number=5
prog.h(input_qubit[3]) # number=6
prog.h(input_qubit[0]) # number=41
prog.cz(input_qubit[1],input_qubit[0]) # number=42
prog.h(input_qubit[0]) # number=43
prog.z(input_qubit[1]) # number=37
prog.cx(input_qubit[1],input_qubit[0]) # number=38
prog.h(input_qubit[4]) # number=21
prog.x(input_qubit[2]) # number=39
Zf = build_oracle(n, f)
repeat = floor(sqrt(2 ** n) * pi / 4)
for i in range(repeat):
prog.append(Zf.to_gate(), [input_qubit[i] for i in range(n)])
prog.h(input_qubit[0]) # number=1
prog.h(input_qubit[1]) # number=2
prog.h(input_qubit[2]) # number=7
prog.h(input_qubit[3]) # number=8
prog.h(input_qubit[0]) # number=51
prog.cz(input_qubit[3],input_qubit[0]) # number=52
prog.h(input_qubit[0]) # number=53
prog.h(input_qubit[0]) # number=48
prog.cz(input_qubit[3],input_qubit[0]) # number=49
prog.h(input_qubit[0]) # number=50
prog.z(input_qubit[3]) # number=46
prog.cx(input_qubit[3],input_qubit[0]) # number=47
prog.x(input_qubit[4]) # number=40
prog.cx(input_qubit[3],input_qubit[0]) # number=35
prog.x(input_qubit[0]) # number=9
prog.cx(input_qubit[0],input_qubit[1]) # number=29
prog.x(input_qubit[1]) # number=30
prog.cx(input_qubit[0],input_qubit[1]) # number=31
prog.x(input_qubit[2]) # number=11
prog.x(input_qubit[1]) # number=44
prog.x(input_qubit[3]) # number=12
if n>=2:
prog.mcu1(pi,input_qubit[1:],input_qubit[0])
prog.cx(input_qubit[1],input_qubit[0]) # number=24
prog.x(input_qubit[0]) # number=25
prog.cx(input_qubit[1],input_qubit[0]) # number=26
prog.x(input_qubit[1]) # number=14
prog.x(input_qubit[2]) # number=15
prog.x(input_qubit[3]) # number=16
prog.h(input_qubit[0]) # number=17
prog.h(input_qubit[1]) # number=18
prog.h(input_qubit[2]) # number=19
prog.h(input_qubit[3]) # number=20
prog.x(input_qubit[1]) # number=22
prog.y(input_qubit[1]) # number=32
prog.x(input_qubit[1]) # number=23
# circuit end
for i in range(n):
prog.measure(input_qubit[i], classical[i])
return prog
if __name__ == '__main__':
key = "00000"
f = lambda rep: str(int(rep == key))
prog = make_circuit(5,f)
backend = FakeVigo()
sample_shot =7924
info = execute(prog, backend=backend, shots=sample_shot).result().get_counts()
backend = FakeVigo()
circuit1 = transpile(prog,backend,optimization_level=2)
writefile = open("../data/startQiskit_noisy1396.csv","w")
print(info,file=writefile)
print("results end", file=writefile)
print(circuit1.depth(),file=writefile)
print(circuit1,file=writefile)
writefile.close()
| [
"[email protected]"
] | |
5935c694933a8525af6556273b3dcd2ccf5b5bbc | 1976a844596022aa2410efa1656fb669c611fd19 | /web/api/views.py | 80974fe4d8bb946be19980c8b2c4ec679b26bab6 | [
"Apache-2.0"
] | permissive | sacharya/FleetDeploymentReporting | 575f9e311c49e7a5da1ee76422fe30467097c61b | 79236222d0bcf9d3baddae7675232f9b92c46a7f | refs/heads/master | 2020-03-23T06:09:49.044743 | 2018-08-23T20:13:50 | 2018-08-23T20:13:50 | 141,192,574 | 0 | 0 | null | 2018-07-16T20:45:55 | 2018-07-16T20:45:54 | null | UTF-8 | Python | false | false | 10,475 | py | import logging
from cloud_snitch.models import registry
from django.http import Http404
from rest_framework import viewsets
from rest_framework import status
from rest_framework.decorators import list_route
from rest_framework.exceptions import ValidationError
from rest_framework.response import Response
from neo4jdriver.query import Query
from .decorators import cls_cached_result
from .exceptions import JobError
from .exceptions import JobRunningError
from .serializers import DiffSerializer
from .serializers import DiffNodeSerializer
from .serializers import DiffNodesSerializer
from .serializers import ModelSerializer
from .serializers import PropertySerializer
from .serializers import SearchSerializer
from .serializers import TimesChangedSerializer
from .query import TimesQuery
from .tasks import objectdiff
logger = logging.getLogger(__name__)
class ModelViewSet(viewsets.ViewSet):
"""Viewset around model information."""
def list(self, request):
"""Get a list of models."""
models = registry.modeldicts()
serializer = ModelSerializer(models, many=True)
return Response(serializer.data)
def retrieve(self, request, pk=None):
"""Get a specific model."""
model = registry.modeldict(pk)
if model is None:
raise Http404
serializer = ModelSerializer(model)
return Response(serializer.data)
class PathViewSet(viewsets.ViewSet):
"""Viewset around paths."""
def list(self, request):
"""List all paths to each model."""
paths = {}
for label, model in registry.models.items():
paths[label] = [l for l, _ in registry.path(label)]
serializer = ModelSerializer(paths)
return Response(serializer.data)
class PropertyViewSet(viewsets.ViewSet):
"""Viewset around model properties."""
def list(self, request, model=None):
"""List all properties models."""
props = registry.properties()
serializer = PropertySerializer(props)
return Response(serializer.data)
def retrieve(self, request, pk=None):
"""List all properties for a specific model."""
props = registry.properties(model=pk)
if not props:
raise Http404
serializer = PropertySerializer(props)
return Response(serializer.data)
class ObjectViewSet(viewsets.ViewSet):
"""View set for searching, viewing objects."""
@cls_cached_result(prefix="times", timeout=3600)
def _times(self, model, identity):
"""Get times a specific instance of a model has changed.
:param model: Name of the model
:type model: str
:param identity: Identity of the instance
:type identity: str
:returns: List of times the instance has changed.
:rtype: list
"""
# Build query to get times
query = TimesQuery(model, identity)
times = query.fetch()
return times
@list_route(methods=['post'])
def times(self, request):
"""Get times an object has changed.
The object's created_at time will be added if not present.
"""
# Validate input
times = TimesChangedSerializer(data=request.data)
if not times.is_valid():
raise ValidationError(times.errors)
vd = times.validated_data
# Find object by type and identity
query = Query(vd.get('model')) \
.identity(vd.get('identity')) \
.time(vd.get('time'))
records = query.fetch()
# Raise 404 if not found
if not records:
raise Http404()
created_at = records[0][vd.get('model')]['created_at']
# Build query to get times
logger.debug("GETTING TIMES")
times = self._times(vd['model'], vd['identity'])
if created_at not in times:
times.append(created_at)
results = ModelSerializer({
'data': vd,
'times': times
})
return Response(results.data)
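
    # Example request body accepted by this endpoint (values are hypothetical;
    # "model" must be a model label known to the cloud_snitch registry):
    #   {"model": "Environment", "identity": "<uuid>", "time": 1530000000000}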
@list_route(methods=['post'])
def search(self, request):
"""Search objects by type, identity, and property filters."""
search = SearchSerializer(data=request.data)
if not search.is_valid():
raise ValidationError(search.errors)
vd = search.validated_data
query = Query(vd.get('model')) \
.identity(vd.get('identity')) \
.time(vd.get('time'))
for f in vd.get('filters', []):
query.filter(
f['prop'],
f['operator'],
f['value'],
label=f['model']
)
for o in vd.get('orders', []):
query.orderby(o['prop'], o['direction'], label=o['model'])
count = query.count()
records = query.page(
page=vd['page'],
pagesize=vd['pagesize'],
index=vd.get('index')
)
serializer = ModelSerializer({
'query': str(query),
'data': vd,
'params': query.params,
'count': count,
'pagesize': vd['pagesize'],
'page': vd['page'],
'records': records
})
return Response(serializer.data)
class ObjectDiffViewSet(viewsets.ViewSet):
"""Viewset for diffing the same object at different points in time."""
def _data(self, request, serializer):
"""Serialize input from request and validate.
:param request: Http request
:type request: ?
:param serializer: Serializer class to use.
:type serializer: rest_framework.serializers.Serializer
:returns: Validated data
:rtype: dict
"""
s = serializer(data=request.data)
if not s.is_valid():
raise ValidationError(s.errors)
return s.validated_data
def _exists(self, model, identity, time):
"""Check that an instance of a model exists at a time.
:param model: Name of the model
:type model: str
:param identity: Identity of the instance of the model.
:type identity: str
:param time: Time to verify in milliseconds since epoch
:type type: int
"""
query = Query(model).identity(identity).time(time)
records = query.fetch()
logger.debug("Found {} matches for time {}".format(len(records), time))
return len(records) > 0
def _check_sides(self, data):
"""Check both sides of diff for existence.
:param data: Validate request data
:type data: dict
"""
# Find left side
exists = self._exists(
data.get('model'),
data.get('identity'),
data.get('left_time')
)
if not exists:
raise Http404("Left not found")
# Find right side
exists = self._exists(
data.get('model'),
data.get('identity'),
data.get('right_time')
)
if not exists:
raise Http404("Right not found")
def _job_running_response(self):
"""Create a response for a diff that is still running.
:returns: Response with http 202 status code.
:rtype: rest_framework.response.Response
"""
return Response(
{'status': 'Job is running. Try later.'},
status=status.HTTP_202_ACCEPTED
)
def _job_error_response(self):
"""Create a response for a diff that has failed.
:returns: Response with 500 status code.
:rtype: rest_framework.response.Response
"""
return Response(
{'status': 'The job failed.'},
status=status.HTTP_500_INTERNAL_SERVER_ERROR
)
@list_route(methods=['post'])
def node(self, request):
"""Get a specific node in the diff tree."""
# Validate request
data = self._data(request, DiffNodeSerializer)
# Make sure both sides are kosher
self._check_sides(data)
try:
diff = objectdiff(
data['model'],
data['identity'],
data['left_time'],
data['right_time']
)
except JobRunningError:
return self._job_running_response()
except JobError:
return self._job_error_response()
# 404 if node not found.
node = diff.getnode(data['node_model'], data['node_identity'])
if node is None:
raise Http404()
results = ModelSerializer({
'node': node,
'nodecount': diff.diffdict['nodecount'],
'data': data
})
return Response(results.data)
@list_route(methods=['post'])
def nodes(self, request):
"""Get a range of nodes from the diff tree."""
# Validate request
data = self._data(request, DiffNodesSerializer)
# Make sure both sides are kosher
self._check_sides(data)
try:
diff = objectdiff(
data['model'],
data['identity'],
data['left_time'],
data['right_time']
)
except JobRunningError:
return self._job_running_response()
except JobError:
return self._job_error_response()
results = ModelSerializer({
'nodes': diff.getnodes(data['offset'], data['limit']),
'nodecount': diff.diffdict['nodecount'],
'data': data
})
return Response(results.data)
@list_route(methods=['post'])
def structure(self, request):
"""Get structure of the tree."""
# Validate the data
data = self._data(request, DiffSerializer)
# Make sure both sides are kosher
self._check_sides(data)
try:
diff = objectdiff(
data['model'],
data['identity'],
data['left_time'],
data['right_time']
)
except JobRunningError:
return self._job_running_response()
except JobError:
return self._job_error_response()
# Return the response
results = ModelSerializer({
'frame': diff.frame(),
'nodemap': diff.diffdict['nodemap'],
'nodecount': diff.diffdict['nodecount'],
'data': data
})
return Response(results.data)
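if __name__ == "__main__":
    # Hedged usage sketch, not part of the original module: exercising the
    # diff endpoints above over HTTP. The host and the "/api/objectdiff/"
    # route prefix are assumptions -- the real URLconf lives elsewhere.
    import requests
    payload = {
        "model": "Host",              # hypothetical model name
        "identity": "web-01",         # hypothetical instance identity
        "left_time": 1500000000000,   # milliseconds since epoch
        "right_time": 1500003600000,
    }
    resp = requests.post(
        "http://localhost:8000/api/objectdiff/structure/", json=payload)
    if resp.status_code == 202:
        print("Diff job is still running; poll again later.")
    else:
        resp.raise_for_status()
        print("node count:", resp.json()["nodecount"])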
| [
"[email protected]"
] | |
4adc810e449ce9f1cc6ddb43d8bdd3fa31a43e0a | e82b761f53d6a3ae023ee65a219eea38e66946a0 | /All_In_One/addons/deeva/generation.py | 34c4783703c53c851a289ebe8e2be7391365a386 | [] | no_license | 2434325680/Learnbgame | f3a050c28df588cbb3b14e1067a58221252e2e40 | 7b796d30dfd22b7706a93e4419ed913d18d29a44 | refs/heads/master | 2023-08-22T23:59:55.711050 | 2021-10-17T07:26:07 | 2021-10-17T07:26:07 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,838 | py | # Deeva - Character Generation Platform
# Copyright (C) 2018 Fabrizio Nunnari
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from typing import List
from typing import Tuple
import pandas
class AttributesTable:
"""Support class to load and manage attributes.
Sample table format:
id,name,type,min,max,labels
277,Cheeks_Mass,nc,0.2,0.8,N/A
287,Chin_Prominence,nc,0.0,1.0,N/A
300,Eyebrows_Angle,nc,0.0,1.0,N/A
323,Eyes_Size,nc,0.1,1.0,N/A
"""
def __init__(self, table_filename: str):
self._table = pandas.read_csv(filepath_or_buffer=table_filename)
self._table.set_index('id', inplace=True)
# print(self._table)
def attributes_count(self) -> int:
return len(self._table)
def attribute_ids(self) -> List[int]:
return [int(i) for i in self._table.index]
def attribute_names(self) -> List[str]:
return [s for s in self._table['name']]
def attribute_name(self, attr_id: int) -> str:
return self._table.loc[attr_id]['name']
def attribute_range(self, attr_id: int) -> Tuple[float, float]:
entry = self._table.loc[attr_id]
return entry['min'], entry['max']
class IndividualsTable:
"""Support class to load and manage individuals of a generation.
Sample table format:
id,creation_type,has_content_files,277,287,300,323
35,rm,False,0.35,1.0,0.5,0.775
36,rm,False,0.575,0.75,0.875,0.55
37,rm,False,0.425,0.75,0.625,0.6625
"""
FIRST_ATTRIBUTE_INDEX = 2
def __init__(self, individuals_filename):
self._table = pandas.read_csv(filepath_or_buffer=individuals_filename) # type: pandas.DataFrame
self._table.set_index('id', inplace=True)
def count(self) -> int:
return len(self._table)
def ids(self) -> List[int]:
return [int(i) for i in self._table.index]
def attribute_ids(self) -> List[int]:
return [int(i) for i in self._table.columns.values[IndividualsTable.FIRST_ATTRIBUTE_INDEX:].tolist()]
def attribute_values(self, individual_id: int) -> List[float]:
table_line = self._table.loc[individual_id]
        attrib_values = table_line[IndividualsTable.FIRST_ATTRIBUTE_INDEX:]
return [float(a) for a in attrib_values]
#
#
#
# Run a quick self-test when executed directly from the editor
if __name__ == "__main__":
print("Test attrs")
import os
from deeva.generation_tools import create_mblab_chars_json_dir
print(os.getcwd())
attributes_tab = AttributesTable("../../BlenderScenes/VS-1-testvarset1.csv")
print(attributes_tab.attributes_count())
print(attributes_tab.attribute_ids())
print(attributes_tab.attribute_names())
for a in attributes_tab.attribute_ids():
print(attributes_tab.attribute_name(a))
print(attributes_tab.attribute_range(a))
print("")
# create_random_individuals(attributes_table=attributes_tab, num_individuals=30, out_filename="individuals2.csv", random_segments=9)
indiv_tab = IndividualsTable("../../BlenderScenes/individuals2-fake.csv")
print(indiv_tab._table)
create_mblab_chars_json_dir(individuals=indiv_tab, attributes=attributes_tab, dirpath="generated_indiv")
print("end.")
| [
"[email protected]"
] | |
435e46730f8c4d4c153b4262319dceef250fa076 | 5c94e032b2d43ac347f6383d0a8f0c03ec3a0485 | /Push2/transport_state.py | 3c64a9d2e39ef49634256f34ad79259e83fd83ec | [] | no_license | Elton47/Ableton-MRS-10.1.13 | 997f99a51157bd2a2bd1d2dc303e76b45b1eb93d | 54bb64ba5e6be52dd6b9f87678ee3462cc224c8a | refs/heads/master | 2022-07-04T01:35:27.447979 | 2020-05-14T19:02:09 | 2020-05-14T19:02:09 | 263,990,585 | 0 | 0 | null | 2020-05-14T18:12:04 | 2020-05-14T18:12:03 | null | UTF-8 | Python | false | false | 2,961 | py | # uncompyle6 version 3.6.5
# Python bytecode 2.7 (62211)
# Decompiled from: Python 2.7.17 (default, Dec 23 2019, 21:25:33)
# [GCC 4.2.1 Compatible Apple LLVM 11.0.0 (clang-1100.0.33.16)]
# Embedded file name: /Users/versonator/Jenkins/live/output/Live/mac_64_static/Release/python-bundle/MIDI Remote Scripts/Push2/transport_state.py
# Compiled at: 2020-01-09 15:21:34
from __future__ import absolute_import, print_function, unicode_literals
from ableton.v2.base import listenable_property, listens
from ableton.v2.control_surface import Component
from .real_time_channel import RealTimeDataComponent
COUNT_IN_DURATION_IN_BARS = (0, 1, 2, 4)
class TransportState(Component):
count_in_duration = listenable_property.managed(0)
def __init__(self, song=None, *a, **kw):
super(TransportState, self).__init__(*a, **kw)
self._song = song
self.__on_is_playing_changed.subject = song
self._count_in_time_real_time_data = RealTimeDataComponent(parent=self, channel_type='count-in')
self.__on_count_in_duration_changed.subject = song
self.__on_is_counting_in_changed.subject = song
self.__on_signature_numerator_changed.subject = song
self.__on_signature_denominator_changed.subject = song
self.__on_count_in_channel_changed.subject = self._count_in_time_real_time_data
self._update_count_in_duration()
@listenable_property
def count_in_real_time_channel_id(self):
return self._count_in_time_real_time_data.channel_id
@listenable_property
def is_counting_in(self):
return self._song.is_counting_in
@listenable_property
def signature_numerator(self):
return self._song.signature_numerator
@listenable_property
def signature_denominator(self):
return self._song.signature_denominator
def _update_count_in_duration(self):
self.count_in_duration = COUNT_IN_DURATION_IN_BARS[self._song.count_in_duration]
@listens('count_in_duration')
def __on_count_in_duration_changed(self):
if not self.is_counting_in:
self._update_count_in_duration()
@listens('is_counting_in')
def __on_is_counting_in_changed(self):
self._count_in_time_real_time_data.set_data(self._song if self.is_counting_in else None)
self.notify_is_counting_in()
self._update_count_in_duration()
return
@listens('signature_numerator')
def __on_signature_numerator_changed(self):
self.notify_signature_numerator()
@listens('signature_denominator')
def __on_signature_denominator_changed(self):
self.notify_signature_denominator()
@listenable_property
def is_playing(self):
return self._song.is_playing
@listens('is_playing')
def __on_is_playing_changed(self):
self.notify_is_playing()
@listens('channel_id')
def __on_count_in_channel_changed(self):
self.notify_count_in_real_time_channel_id() | [
"[email protected]"
] | |
1bea5bd6fcead58b583e1c74a86710faafc6bd76 | cda43bf6a84f7e55fab26aa70cda934683a51fe5 | /NikWork/main.py | 9d222dfb77ef6da559dc7b9bd989f12cd2ab1fb3 | [] | no_license | nikolaosdionelis/NeuralNetworksNNs | abb55622882e31c8d130a8986868b3d19ede186f | 8a217490ad5bb3f7fccf4002c6b43a06c1e562fc | refs/heads/master | 2022-11-13T00:50:23.578197 | 2020-07-12T18:52:20 | 2020-07-12T18:52:20 | 279,042,013 | 5 | 1 | null | null | null | null | UTF-8 | Python | false | false | 9,161 | py | import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="3"
#import os
import scipy.misc
import numpy as np
np.random.seed(0)
import tensorflow as tf
#print(tf.__version__)
from model import DCGAN
from utils import pp
#import tensorflow as tf
tf.set_random_seed(0)
flags = tf.app.flags
# main.py --dataset mnist --input_height=28 --c_dim=1
# --checkpoint_dir checkpoint_mnist/flow --sample_dir samples_mnist/flow
# --model_type nice --log_dir logs_mnist/flow
# --prior logistic --beta1 0.5 --learning_rate 1e-4 --alpha 1e-7
# --reg 10.0 --epoch 500 --batch_size 100 --like_reg 1.0 --n_critic 5 --no_of_layers 5
#flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")
#flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")
flags.DEFINE_float("beta1", 0.5, "Momentum term of adam [0.5]")
#flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")
#flags.DEFINE_integer("input_height", 32, "The size of image to use [32]")
#flags.DEFINE_integer("input_height", 32, "The size of image to use [32]")
flags.DEFINE_integer("input_height", 28, "The size of image to use [32]")
flags.DEFINE_integer("input_width", None, "The size of image to use If None, same value as input_height [None]")
#flags.DEFINE_integer("c_dim", 3, "Dimension of image color. [3]")
#flags.DEFINE_integer("c_dim", 3, "Dimension of image color. [3]")
#flags.DEFINE_integer("c_dim", 3, "Dimension of image color. [3]")
flags.DEFINE_integer("c_dim", 1, "Dimension of image color. [3]")
flags.DEFINE_string("dataset", "mnist", "The name of dataset [mnist, multi-mnist, cifar-10]")
#flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
#flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
#flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("checkpoint_dir", "checkpointMnist/flow", "Directory name to save the checkpoints [checkpoint]")
#flags.DEFINE_string("log_dir", "logs", "Directory name to save the logs [logs]")
#flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")
#flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")
#flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")
flags.DEFINE_string("sample_dir", "samples_mnist/flow", "Directory name to save the image samples [samples]")
#flags.DEFINE_string("log_dir", "logs", "Directory name to save the logs [logs]")
#flags.DEFINE_string("log_dir", "logs", "Directory name to save the logs [logs]")
#flags.DEFINE_string("log_dir", "logs_mnist/flow", "Directory name to save the logs [logs]")
#flags.DEFINE_string("log_dir", "logs_mnist/flow", "Directory name to save the logs [logs]")
flags.DEFINE_string("loLog_dir", "logs_mnist/flow", "Directory name to save the logs [logs]")
flags.DEFINE_string("f_div", "wgan", "f-divergence used for specifying the objective")
#flags.DEFINE_string("prior", "gaussian", "prior for generator")
#flags.DEFINE_string("prior", "gaussian", "prior for generator")
#flags.DEFINE_string("prior", "gaussian", "prior for generator")
flags.DEFINE_string("prior", "logistic", "prior for generator")
#flags.DEFINE_float("alpha", 1e-7, "alpha value (if applicable)")
flags.DEFINE_float("lr_decay", 1.0, "learning rate decay rate")
flags.DEFINE_float("min_lr", 0.0, "minimum lr allowed")
flags.DEFINE_float("reg", 10.0, "regularization parameter (only for wgan)")
#flags.DEFINE_string("model_type", "real_nvp", "model_type")
#flags.DEFINE_string("model_type", "real_nvp", "model_type")
#flags.DEFINE_string("model_type", "real_nvp", "model_type")
flags.DEFINE_string("model_type", "nice", "model_type")
flags.DEFINE_string("init_type", "normal", "initialization for weights")
#flags.DEFINE_integer("n_critic", 1, "no of discriminator iterations")
flags.DEFINE_integer("batch_norm_adaptive", 1, "type of batch norm used (only for real-nvp)")
#flags.DEFINE_integer("no_of_layers", 8,"No of units between input and output in the m function for a coupling layer")
flags.DEFINE_integer("hidden_layers", 1000, "Size of hidden layers if applicable")
flags.DEFINE_integer("gpu_nr", 0, "gpu no used")
#flags.DEFINE_float("like_reg", 0, "regularizing factor for likelihood")
flags.DEFINE_integer("df_dim", 64, "Dim depth of disc")
# main.py --dataset mnist --input_height=28 --c_dim=1
# --checkpoint_dir checkpoint_mnist/flow --sample_dir samples_mnist/flow
# --model_type nice --log_dir logs_mnist/flow
# --prior logistic --beta1 0.5 --learning_rate 1e-4 --alpha 1e-7
# --reg 10.0 --epoch 500 --batch_size 100 --like_reg 1.0 --n_critic 5 --no_of_layers 5
#flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")
#flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")
flags.DEFINE_float("learning_rate", 1e-4, "Learning rate of for adam [0.0002]")
#flags.DEFINE_float("alpha", 1e-7, "alpha value (if applicable)")
#flags.DEFINE_float("alpha", 1e-7, "alpha value (if applicable)")
flags.DEFINE_float("alpha", 1e-7, "alpha value (if applicable)")
#flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")
#flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")
flags.DEFINE_integer("epoch", 500, "Epoch to train [25]")
#flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")
#flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")
#flags.DEFINE_integer("batch_size", 100, "The size of batch images [64]")
#flags.DEFINE_integer("batch_size", 100, "The size of batch images [64]")
flags.DEFINE_integer("batch_size", 1024, "The size of batch images [64]")
#flags.DEFINE_float("like_reg", 0, "regularizing factor for likelihood")
#flags.DEFINE_float("like_reg", 0, "regularizing factor for likelihood")
flags.DEFINE_float("like_reg", 1.0, "regularizing factor for likelihood")
#flags.DEFINE_integer("n_critic", 1, "no of discriminator iterations")
#flags.DEFINE_integer("n_critic", 1, "no of discriminator iterations")
flags.DEFINE_integer("n_critic", 5, "no of discriminator iterations")
#flags.DEFINE_integer("no_of_layers", 8,"No of units between input and output in the m function for a coupling layer")
#flags.DEFINE_integer("no_of_layers", 8,"No of units between input and output in the m function for a coupling layer")
flags.DEFINE_integer("no_of_layers", 5,"No of units between input and output in the m function for a coupling layer")
FLAGS = flags.FLAGS
def main(_):
np.random.seed(0)
tf.set_random_seed(0)
pp.pprint(flags.FLAGS.__flags)
if FLAGS.input_width is None:
FLAGS.input_width = FLAGS.input_height
if not os.path.exists(FLAGS.checkpoint_dir):
os.makedirs(FLAGS.checkpoint_dir)
if not os.path.exists(FLAGS.sample_dir):
os.makedirs(FLAGS.sample_dir)
run_config = tf.ConfigProto()
run_config.gpu_options.allow_growth=True
run_config.allow_soft_placement=True
sess = None
with tf.Session(config=run_config) as sess:
dcgan = DCGAN(
sess,
input_width=FLAGS.input_width,
input_height=FLAGS.input_height,
batch_size=FLAGS.batch_size,
sample_num=FLAGS.batch_size,
c_dim=FLAGS.c_dim,
z_dim=FLAGS.c_dim * FLAGS.input_height * FLAGS.input_width,
dataset_name=FLAGS.dataset,
checkpoint_dir=FLAGS.checkpoint_dir,
f_div=FLAGS.f_div,
prior=FLAGS.prior,
lr_decay=FLAGS.lr_decay,
min_lr=FLAGS.min_lr,
model_type=FLAGS.model_type,
loLog_dir=FLAGS.loLog_dir,
alpha=FLAGS.alpha,
batch_norm_adaptive=FLAGS.batch_norm_adaptive,
init_type=FLAGS.init_type,
reg=FLAGS.reg,
n_critic=FLAGS.n_critic,
hidden_layers=FLAGS.hidden_layers,
no_of_layers=FLAGS.no_of_layers,
like_reg=FLAGS.like_reg,
df_dim=FLAGS.df_dim)
#dcgan2 = DCGAN(
# sess,
# input_width=FLAGS.input_width,
# input_height=FLAGS.input_height,
# batch_size=FLAGS.batch_size,
# sample_num=FLAGS.batch_size,
# c_dim=FLAGS.c_dim,
# z_dim=FLAGS.c_dim * FLAGS.input_height * FLAGS.input_width,
# dataset_name=FLAGS.dataset,
# checkpoint_dir=FLAGS.checkpoint_dir,
# f_div=FLAGS.f_div,
# prior=FLAGS.prior,
# lr_decay=FLAGS.lr_decay,
# min_lr=FLAGS.min_lr,
# model_type=FLAGS.model_type,
# loLog_dir=FLAGS.loLog_dir,
# alpha=FLAGS.alpha,
# batch_norm_adaptive=FLAGS.batch_norm_adaptive,
# init_type=FLAGS.init_type,
# reg=FLAGS.reg,
# n_critic=FLAGS.n_critic,
# hidden_layers=FLAGS.hidden_layers,
# no_of_layers=FLAGS.no_of_layers,
# like_reg=FLAGS.like_reg,
# df_dim=FLAGS.df_dim)
#dcgan.train(FLAGS)
#dcgan.train(FLAGS)
#dcgan.train(FLAGS)
#dcgan.train(FLAGS)
#dcgan.train2(FLAGS, dcgan2)
dcgan.train2(FLAGS)
#dcgan.train2(FLAGS, dcgan2.eval())
if __name__ == '__main__':
tf.app.run()
| [
"[email protected]"
] | |
4521d6a7244f51fec11bcd32f5d1d1be2dcbf08e | cb3d1b072391b07ef0e9596df7f223f37683e970 | /[0451]_Sort_Characters_By_Frequency/Sort_Characters_By_Frequency.py | c7229d1bbe77b54189179161cb4098bd1bbdf7ed | [] | no_license | kotori233/LeetCode | 99620255a64c898457901602de5db150bc35aabb | 996f9fcd26326db9b8f49078d9454fffb908cafe | refs/heads/master | 2021-09-10T18:00:56.968949 | 2018-03-30T14:38:27 | 2018-03-30T14:38:27 | 103,036,334 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 339 | py | class Solution(object):
def frequencySort(self, s):
"""
:type s: str
:rtype: str
"""
sheet = {}
for i in s:
sheet[i] = sheet.get(i, 0) + 1
res = ''
for key, val in sorted(sheet.items(), key=lambda x: -x[1]):
res += (key * val)
return res
| [
"[email protected]"
] | |
698dcbdbeec179200cec56754b3c09345c37f0c9 | 374b6fb00fe8b01a04964759ed5f7d97fc6f001f | /manage.py | 038e35696d7fc80e53672839ace8de010809fbeb | [] | no_license | Zoxon470/cleverbots | 2f45730073955a8e5b8e569778305bbc9bb7af90 | 67db087f2d1e00976bd466155bc32e3815d7bdc8 | refs/heads/master | 2022-12-11T09:26:24.817876 | 2019-06-29T04:56:54 | 2019-06-29T04:56:54 | 183,275,969 | 1 | 0 | null | 2022-12-08T05:04:59 | 2019-04-24T17:26:47 | Python | UTF-8 | Python | false | false | 830 | py | #!/usr/bin/env python
import os
import sys
if __name__ == '__main__':
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings.settings')
try:
from django.core.management import execute_from_command_line
except ImportError:
try:
import django
except ImportError:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
)
# This allows easy placement of apps within the interior
# taxi_corp directory.
current_path = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(current_path, 'backend'))
execute_from_command_line(sys.argv)
| [
"[email protected]"
] | |
99aea2b3ffa5ed245b5be98edc2adaf32ee45be1 | ca55dcaa64ea9db4068e13091321cfebecc0ff41 | /codeUp/codeUpBasic/1527.py | 0978736b0e3aa2072616c47aaa23d72ff23a3911 | [] | no_license | gomtinQQ/algorithm-python | 8fb8343594b945099ae2a4dfa794ecb47e54ab0b | 751562922b66e335f621d366bb73dacdc7125140 | refs/heads/master | 2022-12-07T23:05:44.535593 | 2020-08-21T12:29:58 | 2020-08-21T12:29:58 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 185 | py | '''
1527 : [기초-함수작성] 함수로 123 값 출력하기
123 을 출력하시오.
단, 함수형 문제이므로 함수 f()만 작성하시오.
'''
def f():
print("123")
f() | [
"[email protected]"
] | |
8ec6d8887b07d24192c626f26d8eaab0b8db1f3a | 651a296c8f45b5799781fd78a6b5329effe702a0 | /rnglib/cg_memory.py | 4cdecab1c27aa25a499c94878f8eea4233a7a639 | [] | no_license | pdhhiep/Computation_using_Python | 095d14370fe1a01a192d7e44fcc81a52655f652b | 407ed29fddc267950e9860b8bbd1e038f0387c97 | refs/heads/master | 2021-05-29T12:35:12.630232 | 2015-06-27T01:05:17 | 2015-06-27T01:05:17 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,550 | py | #!/usr/bin/env python
def cg_memory ( i, g, cg1, cg2 ):
#*****************************************************************************80
#
## CG_MEMORY stores the CG values for all generators.
#
# Licensing:
#
# This code is distributed under the GNU LGPL license.
#
# Modified:
#
# 27 May 2013
#
# Author:
#
# John Burkardt
#
# Parameters:
#
# Input, integer I, the desired action.
# -1, get a value.
# 0, initialize all values.
# 1, set a value.
#
# Input, integer G, for I = -1 or +1, the index of
# the generator, with 1 <= G <= 32.
#
# Input/output, integer CG1, CG2. For I = -1,
# these are output, for I = +1, these are input, for I = 0,
# these arguments are ignored. When used, the arguments are
# old or new values of the CG parameter for generator G.
#
from sys import exit
g_max = 32
if ( g < 1 or g_max < g ):
    print ( '' )
    print ( 'CG_MEMORY - Fatal error!' )
    print ( '  Input generator index G is out of bounds.' )
exit ( 'CG_MEMORY - Fatal error!' )
if ( i < 0 ):
cg1 = cg_memory.cg1_save[g-1]
cg2 = cg_memory.cg2_save[g-1]
elif ( i == 0 ):
for i in range ( 1, g_max + 1 ):
cg_memory.cg1_save[i-1] = 0
cg_memory.cg2_save[i-1] = 0
cg1 = 0
cg2 = 0
elif ( 0 < i ):
cg_memory.cg1_save[g-1] = cg1
cg_memory.cg2_save[g-1] = cg2
return cg1, cg2
cg_memory.cg1_save = [ \
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 ]
cg_memory.cg2_save = [ \
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 ]
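if ( __name__ == '__main__' ):
  #
  #  Hedged demo, not part of the original library: initialize the CG
  #  values for all 32 generators, set generator 3, then read it back.
  #
  cg_memory ( 0, 1, 0, 0 )
  cg_memory ( +1, 3, 12345, 67890 )
  v1, v2 = cg_memory ( -1, 3, 0, 0 )
  print ( 'generator 3 -> cg1 = %d, cg2 = %d' % ( v1, v2 ) )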
| [
"[email protected]"
] | |
82b0f72f6f2d36f9269e4c508df534368159de11 | b8eb666c8b6fe4610d87bff8048f4a95a1c5b549 | /测试/接口性能/测试工具_Postman/Django/ppppp/manage.py | ca770a4fc71096442031a51c83199b3cc3b057f6 | [] | no_license | cainiaosun/study | 1e983e404005e537410b205634a27cee974faba0 | 91df9b63cda1839b7fc60de3b5f1eb19ccc33a1f | refs/heads/master | 2020-05-30T09:59:19.749099 | 2019-11-22T10:39:12 | 2019-11-22T10:39:12 | 189,641,828 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 625 | py | #!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys
def main():
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ppppp.settings')
try:
from django.core.management import execute_from_command_line
except ImportError as exc:
raise ImportError(
"Couldn't import Django. Are you sure it's installed and "
"available on your PYTHONPATH environment variable? Did you "
"forget to activate a virtual environment?"
) from exc
execute_from_command_line(sys.argv)
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
9258465aa0d433f9acebe02ace571cd240004f9f | 7bc54bae28eec4b735c05ac7bc40b1a8711bb381 | /src/trainer_v2/per_project/transparency/splade_regression/runner/run_splade_regression_fit.py | 4bd23e7ff5f343d8e1095a96d635abed5616e6d9 | [] | no_license | clover3/Chair | 755efd4abbd5f3f2fb59e9b1bc6e7bc070b8d05e | a2102ebf826a58efbc479181f1ebb5de21d1e49f | refs/heads/master | 2023-07-20T17:29:42.414170 | 2023-07-18T21:12:46 | 2023-07-18T21:12:46 | 157,024,916 | 0 | 0 | null | 2023-02-16T05:20:37 | 2018-11-10T21:55:29 | Python | UTF-8 | Python | false | false | 1,773 | py | import logging
import sys
import tensorflow as tf
from transformers import AutoTokenizer
from misc_lib import path_join
from trainer_v2.custom_loop.train_loop_helper import get_strategy_from_config
from trainer_v2.per_project.transparency.splade_regression.data_loaders.dataset_factories import \
get_vector_regression_dataset
from trainer_v2.per_project.transparency.splade_regression.modeling.regression_modeling import get_transformer_sparse_encoder
from trainer_v2.train_util.arg_flags import flags_parser
from taskman_client.wrapper3 import report_run3
from trainer_v2.chair_logging import c_log
from trainer_v2.custom_loop.run_config2 import get_run_config2, RunConfig2
@report_run3
def main(args):
c_log.info("Start {}".format(__file__))
c_log.setLevel(logging.DEBUG)
run_config: RunConfig2 = get_run_config2(args)
run_config.print_info()
strategy = get_strategy_from_config(run_config)
model_config = {
"model_type": "distilbert-base-uncased",
}
vocab_size = AutoTokenizer.from_pretrained(model_config["model_type"]).vocab_size
dataset_info = {
"max_seq_length": 256,
"max_vector_indices": 512,
"vocab_size": vocab_size
}
def build_dataset():
input_files = run_config.dataset_config.train_files_path
return get_vector_regression_dataset(input_files, dataset_info, run_config, True)
with strategy.scope():
new_model = get_transformer_sparse_encoder(model_config, True)
new_model.compile(loss="MSE", optimizer="adam")
dataset = build_dataset()
train_steps = 10000
new_model.fit(dataset, epochs=1, steps_per_epoch=train_steps)
if __name__ == "__main__":
args = flags_parser.parse_args(sys.argv[1:])
main(args)
| [
"[email protected]"
] | |
28462c56603bddf4c7c0c0a3a53ba040c1d5cf98 | 288a00d2ab34cba6c389b8c2444455aee55a8a95 | /tests/data23/recipe-519639.py | c34e06b386bf393e68d454f3b4c3206f3e367c16 | [
"BSD-2-Clause"
] | permissive | JohannesBuchner/pystrict3 | ffd77b7bbc378bd4d8f21b5c6bd69a0d64a52ddb | 18b0dd369082422f9bf0f89c72e7acb53a49849c | refs/heads/master | 2023-08-14T06:37:37.954880 | 2023-07-13T11:16:38 | 2023-07-13T11:16:38 | 268,571,175 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,891 | py | #!/usr/bin/env python
"""
True Lieberman-style delegation in Python.
Proxies are usually implemented as objects that forward method calls to a
"target" object. This approach has a major problem: forwarding makes the target
object the receiver of the method call; this means that calls originating from
the body of a method in the target will not go through the proxy (and thus their
behavior cannot be modified by the proxy).
For example, suppose we want a proxy to an instance of Target (shown below)
that is "safe", i.e., does not do anything bad like firing missiles. We can
just define a class that forwards calls to the safe methods, namely
send_flowers() and hang_out(). This class can have its own version of
fire_missiles() that does nothing. Now consider what happens when we call
the proxy object's innocent-looking hang_out() method. The call is forwarded
to the target object, which in turn calls the target object's (not the
proxy's) fire_missiles() method, and BOOM! (The proxy's version of
fire_missiles() is not called because forwarding has made the target object
the receiver of the new method call.)
By using delegation, one can implement proxies without the drawbacks of the
method-forwarding approach. This recipe shows how Python's __getattr__
method can be used to implement the kind of delegation present in
prototype-based languages like Self and Javascript, and how delegation can
be used to implement better proxies.
"""
__authors__ = ('Alessandro Warth <[email protected]>',
'Martin Blais <[email protected]>',)
class Target(object):
def __init__(self, n):
self.n = n
def send_flowers(self):
print('Sending %d flowers from %s' % (self.n, self))
def fire_missiles(self):
print('Firing %d missiles! from %s' % (self.n, self))
def hang_out(self):
# Oops! This is not as innocent as it looks!
print('Hang out... not so innocently.')
self.fire_missiles()
t = Target(17)
"""
Given 't', can we make a proxy to it that avoids firing missiles?
"""
from types import MethodType
class Proxy(object):
def __init__(self, target):
self._target = target
def __getattr__(self, aname):
target = self._target
f = getattr(target, aname)
if isinstance(f, MethodType):
            # Rebind the underlying function so the proxy, not the target,
            # becomes the receiver (self) of the call.
            return MethodType(f.__func__, self)
else:
return f
class SafeProxy(Proxy):
"Override dangerous methods of the target."
def fire_missiles(self):
pass
print('--------')
p = SafeProxy(t)
p.send_flowers()
p.hang_out()
class SafeProxy2(Proxy):
"Override more methods, wrapping two proxies deep."
def send_flowers(self):
print('Sending MORE and MORE flowers: %s' % self.n)
print('--------')
p2 = SafeProxy2(p)
p2.send_flowers()
| [
"[email protected]"
] | |
f883cbe229b7a5431271f2695882c1f0a45dc9a1 | 71cac038fabbc61602dbafb3ecddefbe132362aa | /Mining/Executables/package_integrity_test.py | 714ae7813f2e6a41d54d501a18f28fb59439a762 | [
"MIT"
] | permissive | AdamSwenson/TwitterProject | 00a38f18d6c5c6146f8ff21917da456c87c5453d | 8c5dc7a57eac611b555058736d609f2f204cb836 | refs/heads/master | 2022-12-14T07:06:43.958429 | 2019-10-02T01:22:22 | 2019-10-02T01:22:22 | 138,349,359 | 0 | 0 | MIT | 2022-11-22T02:58:09 | 2018-06-22T21:23:31 | Jupyter Notebook | UTF-8 | Python | false | false | 342 | py | """
Created by adam on 6/22/18
"""
__author__ = 'adam'
if __name__ == '__main__':
import environment
from Mining.AccessManagement import TwitterLogin
from Mining.UserQueries import UserFinder
from CommonTools.FileTools import CsvFileTools
from CommonTools.Loggers import SlackNotifications
print('I like stuff ')
| [
"[email protected]"
] | |
10e2ad332a4ab24121f1f3de9d8d610a87ad6421 | 6b2a8dd202fdce77c971c412717e305e1caaac51 | /solutions_5648941810974720_1/Python/lbj/A.py | fc68beb5e6f1eb1db1d76e181fd7a1244032b118 | [] | no_license | alexandraback/datacollection | 0bc67a9ace00abbc843f4912562f3a064992e0e9 | 076a7bc7693f3abf07bfdbdac838cb4ef65ccfcf | refs/heads/master | 2021-01-24T18:27:24.417992 | 2017-05-23T09:23:38 | 2017-05-23T09:23:38 | 84,313,442 | 2 | 4 | null | null | null | null | UTF-8 | Python | false | false | 1,008 | py | from collections import Counter
from sys import stderr
t = int(input())
def sf(vs):
for i in range(10):
for _ in range(vs[i]):
yield i
for cn in range(t):
s = raw_input()
cnt = Counter(s)
vals = [0 for i in range(10)]
# Linear programming / Gauss-Jordan reduction
vals[0] = cnt['Z']
vals[2] = cnt['W']
vals[4] = cnt['U']
vals[6] = cnt['X']
vals[8] = cnt['G']
vals[5] = cnt['F'] - vals[4]
vals[9] = cnt['I'] - vals[5] - vals[6] - vals[8]
vals[7] = cnt['V'] - vals[5]
vals[3] = cnt['H'] - vals[8]
vals[1] = cnt['O'] - vals[0] - vals[2] - vals[4]
# Santiy checking
assert vals[3] + vals[8] == cnt['H']
assert vals[0] + vals[1] + vals[3] + vals[3] + vals[5] + vals[7] + vals[7] + vals[8] + vals[9] == cnt['E'], "Expected %d, got %d: %r %r" % (cnt['E'], vals[0] + vals[1] + vals[3] + vals[3] + vals[5] + vals[7] + vals[7] + vals[8] + vals[9], cnt, vals)
print("Case #%d: %s" % (cn + 1, ''.join(map(str, sf(vals)))))
| [
"[email protected]"
] | |
17b1cf7ee5701068d16e5eb678d394de6e54e0ba | 89abe56b171c6b7bf29395f92155f1f2c524c861 | /src/examples/windy_grid.py | f837e0454e0ef055d7846e2950790cf3b3c88448 | [] | no_license | soumyamulgund/MDP-DP-RL | d37f24d381f9be2cd8aeb0142e0fc747469fab72 | b3ee8b69e2b26f46c7a7cefb6b5a07b5dd38738e | refs/heads/master | 2020-03-26T07:55:50.798742 | 2018-08-14T02:33:36 | 2018-08-14T02:33:36 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,971 | py | from typing import Tuple, Sequence, NamedTuple, Set, Mapping
from enum import Enum
from scipy.stats import norm
from processes.mdp_refined import MDPRefined
from processes.det_policy import DetPolicy
from func_approx.dnn_spec import DNNSpec
from func_approx.func_approx_base import FuncApproxBase
from algorithms.func_approx_spec import FuncApproxSpec
from examples.run_all_algorithms import RunAllAlgorithms
Node = Tuple[int, int]
NodeSet = Set[Node]
WindSpec = Sequence[Tuple[float, float]]
class Move(Enum):
U = (0, 1)
D = (0, -1)
L = (-1, 0)
R = (1, 0)
S = (0, 0)
class WindyGrid(NamedTuple):
x_len: int
y_len: int
blocks: NodeSet
terminals: NodeSet
wind: WindSpec
edge_bump_cost: float
block_bump_cost: float
def validate_spec(self) -> bool:
b1 = self.x_len >= 2
b2 = self.y_len >= 2
b3 = all(0 <= x < self.x_len and 0 <= y < self.y_len
for x, y in self.blocks)
b4 = len(self.terminals) >= 1
b5 = all(0 <= x < self.x_len and 0 <= y < self.y_len
for x, y in self.terminals)
b6 = len(self.wind) == self.x_len
b7 = all(y >= 0 for _, y in self.wind)
b8 = self.edge_bump_cost > 1
b9 = self.block_bump_cost > 1
return all([b1, b2, b3, b4, b5, b6, b7, b8, b9])
@staticmethod
def add_tuples(a: Node, b: Node) -> Node:
return a[0] + b[0], a[1] + b[1]
def is_valid_state(self, state: Node) -> bool:
return 0 <= state[0] < self.x_len \
and 0 <= state[1] < self.y_len \
and state not in self.blocks
def get_all_nt_states(self) -> NodeSet:
return {(i, j) for i in range(self.x_len) for j in range(self.y_len)
if (i, j) not in set.union(self.blocks, self.terminals)}
def get_actions_and_next_states(self, nt_state: Node) \
-> Set[Tuple[Move, Node]]:
temp = {(a.name, WindyGrid.add_tuples(nt_state, a.value))
for a in Move if a != Move.S}
return {(a, s) for a, s in temp if self.is_valid_state(s)}
def get_state_probs_and_rewards(self, state: Node) \
-> Mapping[Node, Tuple[float, float]]:
state_x, state_y = state
barriers = set.union(
{-1, self.y_len},
{y for x, y in self.blocks if x == state_x}
)
lower = max(y for y in barriers if y < state_y) + 1
upper = min(y for y in barriers if y > state_y) - 1
mu, sigma = self.wind[state_x]
if sigma == 0:
only_state = round(state_y + mu)
if lower <= only_state <= upper:
cost = 0.
elif only_state < lower:
cost = self.edge_bump_cost if lower == 0 \
else self.block_bump_cost
else:
cost = self.edge_bump_cost if upper == self.y_len - 1 \
else self.block_bump_cost
ret = {(state_x, max(lower, min(upper, only_state))):
(1., -(1. + cost))}
else:
rv = norm(loc=mu, scale=sigma)
temp_data = []
for y in range(lower, upper + 1):
if y == lower:
pr = rv.cdf(lower - state_y + 0.5)
pr1 = rv.cdf(lower - state_y - 0.5)
cost = pr1 / pr * (self.edge_bump_cost if lower == 0
else self.block_bump_cost) \
if pr != 0. else 0.
elif y == upper:
pr = 1. - rv.cdf(upper - state_y - 0.5)
                    pr1 = 1. - rv.cdf(upper - state_y + 0.5)
cost = pr1 / pr * (self.edge_bump_cost
if upper == self.y_len - 1
else self.block_bump_cost) \
if pr != 0. else 0.
else:
pr = rv.cdf(y - state_y + 0.5) - rv.cdf(y - state_y - 0.5)
cost = 0.
temp_data.append((y, pr, cost))
sum_pr = sum(p for _, p, _ in temp_data)
ret = {(state_x, y): (p / sum_pr, -(1. + c))
for y, p, c in temp_data}
return ret
def get_non_terminals_dict(self) \
-> Mapping[Node, Mapping[Move, Mapping[Node, Tuple[float, float]]]]:
return {s: {a: ({s1: (1., -1.)} if s1 in self.terminals else
self.get_state_probs_and_rewards(s1))
for a, s1 in self.get_actions_and_next_states(s)}
for s in self.get_all_nt_states()}
def get_mdp_refined_dict(self) \
-> Mapping[Node, Mapping[Move, Mapping[Node, Tuple[float, float]]]]:
d1 = self.get_non_terminals_dict()
d2 = {s: {Move.S.name: {s: (1.0, 0.0)}} for s in self.terminals}
return {**d1, **d2}
def get_mdp_refined(self) -> MDPRefined:
return MDPRefined(self.get_mdp_refined_dict(), gamma=1.)
def print_vf(self, vf_dict, chars: int, decimals: int) -> None:
display = "%%%d.%df" % (chars, decimals)
display1 = "%%%dd" % chars
display2 = "%%%dd " % 2
blocks_dict = {s: 'X' * chars for s in self.blocks}
non_blocks_dict = {s: display % -v for s, v in vf_dict.items()}
full_dict = {**non_blocks_dict, **blocks_dict}
print(" " + " ".join([display1 % j for j in range(0, self.x_len)]))
for i in range(self.y_len - 1, -1, -1):
print(display2 % i + " ".join(full_dict[(j, i)]
for j in range(0, self.x_len)))
def print_policy(self, pol: DetPolicy) -> None:
display1 = "%%%dd" % 2
display2 = "%%%dd " % 2
blocks_dict = {s: 'X' for s in self.blocks}
full_dict = {**pol.get_state_to_action_map(), **blocks_dict}
print(" " + " ".join([display1 % j for j in range(0, self.x_len)]))
for i in range(self.y_len - 1, -1, -1):
print(display2 % i + " ".join(full_dict[(j, i)]
for j in range(0, self.x_len)))
def print_wind_and_bumps(self, chars: int, decimals: int) -> None:
display = "%%%d.%df" % (chars, decimals)
print("mu " + " ".join(display % m for m, _ in self.wind))
print("sd " + " ".join(display % s for _, s in self.wind))
print("Block Bump Cost = %5.2f" % self.block_bump_cost)
print("Edge Bump Cost = %5.2f" % self.edge_bump_cost)
if __name__ == '__main__':
wg = WindyGrid(
x_len=6,
y_len=9,
blocks={(1, 5), (2, 1), (2, 2), (2, 3), (4, 4), (4, 5), (4, 6), (4, 7)},
terminals={(5, 7)},
wind=[(0., 0.), (0., 0.), (-1.2, 0.3), (-1.7, 0.7), (0.6, 0.4), (0.5, 1.2)],
edge_bump_cost=3.,
block_bump_cost=4.
)
valid = wg.validate_spec()
mdp_ref_obj = wg.get_mdp_refined()
this_tolerance = 1e-2
this_first_visit_mc = True
this_num_samples = 30
this_softmax = False
this_epsilon = 0.05
this_epsilon_half_life = 100
this_learning_rate = 0.1
this_learning_rate_decay = 1e6
this_lambd = 0.8
this_num_episodes = 10000
this_max_steps = 1000
this_td_offline = True
this_fa_spec = FuncApproxSpec(
state_feature_funcs=FuncApproxBase.get_indicator_feature_funcs(
mdp_ref_obj.all_states
),
action_feature_funcs=FuncApproxBase.get_indicator_feature_funcs(
{m.name for m in Move}
),
dnn_spec=DNNSpec(
neurons=[2, 4],
hidden_activation=DNNSpec.relu,
hidden_activation_deriv=DNNSpec.relu_deriv,
output_activation=DNNSpec.identity,
output_activation_deriv=DNNSpec.identity_deriv
)
)
raa = RunAllAlgorithms(
mdp_refined=mdp_ref_obj,
tolerance=this_tolerance,
first_visit_mc=this_first_visit_mc,
num_samples=this_num_samples,
softmax=this_softmax,
epsilon=this_epsilon,
epsilon_half_life=this_epsilon_half_life,
learning_rate=this_learning_rate,
learning_rate_decay=this_learning_rate_decay,
lambd=this_lambd,
num_episodes=this_num_episodes,
max_steps=this_max_steps,
tdl_fa_offline=this_td_offline,
fa_spec=this_fa_spec
)
for name, algo in raa.get_all_algorithms().items():
print(name)
opt_pol_func = algo.get_optimal_det_policy_func()
opt_pol = DetPolicy({s: opt_pol_func(s) for s in mdp_ref_obj.all_states})
opt_vf_func = algo.get_optimal_value_func()
opt_vf_dict = {s: opt_vf_func(s) for s in mdp_ref_obj.all_states}
wg.print_policy(opt_pol)
chars_count = 5
decimals_count = 2
print()
wg.print_vf(opt_vf_dict, chars_count, decimals_count)
print()
wg.print_wind_and_bumps(chars_count, decimals_count)
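        # Hedged sanity check (assumption: not in the original repo): the
        # normalized transition probabilities out of a windy, unblocked
        # state should sum to one.
        mass = sum(p for p, _ in wg.get_state_probs_and_rewards((3, 4)).values())
        print("probability mass leaving (3, 4): %.4f" % mass)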
print()
print()
| [
"[email protected]"
] | |
21ce95a24a03783d71c31d4be40287f8ba4b5f47 | 82b946da326148a3c1c1f687f96c0da165bb2c15 | /sdk/python/pulumi_azure_native/insights/v20201020/my_workbook.py | 30d5863b2595f34343b8a7942db921ada414cc9e | [
"BSD-3-Clause",
"Apache-2.0"
] | permissive | morrell/pulumi-azure-native | 3916e978382366607f3df0a669f24cb16293ff5e | cd3ba4b9cb08c5e1df7674c1c71695b80e443f08 | refs/heads/master | 2023-06-20T19:37:05.414924 | 2021-07-19T20:57:53 | 2021-07-19T20:57:53 | 387,815,163 | 0 | 0 | Apache-2.0 | 2021-07-20T14:18:29 | 2021-07-20T14:18:28 | null | UTF-8 | Python | false | false | 22,897 | py | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from ... import _utilities
from . import outputs
from ._enums import *
from ._inputs import *
__all__ = ['MyWorkbookArgs', 'MyWorkbook']
@pulumi.input_type
class MyWorkbookArgs:
def __init__(__self__, *,
category: pulumi.Input[str],
display_name: pulumi.Input[str],
resource_group_name: pulumi.Input[str],
serialized_data: pulumi.Input[str],
etag: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input['MyWorkbookManagedIdentityArgs']] = None,
kind: Optional[pulumi.Input[Union[str, 'Kind']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_name: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
storage_uri: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
type: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a MyWorkbook resource.
:param pulumi.Input[str] category: Workbook category, as defined by the user at creation time.
:param pulumi.Input[str] display_name: The user-defined name of the private workbook.
:param pulumi.Input[str] resource_group_name: The name of the resource group. The name is case insensitive.
:param pulumi.Input[str] serialized_data: Configuration of this particular private workbook. Configuration data is a string containing valid JSON
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] etag: Resource etag
:param pulumi.Input[str] id: Azure resource Id
:param pulumi.Input['MyWorkbookManagedIdentityArgs'] identity: Identity used for BYOS
:param pulumi.Input[Union[str, 'Kind']] kind: The kind of workbook. Choices are user and shared.
:param pulumi.Input[str] location: Resource location
:param pulumi.Input[str] name: Azure resource name
:param pulumi.Input[str] resource_name: The name of the Application Insights component resource.
:param pulumi.Input[str] source_id: Optional resourceId for a source resource.
:param pulumi.Input[str] storage_uri: BYOS Storage Account URI
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Resource tags
:param pulumi.Input[str] type: Azure resource type
:param pulumi.Input[str] version: This instance's version of the data model. This can change as new features are added that can be marked private workbook.
"""
pulumi.set(__self__, "category", category)
pulumi.set(__self__, "display_name", display_name)
pulumi.set(__self__, "resource_group_name", resource_group_name)
pulumi.set(__self__, "serialized_data", serialized_data)
if etag is not None:
pulumi.set(__self__, "etag", etag)
if id is not None:
pulumi.set(__self__, "id", id)
if identity is not None:
pulumi.set(__self__, "identity", identity)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if location is not None:
pulumi.set(__self__, "location", location)
if name is not None:
pulumi.set(__self__, "name", name)
if resource_name is not None:
pulumi.set(__self__, "resource_name", resource_name)
if source_id is not None:
pulumi.set(__self__, "source_id", source_id)
if storage_uri is not None:
pulumi.set(__self__, "storage_uri", storage_uri)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if type is not None:
pulumi.set(__self__, "type", type)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter
def category(self) -> pulumi.Input[str]:
"""
Workbook category, as defined by the user at creation time.
"""
return pulumi.get(self, "category")
@category.setter
def category(self, value: pulumi.Input[str]):
pulumi.set(self, "category", value)
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Input[str]:
"""
The user-defined name of the private workbook.
"""
return pulumi.get(self, "display_name")
@display_name.setter
def display_name(self, value: pulumi.Input[str]):
pulumi.set(self, "display_name", value)
@property
@pulumi.getter(name="resourceGroupName")
def resource_group_name(self) -> pulumi.Input[str]:
"""
The name of the resource group. The name is case insensitive.
"""
return pulumi.get(self, "resource_group_name")
@resource_group_name.setter
def resource_group_name(self, value: pulumi.Input[str]):
pulumi.set(self, "resource_group_name", value)
@property
@pulumi.getter(name="serializedData")
def serialized_data(self) -> pulumi.Input[str]:
"""
Configuration of this particular private workbook. Configuration data is a string containing valid JSON
"""
return pulumi.get(self, "serialized_data")
@serialized_data.setter
def serialized_data(self, value: pulumi.Input[str]):
pulumi.set(self, "serialized_data", value)
@property
@pulumi.getter
def etag(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Resource etag
"""
return pulumi.get(self, "etag")
@etag.setter
def etag(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "etag", value)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
"""
Azure resource Id
"""
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def identity(self) -> Optional[pulumi.Input['MyWorkbookManagedIdentityArgs']]:
"""
Identity used for BYOS
"""
return pulumi.get(self, "identity")
@identity.setter
def identity(self, value: Optional[pulumi.Input['MyWorkbookManagedIdentityArgs']]):
pulumi.set(self, "identity", value)
@property
@pulumi.getter
def kind(self) -> Optional[pulumi.Input[Union[str, 'Kind']]]:
"""
The kind of workbook. Choices are user and shared.
"""
return pulumi.get(self, "kind")
@kind.setter
def kind(self, value: Optional[pulumi.Input[Union[str, 'Kind']]]):
pulumi.set(self, "kind", value)
@property
@pulumi.getter
def location(self) -> Optional[pulumi.Input[str]]:
"""
Resource location
"""
return pulumi.get(self, "location")
@location.setter
def location(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "location", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
Azure resource name
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="resourceName")
def resource_name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the Application Insights component resource.
"""
return pulumi.get(self, "resource_name")
@resource_name.setter
def resource_name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "resource_name", value)
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> Optional[pulumi.Input[str]]:
"""
Optional resourceId for a source resource.
"""
return pulumi.get(self, "source_id")
@source_id.setter
def source_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "source_id", value)
@property
@pulumi.getter(name="storageUri")
def storage_uri(self) -> Optional[pulumi.Input[str]]:
"""
BYOS Storage Account URI
"""
return pulumi.get(self, "storage_uri")
@storage_uri.setter
def storage_uri(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "storage_uri", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]:
"""
Resource tags
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
"""
Azure resource type
"""
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def version(self) -> Optional[pulumi.Input[str]]:
"""
This instance's version of the data model. This can change as new features are added that can be marked private workbook.
"""
return pulumi.get(self, "version")
@version.setter
def version(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "version", value)
class MyWorkbook(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
category: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
etag: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['MyWorkbookManagedIdentityArgs']]] = None,
kind: Optional[pulumi.Input[Union[str, 'Kind']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_name_: Optional[pulumi.Input[str]] = None,
serialized_data: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
storage_uri: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
type: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
An Application Insights private workbook definition.
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] category: Workbook category, as defined by the user at creation time.
:param pulumi.Input[str] display_name: The user-defined name of the private workbook.
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] etag: Resource etag
:param pulumi.Input[str] id: Azure resource Id
:param pulumi.Input[pulumi.InputType['MyWorkbookManagedIdentityArgs']] identity: Identity used for BYOS
:param pulumi.Input[Union[str, 'Kind']] kind: The kind of workbook. Choices are user and shared.
:param pulumi.Input[str] location: Resource location
:param pulumi.Input[str] name: Azure resource name
:param pulumi.Input[str] resource_group_name: The name of the resource group. The name is case insensitive.
:param pulumi.Input[str] resource_name_: The name of the Application Insights component resource.
:param pulumi.Input[str] serialized_data: Configuration of this particular private workbook. Configuration data is a string containing valid JSON
:param pulumi.Input[str] source_id: Optional resourceId for a source resource.
:param pulumi.Input[str] storage_uri: BYOS Storage Account URI
:param pulumi.Input[Mapping[str, pulumi.Input[str]]] tags: Resource tags
:param pulumi.Input[str] type: Azure resource type
:param pulumi.Input[str] version: This instance's version of the data model. This can change as new features are added that can be marked private workbook.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: MyWorkbookArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
An Application Insights private workbook definition.
:param str resource_name: The name of the resource.
:param MyWorkbookArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(MyWorkbookArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
category: Optional[pulumi.Input[str]] = None,
display_name: Optional[pulumi.Input[str]] = None,
etag: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
id: Optional[pulumi.Input[str]] = None,
identity: Optional[pulumi.Input[pulumi.InputType['MyWorkbookManagedIdentityArgs']]] = None,
kind: Optional[pulumi.Input[Union[str, 'Kind']]] = None,
location: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
resource_group_name: Optional[pulumi.Input[str]] = None,
resource_name_: Optional[pulumi.Input[str]] = None,
serialized_data: Optional[pulumi.Input[str]] = None,
source_id: Optional[pulumi.Input[str]] = None,
storage_uri: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Mapping[str, pulumi.Input[str]]]] = None,
type: Optional[pulumi.Input[str]] = None,
version: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = MyWorkbookArgs.__new__(MyWorkbookArgs)
if category is None and not opts.urn:
raise TypeError("Missing required property 'category'")
__props__.__dict__["category"] = category
if display_name is None and not opts.urn:
raise TypeError("Missing required property 'display_name'")
__props__.__dict__["display_name"] = display_name
__props__.__dict__["etag"] = etag
__props__.__dict__["id"] = id
__props__.__dict__["identity"] = identity
__props__.__dict__["kind"] = kind
__props__.__dict__["location"] = location
__props__.__dict__["name"] = name
if resource_group_name is None and not opts.urn:
raise TypeError("Missing required property 'resource_group_name'")
__props__.__dict__["resource_group_name"] = resource_group_name
__props__.__dict__["resource_name"] = resource_name_
if serialized_data is None and not opts.urn:
raise TypeError("Missing required property 'serialized_data'")
__props__.__dict__["serialized_data"] = serialized_data
__props__.__dict__["source_id"] = source_id
__props__.__dict__["storage_uri"] = storage_uri
__props__.__dict__["tags"] = tags
__props__.__dict__["type"] = type
__props__.__dict__["version"] = version
__props__.__dict__["time_modified"] = None
__props__.__dict__["user_id"] = None
alias_opts = pulumi.ResourceOptions(aliases=[pulumi.Alias(type_="azure-nextgen:insights/v20201020:MyWorkbook"), pulumi.Alias(type_="azure-native:insights:MyWorkbook"), pulumi.Alias(type_="azure-nextgen:insights:MyWorkbook"), pulumi.Alias(type_="azure-native:insights/v20150501:MyWorkbook"), pulumi.Alias(type_="azure-nextgen:insights/v20150501:MyWorkbook"), pulumi.Alias(type_="azure-native:insights/v20210308:MyWorkbook"), pulumi.Alias(type_="azure-nextgen:insights/v20210308:MyWorkbook")])
opts = pulumi.ResourceOptions.merge(opts, alias_opts)
super(MyWorkbook, __self__).__init__(
'azure-native:insights/v20201020:MyWorkbook',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None) -> 'MyWorkbook':
"""
Get an existing MyWorkbook resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = MyWorkbookArgs.__new__(MyWorkbookArgs)
__props__.__dict__["category"] = None
__props__.__dict__["display_name"] = None
__props__.__dict__["etag"] = None
__props__.__dict__["identity"] = None
__props__.__dict__["kind"] = None
__props__.__dict__["location"] = None
__props__.__dict__["name"] = None
__props__.__dict__["serialized_data"] = None
__props__.__dict__["source_id"] = None
__props__.__dict__["storage_uri"] = None
__props__.__dict__["tags"] = None
__props__.__dict__["time_modified"] = None
__props__.__dict__["type"] = None
__props__.__dict__["user_id"] = None
__props__.__dict__["version"] = None
return MyWorkbook(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def category(self) -> pulumi.Output[str]:
"""
Workbook category, as defined by the user at creation time.
"""
return pulumi.get(self, "category")
@property
@pulumi.getter(name="displayName")
def display_name(self) -> pulumi.Output[str]:
"""
The user-defined name of the private workbook.
"""
return pulumi.get(self, "display_name")
@property
@pulumi.getter
def etag(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Resource etag
"""
return pulumi.get(self, "etag")
@property
@pulumi.getter
def identity(self) -> pulumi.Output[Optional['outputs.MyWorkbookManagedIdentityResponse']]:
"""
Identity used for BYOS
"""
return pulumi.get(self, "identity")
@property
@pulumi.getter
def kind(self) -> pulumi.Output[Optional[str]]:
"""
The kind of workbook. Choices are user and shared.
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def location(self) -> pulumi.Output[Optional[str]]:
"""
Resource location
"""
return pulumi.get(self, "location")
@property
@pulumi.getter
def name(self) -> pulumi.Output[Optional[str]]:
"""
Azure resource name
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="serializedData")
def serialized_data(self) -> pulumi.Output[str]:
"""
Configuration of this particular private workbook. Configuration data is a string containing valid JSON
"""
return pulumi.get(self, "serialized_data")
@property
@pulumi.getter(name="sourceId")
def source_id(self) -> pulumi.Output[Optional[str]]:
"""
Optional resourceId for a source resource.
"""
return pulumi.get(self, "source_id")
@property
@pulumi.getter(name="storageUri")
def storage_uri(self) -> pulumi.Output[Optional[str]]:
"""
BYOS Storage Account URI
"""
return pulumi.get(self, "storage_uri")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Mapping[str, str]]]:
"""
Resource tags
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter(name="timeModified")
def time_modified(self) -> pulumi.Output[str]:
"""
Date and time in UTC of the last modification that was made to this private workbook definition.
"""
return pulumi.get(self, "time_modified")
@property
@pulumi.getter
def type(self) -> pulumi.Output[Optional[str]]:
"""
Azure resource type
"""
return pulumi.get(self, "type")
@property
@pulumi.getter(name="userId")
def user_id(self) -> pulumi.Output[str]:
"""
Unique user id of the specific user that owns this private workbook.
"""
return pulumi.get(self, "user_id")
@property
@pulumi.getter
def version(self) -> pulumi.Output[Optional[str]]:
"""
This instance's version of the data model. This can change as new features are added that can be marked private workbook.
"""
return pulumi.get(self, "version")
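# A minimal usage sketch (illustrative, not part of the generated SDK): import
# an existing private workbook into a Pulumi program via MyWorkbook.get. The
# resource name and the resource ID below are hypothetical placeholders.
#
#   existing = MyWorkbook.get(
#       "imported-workbook",
#       id="/subscriptions/<sub>/resourceGroups/<rg>/providers/microsoft.insights/myWorkbooks/<name>",
#   )
#   pulumi.export("workbookDisplayName", existing.display_name)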
| [
"[email protected]"
] | |
864ceef09f60b62a0a11649678e819700f532179 | 535fe1b2b746e096c2a39c61792dffe702024841 | /ch5_client/4_10mTestWithThread.py | 09750dedd80a510c622f3c7f13e932617d0013b9 | [] | no_license | AstinCHOI/book_thisIsRedis | bad890a7570767da3661069aba55b604a2c1284f | 9ec10df7a757e05e7459f003fadfcc4eab892a3b | refs/heads/master | 2020-03-11T18:16:06.826665 | 2018-05-22T03:00:02 | 2018-05-22T03:00:02 | 130,172,385 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 769 | py | import time
import threading
import redis
TOTAL_OP = 10000000
THREAD = 5
def redis_thread(pool, idx):
r = redis.Redis(connection_pool=pool)
for i in range(TOTAL_OP):
if i % THREAD == idx:
key = value = "key" + str(100000000 + i)
r.set(key, value)
pool = redis.BlockingConnectionPool(host='localhost', port=6379, db=0,
max_connections=500, decode_responses=True)
threads = []
start = int(time.time())
for i in range(THREAD):
t = threading.Thread(target=redis_thread, args=(pool, i))
threads.append(t)
t.start()
for t in threads:
t.join()
pool.disconnect()
elapsed = time.time() - start  # keep fractional seconds; int truncation risks division by zero on very fast runs
print("requests per second : {:.0f}".format(TOTAL_OP / elapsed))
print("time : {:.2f}s".format(elapsed))
| [
"[email protected]"
] | |
891c2662d78dca4bb7636a77a59d30e31e8a9460 | 6ace7e15e3191d1b8228ad7922a8552ca84f84e7 | /.history/image_detector_20200614200237.py | 41ec32060b2f3b5048ad6b999ba4dbbdd054c16f | [] | no_license | mehmetaliarican/Similar-Image-Finder | f72e95be50c51aa03fc64954a03124b199ca64b1 | a9e0015c443b4a73394099cccf60329cfc4c7cef | refs/heads/master | 2022-10-27T00:57:43.173993 | 2020-06-14T18:02:16 | 2020-06-14T18:02:16 | 272,256,295 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,028 | py | from skimage.metrics import structural_similarity as ssim
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import cv2
import glob
import os
import argparse
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--threshold", type=float, default=0.9,
help="threshold")
ap.add_argument("-d", "--dataset", required=True,
help="path to input dataset")
args = vars(ap.parse_args())
def mse(imageA, imageB):
# the 'Mean Squared Error' between the two images is the
# sum of the squared difference between the two images;
# NOTE: the two images must have the same dimension
err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
err /= float(imageA.shape[0] * imageA.shape[1])
# return the MSE, the lower the error, the more "similar"
# the two images are
return err
def compare_images(path, imageA, imageB):
    # compute the mean squared error and structural similarity
    # index for the images
    m = mse(imageA, imageB)
    s = ssim(imageA, imageB)
    tres = args['threshold']
    if s >= tres:
        # report the matching pair and show both images side by side
        print("{} | mse={:.2f} | threshold={} | ssim={:.4f}".format(path, m, tres, s))
        twin = np.hstack([imageA, imageB])
        cv2.imshow('', twin)
        cv2.waitKey(0)
imagePaths = list(paths.list_images(args['dataset']))
companies = ['dhl', 'paypal', 'wellsfargo']
all_data = []
for path in imagePaths:
    # keep only images whose path names one of the known companies
    for c in companies:
        if c in path:
            all_data.append({'comp': c, 'path': path})
for idx, image in enumerate(all_data):
    try:
        p1 = cv2.imread(image['path'])
        p1 = cv2.resize(p1, (300, 300))
        p1 = cv2.cvtColor(p1, cv2.COLOR_BGR2GRAY)
        # only look at images after this one, so each pair is compared once
        for other in all_data[idx + 1:]:
            p2 = cv2.imread(other['path'])
            p2 = cv2.resize(p2, (300, 300))
            p2 = cv2.cvtColor(p2, cv2.COLOR_BGR2GRAY)
            compare_images(image['path'], p1, p2)
    except Exception as e:
        print(str(e))
| [
"[email protected]"
] | |
90fce63be2d2ea67ff71cb83ce336991d16670c6 | 94f8d393536a38136420b299555a47989cb95e06 | /tengxunzhaopin123/tengxunzhaopin123/middlewares.py | a406fda992e2fc64e8c8f5e26f48c0662c6ef797 | [] | no_license | nolan0536/weizhiBigDataPython | 9164ddc50cd0b850ec7536270d690dd0848b9f06 | ef4ab9d749159166fcfe48883d680ac058b12425 | refs/heads/main | 2023-04-21T21:15:11.235258 | 2021-05-08T01:28:51 | 2021-05-08T01:28:51 | 361,971,771 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,670 | py | # Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals
# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
class Tengxunzhaopin123SpiderMiddleware:
# Not all methods need to be defined. If a method is not defined,
# scrapy acts as if the spider middleware does not modify the
# passed objects.
@classmethod
def from_crawler(cls, crawler):
# This method is used by Scrapy to create your spiders.
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
def process_spider_input(self, response, spider):
# Called for each response that goes through the spider
# middleware and into the spider.
# Should return None or raise an exception.
return None
def process_spider_output(self, response, result, spider):
# Called with the results returned from the Spider, after
# it has processed the response.
# Must return an iterable of Request, or item objects.
for i in result:
yield i
def process_spider_exception(self, response, exception, spider):
# Called when a spider or process_spider_input() method
# (from other spider middleware) raises an exception.
# Should return either None or an iterable of Request or item objects.
pass
def process_start_requests(self, start_requests, spider):
# Called with the start requests of the spider, and works
# similarly to the process_spider_output() method, except
# that it doesn’t have a response associated.
# Must return only requests (not items).
for r in start_requests:
yield r
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)
class Tengxunzhaopin123DownloaderMiddleware:
# Not all methods need to be defined. If a method is not defined,
# scrapy acts as if the downloader middleware does not modify the
# passed objects.
@classmethod
def from_crawler(cls, crawler):
# This method is used by Scrapy to create your spiders.
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
def process_request(self, request, spider):
# Called for each request that goes through the downloader
# middleware.
# Must either:
# - return None: continue processing this request
# - or return a Response object
# - or return a Request object
# - or raise IgnoreRequest: process_exception() methods of
# installed downloader middleware will be called
return None
def process_response(self, request, response, spider):
# Called with the response returned from the downloader.
# Must either;
# - return a Response object
# - return a Request object
# - or raise IgnoreRequest
return response
def process_exception(self, request, exception, spider):
# Called when a download handler or a process_request()
# (from other downloader middleware) raises an exception.
# Must either:
# - return None: continue processing this exception
# - return a Response object: stops process_exception() chain
# - return a Request object: stops process_exception() chain
pass
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)
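# Usage note (standard Scrapy wiring, shown here as an example): neither
# middleware takes effect until it is enabled in the project's settings.py, e.g.
#
#   SPIDER_MIDDLEWARES = {
#       "tengxunzhaopin123.middlewares.Tengxunzhaopin123SpiderMiddleware": 543,
#   }
#   DOWNLOADER_MIDDLEWARES = {
#       "tengxunzhaopin123.middlewares.Tengxunzhaopin123DownloaderMiddleware": 543,
#   }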
| [
"[email protected]"
] | |
a1ccfcb82ab8dd810185bd40e65365e3fa67a304 | e5a511e346f5be8a82fe9cb2edf457aa7e82859c | /Python/cppsecrets.com/program 14.py | 7072f9e819a92b61e186cb3ff5e5ff835dad44e7 | [] | no_license | nekapoor7/Python-and-Django | 8397561c78e599abc8755887cbed39ebef8d27dc | 8fa4d15f4fa964634ad6a89bd4d8588aa045e24f | refs/heads/master | 2022-10-10T20:23:02.673600 | 2020-06-11T09:06:42 | 2020-06-11T09:06:42 | 257,163,996 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 171 | py | """Python Program to Count the Occurrences of Each Word in a Given String Sentence"""
from collections import Counter
text = input()
occur = Counter(text.split())  # split into words first; Counter(text) would count characters
print(occur) | [
"[email protected]"
] | |
3701c020b39c94cf68bd19271c90cc4e2b9b1579 | bfe9c678726a53421f26a7cfdc1447681624c4f2 | /bast/graphics/mesh/sub_mesh.py | 6bc0a14c7930c25239866280f4636279921bb430 | [] | no_license | adamlwgriffiths/bast | df983cf0322b320efdc8ef4ba0207214ebd31ef6 | a78186e9d111a799581bd604b4985467638b0b10 | refs/heads/master | 2021-01-19T20:18:27.558273 | 2015-05-02T03:41:22 | 2015-05-02T03:41:22 | 31,202,746 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,756 | py | from __future__ import absolute_import
from OpenGL import GL
from ...common.object import DescriptorMixin
from ..buffer.vertex_array import VertexArray
from ..buffer.buffer_pointer import BufferPointer
class SubMesh(DescriptorMixin):
def __init__(self, material, indices=None, primitive=GL.GL_TRIANGLES, **pointers):
self._pointers = pointers
self._material = material
self.primitive = primitive
self.indices = indices
for pointer in pointers.values():
if not isinstance(pointer, BufferPointer):
raise ValueError('Must be of type BufferPointer')
self._vertex_array = VertexArray()
self._bind_pointers()
def _bind_pointers(self):
# TODO: make this more efficient, don't just clear all pointers
self._vertex_array.clear()
# assign our pointers to the vertex array
for name, pointer in self._pointers.items():
if not isinstance(pointer, BufferPointer):
raise ValueError('Must be a buffer pointer')
attribute = self._material.program.attributes.get(name)
if attribute:
self._vertex_array[attribute.location] = pointer
def render(self, **uniforms):
# set our uniforms
self._material.set_uniforms(**uniforms)
# render
with self._material:
if self.indices is not None:
self._vertex_array.render_indices(self.indices, self.primitive)
else:
self._vertex_array.render(self.primitive)
@property
def material(self):
return self._material
@material.setter
def material(self, material):
self._material = material
self._bind_pointers()
| [
"[email protected]"
] | |
42a947cc4062cb659223ccf9afbd8090f7fdc4aa | 74060c5771ae3904e99cda84ef3d1ead58940917 | /app.py | 0dd8b44dd6721d70dd36f820ac10c849c3f26655 | [] | no_license | claraj/river-level | 423f34027287f03b0b10a79bddc6cce17d1c4226 | 8a8aed77382337de58af6b694b01c210ea3d6a72 | refs/heads/main | 2023-06-03T07:47:46.025175 | 2021-06-11T02:10:57 | 2021-06-11T02:10:57 | 375,766,339 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,273 | py | from flask import Flask, abort
from flask.json import jsonify
import requests
from datetime import datetime
app = Flask(__name__)
@app.errorhandler(400)
def bad_request(e):
    return jsonify({'Error': 'Bad request. Ensure you use a valid river code, and the number of days must be between 1 and 365.'}), 400
@app.errorhandler(404)
def not_found(e):
return jsonify({'Error': 'Not found'}), 404
@app.errorhandler(500)
def problem(e):
return jsonify({'Error': 'There was an error. Please report this to Clara.'}), 500
@app.route('/')
def homepage():
return 'This is the home page.'
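# Illustrative request/response for the endpoint below (the site number is a
# hypothetical USGS gauge ID; the response shape follows simplified_data):
#
#   GET /api/river/05331000/7
#   -> {"location": "...", "data": {"Gauge height, feet":
#          {"values": [...], "times": [...], "timestamps": [...]}}}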
@app.route('/api/river/<site_id>/<days>')
def river_info(site_id, days):
url = 'https://waterservices.usgs.gov/nwis/iv'
parameter_code_map = {
# '00011': 'Water temperature, fahrenheit',
# '00060': 'Flow, cubic feet per second',
'00065': 'Gauge height, feet',
}
parameter_codes = ','.join(parameter_code_map.keys()) # height, flow, temp
    # validate that days is an integer between 1 and 365
    try:
        days = int(days)
    except ValueError:
        abort(400, 'Days must be an integer between 1 and 365')
    if days < 1 or days > 365:
        abort(400, 'Days must be an integer between 1 and 365')
params = {
'format': 'json',
'site': site_id,
'parameterCd': parameter_codes,
'siteStatus': 'all',
'period': f'P{days}D'
}
response = requests.get(url, params=params)
if response.status_code == 400: # Bad request, often unrecognized site number
app.logger.error(f'Bad request for site {site_id} because {response.text}')
abort(400)
response.raise_for_status()
# get site name, values of parameters, time measurement made
river_data = response.json()
time_series = river_data['value']['timeSeries']
if not time_series:
# no data or site number not found
app.logger.error(f'No series of data for site {site_id}')
abort(404)
simplified_data = {'data': {} }
for series in time_series:
code = series['variable']['variableCode'][0]['value']
simple_name = parameter_code_map[code]
values = series['values'][0]['value']
        values_list = []
        times_list = []
        timestamp_list = []
        for value_dict in values:
            data_point = value_dict['value']
            date_str = value_dict['dateTime']
            date_time = datetime.fromisoformat(date_str)
            timestamp = date_time.timestamp()
            timestamp_list.append(timestamp)
            values_list.append(data_point)
            times_list.append(date_str)
site_name = series['sourceInfo']['siteName']
site_name_title = site_name.title()
        simplified_data['data'][simple_name] = {
            'values': values_list,
            'times': times_list,
            'timestamps': timestamp_list,
        }
simplified_data['location'] = site_name_title
return jsonify(simplified_data) | [
"[email protected]"
] | |
acd5eaa9f9e459be6493fb13f20f229d7bb22132 | a94c446a0d9ce77df965674f63be54d54b2be577 | /raspy/invalid_operation_exception.py | 619a77c7c483963df689e25c8c88f7cc472685e5 | [
"MIT"
] | permissive | cyrusbuilt/RasPy | 3434e02c2bff09ef9f3ff4995bda14edc781c14b | 1e34840cc90ea7f19317e881162209d3d819eb09 | refs/heads/master | 2020-03-18T20:19:27.426002 | 2018-08-03T17:07:25 | 2018-08-03T17:07:25 | 135,207,376 | 0 | 0 | MIT | 2018-08-03T17:07:26 | 2018-05-28T20:42:17 | Python | UTF-8 | Python | false | false | 297 | py | """This module contains the InvalidOperationException exception class."""
class InvalidOperationException(Exception):
"""Invalid operation exception.
The exception that is thrown when an operation is attempted on an object
whose current state does not support it.
"""
pass
| [
"[email protected]"
] | |
5a691fd91db8702bd66b8e9e3d63005f7c6f009d | 489814a9008e482eb1098c3c97aac23ff037b3cf | /www/rabota/context_processors.py | 7227e85af53f954205996ae8e27a574bcda14bb5 | [] | no_license | boogiiieee/Delo70 | f70fcb92c91f96348513d415b120aad3b4507721 | 5c48371a513b4b1bdd6068c90895a9bda126d88c | refs/heads/master | 2021-09-04T03:10:13.362897 | 2018-01-15T03:57:48 | 2018-01-15T03:57:48 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 624 | py | # -*- coding: utf-8 -*-
from rabota.forms import SearchMinForm
from geo.models import CustomLocation
##################################################################################################
##################################################################################################
def custom_proc(request):
return {
'search': SearchMinForm(),
'city': CustomLocation.objects.get(slug=u'tomsk')
}
##################################################################################################
################################################################################################## | [
"[email protected]"
] | |
c755e9e0d94bb5396ef7140478ebd01d23417626 | a566cb316ab93aeadd366b148f5110c327c7eb2b | /chp3/test.py | 12ac2e4c8da6cf7e5fd81b79f352b6ddc085af59 | [] | no_license | piochelepiotr/crackingTheCode | 4aeaffd2c46b2761b2f9642107292d0932731489 | 163ff60f723869a7096b330965d90dc1443d7199 | refs/heads/master | 2021-06-20T21:30:56.033989 | 2021-01-13T08:44:57 | 2021-01-13T08:44:57 | 172,414,034 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,050 | py | import unittest
import ex2
import ex3
import ex4
import ex5
import ex6
import stack
class Testing(unittest.TestCase):
def test_s(self):
s = stack.Stack()
s.push(3)
s.push(4)
self.assertEqual(4, s.pop())
self.assertEqual(3, s.pop())
def test_min_s(self):
s = ex2.MinStack()
s.add(3)
s.add(4)
s.add(2)
self.assertEqual(2, s.min())
self.assertEqual(2, s.pop())
self.assertEqual(3, s.min())
self.assertEqual(4, s.pop())
def test_set_of_ss(self):
s = ex3.SetOfStacks(2)
s.push(1)
s.push(2)
s.push(3)
self.assertEqual(3, s.pop())
self.assertEqual(2, s.pop())
self.assertEqual(1, s.pop())
s.push(1)
s.push(2)
s.push(3)
self.assertEqual(2, s.pop_at(0))
self.assertEqual(3, s.pop())
self.assertEqual(1, s.pop())
def test_queue(self):
q = ex4.MyQueue()
q.push(1)
q.push(2)
q.push(3)
self.assertEqual(1, q.pull())
self.assertEqual(2, q.pull())
self.assertEqual(3, q.pull())
def test_sort_stack(self):
s = stack.Stack()
s.push(2)
s.push(1)
s.push(3)
s.push(5)
sorted_stack = ex5.sort_stack(s)
self.assertEqual(4, sorted_stack.size())
self.assertEqual(5, sorted_stack.pop())
self.assertEqual(3, sorted_stack.pop())
self.assertEqual(2, sorted_stack.pop())
self.assertEqual(1, sorted_stack.pop())
def test_shelter(self):
shelter = ex6.Shelter()
shelter.enqueue(ex6.Cat('Garfield'))
shelter.enqueue(ex6.Dog('Sirius'))
shelter.enqueue(ex6.Dog('Rantanplan'))
shelter.enqueue(ex6.Cat('Crookshanks'))
self.assertEqual('Sirius', shelter.dequeue_dog().name)
self.assertEqual('Garfield', shelter.dequeue_any().name)
self.assertEqual('Crookshanks', shelter.dequeue_cat().name)
if __name__ == "__main__":
unittest.main()
| [
"[email protected]"
] | |
421a92abcac080c140990d2ba04f70ef50b6473d | de3b77cb0927f28cbd85e9142c2dfd7c8be7c27e | /tests/migrations/024_add_updated_at_to_endpoint_params_down.py | 293af995a4fa50583d125154ddd7dc3ad812b296 | [
"MIT"
] | permissive | LoansBot/database | f3dcbccde59fdb80c876d2612f250662946588e6 | eeaed26c2dcfdf0f9637b47ebe15cd1e000d8cc4 | refs/heads/master | 2021-07-02T22:07:18.683278 | 2021-06-02T04:09:38 | 2021-06-02T04:09:38 | 239,400,935 | 0 | 1 | MIT | 2021-06-02T04:14:31 | 2020-02-10T01:06:53 | Python | UTF-8 | Python | false | false | 690 | py | import unittest
import helper
class DownTest(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.connection = helper.setup_connection()
cls.cursor = cls.connection.cursor()
@classmethod
def tearDownClass(cls):
cls.cursor.close()
cls.connection.rollback()
helper.teardown_connection(cls.connection)
def tearDown(self):
self.connection.rollback()
def test_updated_at_dne(self):
self.assertFalse(
helper.check_if_column_exist(
self.cursor, 'endpoint_params', 'updated_at'
)
)
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
e98a5646e95ea0a4833b1b0150b72feea5dc1830 | c9aca558963537ae10e87b791cc878f8f6a33d77 | /Chapter02/Simple_linear_regression.py | bff0e3c4dfc7ca91098e85038dc360cdf9cb04ec | [
"MIT"
] | permissive | PacktPublishing/TensorFlow-1x-Deep-Learning-Cookbook | d1f8fe311fa127346122aee1a8cc12a85ef4cc8a | 9e23044b0c43e2f6b9ad40a82023f7935757d3d0 | refs/heads/master | 2023-02-05T09:27:15.951141 | 2023-01-30T09:49:25 | 2023-01-30T09:49:25 | 114,516,232 | 91 | 84 | null | null | null | null | UTF-8 | Python | false | false | 1,767 | py | """
"""
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
def normalize(X):
mean = np.mean(X)
std = np.std(X)
X = (X - mean)/std
return X
# Data
boston = tf.contrib.learn.datasets.load_dataset('boston')
X_train, Y_train = boston.data[:,5], boston.target
#X_train = normalize(X_train)
n_samples = len(X_train)
#print(X_train)
# Placeholder for the Training Data
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
# Variables for coefficients initialized to 0
b = tf.Variable(0.0)
w = tf.Variable(0.0)
# The Linear Regression Model
Y_hat = X * w + b
# Loss function
loss = tf.square(Y - Y_hat, name='loss')
# Gradient Descent with learning rate of 0.01 to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(loss)
# Initializing Variables
init_op = tf.global_variables_initializer()
total = []
# Computation Graph
with tf.Session() as sess:
# Initialize variables
sess.run(init_op)
writer = tf.summary.FileWriter('graphs', sess.graph)
    # train the model for 100 epochs
for i in range(100):
total_loss = 0
for x,y in zip(X_train,Y_train):
_, l = sess.run ([optimizer, loss], feed_dict={X:x, Y:y})
total_loss += l
total.append(total_loss / n_samples)
print('Epoch {0}: Loss {1}'.format(i, total_loss/n_samples))
writer.close()
b_value, w_value = sess.run([b, w])
Y_pred = X_train * w_value + b_value
print('Done')
# Plot the result
plt.plot(X_train, Y_train, 'bo', label='Real Data')
plt.plot(X_train,Y_pred, 'r', label='Predicted Data')
plt.legend()
plt.show()
plt.plot(total)
plt.show()
| [
"[email protected]"
] | |
88e1bce19a600e0c2c679c6e5cda236a8c2c4e07 | cd4d0df26a8cd40b01872e892dca7204aa66aa1e | /storescraper/bin/celeryconfig/defaults.py | 31d6f1ec02656232b0614147fc23ef99adf4d7f3 | [] | no_license | SoloTodo/storescraper | f3486782c37f48d1b8aac4dc5fa6fa993711382e | b04490f5f3db21a92e9ad7cb67c4030a69e51434 | refs/heads/develop | 2023-08-30T21:43:16.725320 | 2023-08-30T19:36:27 | 2023-08-30T19:36:27 | 95,259,334 | 47 | 23 | null | 2023-08-03T15:34:41 | 2017-06-23T21:56:48 | Python | UTF-8 | Python | false | false | 174 | py | import sys
sys.path.append('../..')
broker_url = 'amqp://storescraper:storescraper@localhost/storescraper'
result_backend = 'rpc://'
imports = (
    'storescraper.store',  # trailing comma makes this a tuple; without it, imports is just a string
)
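# Illustrative wiring (assumed from the package layout, not shown in this
# repo): a Celery app would load this module with config_from_object, e.g.
#
#   from celery import Celery
#   app = Celery("storescraper")
#   app.config_from_object("celeryconfig.defaults")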
| [
"[email protected]"
] | |
c44c0c145082573453dda43fc2c47dbb33847365 | 9955a91c6193f28bc2a47fb0ab3955a0a0f525f8 | /model/medical_inpatient_medication_log.py | 90147c432e8c7b46743c6e85b82bd58456195df1 | [] | no_license | asop-source/Klinik- | 77372049fe6cdf2b2c922f093464980fea01bfc5 | a7ab1ca80c73e62178e577a685be888ff6c370da | refs/heads/master | 2020-12-11T06:38:11.765622 | 2020-01-14T08:14:51 | 2020-01-14T08:14:51 | 233,790,453 | 1 | 2 | null | null | null | null | UTF-8 | Python | false | false | 813 | py | # -*- coding: utf-8 -*-
# Part of BrowseInfo. See LICENSE file for full copyright and licensing details.
from odoo import models, fields, api, _
from datetime import date,datetime
class medical_inpatient_medication_log(models.Model):
_name = 'medical.inpatient.medication.log'
admin_time = fields.Datetime(string='Date',readonly=True)
dose = fields.Float(string='Dose')
remarks = fields.Text(string='Remarks')
medical_inpatient_medication_log_id = fields.Many2one('medical.physician',string='Health Professional',readonly=True)
medical_dose_unit_id = fields.Many2one('medical.dose.unit',string='Dose Unt')
medical_inaptient_log_medicament_id = fields.Many2one('medical.inpatient.medication',string='Log History')
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:s
| [
"[email protected]"
] | |
72f0fc5d92d6a4e25fec8d241b1e74159b598da8 | cced1f1ad18c6d9c3b96b2ae53cac8e86846f1f5 | /Blog/comment/views.py | 7d2183dd9347af2c83f7990348ea803cb88433de | [] | no_license | sug5806/portfolio | a3904be506a3746e16da57bba5926c38743783ad | b943955a52c622094a58fb9124323298261ae80a | refs/heads/master | 2022-12-10T06:23:38.472893 | 2019-07-05T04:56:59 | 2019-07-05T04:56:59 | 190,156,107 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,313 | py | from django.shortcuts import render
from .forms import CommentForm
from .models import Comment
from django.shortcuts import redirect
from django.urls import resolve
from urllib.parse import urlparse
from django.contrib import messages
def add_comment(request):
if not request.user.is_anonymous:
comment_form = CommentForm(request.POST)
comment_form.instance.author_id = request.user.id
if comment_form.is_valid():
comment_form.save()
messages.add_message(request, messages.SUCCESS, "댓글을 작성하였습니다.")
else:
messages.add_message(request, messages.WARNING, "Comment Invalid")
else:
messages.add_message(request, messages.WARNING, "댓글은 로그인 사용자만 남길 수 있습니다.")
referer = request.META['HTTP_REFERER']
return redirect(referer)
def delete_comment(request, pk):
comment = Comment.objects.filter(pk=pk)
if comment.exists() and comment[0].author == request.user :
comment.delete()
messages.add_message(request, messages.SUCCESS, "댓글을 삭제하였습니다.")
else:
messages.add_message(request, messages.WARNING, "댓글을 삭제할 수 없습니다.")
referer = request.META['HTTP_REFERER']
return redirect(referer)
| [
"[email protected]"
] | |
15b1f220688c13d72343c799f21bd54531825092 | 2e682fd72e3feaa70e3f7bf2a3b83c50d783ec02 | /PyTorch/contrib/cv/semantic_segmentation/DeeplabV3_for_Pytorch/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_40k_pascal_context.py | 7d41ce9d966dfabe06dd53a1c736699f7b258f24 | [
"Apache-2.0",
"BSD-2-Clause",
"MIT",
"BSD-3-Clause",
"LicenseRef-scancode-generic-cla",
"LicenseRef-scancode-unknown-license-reference",
"GPL-1.0-or-later"
] | permissive | Ascend/ModelZoo-PyTorch | 4c89414b9e2582cef9926d4670108a090c839d2d | 92acc188d3a0f634de58463b6676e70df83ef808 | refs/heads/master | 2023-07-19T12:40:00.512853 | 2023-07-17T02:48:18 | 2023-07-17T02:48:18 | 483,502,469 | 23 | 6 | Apache-2.0 | 2022-10-15T09:29:12 | 2022-04-20T04:11:18 | Python | UTF-8 | Python | false | false | 807 | py | # Copyright 2021 Huawei
# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
_base_ = './deeplabv3plus_r50-d8_480x480_40k_pascal_context.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
| [
"[email protected]"
] | |
660031d63690e79aa8df1ed183d32e387b29d77b | 5f5256284d4aa1c3d88dd99301024ba8fa04955e | /weis/multifidelity/test/test_trust_region.py | 3fb88e17c210bb3135a5e410645a1c3a2b75c1d9 | [
"Apache-2.0"
] | permissive | dousuguang/WEIS | 15fbff42dc4298d7592871b961c0f43fcd24feb7 | 1e4dbf6728050f75cee08cd483fe57c5614488fe | refs/heads/master | 2023-07-08T15:46:45.508489 | 2021-05-18T22:46:55 | 2021-05-18T22:46:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,413 | py | import unittest
import numpy as np
from weis.multifidelity.models.testbed_components import (
simple_2D_high_model,
simple_2D_low_model,
simple_1D_high_model,
simple_1D_low_model,
)
from weis.multifidelity.methods.trust_region import SimpleTrustRegion
class Test(unittest.TestCase):
def test_optimization(self):
np.random.seed(13)
bounds = {"x": np.array([[0.0, 1.0], [0.0, 1.0]])}
desvars = {"x": np.array([0.0, 0.25])}
model_low = simple_2D_low_model(desvars)
model_high = simple_2D_high_model(desvars)
trust_region = SimpleTrustRegion(model_low, model_high, bounds, disp=False)
trust_region.add_objective("y")
results = trust_region.optimize()
np.testing.assert_allclose(results["optimal_design"], [0.0, 0.333], atol=1e-3)
def test_constrained_optimization(self):
np.random.seed(13)
bounds = {"x": np.array([[0.0, 1.0], [0.0, 1.0]])}
desvars = {"x": np.array([0.0, 0.25])}
model_low = simple_2D_low_model(desvars)
model_high = simple_2D_high_model(desvars)
trust_region = SimpleTrustRegion(
model_low, model_high, bounds, num_initial_points=10, disp=False
)
trust_region.add_objective("y")
trust_region.add_constraint("con", equals=0.0)
results = trust_region.optimize(plot=False, num_iterations=10)
np.testing.assert_allclose(results["optimal_design"], [0.0, 0.10987], atol=1e-3)
np.testing.assert_allclose(results["outputs"]["con"], 0.0, atol=1e-5)
def test_1d_constrained_optimization(self):
np.random.seed(13)
bounds = {"x": np.array([[0.0, 1.0]])}
desvars = {"x": np.array([0.25])}
model_low = simple_1D_low_model(desvars)
model_high = simple_1D_high_model(desvars)
trust_region = SimpleTrustRegion(
model_low, model_high, bounds, num_initial_points=10, disp=False
)
trust_region.add_objective("y")
trust_region.add_constraint("con", equals=0.25)
results = trust_region.optimize(plot=False, num_iterations=10)
np.testing.assert_allclose(results["optimal_design"], 0.707105, atol=1e-3)
np.testing.assert_allclose(results["outputs"]["con"], 0.25, atol=1e-5)
if __name__ == "__main__":
unittest.main()
| [
"[email protected]"
] | |
a2eb28539aed3f9f4c023b85fe772c3742d174f8 | f569978afb27e72bf6a88438aa622b8c50cbc61b | /douyin_open/Oauth2UserToken/__init__.py | 57234d8fe6e43dd02aeb32d856adc28f5132d285 | [] | no_license | strangebank/swagger-petstore-perl | 4834409d6225b8a09b8195128d74a9b10ef1484a | 49dfc229e2e897cdb15cbf969121713162154f28 | refs/heads/master | 2023-01-05T10:21:33.518937 | 2020-11-05T04:33:16 | 2020-11-05T04:33:16 | 310,189,316 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 986 | py | # coding: utf-8
# flake8: noqa
"""
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 1.0.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from douyin_open.Oauth2UserToken.api.access_token_api import AccessTokenApi
from douyin_open.Oauth2UserToken.api.oauth_code_api import OauthCodeApi
# import ApiClient
from douyin_open.Oauth2UserToken.api_client import ApiClient
from douyin_open.Oauth2UserToken.configuration import Configuration
# import models into sdk package
from douyin_open.Oauth2UserToken.models.description import Description
from douyin_open.Oauth2UserToken.models.error_code import ErrorCode
from douyin_open.Oauth2UserToken.models.inline_response200 import InlineResponse200
from douyin_open.Oauth2UserToken.models.inline_response200_data import InlineResponse200Data
| [
"[email protected]"
] | |
6da7aba79b48c6e0a599681866a6dd7dd144697c | 236ea61ca55009173039bec5c6d0cef9daf79432 | /wscript | c1fc25f7ab568838a1a753380d9f3c2accab9618 | [
"BSD-2-Clause"
] | permissive | konarev/nuvolaruntime | 5c974bd4ebe5ddb66e43f1ddd80d96f35bb05b02 | 6a43a81597669b50eece246789d8fa85bfed0822 | refs/heads/master | 2021-01-21T21:15:03.485527 | 2017-06-11T22:35:25 | 2017-06-11T22:35:25 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 22,894 | # encoding: utf-8
#
# Copyright 2014-2017 Jiří Janoušek <[email protected]>
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Metadata #
#==========#
top = '.'
out = 'build'
APPNAME = "nuvolaruntime"
NUVOLA_BIN = "nuvola"
NUVOLACTL_BIN = "nuvolactl"
NEW_VERSION_SCHEME = True
VERSION = "3.1.3" if not NEW_VERSION_SCHEME else "4.4.0"
GENERIC_NAME = "Web Apps"
BLURB = "Tight integration of web apps with your Linux desktop"
DEFAULT_HELP_URL = "https://github.com/tiliado/nuvolaplayer/wiki/Unofficial"
DEFAULT_WEB_APP_REQUIREMENTS_HELP_URL = "https://github.com/tiliado/nuvolaplayer/wiki/Web-App-Requirements"
MIN_DIORITE = "0.3.4" if not NEW_VERSION_SCHEME else "4.4.0"
MIN_VALA = "0.34.0"
MIN_GLIB = "2.42.1"
MIN_GTK = "3.22.0"
LEGACY_WEBKIT = "2.16.0"
FLATPAK_WEBKIT = "2.16.1"
# Extras #
#========#
import sys
assert sys.version_info >= (3, 4, 0), "Run waf with Python >= 3.4"
import os
import json
from waflib.Errors import ConfigurationError
from waflib import TaskGen, Utils, Errors, Node, Task
from nuvolamergejs import mergejs as merge_js
if NEW_VERSION_SCHEME:
TARGET_DIORITE = str(MIN_DIORITE[0])
else:
TARGET_DIORITE = MIN_DIORITE.rsplit(".", 1)[0]
TARGET_GLIB = MIN_GLIB.rsplit(".", 1)[0]
REVISION_SNAPSHOT = "snapshot"
def get_git_version():
import subprocess
if os.path.isdir(".git"):
try:
output = subprocess.check_output(["git", "describe", "--tags", "--long"])
return output.decode("utf-8").strip().split("-")
except Exception as e:
print(e)
return VERSION, "0", REVISION_SNAPSHOT
def add_version_info(ctx):
bare_version, n_commits, revision_id = get_git_version()
if revision_id != REVISION_SNAPSHOT:
revision_id = "{}-{}".format(n_commits, revision_id)
versions = list(int(i) for i in bare_version.split("."))
if NEW_VERSION_SCHEME:
versions[2] += int(n_commits)
version = "{}.{}.{}".format(*versions)
if NEW_VERSION_SCHEME:
release = "{}.{}".format(*versions)
else:
release = version
version += "." + n_commits
ctx.env.VERSION = version
ctx.env.VERSIONS = versions
ctx.env.RELEASE = release
ctx.env.REVISION_ID = revision_id
def glib_encode_version(version):
major, minor, _ = tuple(int(i) for i in version.split("."))
return major << 16 | minor << 8
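# For example, glib_encode_version("2.42.1") == (2 << 16) | (42 << 8) == 0x22A00,
# matching GLib's G_ENCODE_VERSION(major, minor) macro; the micro component is
# deliberately dropped.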
def vala_def(ctx, vala_definition):
"""Appends a Vala definition"""
ctx.env.append_unique("VALA_DEFINES", vala_definition)
def pkgconfig(ctx, pkg, uselib, version, mandatory=True, store=None, valadef=None, define=None):
"""Wrapper for ctx.check_cfg."""
    result = True
    res = None  # returned as-is when a non-mandatory check fails
    try:
res = ctx.check_cfg(package=pkg, uselib_store=uselib, atleast_version=version, mandatory=True, args = '--cflags --libs')
if valadef:
vala_def(ctx, valadef)
if define:
            for key, value in define.items():  # dict.iteritems() is Python 2 only
ctx.define(key, value)
except ConfigurationError as e:
result = False
if mandatory:
raise e
finally:
if store is not None:
ctx.env[store] = result
return res
def loadjson(path, optional=False):
try:
with open(path, "rt", encoding="utf-8") as f:
data = "".join((line if not line.strip().startswith("//") else "\n") for line in f)
return json.loads(data)
except FileNotFoundError:
if optional:
return {}
raise
def mask(string):
shift = int(1.0 * os.urandom(1)[0] / 255 * 85 + 15)
return [shift] + [c + shift for c in string.encode("utf-8")]
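# mask() obfuscates a string as a byte list whose first element is a random
# shift; the matching decode (a sketch of what the consumer must do) is:
#
#   def unmask(data):
#       shift = data[0]
#       return bytes(c - shift for c in data[1:]).decode("utf-8")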
@TaskGen.feature('mergejs')
@TaskGen.before_method('process_source', 'process_rule')
def _mergejs_taskgen(self):
source = Utils.to_list(getattr(self, 'source', []))
if isinstance(source, Node.Node):
source = [source]
target = (getattr(self, 'target', []))
if isinstance(target, str):
target = self.path.find_or_declare(target)
elif not isinstance(target, Node.Node):
raise Errors.WafError('invalid target for %r' % self)
for i in range(len(source)):
item = source[i]
if isinstance(item, str):
source[i] = self.path.find_resource(item)
elif not isinstance(item, Node.Node):
raise Errors.WafError('invalid source for %r' % self)
task = self.create_task('mergejs', source, target)
install_path = getattr(self, 'install_path', None)
if install_path:
self.bld.install_files(install_path, target, chmod=getattr(self, 'chmod', Utils.O644))
self.source = []
class mergejs(Task.Task):
def run(self):
output = merge_js([i.abspath() for i in self.inputs])
self.outputs[0].write(output)
return 0
# Actions #
#=========#
def options(ctx):
ctx.load('compiler_c vala')
ctx.add_option('--jsdir', type=str, default=None, help="Path to JavaScript modules [DATADIR/javascript].")
ctx.add_option('--branding', type=str, default="default", help="Branding profile to load.")
ctx.add_option('--noopt', action='store_true', default=False, dest='noopt', help="Turn off compiler optimizations")
ctx.add_option('--nodebug', action='store_false', default=True, dest='debug', help="Turn off debugging symbols")
ctx.add_option('--nounity', action='store_false', default=True, dest='unity', help="Don't build Unity features.")
ctx.add_option('--noappindicator', action='store_false', default=True, dest='appindicator', help="Don't build functionality dependent on libappindicator")
ctx.add_option('--webkitgtk-supports-mse', action='store_true', default=False, dest='webkit_mse',
help="Use only if you are absolutely sure that your particular build of the WebKitGTK library supports Media Source Extension (as of 2.15.3, it is disabled by default)")
def configure(ctx):
add_version_info(ctx)
ctx.msg("Version", ctx.env.VERSION, "GREEN")
if ctx.env.REVISION_ID != REVISION_SNAPSHOT:
ctx.msg("Upstream revision", ctx.env.REVISION_ID, color="GREEN")
else:
ctx.msg("Upstream revision", "unknown", color="RED")
ctx.msg('Install prefix', ctx.options.prefix, color="GREEN")
ctx.env.append_unique("VALAFLAGS", "-v")
ctx.env.append_unique('CFLAGS', ['-w'])
ctx.env.append_unique("LINKFLAGS", ["-Wl,--no-undefined", "-Wl,--as-needed"])
for path in os.environ.get("LD_LIBRARY_PATH", "").split(":"):
path = path.strip()
if path:
ctx.env.append_unique('LIBPATH', path)
if not ctx.options.noopt:
ctx.env.append_unique('CFLAGS', '-O2')
if ctx.options.debug:
ctx.env.append_unique('CFLAGS', '-g3')
# Branding
ctx.env.BRANDING = ctx.options.branding or "default"
ctx.msg("Branding", ctx.env.BRANDING, color="GREEN")
branding_json = "branding/%s.json" % ctx.env.BRANDING
if os.path.isfile(branding_json):
ctx.msg("Branding metadata", branding_json, color="GREEN")
branding = loadjson(branding_json, False)
else:
if ctx.env.BRANDING != "default":
ctx.msg("Branding metadata not found", branding_json, color="RED")
branding = {}
ctx.env.WELCOME_XML = "branding/%s/welcome.xml" % ctx.env.BRANDING
if os.path.isfile(ctx.env.WELCOME_XML):
ctx.msg("Welcome screen", ctx.env.WELCOME_XML, color="GREEN")
else:
ctx.msg("Welcome screen not found", ctx.env.WELCOME_XML, color="RED")
ctx.env.WELCOME_XML = "branding/default/welcome.xml"
genuine = branding.get("genuine", False)
ctx.env.NAME = branding.get("name", "Web Apps")
ctx.env.SHORT_NAME = branding.get("short_name", ctx.env.NAME)
ctx.env.VENDOR = branding.get("vendor", "unknown")
ctx.env.HELP_URL = branding.get("help_url", DEFAULT_HELP_URL)
ctx.env.WEB_APP_REQUIREMENTS_HELP_URL = branding.get("requirements_help_url", DEFAULT_WEB_APP_REQUIREMENTS_HELP_URL)
tiliado_api = branding.get("tiliado_api", {})
# Variants
ctx.env.CDK = branding.get("cdk", False)
ctx.env.ADK = branding.get("adk", False)
ctx.env.FLATPAK = branding.get("flatpak", False)
MIN_WEBKIT = LEGACY_WEBKIT
if ctx.env.CDK:
vala_def(ctx, "NUVOLA_CDK")
ctx.env.UNIQUE_NAME = "eu.tiliado.NuvolaCdk"
MIN_WEBKIT = FLATPAK_WEBKIT
elif ctx.env.ADK:
vala_def(ctx, "NUVOLA_ADK")
ctx.env.UNIQUE_NAME = "eu.tiliado.NuvolaAdk"
MIN_WEBKIT = FLATPAK_WEBKIT
else:
vala_def(ctx, "NUVOLA_RUNTIME")
ctx.env.UNIQUE_NAME = "eu.tiliado.Nuvola"
ctx.env.ICON_NAME = ctx.env.UNIQUE_NAME
# Flatpak
if ctx.env.FLATPAK:
vala_def(ctx, "FLATPAK")
MIN_WEBKIT = FLATPAK_WEBKIT
# Base deps
ctx.load('compiler_c vala')
ctx.check_vala(min_version=tuple(int(i) for i in MIN_VALA.split(".")))
pkgconfig(ctx, 'glib-2.0', 'GLIB', MIN_GLIB)
pkgconfig(ctx, 'gio-2.0', 'GIO', MIN_GLIB)
pkgconfig(ctx, 'gio-unix-2.0', 'UNIXGIO', MIN_GLIB)
pkgconfig(ctx, 'gtk+-3.0', 'GTK+', MIN_GTK)
pkgconfig(ctx, 'gdk-3.0', 'GDK', MIN_GTK)
pkgconfig(ctx, 'gdk-x11-3.0', 'GDKX11', MIN_GTK)
pkgconfig(ctx, 'x11', 'X11', "0")
pkgconfig(ctx, 'sqlite3', 'SQLITE', "3.7")
pkgconfig(ctx, 'dioriteglib' + TARGET_DIORITE, 'DIORITEGLIB', MIN_DIORITE)
pkgconfig(ctx, 'dioritegtk' + TARGET_DIORITE, 'DIORITEGTK', MIN_DIORITE)
pkgconfig(ctx, 'json-glib-1.0', 'JSON-GLIB', '0.7')
pkgconfig(ctx, 'libnotify', 'NOTIFY', '0.7')
pkgconfig(ctx, 'libsecret-1', 'SECRET', '0.16')
pkgconfig(ctx, "gstreamer-1.0", 'GST', "1.11.90" if ctx.options.webkit_mse else "1.8")
pkgconfig(ctx, 'webkit2gtk-4.0', 'WEBKIT', MIN_WEBKIT)
pkgconfig(ctx, 'webkit2gtk-web-extension-4.0', 'WEBKITEXT', MIN_WEBKIT)
pkgconfig(ctx, 'javascriptcoregtk-4.0', 'JSCORE', MIN_WEBKIT)
pkgconfig(ctx, 'uuid', 'UUID', '0') # Engine.io
pkgconfig(ctx, 'libsoup-2.4', 'SOUP', '0') # Engine.io
# For tests
ctx.find_program("diorite-testgen{}".format(TARGET_DIORITE), var="DIORITE_TESTGEN")
# JavaScript dir
ctx.env.JSDIR = ctx.options.jsdir if ctx.options.jsdir else ctx.env.DATADIR + "/javascript"
# Optional features
ctx.env.WEBKIT_MSE = ctx.options.webkit_mse
if ctx.options.webkit_mse:
vala_def(ctx, "WEBKIT_SUPPORTS_MSE")
ctx.env.with_unity = ctx.options.unity
if ctx.options.unity:
pkgconfig(ctx, 'unity', 'UNITY', '3.0')
pkgconfig(ctx, 'dbusmenu-glib-0.4', 'DBUSMENU', '0.4')
vala_def(ctx, "UNITY")
ctx.env.with_appindicator = ctx.options.appindicator
if ctx.options.appindicator:
pkgconfig(ctx, 'appindicator3-0.1', 'APPINDICATOR', '0.4')
vala_def(ctx, "APPINDICATOR")
# Define HAVE_WEBKIT_X_YY Vala compiler definitions
webkit_version = tuple(int(i) for i in ctx.check_cfg(modversion='webkit2gtk-4.0').split(".")[0:2])
version = (2, 6)
while version <= webkit_version:
vala_def(ctx, "HAVE_WEBKIT_%d_%d" % version)
version = (version[0], version[1] + 2)
# Definitions
ctx.env.GENUINE = genuine
if genuine:
vala_def(ctx, "GENUINE")
if any((ctx.env.GENUINE, ctx.env.CDK, ctx.env.ADK)):
vala_def(ctx, "EXPERIMENTAL")
if tiliado_api.get("enabled", False):
vala_def(ctx, "TILIADO_API")
ctx.define("NUVOLA_APPNAME", APPNAME)
ctx.define("NUVOLA_OLDNAME", "nuvolaplayer3")
ctx.define("NUVOLA_NAME", ctx.env.NAME)
ctx.define("NUVOLA_WELCOME_SCREEN_NAME", ctx.env.RELEASE)
ctx.define("NUVOLA_UNIQUE_NAME", ctx.env.UNIQUE_NAME)
ctx.define("NUVOLA_APP_ICON", ctx.env.ICON_NAME)
ctx.define("NUVOLA_RELEASE", ctx.env.RELEASE)
ctx.define("NUVOLA_VERSION", ctx.env.VERSION)
ctx.define("NUVOLA_REVISION", ctx.env.REVISION_ID)
ctx.define("NUVOLA_VERSION_MAJOR", ctx.env.VERSIONS[0])
ctx.define("NUVOLA_VERSION_MINOR", ctx.env.VERSIONS[1])
ctx.define("NUVOLA_VERSION_BUGFIX", ctx.env.VERSIONS[2])
ctx.define("NUVOLA_VERSION_SUFFIX", ctx.env.REVISION_ID)
ctx.define("GETTEXT_PACKAGE", APPNAME)
ctx.env.NUVOLA_LIBDIR = "%s/%s" % (ctx.env.LIBDIR, APPNAME)
ctx.define("NUVOLA_TILIADO_OAUTH2_SERVER", tiliado_api.get("server", "https://tiliado.eu"))
ctx.define("NUVOLA_TILIADO_OAUTH2_CLIENT_ID", tiliado_api.get("client_id", ""))
repo_index = branding.get("repository_index", "https://nuvola.tiliado.eu/").split("|")
repo_index, repo_root = repo_index if len(repo_index) > 1 else repo_index + repo_index
ctx.define("NUVOLA_REPOSITORY_INDEX", repo_index)
ctx.define("NUVOLA_REPOSITORY_ROOT", repo_root)
ctx.define("NUVOLA_WEB_APP_REQUIREMENTS_HELP_URL", ctx.env.WEB_APP_REQUIREMENTS_HELP_URL)
ctx.define("NUVOLA_HELP_URL", ctx.env.HELP_URL)
ctx.define("NUVOLA_LIBDIR", ctx.env.NUVOLA_LIBDIR)
ctx.define('GLIB_VERSION_MAX_ALLOWED', glib_encode_version(MIN_GLIB))
ctx.define('GLIB_VERSION_MIN_REQUIRED', glib_encode_version(MIN_GLIB))
ctx.define('GDK_VERSION_MAX_ALLOWED', glib_encode_version(MIN_GTK))
ctx.define('GDK_VERSION_MIN_REQUIRED', glib_encode_version(MIN_GTK))
with open("build/secret.h", "wb") as f:
client_secret = tiliado_api.get("client_secret", "")
if client_secret:
secret = b"{"
for i in mask(client_secret):
secret += str(i).encode("ascii") + b", "
secret += b"0}"
else:
secret = b'""'
f.write(
b'#pragma once\nstatic const char NUVOLA_TILIADO_OAUTH2_CLIENT_SECRET[] = ' + secret + b';')
def build(ctx):
def valalib(source_dir=None, **kwargs):
if source_dir is not None:
kwargs["source"] = ctx.path.ant_glob(source_dir + '/**/*.vala') + ctx.path.ant_glob(source_dir + '/**/*.vapi')
kwargs.setdefault("vala_dir", source_dir)
return ctx(features="c cshlib", **kwargs)
def valaprog(source_dir=None, **kwargs):
if source_dir is not None:
kwargs["source"] = ctx.path.ant_glob(source_dir + '/**/*.vala') + ctx.path.ant_glob(source_dir + '/**/*.vapi')
kwargs.setdefault("vala_dir", source_dir)
return ctx.program(**kwargs)
#~ print(ctx.env)
vala_defines = ctx.env.VALA_DEFINES
APP_RUNNER = "apprunner"
ENGINEIO = "engineio"
NUVOLAKIT_RUNNER = APPNAME + "-runner"
NUVOLAKIT_BASE = APPNAME + "-base"
NUVOLAKIT_WORKER = APPNAME + "-worker"
NUVOLAKIT_TESTS = APPNAME + "-tests"
RUN_NUVOLAKIT_TESTS = "run-" + NUVOLAKIT_TESTS
DIORITE_GLIB = 'dioriteglib' + TARGET_DIORITE
    DIORITE_GTK = 'dioritegtk' + TARGET_DIORITE
packages = 'dioritegtk{0} dioriteglib{0} '.format(TARGET_DIORITE)
packages += 'javascriptcoregtk-4.0 libnotify libarchive gtk+-3.0 gdk-3.0 gdk-x11-3.0 x11 posix json-glib-1.0 glib-2.0 gio-2.0'
uselib = 'NOTIFY JSCORE LIBARCHIVE DIORITEGTK DIORITEGLIB GTK+ GDK GDKX11 X11 JSON-GLIB GLIB GIO'
vapi_dirs = ['vapi', 'engineio-soup/vapi']
env_vapi_dir = os.environ.get("VAPIDIR")
if env_vapi_dir:
vapi_dirs.extend(os.path.relpath(path) for path in env_vapi_dir.split(":"))
if ctx.env.SNAPCRAFT:
vapi_dirs.append(os.path.relpath(ctx.env.SNAPCRAFT + "/usr/share/vala/vapi"))
if ctx.env.with_unity:
packages += " unity Dbusmenu-0.4"
uselib += " UNITY DBUSMENU"
if ctx.env.with_appindicator:
packages += " appindicator3-0.1"
uselib += " APPINDICATOR"
valalib(
target = ENGINEIO,
source_dir = 'engineio-soup/src',
packages = 'uuid libsoup-2.4 json-glib-1.0',
uselib = 'UUID SOUP JSON-GLIB',
defines = ['G_LOG_DOMAIN="Engineio"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
)
valalib(
target = NUVOLAKIT_BASE,
source_dir = 'src/nuvolakit-base',
packages = packages + ' gstreamer-1.0',
uselib = uselib + " GST",
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
)
valalib(
target = NUVOLAKIT_RUNNER,
source_dir = 'src/nuvolakit-runner',
packages = packages + ' webkit2gtk-4.0 javascriptcoregtk-4.0 gstreamer-1.0 libsecret-1',
uselib = uselib + ' JSCORE WEBKIT GST SECRET',
use = [NUVOLAKIT_BASE, ENGINEIO],
lib = ['m'],
includes = ["build"],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
)
valaprog(
target = NUVOLA_BIN,
source_dir = 'src/master',
packages = "",
uselib = uselib + " SOUP WEBKIT",
use = [NUVOLAKIT_BASE, NUVOLAKIT_RUNNER],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
)
valaprog(
target = APP_RUNNER,
source_dir = 'src/apprunner',
packages = "",
uselib = uselib + " SOUP WEBKIT",
use = [NUVOLAKIT_BASE, NUVOLAKIT_RUNNER],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
install_path = ctx.env.NUVOLA_LIBDIR,
)
valaprog(
target = NUVOLACTL_BIN,
source_dir = 'src/control',
packages = "",
uselib = uselib + " SOUP WEBKIT",
use = [NUVOLAKIT_BASE, NUVOLAKIT_RUNNER],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
)
valalib(
target = NUVOLAKIT_WORKER,
source_dir = 'src/nuvolakit-worker',
packages = "dioriteglib{0} {1} {2}".format(TARGET_DIORITE, 'webkit2gtk-web-extension-4.0', 'javascriptcoregtk-4.0'),
uselib = "SOUP DIORITEGLIB DIORITEGTK WEBKITEXT JSCORE",
use = [NUVOLAKIT_BASE],
vala_defines = vala_defines,
cflags = ['-DG_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
install_path = ctx.env.NUVOLA_LIBDIR,
)
valalib(
target = NUVOLAKIT_TESTS,
source_dir = 'src/tests',
packages = packages + ' webkit2gtk-4.0 javascriptcoregtk-4.0 gstreamer-1.0 libsecret-1',
uselib = uselib + ' JSCORE WEBKIT GST SECRET',
use = [NUVOLAKIT_BASE, NUVOLAKIT_RUNNER, ENGINEIO],
lib = ['m'],
includes = ["build"],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
install_path = None,
install_binding = False
)
ctx(
rule='"%s" -i ${SRC} -o ${TGT}' % ctx.env.DIORITE_TESTGEN[0],
source=ctx.path.find_or_declare('src/tests/%s.vapi' % NUVOLAKIT_TESTS),
target=ctx.path.find_or_declare("%s.vala" % RUN_NUVOLAKIT_TESTS)
)
valaprog(
target = RUN_NUVOLAKIT_TESTS,
source = [ctx.path.find_or_declare("%s.vala" % RUN_NUVOLAKIT_TESTS)],
packages = packages,
uselib = uselib,
use = [NUVOLAKIT_BASE, ENGINEIO, NUVOLAKIT_TESTS],
vala_defines = vala_defines,
defines = ['G_LOG_DOMAIN="Nuvola"'],
vapi_dirs = vapi_dirs,
vala_target_glib = TARGET_GLIB,
install_path = None
)
ctx(features = 'subst',
source = 'data/templates/launcher.desktop',
target = "share/applications/%s.desktop" % ctx.env.UNIQUE_NAME,
install_path = '${PREFIX}/share/applications',
BLURB = BLURB,
APP_NAME = ctx.env.NAME,
ICON = ctx.env.ICON_NAME,
EXEC = NUVOLA_BIN if not ctx.env.ADK else "lxterminal",
GENERIC_NAME=GENERIC_NAME,
WMCLASS = ctx.env.UNIQUE_NAME,
)
ctx(features = 'subst',
source = ctx.env.WELCOME_XML,
target = 'share/%s/welcome.xml' % APPNAME,
install_path = '${PREFIX}/share/%s' % APPNAME,
BLURB = BLURB,
NAME = ctx.env.NAME,
VERSION = ctx.env.RELEASE,
FULL_VERSION = ctx.env.VERSION,
HELP_URL = ctx.env.HELP_URL,
WEB_APP_REQUIREMENTS_HELP_URL = ctx.env.WEB_APP_REQUIREMENTS_HELP_URL,
VENDOR = ctx.env.VENDOR,
)
dbus_name = ctx.env.UNIQUE_NAME if ctx.env.GENUINE else "eu.tiliado.NuvolaOse"
ctx(features = 'subst',
source = 'data/templates/dbus.service',
target = "share/dbus-1/services/%s.service" % dbus_name,
install_path = '${PREFIX}/share/dbus-1/services',
NAME = dbus_name,
EXEC = '%s/bin/%s --gapplication-service' % (ctx.env.PREFIX, NUVOLA_BIN)
)
PC_CFLAGS = ""
ctx(features = 'subst',
source='src/nuvolakitbase.pc.in',
target='{}-base.pc'.format(APPNAME),
install_path='${LIBDIR}/pkgconfig',
VERSION=ctx.env.RELEASE,
PREFIX=ctx.env.PREFIX,
INCLUDEDIR = ctx.env.INCLUDEDIR,
LIBDIR = ctx.env.LIBDIR,
APPNAME=APPNAME,
PC_CFLAGS=PC_CFLAGS,
LIBNAME=NUVOLAKIT_BASE,
DIORITE_GLIB=DIORITE_GLIB,
)
ctx(features = 'subst',
source='src/nuvolakitrunner.pc.in',
target='{}-runner.pc'.format(APPNAME),
install_path='${LIBDIR}/pkgconfig',
VERSION=ctx.env.RELEASE,
PREFIX=ctx.env.PREFIX,
INCLUDEDIR = ctx.env.INCLUDEDIR,
LIBDIR = ctx.env.LIBDIR,
APPNAME=APPNAME,
PC_CFLAGS=PC_CFLAGS,
LIBNAME=NUVOLAKIT_RUNNER,
NUVOLAKIT_BASE=NUVOLAKIT_BASE,
DIORITE_GLIB=DIORITE_GLIB,
DIORITE_GTK=DIORITE_GTK,
)
ctx(
features = 'subst',
source=ctx.path.find_node("data/nuvolaplayer3.appdata.xml"),
target=ctx.path.get_bld().make_node(ctx.env.UNIQUE_NAME + '.appdata.xml'),
install_path='${PREFIX}/share/appdata',
encoding="utf-8",
FULL_NAME=ctx.env.NAME,
PRELUDE=(
"" if ctx.env.GENUINE
else '<p>{} software is based on the open source code from the Nuvola Apps™ project.</p>'.format(ctx.env.NAME)
),
)
ctx.install_as(
'${PREFIX}/share/metainfo/%s.appdata.xml' % ctx.env.UNIQUE_NAME,
ctx.path.get_bld().find_node(ctx.env.UNIQUE_NAME + '.appdata.xml'))
ctx.symlink_as('${PREFIX}/share/%s/www/engine.io.js' % APPNAME, ctx.env.JSDIR + '/engine.io-client/engine.io.js')
web_apps = ctx.path.find_dir("web_apps")
ctx.install_files('${PREFIX}/share/' + APPNAME, web_apps.ant_glob('**'), cwd=web_apps.parent, relative_trick=True)
www = ctx.path.find_dir("data/www")
ctx.install_files('${PREFIX}/share/' + APPNAME, www.ant_glob('**'), cwd=www.parent, relative_trick=True)
app_icons = ctx.path.find_node("data/icons")
for size in (16, 22, 24, 32, 48, 64, 128, 256):
ctx.install_as('${PREFIX}/share/icons/hicolor/%sx%s/apps/%s.png' % (size, size, ctx.env.ICON_NAME), app_icons.find_node("%s.png" % size))
ctx.install_as('${PREFIX}/share/icons/hicolor/scalable/apps/%s.svg' % ctx.env.ICON_NAME, app_icons.find_node("scalable.svg"))
ctx(features = "mergejs",
source = ctx.path.ant_glob('src/mainjs/*.js'),
target = 'share/%s/js/main.js' % APPNAME,
install_path = '${PREFIX}/share/%s/js' % APPNAME
)
data_js = ctx.path.find_dir("data/js")
for node in data_js.listdir():
ctx(
rule = 'cp -v ${SRC} ${TGT}',
source = data_js.find_node(node),
target = 'share/%s/js/%s' % (APPNAME, node),
install_path = '${PREFIX}/share/%s/js' % APPNAME
)
ctx(
rule = 'cp -v ${SRC} ${TGT}',
source = ctx.path.find_node("data/audio/audiotest.mp3"),
target = 'share/%s/audio/audiotest.mp3' % APPNAME,
install_path = '${PREFIX}/share/%s/audio' % APPNAME
)
def dist(ctx):
ctx.algo = "tar.gz"
ctx.excl = '.git .gitignore build/* **/.waf* **/*~ **/*.swp **/.lock* bzrcommit.txt **/*.pyc core'
| [
"[email protected]"
] | ||
24a5685809e52808904bdcc90e982bc708d4cf22 | 6ff318a9f67a3191b2a9f1d365b275c2d0e5794f | /python/day26/复习面向对象.py | 528ff46caa2096b7158566b26e0dedf26620f292 | [] | no_license | lvhanzhi/Python | c1846cb83660d60a55b0f1d2ed299bc0632af4ba | c89f882f601898b5caab25855ffa7d7a1794f9ab | refs/heads/master | 2020-03-25T23:34:00.919197 | 2018-09-13T12:19:51 | 2018-09-13T12:19:51 | 144,281,084 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,391 | py | # class OldboyStudent:
# school='Oldboy'
# def choose_course(self):
# print('is choosing course')
# print(OldboyStudent.__dict__)
# print(OldboyStudent.__dict__['school'])
# print(OldboyStudent.school)
# print(OldboyStudent.choose_course)
# OldboyStudent.choose_course(1)
# OldboyStudent.country='English'
# print(OldboyStudent.country)
# del OldboyStudent.school
# class Foo:
# pass
# class Foo2(Foo):
# pass
# f=Foo2()
# # print(Foo)
# # obj=Foo()
# # print(type(obj))
# print(Foo.__name__)
# print(Foo.__dict__)
# print(dir(Foo))
# print(Foo.__module__)
# print(f.__class__.__name__)
# print(isinstance(f,Foo2))
# class People:
# count=0
# def __init__(self,name):
# self.name=name
# People.count+=1
# egon=People('egon')
# print(egon.count)
# alex=People('alex')
# print(alex.count)
# class Person:
# def __init__(self,name,attack,life_value):
# self.name=name
# self.attack=attack
# self.life_value=life_value
# def attacking(self):
# dog.life_value=dog.life_value-self.attack
# class Dog:
# def __init__(self,name,attack,life_value):
# self.name=name
# self.attack=attack
# self.life_value=life_value
# def attacking(self):
# egon.life_value=egon.life_value-self.attack
# egon=Person('egon',20,100)
# dog=Dog('dog',10,100)
# print('egon的生命',egon.life_value)
# print('dog的生命',dog.life_value)
# egon.attacking()
# print('egon的生命',egon.life_value)
# print('dog的生命',dog.life_value)
# class Birthday:
# def __init__(self,year,month,day):
# self.year=year
# self.month=month
# self.day=day
# class Course:
# def __init__(self,name,price,period):
# self.name=name
# self.price=price
# self.period=period
# class Teacher:
# def __init__(self,name,year,month,day,price,period,salary):
# self.name=name
# self.salary=salary
# self.birthday=Birthday(year,month,day)
# self.course=Course(name,price,period)
# egon=Teacher('egon',1998,5,5,19800,5.5,2000)
# class Birthday:
# def __init__(self,year,mothday,day):
# self.year=year
# self.mothday=mothday
# self.day=day
# class Course:
# def __init__(self,name,period,price):
# self.name=name
# self.period=period
# self.price=price
# class Teacher:
# def __init__(self,name,age,sex):
# self.name=name
# self.age=age
# self.sex=sex
# egg=Teacher('egon',28,'male')
# egg.birthday=Birthday(2018,8,14)
# print(egg.birthday.year)
# egg.course=Course('python',5.5,158000)
# print(egg.course.name)
# class A:
# def test(self):
# print('a')
# class B(A):
# def test(self):
# print('b')
# obj=B()
# print(B.mro())
# class People:
# def __init__(self,name,age,sex):
# self.name=name
# self.age=age
# self.sex=sex
# class Student(People):
# def __init__(self,name,age,sex):
# People.__init__(self,name,age,sex)
# stu=Student('tom',18,'male')
# print(stu.__dict__)
# class People:
# def __init__(self,name,age,sex):
# self.name=name
# self.age=age
# self.sex=sex
# class Teacher(People):
# def __init__(self,name,age,sex):
# super(Teacher,self).__init__(name,age,sex)
# tea=Teacher('egon',18,'male')
# print(tea.__dict__) | [
"[email protected]"
] | |
d864a4a18f26361ad7c9a9e508e92e54f8250bc2 | d8b5aba2a1f53fbf3fcfc388c26e547afa76b13f | /modules/andForensics/modules/utils/android_sqlite3.py | 455f93e4b7c5cde1330befb1977acbfd3297ff38 | [
"GPL-3.0-only",
"Apache-2.0"
] | permissive | dfrc-korea/carpe | e88b4e3bcb536355e2a64d00e807bccd631f8c93 | f9299b8ad0cb2a6bbbd5e65f01d2ba06406c70ac | refs/heads/master | 2023-04-28T01:12:49.138443 | 2023-04-18T07:37:39 | 2023-04-18T07:37:39 | 169,518,336 | 75 | 38 | Apache-2.0 | 2023-02-08T00:42:41 | 2019-02-07T04:21:23 | Python | UTF-8 | Python | false | false | 2,933 | py | #-*- coding: utf-8 -*-
import sqlite3
import logging
import sys
logger = logging.getLogger('andForensics')
class SQLite3(object):
def execute_fetch_query_multi_values_order(query, query2, db):
try:
con = sqlite3.connect(db)
except sqlite3.Error as e:
logger.error("SQLite open error. it is an invalid file: %s" % db)
return False
# con.text_factory = str
# con.text_factory = lambda x: x.decode("utf-8") + "foo"
cursor = con.cursor()
try:
cursor.execute(query)
except sqlite3.Error as e:
try:
cursor.execute(query2)
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s, db: %s" % (query2, db))
return False
try:
ret = cursor.fetchall()
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s, db: %s" % (query, db))
return False
con.close()
return ret
    @staticmethod
    def execute_fetch_query_multi_values(query, db):
try:
con = sqlite3.connect(db)
except sqlite3.Error as e:
logger.error("SQLite open error. it is an invalid file: %s" % db)
return False
# con = sqlite3.connect(db)
# # con.text_factory = str
# # con.text_factory = lambda x: x.decode("utf-8") + "foo"
cursor = con.cursor()
try:
cursor.execute(query)
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s, db: %s" % (query, db))
return False
try:
ret = cursor.fetchall()
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s, db: %s" % (query, db))
return False
con.close()
return ret
    @staticmethod
    def execute_fetch_query(query, db):
try:
con = sqlite3.connect(db)
except sqlite3.Error as e:
logger.error("SQLite open error. it is an invalid file: %s" % db)
return False
cursor = con.cursor()
try:
cursor.execute(query)
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s" % query)
return False
try:
ret = cursor.fetchone()
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s" % query)
return False
con.close()
return ret
    @staticmethod
    def execute_commit_query(queries, db):
# con = sqlite3.connect(db.decode('cp949'))
# con = sqlite3.connect(io.StringIO(db.decode('cp949')))
try:
con = sqlite3.connect(db)
except sqlite3.Error as e:
logger.error("SQLite open error. it is an invalid file: %s" % db)
return False
cursor = con.cursor()
query_type = type(queries)
if query_type == list:
for query in queries:
# print('query: %s' % query)
try:
cursor.execute(query)
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s" % query)
return False
elif query_type == str:
try:
cursor.execute(queries)
except sqlite3.Error as e:
logger.error("SQLite query execution error. query: %s" % queries)
return False
        else:
            # Unsupported query container type: log it and abort the commit.
            logger.error("SQLite unsupported query type: %s" % query_type)
            return False
con.commit()
con.close()
return
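
if __name__ == '__main__':
    # Minimal usage sketch (hypothetical database path, not part of the
    # original module): list the tables in an SQLite file via the static
    # helper above. sqlite3.connect() creates the file if it is missing,
    # so this runs and simply prints [] on an empty database.
    tables = SQLite3.execute_fetch_query_multi_values(
        "SELECT name FROM sqlite_master WHERE type='table'", '/tmp/example.db')
    print(tables)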
| [
"[email protected]"
] | |
487ea60364bea7fbadc50bc1b2fd1c36512a6d7c | e9c8094407c351919cc765990dc2b4907d7dc986 | /CRC/check_district_functionality.py | ad1b478d278355540dd1826145d71c1e27efd42a | [] | no_license | chetandg123/System_Test | 343991e37d90c9ae25dbdd9ea06944483e071f33 | 5c8875e298f31dd3feb0726d3967bca7a7daea0a | refs/heads/master | 2022-10-11T13:42:32.537274 | 2020-06-04T16:45:12 | 2020-06-04T16:45:12 | 269,412,983 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 229 | py | import time
import unittest
class CRC(unittest.TestCase):
def setUp(self):
time.sleep(15)
def test_query(self):
print("District Functionality is selected")
def tearDown(self):
time.sleep(15) | [
"[email protected]"
] | |
31c69697e3c23566e5008b2f757956ca5be41372 | d725d4909a144f3218067c78e0339df781ba8145 | /src/plot_utils.py | cadda8ff609970ad7cbbf8f12682808aac707c01 | [
"Apache-2.0"
] | permissive | dhermes/phd-thesis | 05b101aa93c9d8aa72cc069d29ba3b9d3f2384dc | 732c75b4258e6f41b2dafb2929f0e3dbd380239b | refs/heads/master | 2021-06-13T04:22:37.265874 | 2019-11-16T16:35:22 | 2019-11-16T16:35:22 | 139,187,875 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,463 | py | # Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared utilities and settings for plotting."""
import fractions
import math
import os
import seaborn
# As of ``0.9.0``, this palette has (BLUE, ORANGE, GREEN, RED, PURPLE, BROWN).
_COLORS = seaborn.color_palette(palette="deep", n_colors=6)
BLUE = _COLORS[0]
GREEN = _COLORS[2]
RED = _COLORS[3]
PURPLE = _COLORS[4]
del _COLORS
TEXT_SIZE = 10 # NOTE: Thesis text uses 12 point.
TICK_SIZE = 7
def set_styles():
"""Set the styles used for plotting."""
seaborn.set(style="white")
def get_path(*parts):
"""Get a file path in the ``images/`` directory.
This assumes the script is currently in the ``src/``
directory.
"""
curr_dir = os.path.abspath(os.path.dirname(__file__))
root_dir = os.path.dirname(curr_dir)
images_dir = os.path.join(root_dir, "images")
return os.path.join(images_dir, *parts)
def binomial(n, k):
numerator = math.factorial(n)
denominator = math.factorial(k) * math.factorial(n - k)
result = fractions.Fraction(numerator, denominator)
if float(result) != result:
raise ValueError("Cannot be represented exactly")
return float(result)
def next_float(value, greater=True):
"""Gets the next (or previous) floating point value."""
frac, exponent = math.frexp(value)
if greater:
if frac == -0.5:
ulp = 0.5 ** 54
else:
ulp = 0.5 ** 53
else:
if frac == 0.5:
ulp = -0.5 ** 54
else:
ulp = -0.5 ** 53
return (frac + ulp) * 2.0 ** exponent
def to_float(v):
"""Converts an MPF (``mpmath`` float) to a ``float``."""
f = float(v)
if f == v:
return f
if f < v:
low = f
high = next_float(f, greater=True)
else:
low = next_float(f, greater=False)
high = f
d_low = v - low
d_high = high - v
if d_low < d_high:
return low
else:
return high
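
if __name__ == "__main__":
    # Quick self-check sketch (not part of the original module): exact
    # binomial coefficients, and ULP stepping around 1.0 in both directions.
    assert binomial(5, 2) == 10.0
    assert next_float(1.0) > 1.0
    assert next_float(1.0, greater=False) < 1.0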
| [
"[email protected]"
] | |
568db6eb13884c292c10227362bb121ed53af89d | 4e4c22dfabb1a0fa89f0f51f58737273412a30e0 | /audit/backend/ssh_interactive.py | 41119aecffd21653ed1bf7b9824da53cc1eeb199 | [] | no_license | shaoqianliang/fort_machine | 4cb271d5ef29c924c09172ff397e2af8562ee4ba | cf7e3d4c6682831ce04bcde478930ab7e85abb01 | refs/heads/master | 2020-04-28T15:24:02.056674 | 2019-04-12T23:50:35 | 2019-04-12T23:50:35 | 175,372,042 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,469 | py | #!/usr/bin/env python
# Copyright (C) 2003-2007 Robey Pointer <[email protected]>
#
# This file is part of paramiko.
#
# Paramiko is free software; you can redistribute it and/or modify it under the
# terms of the GNU Lesser General Public License as published by the Free
# Software Foundation; either version 2.1 of the License, or (at your option)
# any later version.
#
# Paramiko is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
# details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Paramiko; if not, write to the Free Software Foundation, Inc.,
# 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
import base64
from binascii import hexlify
import getpass
import os
import select
import socket
import sys
import time
import traceback
from paramiko.py3compat import input
from audit import models
import paramiko
try:
import interactive
except ImportError:
from . import interactive
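
# NOTE: ssh_session() below calls manual_auth(), which the original file
# neither defines nor imports, so it would raise a NameError. A minimal
# sketch is provided here, assuming plain password authentication (the
# upstream paramiko demo this file derives from also offers key-based
# options).
def manual_auth(t, username, password):
    # Authenticate the SSH transport with the supplied credentials.
    t.auth_password(username, password)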
def ssh_session(bind_host_user, user_obj):
# now connect
hostname = bind_host_user.host.ip_addr
port = bind_host_user.host.port
username = bind_host_user.host_user.username
password = bind_host_user.host_user.password
try:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((hostname, port))
except Exception as e:
print('*** Connect failed: ' + str(e))
traceback.print_exc()
sys.exit(1)
try:
t = paramiko.Transport(sock)
try:
t.start_client()
except paramiko.SSHException:
print('*** SSH negotiation failed.')
sys.exit(1)
try:
keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
except IOError:
try:
keys = paramiko.util.load_host_keys(os.path.expanduser('~/ssh/known_hosts'))
except IOError:
print('*** Unable to open host keys file')
keys = {}
# check server's host key -- this is important.
key = t.get_remote_server_key()
if hostname not in keys:
print('*** WARNING: Unknown host key!')
elif key.get_name() not in keys[hostname]:
print('*** WARNING: Unknown host key!')
elif keys[hostname][key.get_name()] != key:
print('*** WARNING: Host key has changed!!!')
sys.exit(1)
else:
print('*** Host key OK.')
if not t.is_authenticated():
manual_auth(t, username, password)
if not t.is_authenticated():
print('*** Authentication failed. :(')
t.close()
sys.exit(1)
chan = t.open_session()
chan.get_pty() # terminal
chan.invoke_shell()
print('*** Here we go!\n')
session_obj = models.SessionLog.objects.create(account=user_obj.account,
host_user_bind=bind_host_user)
interactive.interactive_shell(chan, session_obj)
chan.close()
t.close()
except Exception as e:
print('*** Caught exception: ' + str(e.__class__) + ': ' + str(e))
traceback.print_exc()
try:
t.close()
except:
pass
sys.exit(1)
| [
"[email protected]"
] | |
f87411e9cc4b3fab0ab835191ccf46313621ed6d | 48832d27da16256ee62c364add45f21b968ee669 | res/scripts/client/gui/scaleform/daapi/view/lobby/cybersport/cybersportintroview.py | bb59b4023457fdaa2d97314c8e40330d4006d4b5 | [] | no_license | webiumsk/WOT-0.9.15.1 | 0752d5bbd7c6fafdd7f714af939ae7bcf654faf7 | 17ca3550fef25e430534d079876a14fbbcccb9b4 | refs/heads/master | 2021-01-20T18:24:10.349144 | 2016-08-04T18:08:34 | 2016-08-04T18:08:34 | 64,955,694 | 0 | 0 | null | null | null | null | WINDOWS-1250 | Python | false | false | 16,723 | py | # 2016.08.04 19:50:28 Central Europe (Summer Time)
# Embedded file name: scripts/client/gui/Scaleform/daapi/view/lobby/cyberSport/CyberSportIntroView.py
from gui.Scaleform.genConsts.TOOLTIPS_CONSTANTS import TOOLTIPS_CONSTANTS
from helpers.i18n import makeString as _ms
from adisp import process
from gui import SystemMessages
from gui.ClientUpdateManager import g_clientUpdateManager
from gui.prb_control.prb_helpers import unitFunctionalProperty
from gui.shared import events
from gui.shared.gui_items.Vehicle import VEHICLE_CLASS_NAME as _VCN
from gui.shared.ItemsCache import g_itemsCache
from gui.shared.events import CSVehicleSelectEvent
from gui.shared.event_bus import EVENT_BUS_SCOPE
from gui.shared.formatters import text_styles, icons
from gui.clubs import formatters as club_fmts, events_dispatcher as club_events, contexts as club_ctx
from gui.clubs.club_helpers import MyClubListener, tryToConnectClubBattle
from gui.clubs.settings import CLIENT_CLUB_STATE, getLadderChevron256x256, LADDER_CHEVRON_ICON_PATH, CLIENT_CLUB_RESTRICTIONS
from gui.Scaleform.daapi.view.lobby.rally.vo_converters import makeVehicleVO
from gui.Scaleform.daapi.view.meta.CyberSportIntroMeta import CyberSportIntroMeta
from gui.Scaleform.locale.RES_ICONS import RES_ICONS
from gui.Scaleform.locale.TOOLTIPS import TOOLTIPS
from gui.Scaleform.locale.CYBERSPORT import CYBERSPORT
from gui.Scaleform.genConsts.CYBER_SPORT_ALIASES import CYBER_SPORT_ALIASES
from gui.game_control.battle_availability import isHourInForbiddenList
from predefined_hosts import g_preDefinedHosts
_ACCEPTED_VEH_TYPES = (_VCN.LIGHT_TANK, _VCN.MEDIUM_TANK, _VCN.HEAVY_TANK)
class _IntroViewVO(object):
def __init__(self):
self.__data = {'clubId': 0,
'ladderIconSource': '',
'isLadderBtnEnabled': True,
'isClockIconVisible': False,
'clockIconSource': '',
'isRequestWaitingTextVisible': False,
'requestWaitingText': '',
'teamHeaderText': '',
'teamDescriptionText': '',
'isTeamDescriptionBackVisible': False,
'isTeamDescriptionTooltip': False,
'teamDescriptionTooltip': '',
'createBtnLabel': '',
'createBtnTooltip': '',
'isCreateBtnEnabled': False,
'isCreateBtnVisible': False,
'isTeamAdditionalBtnVisible': False,
'teamAdditionalBtnLabel': '',
'teamAdditionalBtnTooltip': '',
'isCancelBtnVisible': False,
'cancelBtnLabel': '',
'cancelBtnTooltip': '',
'isCanCreateBattle': False,
'isCanJoinBattle': False,
'isNeedAddPlayers': False,
'isHaveTeamToShow': False}
def getData(self):
return self.__data
def acceptNavigationByChevron(self, isAccepted):
self.__data['isLadderBtnEnabled'] = isAccepted
def setClubLadderChevron(self, club):
ladderInfo = club.getLadderInfo()
if ladderInfo.isInLadder():
chevron = getLadderChevron256x256(ladderInfo.getDivision())
else:
chevron = getLadderChevron256x256()
self.__data['ladderIconSource'] = chevron
def setNoClubChevron(self, isApplicationSent):
if isApplicationSent:
self.__data['isClockIconVisible'] = True
self.__data['clockIconSource'] = RES_ICONS.MAPS_ICONS_LIBRARY_CYBERSPORT_CLOCKICON
self.__data['isRequestWaitingTextVisible'] = True
self.__data['requestWaitingText'] = text_styles.alert(CYBERSPORT.WINDOW_INTRO_REQUESTWAITING)
else:
self.__data['ladderIconSource'] = '%s/256/empty.png' % LADDER_CHEVRON_ICON_PATH
def setClubLabel(self, label):
self.__data['teamHeaderText'] = text_styles.promoSubTitle(label)
def setClubDBbID(self, ClubDBbID):
self.__data['clubId'] = ClubDBbID
def setClubDescription(self, description, isBackVisible = False):
self.__data['teamDescriptionText'] = description
self.__data['isTeamDescriptionBackVisible'] = isBackVisible
def setClubDescriptionTooltip(self, tooltip):
self.__data['isTeamDescriptionTooltip'] = True
self.__data['teamDescriptionTooltip'] = tooltip
def showCreateButton(self, label, tooltip, enabled = True):
self.__data['isCreateBtnVisible'] = True
self.__data['isCreateBtnEnabled'] = enabled
self.__data['createBtnLabel'] = label
self.__data['createBtnTooltip'] = tooltip
def showAdditionalButton(self, label, tooltip):
self.__data['isTeamAdditionalBtnVisible'] = True
self.__data['teamAdditionalBtnLabel'] = label
self.__data['teamAdditionalBtnTooltip'] = tooltip
def showCancelButton(self, label, tooltip):
self.__data['isCancelBtnVisible'] = True
self.__data['cancelBtnLabel'] = label
self.__data['cancelBtnTooltip'] = tooltip
def moveToTheUnitByCreateButton(self):
self.__data['isCanCreateBattle'] = self.__data['isCanJoinBattle'] = True
def needAddPlayers(self):
self.__data['isNeedAddPlayers'] = True
def openClubProfileByChevronClick(self):
self.__data['isHaveTeamToShow'] = True
def fillDefault(self):
self.__data['ladderIconSource'] = getLadderChevron256x256()
self.__data['isRequestWaitingTextVisible'] = True
self.__data['requestWaitingText'] = text_styles.alert('#cybersport:window/intro/unavailableWaiting')
self.setClubLabel(_ms(CYBERSPORT.WINDOW_INTRO_TEAM_HEADER_STATICTEAMS))
self.setClubDescription(text_styles.error('#cybersport:window/intro/team/description/unavailable'), isBackVisible=True)
self.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_ASSEMBLETEAM), TOOLTIPS.CYBERSPORT_INTRO_CREATEBTN_ASSEMBLETEAM, enabled=False)
class CyberSportIntroView(CyberSportIntroMeta, MyClubListener):
def __init__(self):
super(CyberSportIntroView, self).__init__()
self._section = 'selectedIntroVehicles'
def showSelectorPopup(self):
rosterSettings = self.unitFunctional.getRosterSettings()
self.fireEvent(events.LoadViewEvent(CYBER_SPORT_ALIASES.VEHICLE_SELECTOR_POPUP_PY, ctx={'isMultiSelect': False,
'infoText': CYBERSPORT.WINDOW_VEHICLESELECTOR_INFO_INTRO,
'componentsOffset': 45,
'selectedVehicles': self.__getSelectedVehicles(),
'section': 'cs_intro_view_vehicle',
'levelsRange': rosterSettings.getLevelsRange(),
'vehicleTypes': _ACCEPTED_VEH_TYPES}), scope=EVENT_BUS_SCOPE.LOBBY)
def showStaticTeamProfile(self):
club = self.getClub()
if club is not None:
club_events.showClubProfile(club.getClubDbID())
return
def showStaticTeamStaff(self):
club = self.getClub()
if club is not None:
club_events.showClubProfile(club.getClubDbID(), viewIdx=1)
return
def joinClubUnit(self):
tryToConnectClubBattle(self.getClub(), self.clubsState.getJoiningTime())
@process
def cancelWaitingTeamRequest(self):
state = self.clubsState
if self.clubsState.getStateID() == CLIENT_CLUB_STATE.SENT_APP:
result = yield self.clubsCtrl.sendRequest(club_ctx.RevokeApplicationCtx(state.getClubDbID(), 'clubs/app/revoke'))
if result.isSuccess():
SystemMessages.pushMessage(club_fmts.getAppRevokeSysMsg(self.getClub()))
@unitFunctionalProperty
def unitFunctional(self):
return None
def setData(self, initialData):
pass
def onClubUpdated(self, club):
self.__updateClubData()
def onClubsSeasonStateChanged(self, seasonState):
self.__updateClubData()
def onClubUnitInfoChanged(self, unitInfo):
self.__updateClubData()
def onAccountClubStateChanged(self, state):
self.__updateClubData()
def onAccountClubRestrictionsChanged(self):
self.__updateClubData()
def onClubNameChanged(self, name):
self.__updateClubData()
def onClubLadderInfoChanged(self, ladderInfo):
self.__updateClubData()
def onClubMembersChanged(self, members):
self.__updateClubData()
def onStatusChanged(self):
self.__updateClubData()
def _populate(self):
super(CyberSportIntroView, self)._populate()
self.addListener(CSVehicleSelectEvent.VEHICLE_SELECTED, self.__updateSelectedVehicles)
data = {'titleLblText': text_styles.promoTitle(CYBERSPORT.WINDOW_INTRO_TITLE),
'descrLblText': text_styles.main(CYBERSPORT.WINDOW_INTRO_DESCRIPTION),
'listRoomTitleLblText': text_styles.promoSubTitle(CYBERSPORT.WINDOW_INTRO_SEARCH_TITLE),
'listRoomDescrLblText': text_styles.main(CYBERSPORT.WINDOW_INTRO_SEARCH_DESCRIPTION),
'listRoomBtnLabel': _ms(CYBERSPORT.WINDOW_INTRO_SEARCH_BTN),
'autoTitleLblText': text_styles.middleTitle(CYBERSPORT.WINDOW_INTRO_AUTO_TITLE),
'autoDescrLblText': text_styles.main(CYBERSPORT.WINDOW_INTRO_AUTO_DESCRIPTION),
'vehicleBtnTitleTfText': text_styles.standard(CYBERSPORT.BUTTON_CHOOSEVEHICLES_SELECTED)}
if self.__isLadderRegulated():
data.update({'regulationsInfoText': '{0}{1}'.format(icons.info(), text_styles.main(CYBERSPORT.LADDERREGULATIONS_INFO)),
'regulationsInfoTooltip': TOOLTIPS_CONSTANTS.LADDER_REGULATIONS})
self.as_setTextsS(data)
self.__updateClubData()
self.__updateAutoSearchVehicle(self.__getSelectedVehicles())
self.startMyClubListening()
self.clubsCtrl.getAvailabilityCtrl().onStatusChanged += self.onStatusChanged
def _dispose(self):
self.stopMyClubListening()
self.removeListener(CSVehicleSelectEvent.VEHICLE_SELECTED, self.__updateSelectedVehicles)
g_clientUpdateManager.removeObjectCallbacks(self)
self.clubsCtrl.getAvailabilityCtrl().onStatusChanged -= self.onStatusChanged
super(CyberSportIntroView, self)._dispose()
def __updateClubData(self):
resultVO = _IntroViewVO()
club = self.getClub()
if self.clubsState.getStateID() == CLIENT_CLUB_STATE.HAS_CLUB and club:
profile = self.clubsCtrl.getProfile()
limits = self.clubsCtrl.getLimits()
resultVO.setClubLabel(club.getUserName())
resultVO.setClubDBbID(club.getClubDbID())
resultVO.setClubLadderChevron(club)
resultVO.showAdditionalButton(_ms(CYBERSPORT.WINDOW_INTRO_ADDITIONALBTN_LIST), TOOLTIPS.CYBERSPORT_INTRO_ADDITIONALBTN)
resultVO.moveToTheUnitByCreateButton()
resultVO.openClubProfileByChevronClick()
if club.hasActiveUnit():
unitInfo = club.getUnitInfo()
resultVO.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_JOINTEAM), TOOLTIPS.CYBERSPORT_INTRO_CREATEBTN_JOINTEAM)
if unitInfo.isInBattle():
isInBattleIcon = icons.makeImageTag(RES_ICONS.MAPS_ICONS_LIBRARY_SWORDSICON, 16, 16, -3, 0)
resultVO.setClubDescription(text_styles.neutral('%s %s' % (isInBattleIcon, _ms(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_TEAMINBATTLE))))
else:
resultVO.setClubDescription(text_styles.neutral(CYBERSPORT.STATICFORMATIONPROFILEWINDOW_STATUSLBL_CLUBISCALLED))
else:
canCreateUnit = limits.canCreateUnit(profile, club)
if canCreateUnit.success:
resultVO.setClubDescription(text_styles.neutral(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_ASSEMBLINGTEAM))
resultVO.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_ASSEMBLETEAM), TOOLTIPS.CYBERSPORT_INTRO_CREATEBTN_ASSEMBLETEAM)
elif canCreateUnit.reason == CLIENT_CLUB_RESTRICTIONS.NOT_ENOUGH_MEMBERS:
if club.getPermissions().isOwner():
resultVO.setClubDescription(text_styles.main(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_NOTENOUGHPLAYERS))
resultVO.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_ADDPLAYERS), TOOLTIPS.CYBERSPORT_INTRO_CREATEBTN_ADDPLAYERS)
else:
resultVO.setClubDescription(text_styles.error(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_OWNERASSEMBLINGTEAM), isBackVisible=True)
resultVO.showCreateButton(_ms('#cybersport:window/intro/create/btn/private/seeStaff'), '#tooltips:cyberSport/intro/createBtn/addPlayers/private')
resultVO.needAddPlayers()
else:
resultVO.setClubDescription(text_styles.error(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_NOTENOUGHPERMISSIONS_ASSEMBLINGTEAM), isBackVisible=True)
resultVO.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_ASSEMBLETEAM), '#tooltips:StaticFormationProfileWindow/actionBtn/notEnoughPermissions', enabled=False)
elif self.clubsState.getStateID() == CLIENT_CLUB_STATE.NO_CLUB:
resultVO.setNoClubChevron(isApplicationSent=False)
resultVO.setClubLabel(_ms(CYBERSPORT.WINDOW_INTRO_TEAM_HEADER_STATICTEAMS))
resultVO.setClubDescription(text_styles.main(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_CREATEORFIND))
resultVO.showCreateButton(_ms(CYBERSPORT.WINDOW_INTRO_CREATE_BTN_LOOK), TOOLTIPS.CYBERSPORT_INTRO_CREATEBTN_LOOK)
elif self.clubsState.getStateID() == CLIENT_CLUB_STATE.SENT_APP:
resultVO.setNoClubChevron(isApplicationSent=True)
resultVO.openClubProfileByChevronClick()
if club is not None:
resultVO.setClubLabel(club.getUserName())
resultVO.setClubLadderChevron(club)
resultVO.setClubDescription(text_styles.neutral(CYBERSPORT.WINDOW_INTRO_TEAM_DESCRIPTION_WAITINGFORREQUEST))
resultVO.showCancelButton(_ms(CYBERSPORT.WINDOW_INTRO_CANCEL_BTN_LABEL), TOOLTIPS.CYBERSPORT_INTRO_CANCELBTN)
resultVO.showAdditionalButton(_ms(CYBERSPORT.WINDOW_INTRO_ADDITIONALBTN_LIST), TOOLTIPS.CYBERSPORT_INTRO_ADDITIONALBTN)
else:
resultVO.fillDefault()
resultVO.acceptNavigationByChevron(False)
isBattlesAvailable, _ = self.clubsCtrl.getAvailabilityCtrl().getStatus()
if not isBattlesAvailable:
resultVO.setClubDescriptionTooltip(TOOLTIPS_CONSTANTS.LADDER_REGULATIONS)
resultVO.setClubDescription('{0}{1}'.format(icons.alert(), text_styles.main(CYBERSPORT.LADDERREGULATIONS_WARNING)), True)
self.as_setStaticTeamDataS(resultVO.getData())
return
def __updateSelectedVehicles(self, event):
if event.ctx is not None and len(event.ctx) > 0:
vehIntCD = int(event.ctx[0])
self.unitFunctional.setSelectedVehicles(self._section, [vehIntCD])
self.__updateAutoSearchVehicle([vehIntCD])
return
def __updateAutoSearchVehicle(self, vehsIntCD):
if len(vehsIntCD):
vehIntCD = vehsIntCD[0]
vehicle = g_itemsCache.items.getItemByCD(vehIntCD)
levelsRange = self.unitFunctional.getRosterSettings().getLevelsRange()
if vehicle.level not in levelsRange:
isReadyVehicle = False
warnTooltip = TOOLTIPS.CYBERSPORT_INTRO_SELECTEDVEHICLEWARN_INCOMPATIBLELEVEL
elif vehicle.type not in _ACCEPTED_VEH_TYPES:
isReadyVehicle = False
warnTooltip = TOOLTIPS.CYBERSPORT_INTRO_SELECTEDVEHICLEWARN_INCOMPATIBLETYPE
else:
warnTooltip, isReadyVehicle = '', vehicle.isReadyToPrebattle()
self.as_setSelectedVehicleS(makeVehicleVO(vehicle), isReadyVehicle, warnTooltip)
else:
self.as_setNoVehiclesS(TOOLTIPS.CYBERSPORT_NOVEHICLESINHANGAR)
def __getSelectedVehicles(self):
return self.unitFunctional.getSelectedVehicles(self._section)
def __isLadderRegulated(self):
"""Check if ladder regulation label should be shown.
Method returns True if there are some regulation on the peripheries, or
if some peripheries are unavailable. Returns False otherwise.
"""
availabilityCtrl = self.clubsCtrl.getAvailabilityCtrl()
for hostItem in g_preDefinedHosts.hosts():
if availabilityCtrl.getForbiddenPeriods(hostItem.peripheryID) or not availabilityCtrl.isServerAvailable(hostItem.peripheryID):
return True
return False
# okay decompyling c:\Users\PC\wotsources\files\originals\res\scripts\client\gui\scaleform\daapi\view\lobby\cybersport\cybersportintroview.pyc
# decompiled 1 files: 1 okay, 0 failed, 0 verify failed
# 2016.08.04 19:50:28 Střední Evropa (letní čas)
| [
"[email protected]"
] | |
bfd55d416797f1436f81925f4ec800b9a3895717 | eacfc1c0b2acd991ec2cc7021664d8e79c9e58f6 | /ccpnmr2.4/python/memops/format/compatibility/upgrade/v_2_0_a3/MapInfo.py | d1f75c3688f4b053d116e3783fe8125172c46e56 | [] | no_license | edbrooksbank/ccpnmr2.4 | cfecb0896dcf8978d796e6327f7e05a3f233a921 | f279ca9bb2d972b1ce075dad5fcc16e6f4a9496c | refs/heads/master | 2021-06-30T22:29:44.043951 | 2019-03-20T15:01:09 | 2019-03-20T15:01:09 | 176,757,815 | 0 | 1 | null | 2020-07-24T14:40:26 | 2019-03-20T14:59:23 | HTML | UTF-8 | Python | false | false | 98,298 | py |
# Packages, classElements and AbstractDataTypes skipped in new model
# (prefix, typeName, elemName, newGuid, elemType)
skipElements = [
('ACCO', 'AccessControlStore', 'permissions', 'www.ccpn.ac.uk_Fogh_2006-09-04-17:21:38_00005', 'MetaRole'),
('ACCO', 'Permission', 'accessControlStore', 'www.ccpn.ac.uk_Fogh_2006-09-04-17:21:38_00004', 'MetaRole'),
('ANAL', 'AxisPanel', 'spectrumWindow', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00017', 'MetaRole'),
('ANAL', 'SlicePanel', 'spectrumWindow', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00015', 'MetaRole'),
('ANAL', 'SpectrumWindow', 'axisPanels', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00018', 'MetaRole'),
('ANAL', 'SpectrumWindow', 'slicePanels', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00016', 'MetaRole'),
('ANAL', 'SpectrumWindow', 'spectrumWindowViews', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00014', 'MetaRole'),
('ANAL', 'SpectrumWindowView', 'spectrumWindow', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00013', 'MetaRole'),
('ANPR', 'AnnealProtocol', 'runs', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:42:09_00001', 'MetaRole'),
('ANPR', 'EnergyTerm', 'nmrSimEnergyTerms', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:43:08_00004', 'MetaRole'),
('COOR', 'Model', 'inputToNmrRuns', 'www.ccpn.ac.uk_Fogh_2007-11-23-12:00:13_00001', 'MetaRole'),
('COOR', 'StructureEnsemble', 'outputFromNmrRun', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:45:53_00002', 'MetaRole'),
('IMPL', 'MemopsRoot', 'currentNmrSimStore', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentNmrSimStore', 'MetaRole'),
('IMPL', 'MemopsRoot', 'nmrSimStores', 'ccpn_automatic_memops.Implementation.MemopsRoot.nmrSimStore', 'MetaRole'),
('MOLS', 'MolSystem', 'nmrSimRuns', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:46:22_00002', 'MetaRole'),
('NMR', 'AbstractMeasurementList', 'inputToNmrRuns', 'www.ccpn.ac.uk_Fogh_2008-03-14-15:14:21_00001', 'MetaRole'),
('NMR', 'AbstractMeasurementList', 'outputFromNmrRun', 'www.ccpn.ac.uk_Fogh_2008-03-14-15:14:54_00001', 'MetaRole'),
('NMR', 'PeakList', 'inputToNmrRuns', 'www.ccpn.ac.uk_Fogh_2008-03-14-15:14:56_00002', 'MetaRole'),
('NMR', 'PeakList', 'outputFromNmrRun', 'www.ccpn.ac.uk_Fogh_2008-03-14-15:14:56_00004', 'MetaRole'),
('NMRC', 'AbstractConstraintList', 'nmrSimEnergyTerms', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:43:08_00010', 'MetaRole'),
('NMRC', 'NmrConstraintStore', 'inputToNmrRuns', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:45:51_00001', 'MetaRole'),
('NMRC', 'NmrConstraintStore', 'outputFromNmrRun', 'www.ccpn.ac.uk_Fogh_2007-06-05-13:45:51_00003', 'MetaRole'),
('NSIM', None, None, 'www.ccpn.ac.uk_Fogh_2007-06-05-13:42:05_00010', 'MetaPackage'),
('TEMP', 'MultiTypesValue', None, 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:49_00001', 'MetaClass'),
]
# classElements skipped in new model, but available for simple data transfer
# (prefix, typeName, elemName, newGuid, elemMap, valueTypeGuid)
delayElements = [
]
# MetaConstraints added in new model
# (qualifiedName, guid)
newConstraints = [
('cambridge.WmsProtocol.InterfaceParameter.Only_simple_parameter_types_can_have_defaults', 'www.ccpn.ac.uk_Fogh_2011-10-14-11:18:08_00002'),
('cambridge.WmsProtocol.InterfaceParameter.hicard.hicard_consistent_with_ProtocolParameter_hicard', 'www.ccpn.ac.uk_Fogh_2013-10-11-09:59:51_00001'),
('cambridge.WmsProtocol.InterfaceParameter.locard.locard_consistent_with_ProtocolParameter_locard', 'www.ccpn.ac.uk_Fogh_2013-10-11-09:59:51_00002'),
('cambridge.WmsProtocol.InterfaceParameter.multiple_defaultStrings_only_for_hicard_ne_1', 'www.ccpn.ac.uk_Fogh_2011-10-14-11:18:08_00001'),
('cambridge.WmsProtocol.ProtocolParameter.Only_simple_parameter_types_can_have_defaults', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:18_00001'),
('cambridge.WmsProtocol.ProtocolParameter.container.Container_is_container_type_or_content_but_not_container_is_simple_type', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:21_00003'),
('cambridge.WmsProtocol.ProtocolParameter.container.No_cyclical_ProtocolParameter_dependency', 'www.ccpn.ac.uk_Fogh_2010-05-20-14:35:13_00003'),
('cambridge.WmsProtocol.ProtocolParameter.multiple_defaultStrings_only_for_hicard_ne_1', 'www.ccpn.ac.uk_Fogh_2011-06-09-17:19:17_00001'),
('ccp.general.Template.FloatMatrixObject.data.data_empty_or_equal_to_size', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:05:06_00001'),
('ccp.molecule.MolStructure.Atom.index.index_point_to_object_in_StructureEnsemble_orderedAtoms', 'www.ccpn.ac.uk_Fogh_2011-04-07-12:25:59_00001'),
('ccp.nmr.Nmr.Resonance.resonanceGroup.Only_active_ResonanceGroups_can_have_resonances', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:56:26_00001'),
('ccp.nmr.Nmr.ResonanceGroup.chains.Only_active_ResonanceGroups_can_have_chains', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:56:26_00002'),
('ccp.nmr.Nmr.ResonanceGroup.isActive.Only_active_ResonanceGroups_can_have_residue_chains_resonances', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:56:26_00005'),
('ccp.nmr.Nmr.ResonanceGroup.residue.Only_active_ResonanceGroups_can_have_residue', 'www.ccpn.ac.uk_Fogh_2011-08-05-12:08:54_00001'),
('ccp.nmr.NmrCalc.Data.parameterGroup.No_cyclical_parameter_grouping', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:23_00003'),
('ccp.nmr.NmrCalc.MolResidueData.residueSeqIds.Either_single_chain_code_or_residueSeqIds_match_chainCodes_one_to_one', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:00:04_00002'),
('ccp.nmr.NmrCalc.Run.masterRun.derived_runs_cannot_be_nested', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00003'),
('ccp.nmr.NmrConstraint.CsaConstraint.resonance.value_isotopeCode_eq_self_parentList_isotopeCode', 'www.ccpn.ac.uk_Fogh_2011-08-02-16:15:18_00001'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.consistent_matrix_size', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:22_00003'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.consistent_number_of_reference_atoms', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:22_00004'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.consistent_number_valuesPerPoint', 'www.ccpn.ac.uk_Fogh_2012-04-13-14:02:18_00001'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.consistent_referencing_dimension', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:22_00002'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.nd_distribution_is_normalised', 'www.ccpn.ac.uk_Fogh_2012-04-13-14:02:18_00002'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.numbers_are_positive', 'www.ccpn.ac.uk_Fogh_2012-04-13-14:02:18_00003'),
('ccp.nmr.NmrReference.ChemAtomNmrDistrib.refAtoms.len_refatoms_eq_ndim', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:40:44_00005'),
('ccp.nmr.NmrScreen.ExperimentHit.normalisedChange.absvalue_le_1', 'www.ccpn.ac.uk_Fogh_2012-04-18-15:31:23_00002'),
('ccp.nmr.NmrScreen.RegionWeight.intervals_do_not_overlap', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:06_00005'),
('ccp.nmr.NmrScreen.RegionWeight.minPpm_lt_maxPpm', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:06_00004'),
('ccpnmr.AnalysisV3.AtomSetMapping.atomSetMappings.no_atomSetMapping_cycles', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00009'),
('ccpnmr.AnalysisV3.SpectrumView.windowPanel.WIndowPanel_moduleCode_is_SpectrumView_SpectrumMapping_Window_code', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00012'),
('ccpnmr.AnalysisWindow.WindowAxis.axisUnit.AxisUnit_compatible_with_AxisType', 'www.ccpn.ac.uk_Fogh_2011-12-02-09:49:52_00001'),
('memops.Implementation.HexString.HexStringFormat', 'www.ccpn.ac.uk_Fogh_2011-12-02-09:49:50_00002'),
('memops.Implementation.RgbaColor.length_is_9', 'www.ccpn.ac.uk_Fogh_2011-12-02-09:49:50_00004'),
]
# Mandatory classElements added in new model
# New ClassElements with locard !=0, no default, not derived or Implementation
# (prefix, typeName, elemName, newGuid)
newMandatories = [
('ANA3', 'AnalysisDataDim', 'analysisSpectrum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00034'),
('ANA3', 'AnalysisDataDim', 'dataDim', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00002'),
('ANA3', 'AnalysisLayout', 'analysisProjectV3', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00041'),
('ANA3', 'AnalysisLayout', 'layout', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00002'),
('ANA3', 'AnalysisPanel', 'analysisProjectV3', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00043'),
('ANA3', 'AnalysisPanel', 'panel', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00022'),
('ANA3', 'AnalysisPeakList', 'analysisSpectrum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00036'),
('ANA3', 'AnalysisPeakList', 'peakList', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:36_00003'),
('ANA3', 'AnalysisProjectV3', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00009'),
('ANA3', 'AnalysisProjectV3', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00030'),
('ANA3', 'AnalysisProjectV3', 'nmrProject', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00004'),
('ANA3', 'AnalysisSpectrum', 'analysisProject', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00025'),
('ANA3', 'AnalysisSpectrum', 'dataSource', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:36_00001'),
('ANA3', 'AnnotationSetting', 'analysisProject', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00019'),
('ANA3', 'AnnotationSetting', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00010'),
('ANA3', 'AnnotationSetting', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00009'),
('ANA3', 'AtomSetMapping', 'elementSymbol', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00014'),
('ANA3', 'AtomSetMapping', 'name', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00010'),
('ANA3', 'AtomSetMapping', 'residueMapping', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00023'),
('ANA3', 'AxisMapping', 'analysisDataDim', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00003'),
('ANA3', 'AxisMapping', 'spectrumMapping', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00002'),
('ANA3', 'AxisMapping', 'windowAxis', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00011'),
('ANA3', 'ChainMapping', 'analysisProjectV3', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00001'),
('ANA3', 'ChainMapping', 'chainCode', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00021'),
('ANA3', 'ChainMapping', 'molSystemCode', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00020'),
('ANA3', 'PeakListView', 'analysisPeakList', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00011'),
('ANA3', 'PeakListView', 'spectrumView', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00007'),
('ANA3', 'PeakSetting', 'analysisProject', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00021'),
('ANA3', 'PeakSetting', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00015'),
('ANA3', 'PeakSetting', 'pickNonadjacent', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00022'),
('ANA3', 'PeakSetting', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00014'),
('ANA3', 'PrintSetting', 'analysisProject', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00023'),
('ANA3', 'PrintSetting', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00029'),
('ANA3', 'PrintSetting', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00028'),
('ANA3', 'ResidueMapping', 'chainMapping', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00017'),
('ANA3', 'ResidueMapping', 'seqId', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00026'),
('ANA3', 'SpectrumMapping', 'analysisSpectrum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00001'),
('ANA3', 'SpectrumMapping', 'window', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00011'),
('ANA3', 'SpectrumView', 'spectrumMapping', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00004'),
('ANA3', 'SpectrumView', 'windowPanel', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00010'),
('ANA3', 'StoredContour', 'analysisSpectrum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00038'),
('ANA3', 'StoredContour', 'dims', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00013'),
('ANA3', 'StoredContour', 'path', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00001'),
('ANA3', 'StoredContour', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00012'),
('ANAL', 'AxisPanel', 'spectrumWindowPane', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00007'),
('ANAL', 'SlicePanel', 'spectrumWindowPane', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00005'),
('ANAL', 'SpectrumWindowPane', 'serial', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00009'),
('ANAL', 'SpectrumWindowPane', 'spectrumWindow', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00001'),
('ANAL', 'SpectrumWindowView', 'spectrumWindowPane', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00003'),
('ANAW', 'AbstractModule', 'analysisWindowStore', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00011'),
('ANAW', 'AbstractModule', 'code', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00008'),
('ANAW', 'AnalysisWindowStore', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00007'),
('ANAW', 'AnalysisWindowStore', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00017'),
('ANAW', 'AxisType', 'analysisWindowStore', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00013'),
('ANAW', 'AxisType', 'code', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00022'),
('ANAW', 'AxisUnit', 'analysisWindowStore', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00015'),
('ANAW', 'AxisUnit', 'unit', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00031'),
('ANAW', 'Module', 'defaultSize', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00033'),
('ANAW', 'ModuleParameter', 'module', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00006'),
('ANAW', 'ModuleParameter', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00007'),
('ANAW', 'WindowAxis', 'axisType', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00018'),
('ANAW', 'WindowAxis', 'axisUnit', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00029'),
('ANAW', 'WindowAxis', 'label', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00015'),
('ANAW', 'WindowAxis', 'window', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00009'),
('ANAY', 'AbstractMarking', 'axisCode', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00007'),
('ANAY', 'AbstractMarking', 'position', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00005'),
('ANAY', 'AbstractMarking', 'unit', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00006'),
('ANAY', 'AbstractPanel', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00017'),
('ANAY', 'AbstractPanel', 'moduleCode', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00019'),
('ANAY', 'AbstractPanel', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00018'),
('ANAY', 'ActionLink', 'panel', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00012'),
('ANAY', 'ActionLink', 'role', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00007'),
('ANAY', 'ActionLink', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00006'),
('ANAY', 'ActionLink', 'target', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00014'),
('ANAY', 'ActionLinkParameter', 'actionLink', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00004'),
('ANAY', 'ActionLinkParameter', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00008'),
('ANAY', 'AxisGroup', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00019'),
('ANAY', 'AxisGroup', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00012'),
('ANAY', 'Layout', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00011'),
('ANAY', 'Layout', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00003'),
('ANAY', 'LayoutParameter', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00001'),
('ANAY', 'LayoutParameter', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00012'),
('ANAY', 'Mark', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00013'),
('ANAY', 'Mark', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00016'),
('ANAY', 'MarkDim', 'mark', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00014'),
('ANAY', 'MarkDim', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00021'),
('ANAY', 'PanelAxis', 'label', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00025'),
('ANAY', 'PanelAxis', 'windowPanel', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00008'),
('ANAY', 'PanelGroupParameter', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00029'),
('ANAY', 'PanelGroupParameter', 'windowPanelGroup', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00001'),
('ANAY', 'PanelParameter', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00001'),
('ANAY', 'PanelParameter', 'panel', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00016'),
('ANAY', 'Ruler', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00015'),
('ANAY', 'Ruler', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00003'),
('ANAY', 'WindowPanelGroup', 'layout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00021'),
('ANAY', 'WindowPanelGroup', 'serial', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00003'),
('CALC', 'ConstraintStoreData', 'constraintStoreSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00003'),
('CALC', 'Data', 'run', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00016'),
('CALC', 'Data', 'serial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00009'),
('CALC', 'DerivedListData', 'derivedDataListSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00018'),
('CALC', 'EnergyTermParameter', 'code', 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:48_00005'),
('CALC', 'EnergyTermParameter', 'energyTerm', 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:48_00003'),
('CALC', 'EnergyTermParameter', 'value', 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:48_00006'),
('CALC', 'MeasurementListData', 'measurementListSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00023'),
('CALC', 'MolResidueData', 'chainCodes', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:00:04_00003'),
('CALC', 'MolResidueData', 'molSystemCode', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00009'),
('CALC', 'MolSystemData', 'molSystemCode', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00019'),
('CALC', 'NmrCalcStore', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:00_00001'),
('CALC', 'NmrCalcStore', 'name', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00007'),
('CALC', 'NmrCalcStore', 'nmrProjectName', 'www.ccpn.ac.uk_Fogh_2010-05-10-13:46:58_00001'),
('CALC', 'Run', 'nmrCalcStore', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00005'),
('CALC', 'Run', 'serial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00022'),
('CALC', 'RunParameter', 'run', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00018'),
('CALC', 'RunParameter', 'serial', 'www.ccpn.ac.uk_Fogh_2009-06-04-16:11:57_00001'),
('CALC', 'StructureEnsembleData', 'ensembleId', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00044'),
('CALC', 'StructureEnsembleData', 'molSystemCode', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00043'),
('CALC', 'TensorData', 'orientationMatrix', 'www.ccpn.ac.uk_Fogh_2010-05-17-12:06:21_00001'),
('CALC', 'ViolationListData', 'constraintStoreSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00037'),
('CALC', 'ViolationListData', 'violationListSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00038'),
('COOR', 'Atom', 'index', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:27_00006'),
('COOR', 'DataMatrix', 'name', 'www.ccpn.ac.uk_Fogh_2011-03-30-17:56:39_00012'),
('COOR', 'DataMatrix', 'structureEnsemble', 'www.ccpn.ac.uk_Fogh_2011-03-30-17:56:39_00030'),
('COOR', 'Model', 'index', 'www.ccpn.ac.uk_Fogh_2011-03-30-17:56:39_00013'),
('DLOC', 'Component', 'serial', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00001'),
('DLOC', 'Component', 'shapeMatrix', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00006'),
('DLOC', 'ShapeMatrix', 'isResolved', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00010'),
('DLOC', 'ShapeMatrix', 'numShapes', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00008'),
('NMRR', 'ChemAtomNmrDistrib', 'chemCompNmrRef', 'www.ccpn.ac.uk_Fogh_2010-05-14-17:17:46_00003'),
('NMRR', 'ChemAtomNmrDistrib', 'refAtoms', 'www.ccpn.ac.uk_Fogh_2010-05-14-17:17:46_00001'),
('NMRR', 'ChemAtomNmrDistrib', 'refPoints', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:27_00003'),
('NMRR', 'ChemAtomNmrDistrib', 'refValues', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:27_00004'),
('NMRR', 'ChemAtomNmrDistrib', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:27_00001'),
('NMRR', 'ChemAtomNmrDistrib', 'valuesPerPoint', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:40:44_00003'),
('NMRS', 'ExperimentHit', 'trialExperiment', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00009'),
('NMRS', 'ExperimentHit', 'trialHit', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00020'),
('NMRS', 'ExperimentWeight', 'expCode', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00001'),
('NMRS', 'ExperimentWeight', 'trialSet', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00005'),
('NMRS', 'Mixture', 'nmrScreen', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00026'),
('NMRS', 'Mixture', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00016'),
('NMRS', 'MixtureComponent', 'componentName', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00004'),
('NMRS', 'MixtureComponent', 'componentType', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00005'),
('NMRS', 'MixtureComponent', 'mixture', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00012'),
('NMRS', 'MixtureComponent', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00024'),
('NMRS', 'NmrScreen', 'code', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00034'),
('NMRS', 'NmrScreen', 'memopsRoot', 'ccpn_automatic_ccp.nmr.NmrScreen.NmrScreen.memopsRoot'),
('NMRS', 'RegionWeight', 'maxPpm', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00005'),
('NMRS', 'RegionWeight', 'minPpm', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00004'),
('NMRS', 'RegionWeight', 'serial', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00003'),
('NMRS', 'RegionWeight', 'trialSet', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00010'),
('NMRS', 'Trial', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00007'),
('NMRS', 'Trial', 'trialSet', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00008'),
('NMRS', 'TrialExperiment', 'expCode', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00007'),
('NMRS', 'TrialExperiment', 'mixture', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00001'),
('NMRS', 'TrialExperiment', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00013'),
('NMRS', 'TrialGroup', 'nmrScreen', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00030'),
('NMRS', 'TrialGroup', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00016'),
('NMRS', 'TrialHit', 'componentName', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00011'),
('NMRS', 'TrialHit', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00022'),
('NMRS', 'TrialHit', 'trial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00001'),
('NMRS', 'TrialSet', 'nmrScreen', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00001'),
('NMRS', 'TrialSet', 'serial', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00012'),
('REFD', 'RefDataStore', 'memopsRoot', 'ccpn_automatic_ccp.lims.RefData.RefDataStore.memopsRoot'),
('REFD', 'RefDataStore', 'refSampleComponentStore', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00019'),
('REFD', 'RefNmrSpectrum', 'componentName', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00021'),
('REFD', 'RefNmrSpectrum', 'refDataStore', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00016'),
('REFD', 'RefNmrSpectrum', 'serial', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00051'),
('REFD', 'RefNmrSpectrum', 'solvent', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00055'),
('STER', 'RefStereochemistry', 'numCoreAtoms', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:26_00001'),
('TEMP', 'FloatMatrixObject', 'shape', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:02:26_00016'),
('WMS', 'Project', 'location', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00006'),
('WMS', 'Project', 'name', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00005'),
('WMS', 'Project', 'wmsSegment', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00030'),
('WMS', 'ProjectVersion', 'creationTime', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00016'),
('WMS', 'ProjectVersion', 'project', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00003'),
('WMS', 'ProjectVersion', 'versionTag', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00014'),
('WMS', 'RawFile', 'location', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00019'),
('WMS', 'RawFile', 'path', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00018'),
('WMS', 'RawFile', 'project', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:56_00001'),
('WMS', 'RawFile', 'serial', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00017'),
('WMS', 'Task', 'serial', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00001'),
('WMS', 'Task', 'wmsSegment', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00032'),
('WMS', 'WmsSegment', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:54_00001'),
('WMS', 'WmsSegment', 'name', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00034'),
('WMSP', 'EnumValue', 'interfaceParameter', 'www.ccpn.ac.uk_Fogh_2011-05-26-12:12:14_00002'),
('WMSP', 'EnumValue', 'serial', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00001'),
('WMSP', 'InterfaceLabel', 'label', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:20_00002'),
('WMSP', 'InterfaceLabel', 'protocolInterface', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:21_00001'),
('WMSP', 'InterfaceLabel', 'serial', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:20_00001'),
('WMSP', 'InterfaceParameter', 'protocolInterface', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00022'),
('WMSP', 'InterfaceParameter', 'protocolParameter', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00030'),
('WMSP', 'InterfaceParameter', 'serial', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00003'),
('WMSP', 'ProtocolAccess', 'localUserName', 'www.ccpn.ac.uk_Fogh_2010-05-10-13:46:55_00003'),
('WMSP', 'ProtocolAccess', 'protocolService', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00010'),
('WMSP', 'ProtocolAccess', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00016'),
('WMSP', 'ProtocolAccess', 'userName', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00017'),
('WMSP', 'ProtocolInterface', 'name', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00026'),
('WMSP', 'ProtocolInterface', 'title', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00027'),
('WMSP', 'ProtocolInterface', 'wmsProtocol', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00035'),
('WMSP', 'ProtocolParameter', 'paramType', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00001'),
('WMSP', 'ProtocolParameter', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00019'),
('WMSP', 'ProtocolParameter', 'wmsProtocol', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00010'),
('WMSP', 'ProtocolService', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00012'),
('WMSP', 'ProtocolService', 'wmsProtocol', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00012'),
('WMSP', 'WmsProtocol', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00061'),
('WMSP', 'WmsProtocol', 'name', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00014'),
('WMSQ', 'AbstractQuery', 'criteria', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00019'),
('WMSQ', 'AbstractQuery', 'date', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00020'),
('WMSQ', 'ProjectQuery', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00025'),
('WMSQ', 'ProjectQuery', 'wmsQueryStore', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00051'),
('WMSQ', 'ProjectResult', 'projectName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00029'),
('WMSQ', 'ProjectResult', 'projectQuery', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00023'),
('WMSQ', 'ProjectResult', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00027'),
('WMSQ', 'ProjectResult', 'wmsSegmentName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00028'),
('WMSQ', 'ProjectVersionQuery', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00034'),
('WMSQ', 'ProjectVersionQuery', 'wmsQueryStore', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00053'),
('WMSQ', 'ProjectVersionResult', 'projectName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00038'),
('WMSQ', 'ProjectVersionResult', 'projectVersionQuery', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00032'),
('WMSQ', 'ProjectVersionResult', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00036'),
('WMSQ', 'ProjectVersionResult', 'versionTag', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00039'),
('WMSQ', 'ProjectVersionResult', 'wmsSegmentName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00037'),
('WMSQ', 'TaskQuery', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00044'),
('WMSQ', 'TaskQuery', 'wmsQueryStore', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00055'),
('WMSQ', 'TaskResult', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00046'),
('WMSQ', 'TaskResult', 'taskQuery', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00042'),
('WMSQ', 'TaskResult', 'taskSerial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00048'),
('WMSQ', 'TaskResult', 'wmsSegmentName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00047'),
('WMSQ', 'WmsQueryStore', 'memopsRoot', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00063'),
('WMSQ', 'WmsQueryStore', 'serial', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00057'),
]
# Packages, classElements and AbstractDataTypes added in new model
# Optional, i.e. excluding mandatory classElements given above
# (prefix, typeName, elemName, newGuid)
newElements = [
('ACCO', 'User', 'isSuperuser', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00060'),
('ACCO', 'User', 'passwordHashed', 'www.ccpn.ac.uk_Fogh_2009-08-19-17:31:11_00005'),
('AFFI', 'Organisation', 'province', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:21:00_00001'),
('AFFI', 'PersonInGroup', 'photo', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:20_00001'),
('ANA3', None, None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00001'),
('ANA3', 'AnalysisDataDim', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00001'),
('ANA3', 'AnalysisDataDim', 'assignTolerance', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00005'),
('ANA3', 'AnalysisDataDim', 'axisMappings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00004'),
('ANA3', 'AnalysisDataDim', 'chemShiftWeight', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00006'),
('ANA3', 'AnalysisDataDim', 'noeTolerance', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00007'),
('ANA3', 'AnalysisDataDim', 'peakFindBoxwidth', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00008'),
('ANA3', 'AnalysisDataDim', 'peakFindMinLineWIdth', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00009'),
('ANA3', 'AnalysisDataDim', 'refSamplePlane', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00010'),
('ANA3', 'AnalysisLayout', None, 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:29_00001'),
('ANA3', 'AnalysisLayout', 'currentChains', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00008'),
('ANA3', 'AnalysisLayout', 'currentMolSystem', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00006'),
('ANA3', 'AnalysisLayout', 'currentPeakLists', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00018'),
('ANA3', 'AnalysisLayout', 'currentPeaks', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00016'),
('ANA3', 'AnalysisLayout', 'currentResonances', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00012'),
('ANA3', 'AnalysisLayout', 'currentRestraintSet', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00014'),
('ANA3', 'AnalysisLayout', 'currentSpectra', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00020'),
('ANA3', 'AnalysisLayout', 'currentSpinSystems', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00010'),
('ANA3', 'AnalysisLayout', 'currentStructures', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00004'),
('ANA3', 'AnalysisPanel', None, 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:29_00002'),
('ANA3', 'AnalysisPanel', 'currentChains', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00036'),
('ANA3', 'AnalysisPanel', 'currentMolSystem', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00038'),
('ANA3', 'AnalysisPanel', 'currentPeakLists', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00026'),
('ANA3', 'AnalysisPanel', 'currentPeaks', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00028'),
('ANA3', 'AnalysisPanel', 'currentResonances', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00032'),
('ANA3', 'AnalysisPanel', 'currentRestraintSet', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00030'),
('ANA3', 'AnalysisPanel', 'currentSpectra', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00024'),
('ANA3', 'AnalysisPanel', 'currentSpinSystems', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00034'),
('ANA3', 'AnalysisPanel', 'currentStructures', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00040'),
('ANA3', 'AnalysisPeakList', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00002'),
('ANA3', 'AnalysisPeakList', 'noeIntensityType', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00013'),
('ANA3', 'AnalysisPeakList', 'noeRefDistance', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00014'),
('ANA3', 'AnalysisPeakList', 'noeRefIntensity', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00015'),
('ANA3', 'AnalysisPeakList', 'peakListViews', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00012'),
('ANA3', 'AnalysisPeakList', 'symbolColor', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00017'),
('ANA3', 'AnalysisPeakList', 'symbolStyle', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00016'),
('ANA3', 'AnalysisPeakList', 'textColor', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00018'),
('ANA3', 'AnalysisProjectV3', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00002'),
('ANA3', 'AnalysisProjectV3', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00042'),
('ANA3', 'AnalysisProjectV3', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00044'),
('ANA3', 'AnalysisProjectV3', 'analysisSpectra', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00026'),
('ANA3', 'AnalysisProjectV3', 'annotationSettings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00020'),
('ANA3', 'AnalysisProjectV3', 'autoBackupFreq', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00031'),
('ANA3', 'AnalysisProjectV3', 'chainMappings', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00002'),
('ANA3', 'AnalysisProjectV3', 'contourToUnaliased', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00032'),
('ANA3', 'AnalysisProjectV3', 'currentAnnotationSetting', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00027'),
('ANA3', 'AnalysisProjectV3', 'currentPeakSetting', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00029'),
('ANA3', 'AnalysisProjectV3', 'currentPrintSetting', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00028'),
('ANA3', 'AnalysisProjectV3', 'details', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:40:40_00002'),
('ANA3', 'AnalysisProjectV3', 'globalContourScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00033'),
('ANA3', 'AnalysisProjectV3', 'peakSettings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00022'),
('ANA3', 'AnalysisProjectV3', 'printSettings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00024'),
('ANA3', 'AnalysisSpectrum', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00006'),
('ANA3', 'AnalysisSpectrum', 'analysisDataDims', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00035'),
('ANA3', 'AnalysisSpectrum', 'analysisPeakLists', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00037'),
('ANA3', 'AnalysisSpectrum', 'contourDir', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00049'),
('ANA3', 'AnalysisSpectrum', 'font', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00050'),
('ANA3', 'AnalysisSpectrum', 'negColors', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00051'),
('ANA3', 'AnalysisSpectrum', 'negLevelBase', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00041'),
('ANA3', 'AnalysisSpectrum', 'negLevelChanger', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00042'),
('ANA3', 'AnalysisSpectrum', 'negLevelMode', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00043'),
('ANA3', 'AnalysisSpectrum', 'negLevelNum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00044'),
('ANA3', 'AnalysisSpectrum', 'pickThreshold', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00040'),
('ANA3', 'AnalysisSpectrum', 'posColors', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00001'),
('ANA3', 'AnalysisSpectrum', 'posLevelBase', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00045'),
('ANA3', 'AnalysisSpectrum', 'posLevelChanger', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00046'),
('ANA3', 'AnalysisSpectrum', 'posLevelMode', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00047'),
('ANA3', 'AnalysisSpectrum', 'posLevelNum', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00048'),
('ANA3', 'AnalysisSpectrum', 'rank', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00002'),
('ANA3', 'AnalysisSpectrum', 'shortcut', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00003'),
('ANA3', 'AnalysisSpectrum', 'sliceColor', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00004'),
('ANA3', 'AnalysisSpectrum', 'spectrumMappings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00049'),
('ANA3', 'AnalysisSpectrum', 'storedContours', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00039'),
('ANA3', 'AnalysisSpectrum', 'useBoundingBox', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00005'),
('ANA3', 'AnalysisSpectrum', 'useCompression', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00008'),
('ANA3', 'AnalysisSpectrum', 'usePeakArrow', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00006'),
('ANA3', 'AnalysisSpectrum', 'usePrecalculated', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00007'),
('ANA3', 'AnnotationSetting', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00003'),
('ANA3', 'AnnotationSetting', 'meritBad', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00007'),
('ANA3', 'AnnotationSetting', 'meritGood', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00008'),
('ANA3', 'AnnotationSetting', 'meritMediocre', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00009'),
('ANA3', 'AnnotationSetting', 'useAtom', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00004'),
('ANA3', 'AnnotationSetting', 'useChain', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00002'),
('ANA3', 'AnnotationSetting', 'useDetail', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00005'),
('ANA3', 'AnnotationSetting', 'useMerit', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00006'),
('ANA3', 'AnnotationSetting', 'useMolSys', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00001'),
('ANA3', 'AnnotationSetting', 'useNumbersFirst', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00011'),
('ANA3', 'AnnotationSetting', 'useOneLetterCodes', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:01_00012'),
('ANA3', 'AnnotationSetting', 'useResidue', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00003'),
('ANA3', 'AtomSetMapping', None, 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:56_00003'),
('ANA3', 'AtomSetMapping', 'atomSetMappings', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00008'),
('ANA3', 'AtomSetMapping', 'atomSetSerials', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00011'),
('ANA3', 'AtomSetMapping', 'chemAtomSet', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00005'),
('ANA3', 'AtomSetMapping', 'mappingType', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00013'),
('ANA3', 'AtomSetMapping', 'resonanceSerials', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00012'),
('ANA3', 'AxisMapping', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00009'),
('ANA3', 'ChainMapping', None, 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:56_00001'),
('ANA3', 'ChainMapping', 'residueMappings', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00018'),
('ANA3', 'PeakListView', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00025'),
('ANA3', 'PeakListView', 'isAnnotationDrawn', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00012'),
('ANA3', 'PeakListView', 'isSymbolDrawn', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00013'),
('ANA3', 'PeakSetting', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00004'),
('ANA3', 'PeakSetting', 'drawMethod', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00016'),
('ANA3', 'PeakSetting', 'intensityScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00025'),
('ANA3', 'PeakSetting', 'pickBuffer', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00018'),
('ANA3', 'PeakSetting', 'pickDrop', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00019'),
('ANA3', 'PeakSetting', 'pickMax', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00020'),
('ANA3', 'PeakSetting', 'pickMin', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00021'),
('ANA3', 'PeakSetting', 'pickScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00023'),
('ANA3', 'PeakSetting', 'pickThickness', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00024'),
('ANA3', 'PeakSetting', 'pickVolumeMethod', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00017'),
('ANA3', 'PeakSetting', 'pixelSize', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00027'),
('ANA3', 'PeakSetting', 'volumeScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00026'),
('ANA3', 'PrintSetting', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:14_00005'),
('ANA3', 'PrintSetting', 'fileName', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00031'),
('ANA3', 'PrintSetting', 'font', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00030'),
('ANA3', 'PrintSetting', 'inColor', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00032'),
('ANA3', 'PrintSetting', 'orientation', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00033'),
('ANA3', 'PrintSetting', 'otherHeight', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00034'),
('ANA3', 'PrintSetting', 'otherUnit', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00036'),
('ANA3', 'PrintSetting', 'otherWidth', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00035'),
('ANA3', 'PrintSetting', 'outputFormat', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00037'),
('ANA3', 'PrintSetting', 'paperSize', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00038'),
('ANA3', 'PrintSetting', 'scaling', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00039'),
('ANA3', 'PrintSetting', 'showFileName', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00041'),
('ANA3', 'PrintSetting', 'showsDateTime', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00040'),
('ANA3', 'PrintSetting', 'tickBottom', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00045'),
('ANA3', 'PrintSetting', 'tickInside', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00043'),
('ANA3', 'PrintSetting', 'tickLeft', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00046'),
('ANA3', 'PrintSetting', 'tickOurside', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00042'),
('ANA3', 'PrintSetting', 'tickRight', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00047'),
('ANA3', 'PrintSetting', 'tickTop', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00044'),
('ANA3', 'PrintSetting', 'title', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00048'),
('ANA3', 'ResidueMapping', None, 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:56_00002'),
('ANA3', 'ResidueMapping', 'atomSetMappings', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00024'),
('ANA3', 'SpectrumMapping', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00008'),
('ANA3', 'SpectrumMapping', 'axisMappings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00003'),
('ANA3', 'SpectrumMapping', 'isInToolbar', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00006'),
('ANA3', 'SpectrumMapping', 'spectrumViews', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00005'),
('ANA3', 'SpectrumView', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00024'),
('ANA3', 'SpectrumView', 'intensityScaling', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:11_00001'),
('ANA3', 'SpectrumView', 'isInToolbar', 'www.ccpn.ac.uk_Fogh_2012-05-14-13:21:11_00001'),
('ANA3', 'SpectrumView', 'isNegVisible', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00009'),
('ANA3', 'SpectrumView', 'isPosVisible', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00010'),
('ANA3', 'SpectrumView', 'peakListViews', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:32_00008'),
('ANA3', 'StoredContour', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00003'),
('ANAL', 'SpectrumWindow', 'isZeroLineShown', 'www.ccpn.ac.uk_Fogh_2013-05-03-11:50:01_00001'),
('ANAL', 'SpectrumWindow', 'spectrumWindowPanes', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00002'),
('ANAL', 'SpectrumWindowPane', None, 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:49_00001'),
('ANAL', 'SpectrumWindowPane', 'axisPanels', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00008'),
('ANAL', 'SpectrumWindowPane', 'name', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00010'),
('ANAL', 'SpectrumWindowPane', 'slicePanels', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00006'),
('ANAL', 'SpectrumWindowPane', 'spectrumWindowViews', 'www.ccpn.ac.uk_Fogh_2008-09-24-15:20:52_00004'),
('ANAL', 'SpectrumWindowView', 'isContourLineVisible', 'www.ccpn.ac.uk_Fogh_2013-05-07-17:07:06_00001'),
('ANAP', 'AnalysisProfile', 'sendBugReports', 'www.ccpn.ac.uk_Fogh_2010-11-17-16:21:37_00004'),
('ANAP', 'AnalysisProfile', 'userEmail', 'www.ccpn.ac.uk_Fogh_2010-11-17-16:21:37_00003'),
('ANAP', 'AnalysisProfile', 'userName', 'www.ccpn.ac.uk_Fogh_2010-11-17-16:21:37_00001'),
('ANAP', 'AnalysisProfile', 'userOrganisation', 'www.ccpn.ac.uk_Fogh_2010-11-17-16:21:37_00002'),
('ANAW', None, None, 'www.ccpn.ac.uk_Fogh_2011-11-30-10:49:23_00001'),
('ANAW', 'AbstractModule', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00004'),
('ANAW', 'AbstractModule', 'details', 'www.ccpn.ac.uk_Fogh_2012-09-10-14:34:37_00001'),
('ANAW', 'AbstractModule', 'helpUrl', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00010'),
('ANAW', 'AbstractModule', 'keywords', 'www.ccpn.ac.uk_Fogh_2012-09-10-14:34:37_00002'),
('ANAW', 'AbstractModule', 'name', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00009'),
('ANAW', 'AbstractModule', 'parameters', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00007'),
('ANAW', 'AnalysisWindowStore', None, 'www.ccpn.ac.uk_Fogh_2011-11-30-10:49:23_00002'),
('ANAW', 'AnalysisWindowStore', 'axisTypes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00014'),
('ANAW', 'AnalysisWindowStore', 'axisUnits', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00016'),
('ANAW', 'AnalysisWindowStore', 'modules', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00012'),
('ANAW', 'AxisType', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00010'),
('ANAW', 'AxisType', 'axisUnits', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00021'),
('ANAW', 'AxisType', 'diagonalExclusion', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00023'),
('ANAW', 'AxisType', 'isSampled', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00024'),
('ANAW', 'AxisType', 'isotopeCodes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00025'),
('ANAW', 'AxisType', 'measurementType', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00026'),
('ANAW', 'AxisType', 'numDecimals', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00027'),
('ANAW', 'AxisType', 'region', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00028'),
('ANAW', 'AxisType', 'windowAxes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00019'),
('ANAW', 'AxisUnit', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00026'),
('ANAW', 'AxisUnit', 'axisTypes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00020'),
('ANAW', 'AxisUnit', 'isBackwards', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00032'),
('ANAW', 'AxisUnit', 'windowAxes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:00:37_00030'),
('ANAW', 'Module', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00005'),
('ANAW', 'Module', 'defaultGridCell', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00001'),
('ANAW', 'Module', 'defaultGridSpan', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00002'),
('ANAW', 'Module', 'isCollapsible', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00006'),
('ANAW', 'Module', 'isModal', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00003'),
('ANAW', 'Module', 'isPopout', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00004'),
('ANAW', 'Module', 'isWizard', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00005'),
('ANAW', 'ModuleParameter', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00027'),
('ANAW', 'ModuleParameter', 'description', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00008'),
('ANAW', 'Window', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00006'),
('ANAW', 'Window', 'axes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00010'),
('ANAW', 'Window', 'spectrumMappings', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00012'),
('ANAW', 'WindowAxis', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00007'),
('ANAW', 'WindowAxis', 'axisMappings', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:55:31_00010'),
('ANAW', 'WindowAxis', 'boundAxis', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:01:45_00014'),
('ANAY', None, None, 'www.ccpn.ac.uk_Fogh_2011-11-30-10:49:23_00003'),
('ANAY', 'AbstractMarking', None, 'www.ccpn.ac.uk_Fogh_2011-11-30-10:49:23_00004'),
('ANAY', 'AbstractPanel', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00015'),
('ANAY', 'AbstractPanel', 'actionLinks', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00013'),
('ANAY', 'AbstractPanel', 'gridCell', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00020'),
('ANAY', 'AbstractPanel', 'gridSpan', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00021'),
('ANAY', 'AbstractPanel', 'isCollapsed', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00001'),
('ANAY', 'AbstractPanel', 'layoutArea', 'www.ccpn.ac.uk_Fogh_2012-08-16-17:30:29_00001'),
('ANAY', 'AbstractPanel', 'listenLinks', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00015'),
('ANAY', 'AbstractPanel', 'name', 'www.ccpn.ac.uk_Fogh_2012-05-03-14:05:48_00001'),
('ANAY', 'AbstractPanel', 'parameters', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:23_00017'),
('ANAY', 'AbstractPanel', 'rank', 'www.ccpn.ac.uk_Fogh_2012-04-18-15:31:21_00001'),
('ANAY', 'ActionLink', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00016'),
('ANAY', 'ActionLink', 'parameters', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00005'),
('ANAY', 'ActionLinkParameter', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00028'),
('ANAY', 'ActionLinkParameter', 'description', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00009'),
('ANAY', 'AxisGroup', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00020'),
('ANAY', 'AxisGroup', 'panelAxes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00011'),
('ANAY', 'Layout', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00011'),
('ANAY', 'Layout', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00001'),
('ANAY', 'Layout', 'axisGroups', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00020'),
('ANAY', 'Layout', 'details', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00004'),
('ANAY', 'Layout', 'helpUrl', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00006'),
('ANAY', 'Layout', 'isActive', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00007'),
('ANAY', 'Layout', 'isDefault', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00008'),
('ANAY', 'Layout', 'keywords', 'www.ccpn.ac.uk_Fogh_2012-05-14-13:21:09_00001'),
('ANAY', 'Layout', 'marks', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00014'),
('ANAY', 'Layout', 'panels', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00018'),
('ANAY', 'Layout', 'parameters', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00002'),
('ANAY', 'Layout', 'rank', 'www.ccpn.ac.uk_Fogh_2012-09-10-14:34:35_00001'),
('ANAY', 'Layout', 'rulers', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00016'),
('ANAY', 'Layout', 'shortcut', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00005'),
('ANAY', 'Layout', 'showWindowDepth', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00011'),
('ANAY', 'Layout', 'showWindowLabels', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00009'),
('ANAY', 'Layout', 'showWindowMidpoint', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00010'),
('ANAY', 'Layout', 'windowPanelGroups', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00022'),
('ANAY', 'LayoutArea', None, 'www.ccpn.ac.uk_Fogh_2012-08-16-17:30:27_00002'),
('ANAY', 'LayoutParameter', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00030'),
('ANAY', 'LayoutParameter', 'description', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00013'),
('ANAY', 'Mark', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00012'),
('ANAY', 'Mark', 'color', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00017'),
('ANAY', 'Mark', 'dashLength', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00018'),
('ANAY', 'Mark', 'gapLength', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00019'),
('ANAY', 'Mark', 'lineWidth', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00020'),
('ANAY', 'Mark', 'markDims', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00015'),
('ANAY', 'MarkDim', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00013'),
('ANAY', 'ModulePanel', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00018'),
('ANAY', 'ModulePanel', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00021'),
('ANAY', 'PanelAxis', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00019'),
('ANAY', 'PanelAxis', 'axisGroup', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:02:57_00010'),
('ANAY', 'PanelAxis', 'region', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00026'),
('ANAY', 'PanelAxis', 'showScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00027'),
('ANAY', 'PanelGroupParameter', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00031'),
('ANAY', 'PanelGroupParameter', 'description', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:43_00030'),
('ANAY', 'PanelParameter', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00029'),
('ANAY', 'PanelParameter', 'description', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00002'),
('ANAY', 'Ruler', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00014'),
('ANAY', 'Ruler', 'color', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00006'),
('ANAY', 'Ruler', 'dashLength', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00007'),
('ANAY', 'Ruler', 'lineWidth', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00004'),
('ANAY', 'SpectrumSharing', None, 'www.ccpn.ac.uk_Fogh_2012-08-16-17:30:27_00001'),
('ANAY', 'WindowPanel', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00017'),
('ANAY', 'WindowPanel', 'panelAxes', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00009'),
('ANAY', 'WindowPanel', 'showCouplings', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00016'),
('ANAY', 'WindowPanel', 'showMultiplets', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00015'),
('ANAY', 'WindowPanel', 'showPosition', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00017'),
('ANAY', 'WindowPanel', 'showStructure', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00014'),
('ANAY', 'WindowPanel', 'spectrumViews', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00011'),
('ANAY', 'WindowPanel', 'useMultiplePeakLists', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00013'),
('ANAY', 'WindowPanel', 'windowPanelGroup', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:12_00002'),
('ANAY', 'WindowPanel1d', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00023'),
('ANAY', 'WindowPanel1d', 'labelAngle', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:12_00001'),
('ANAY', 'WindowPanel1d', 'showAsStack', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00021'),
('ANAY', 'WindowPanel1d', 'showIntegrals', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00020'),
('ANAY', 'WindowPanel1d', 'showPeakPickLevel', 'www.ccpn.ac.uk_Fogh_2012-09-17-10:38:43_00001'),
('ANAY', 'WindowPanel1d', 'stackOffset', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00022'),
('ANAY', 'WindowPanel1d', 'useAutoScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00019'),
('ANAY', 'WindowPanel1d', 'valueScale', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:03:44_00018'),
('ANAY', 'WindowPanelGroup', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00021'),
('ANAY', 'WindowPanelGroup', 'gridCell', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:34:58_00002'),
('ANAY', 'WindowPanelGroup', 'gridSpan', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:40:40_00001'),
('ANAY', 'WindowPanelGroup', 'isGridGroup', 'www.ccpn.ac.uk_Fogh_2012-04-13-13:34:58_00001'),
('ANAY', 'WindowPanelGroup', 'parameters', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00002'),
('ANAY', 'WindowPanelGroup', 'spectrumSharing', 'www.ccpn.ac.uk_Fogh_2012-08-16-17:30:29_00002'),
('ANAY', 'WindowPanelGroup', 'windowPanels', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:12_00003'),
('ANAY', 'WindowPanelNd', None, 'www.ccpn.ac.uk_Fogh_2011-11-16-17:07:15_00022'),
('ANAY', 'WindowPanelNd', 'aspectRatio', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00005'),
('ANAY', 'WindowPanelNd', 'showCrosshairTrace', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00006'),
('ANPR', 'AnnealProtocol', 'applicationVersion', 'www.ccpn.ac.uk_Fogh_2010-05-18-10:52:09_00002'),
('ANPR', 'AnnealProtocol', 'methodStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-18-10:52:09_00001'),
('CALC', None, None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00001'),
('CALC', 'ConstraintStoreData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00005'),
('CALC', 'ConstraintStoreData', 'constraintListSerials', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00004'),
('CALC', 'Data', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00006'),
('CALC', 'Data', 'details', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00013'),
('CALC', 'Data', 'parameterGroup', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:23_00001'),
('CALC', 'Data', 'runParameters', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00027'),
('CALC', 'DerivedListData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00004'),
('CALC', 'EnergyTerm', None, 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:44_00001'),
('CALC', 'EnergyTerm', 'annealEnergyTermSerial', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00003'),
('CALC', 'EnergyTerm', 'energyTermParameters', 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:48_00004'),
('CALC', 'EnergyTermParameter', None, 'www.ccpn.ac.uk_Fogh_2010-05-18-17:35:44_00002'),
('CALC', 'ExternalData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00002'),
('CALC', 'ExternalData', 'dataLocationStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-11-18:03:21_00001'),
('CALC', 'ExternalData', 'dataStoreSerial', 'www.ccpn.ac.uk_Fogh_2010-05-11-18:28:30_00001'),
('CALC', 'FloatMatrixData', None, 'www.ccpn.ac.uk_Fogh_2011-04-05-18:08:58_00001'),
('CALC', 'IoRole', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00007'),
('CALC', 'MeasurementListData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00003'),
('CALC', 'MolResidueData', None, 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:11_00003'),
('CALC', 'MolResidueData', 'residueSeqIds', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00011'),
('CALC', 'MolSystemData', None, 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:11_00002'),
('CALC', 'MolSystemData', 'chainCodes', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00020'),
('CALC', 'MolSystemData', 'symmetrySetId', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00021'),
('CALC', 'NmrCalcStore', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00002'),
('CALC', 'NmrCalcStore', 'runs', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00006'),
('CALC', 'NmrCalcStore', 'validationStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-10-13:46:58_00002'),
('CALC', 'ParameterGroup', None, 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:17_00001'),
('CALC', 'ParameterGroup', 'data', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:23_00002'),
('CALC', 'PeakListData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00007'),
('CALC', 'PeakListData', 'dataSourceSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00034'),
('CALC', 'PeakListData', 'experimentSerial', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00033'),
('CALC', 'PeakListData', 'peakListSerial', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00029'),
('CALC', 'Run', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00004'),
('CALC', 'Run', 'affiliationStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00008'),
('CALC', 'Run', 'annealProtocolApp', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00010'),
('CALC', 'Run', 'annealProtocolCode', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00011'),
('CALC', 'Run', 'annealProtocolStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-21-15:54:39_00001'),
('CALC', 'Run', 'data', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00017'),
('CALC', 'Run', 'derivedRuns', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00001'),
('CALC', 'Run', 'details', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00024'),
('CALC', 'Run', 'masterRun', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00002'),
('CALC', 'Run', 'methodStoreName', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00004'),
('CALC', 'Run', 'operator', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00041'),
('CALC', 'Run', 'operatorSerial', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00009'),
('CALC', 'Run', 'runParameters', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00019'),
('CALC', 'Run', 'softwareName', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00005'),
('CALC', 'Run', 'softwareVersion', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00006'),
('CALC', 'Run', 'status', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00023'),
('CALC', 'Run', 'structureGenerationSerial', 'www.ccpn.ac.uk_Fogh_2010-05-11-15:26:01_00007'),
('CALC', 'Run', 'wmsProtocolName', 'www.ccpn.ac.uk_Fogh_2012-06-04-14:36:41_00007'),
('CALC', 'RunIo', None, 'www.ccpn.ac.uk_Fogh_2010-05-05-14:19:57_00001'),
('CALC', 'RunIo', 'code', 'www.ccpn.ac.uk_Fogh_2010-05-05-14:35:56_00001'),
('CALC', 'RunIo', 'ioRole', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:03_00011'),
('CALC', 'RunIo', 'name', 'www.ccpn.ac.uk_Fogh_2010-05-18-13:57:23_00001'),
('CALC', 'RunParameter', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00001'),
('CALC', 'RunParameter', 'data', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:04_00028'),
('CALC', 'SpectrumData', None, 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:11_00005'),
('CALC', 'SpectrumData', 'dataSourceSerial', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00034'),
('CALC', 'SpectrumData', 'experimentSerial', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00033'),
('CALC', 'SpinSystemData', None, 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:11_00001'),
('CALC', 'SpinSystemData', 'resonanceGroupSerials', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00038'),
('CALC', 'Status', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:57_00005'),
('CALC', 'StructureEnsembleData', None, 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:11_00004'),
('CALC', 'StructureEnsembleData', 'modelSerials', 'www.ccpn.ac.uk_Fogh_2009-05-20-16:32:16_00045'),
('CALC', 'TensorData', None, 'www.ccpn.ac.uk_Fogh_2010-05-17-12:06:16_00001'),
('CALC', 'ViolationListData', None, 'www.ccpn.ac.uk_Fogh_2009-04-16-16:23:58_00006'),
('CHEM', 'AbstractChemAtom', 'coreStereochemistries', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:27_00001'),
('CHEM', 'Stereochemistry', 'coreAtoms', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:27_00002'),
('COOR', 'Atom', 'altLocationCode', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:27_00005'),
('COOR', 'DataMatrix', None, 'www.ccpn.ac.uk_Fogh_2011-03-30-17:55:16_00001'),
('COOR', 'DataMatrix', 'details', 'www.ccpn.ac.uk_Fogh_2011-04-06-10:33:05_00002'),
('COOR', 'DataMatrix', 'unit', 'www.ccpn.ac.uk_Fogh_2011-04-06-10:33:05_00001'),
('COOR', 'EnsembleDataNames', None, 'www.ccpn.ac.uk_Fogh_2011-04-05-18:06:52_00001'),
('COOR', 'StructureEnsemble', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00003'),
('COOR', 'StructureEnsemble', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00039'),
('COOR', 'StructureEnsemble', 'dataMatrices', 'www.ccpn.ac.uk_Fogh_2011-03-30-17:56:39_00031'),
('COOR', 'StructureEnsemble', 'details', 'www.ccpn.ac.uk_Fogh_2009-01-21-15:56:25_00001'),
('COOR', 'StructureEnsemble', 'orderedAtoms', 'www.ccpn.ac.uk_Fogh_2011-04-06-17:18:19_00001'),
('COOR', 'StructureEnsemble', 'softwareName', 'www.ccpn.ac.uk_Fogh_2011-04-06-10:33:05_00003'),
('DLOC', 'BlockedBinaryMatrix', 'blockHeaderSize', 'www.ccpn.ac.uk_Fogh_2009-04-23-15:13:31_00001'),
('DLOC', 'Component', None, 'www.ccpn.ac.uk_Fogh_2009-06-19-17:41:34_00001'),
('DLOC', 'Component', 'amplitude', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00002'),
('DLOC', 'Component', 'annotation', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00005'),
('DLOC', 'Component', 'regionId', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00003'),
('DLOC', 'Component', 'status', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00004'),
('DLOC', 'ComponentStatus', None, 'www.ccpn.ac.uk_Fogh_2009-06-19-17:41:34_00002'),
('DLOC', 'MatrixFileType', None, 'www.ccpn.ac.uk_Fogh_2009-06-11-15:00:22_00001'),
('DLOC', 'MimeTypeDataStore', 'photoPersons', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:19_00001'),
('DLOC', 'NumericMatrix', 'fileType', 'www.ccpn.ac.uk_Fogh_2009-06-11-15:00:26_00001'),
('DLOC', 'ShapeMatrix', 'components', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00007'),
('DLOC', 'ShapeMatrix', 'isReconstructable', 'www.ccpn.ac.uk_Fogh_2009-06-19-17:42:00_00009'),
('ENTR', 'RelatedEntry', 'details', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:21:01_00007'),
('HADD', 'HaddockPartner', 'airUpperDistanceLimit', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00001'),
('HADD', 'HaddockPartner', 'semiFlexMode', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00002'),
('HADD', 'Run', 'cnsExecutable', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00004'),
('HADD', 'Run', 'cpuNumber', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00006'),
('HADD', 'Run', 'haddockDir', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00003'),
('HADD', 'Run', 'queueCommand', 'www.ccpn.ac.uk_Fogh_2009-02-20-13:12:00_00005'),
('HADD', 'SemiFlexMode', None, 'www.ccpn.ac.uk_Fogh_2009-02-20-13:11:56_00001'),
('IMPL', 'HexString', None, 'www.ccpn.ac.uk_Fogh_2011-12-02-09:49:50_00001'),
('IMPL', 'MemopsRoot', 'accessControlOn', 'www.ccpn.ac.uk_Fogh_2009-08-19-17:31:11_00004'),
('IMPL', 'MemopsRoot', 'analysisProjectV3s', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00010'),
('IMPL', 'MemopsRoot', 'analysisWindowStores', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00008'),
('IMPL', 'MemopsRoot', 'currentAnalysisProjectV3', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentAnalysisProjectV3'),
('IMPL', 'MemopsRoot', 'currentAnalysisWindowStore', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentAnalysisWindowStore'),
('IMPL', 'MemopsRoot', 'currentGroupName', 'www.ccpn.ac.uk_Fogh_2009-08-19-17:31:11_00003'),
('IMPL', 'MemopsRoot', 'currentLayout', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentLayout'),
('IMPL', 'MemopsRoot', 'currentNmrCalcStore', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentNmrCalcStore'),
('IMPL', 'MemopsRoot', 'currentNmrScreen', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentNmrScreen'),
('IMPL', 'MemopsRoot', 'currentRefDataStore', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentRefDataStore'),
('IMPL', 'MemopsRoot', 'currentWmsProtocol', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentWmsProtocol'),
('IMPL', 'MemopsRoot', 'currentWmsQueryStore', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentWmsQueryStore'),
('IMPL', 'MemopsRoot', 'currentWmsSegment', 'ccpn_automatic_memops.Implementation.MemopsRoot.currentWmsSegment'),
('IMPL', 'MemopsRoot', 'layouts', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:35_00012'),
('IMPL', 'MemopsRoot', 'nmrCalcStores', 'www.ccpn.ac.uk_Fogh_2009-04-16-16:24:00_00002'),
('IMPL', 'MemopsRoot', 'nmrScreens', 'ccpn_automatic_memops.Implementation.MemopsRoot.nmrScreen'),
('IMPL', 'MemopsRoot', 'refDataStores', 'ccpn_automatic_memops.Implementation.MemopsRoot.refDataStore'),
('IMPL', 'MemopsRoot', 'wmsProtocols', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00062'),
('IMPL', 'MemopsRoot', 'wmsQueryStores', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00064'),
('IMPL', 'MemopsRoot', 'wmsSegments', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:54_00002'),
('IMPL', 'RgbaColor', None, 'www.ccpn.ac.uk_Fogh_2011-12-02-09:49:50_00003'),
('IMPL', 'ThreeValueAnswer', None, 'www.ccpn.ac.uk_Fogh_2010-11-17-16:21:33_00001'),
('LMOL', 'LabeledMixture', 'name', 'www.ccpn.ac.uk_Fogh_2010-04-30-17:48:49_00001'),
('MOLE', 'Molecule', 'fragmentDetails', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:20:59_00001'),
('MOLE', 'Molecule', 'mutationDetails', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:20:59_00002'),
('MOLS', 'Atom', 'molSysAtomValidations', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00002'),
('MOLS', 'Chain', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00007'),
('MOLS', 'Chain', 'analysisPanel', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00035'),
('MOLS', 'Chain', 'molSysChainValidations', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00004'),
('MOLS', 'MolSystem', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00005'),
('MOLS', 'MolSystem', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00037'),
('MOLS', 'MolSystem', 'molSystemValidations', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00006'),
('MOLS', 'MolSystem', 'nmrScreens', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00032'),
('MOLS', 'Residue', 'molSysResidueValidations', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00008'),
('NMR', 'AbstractDataDim', 'analysisDataDims', 'www.ccpn.ac.uk_Fogh_2011-11-30-10:53:58_00001'),
('NMR', 'DataSource', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00019'),
('NMR', 'DataSource', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00023'),
('NMR', 'DataSource', 'analysisSpectra', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:36_00002'),
('NMR', 'DataSource', 'refNmrSpectra', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00049'),
('NMR', 'DataSource', 'trialExperiments', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00011'),
('NMR', 'ExpTransfer', 'isDirect', 'www.ccpn.ac.uk_Fogh_2010-04-30-17:48:48_00001'),
('NMR', 'Experiment', 'userExpCode', 'www.ccpn.ac.uk_Fogh_2012-07-23-11:52:27_00001'),
('NMR', 'NmrProject', 'analysisProjectV3', 'www.ccpn.ac.uk_Fogh_2011-12-01-15:00:59_00003'),
('NMR', 'Peak', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00015'),
('NMR', 'Peak', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00027'),
('NMR', 'Peak', 'height', 'www.ccpn.ac.uk_Fogh_2012-04-12-17:58:12_00001'),
('NMR', 'Peak', 'volume', 'www.ccpn.ac.uk_Fogh_2012-04-12-17:58:12_00002'),
('NMR', 'PeakDim', 'realValueImpl', 'www.ccpn.ac.uk_Fogh_2010-04-30-17:48:48_00002'),
('NMR', 'PeakList', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00017'),
('NMR', 'PeakList', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00025'),
('NMR', 'PeakList', 'analysisPeakLists', 'www.ccpn.ac.uk_Fogh_2011-11-30-11:04:36_00004'),
('NMR', 'Resonance', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00011'),
('NMR', 'Resonance', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00031'),
('NMR', 'ResonanceGroup', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00009'),
('NMR', 'ResonanceGroup', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00033'),
('NMR', 'ResonanceGroup', 'clusterCode', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:56:26_00003'),
('NMR', 'ResonanceGroup', 'isActive', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:56:26_00004'),
('NMR', 'T1RhoList', 'tempCalibMethod', 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:18_00001'),
('NMR', 'T1RhoList', 'tempControlMethod', 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:18_00002'),
('NMR', 'T2List', 'tempCalibMethod', 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:18_00003'),
('NMR', 'T2List', 'tempControlMethod', 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:18_00004'),
('NMR', 'TempCalibMethod', None, 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:12_00001'),
('NMR', 'TempControlMethod', None, 'www.ccpn.ac.uk_Fogh_2009-02-05-11:33:12_00002'),
('NMRC', 'FixedResonance', 'covalentlyBound', 'www.ccpn.ac.uk_Fogh_2012-06-25-14:41:56_00001'),
('NMRC', 'NmrConstraintStore', 'analysisLayouts', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00013'),
('NMRC', 'NmrConstraintStore', 'analysisPanels', 'www.ccpn.ac.uk_Fogh_2011-12-02-15:08:31_00029'),
('NMRR', 'ChemAtomNmrDistrib', None, 'www.ccpn.ac.uk_Fogh_2010-05-14-15:28:22_00001'),
('NMRR', 'ChemAtomNmrRef', 'shiftDistributions', 'www.ccpn.ac.uk_Fogh_2010-05-14-17:17:46_00002'),
('NMRR', 'ChemCompNmrRef', 'chemAtomNmrDistribs', 'www.ccpn.ac.uk_Fogh_2010-05-14-17:17:46_00004'),
('NMRS', None, None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00001'),
('NMRS', 'ComponentType', None, 'www.ccpn.ac.uk_Fogh_2012-03-28-17:18:50_00001'),
('NMRS', 'ExpCode', None, 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:06_00001'),
('NMRS', 'ExperimentHit', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00013'),
('NMRS', 'ExperimentHit', 'figOfMerit', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00010'),
('NMRS', 'ExperimentHit', 'meritCode', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00011'),
('NMRS', 'ExperimentHit', 'normalisedChange', 'www.ccpn.ac.uk_Fogh_2012-04-18-15:31:23_00001'),
('NMRS', 'ExperimentWeight', None, 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:44_00001'),
('NMRS', 'ExperimentWeight', 'changeIsPositive', 'www.ccpn.ac.uk_Fogh_2012-07-09-10:23:18_00001'),
('NMRS', 'ExperimentWeight', 'intensityScale', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00003'),
('NMRS', 'ExperimentWeight', 'meritThreshold', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00002'),
('NMRS', 'ExperimentWeight', 'weight', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00004'),
('NMRS', 'Mixture', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00007'),
('NMRS', 'Mixture', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00021'),
('NMRS', 'Mixture', 'mixtureComponents', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00013'),
('NMRS', 'Mixture', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00017'),
('NMRS', 'Mixture', 'sample', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:57_00001'),
('NMRS', 'Mixture', 'trialExperiments', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00002'),
('NMRS', 'Mixture', 'trials', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00015'),
('NMRS', 'MixtureComponent', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00008'),
('NMRS', 'NmrScreen', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00006'),
('NMRS', 'NmrScreen', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00040'),
('NMRS', 'NmrScreen', 'endDate', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00039'),
('NMRS', 'NmrScreen', 'mixtures', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00027'),
('NMRS', 'NmrScreen', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00035'),
('NMRS', 'NmrScreen', 'objective', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00037'),
('NMRS', 'NmrScreen', 'pH', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:16:06_00002'),
('NMRS', 'NmrScreen', 'refDataStoreNames', 'www.ccpn.ac.uk_Fogh_2012-04-12-17:58:11_00001'),
('NMRS', 'NmrScreen', 'sampleType', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00036'),
('NMRS', 'NmrScreen', 'startDate', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00038'),
('NMRS', 'NmrScreen', 'target', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00033'),
('NMRS', 'NmrScreen', 'temperature', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:16:06_00001'),
('NMRS', 'NmrScreen', 'trialGroups', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00031'),
('NMRS', 'NmrScreen', 'trialSets', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00002'),
('NMRS', 'NmrScreen', 'userProtocolCode', 'www.ccpn.ac.uk_Fogh_2012-07-23-11:52:27_00002'),
('NMRS', 'RegionWeight', None, 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:06_00003'),
('NMRS', 'RegionWeight', 'weight', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00006'),
('NMRS', 'Trial', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00009'),
('NMRS', 'Trial', 'date', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00009'),
('NMRS', 'Trial', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00010'),
('NMRS', 'Trial', 'mixtures', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00014'),
('NMRS', 'Trial', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00008'),
('NMRS', 'Trial', 'trialGroups', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00004'),
('NMRS', 'Trial', 'trialHits', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00002'),
('NMRS', 'TrialExperiment', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00012'),
('NMRS', 'TrialExperiment', 'dataSources', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00012'),
('NMRS', 'TrialExperiment', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00015'),
('NMRS', 'TrialExperiment', 'experimentHits', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00008'),
('NMRS', 'TrialExperiment', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00014'),
('NMRS', 'TrialGroup', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00011'),
('NMRS', 'TrialGroup', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00018'),
('NMRS', 'TrialGroup', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00017'),
('NMRS', 'TrialGroup', 'phaseType', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00019'),
('NMRS', 'TrialGroup', 'trials', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00003'),
('NMRS', 'TrialHit', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00010'),
('NMRS', 'TrialHit', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00025'),
('NMRS', 'TrialHit', 'experimentHits', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00021'),
('NMRS', 'TrialHit', 'figOfMerit', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00023'),
('NMRS', 'TrialHit', 'isConfirmed', 'www.ccpn.ac.uk_Fogh_2012-05-18-12:04:57_00001'),
('NMRS', 'TrialHit', 'meritCode', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:56_00024'),
('NMRS', 'TrialHit', 'refNmrSpectra', 'www.ccpn.ac.uk_Fogh_2012-03-29-15:58:32_00001'),
('NMRS', 'TrialSet', None, 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:06_00002'),
('NMRS', 'TrialSet', 'details', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00016'),
('NMRS', 'TrialSet', 'evaluateOnlyUnambiguous', 'www.ccpn.ac.uk_Fogh_2012-07-16-12:06:02_00001'),
('NMRS', 'TrialSet', 'evaluateSingleResonance', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00008'),
('NMRS', 'TrialSet', 'experimentWeights', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00006'),
('NMRS', 'TrialSet', 'identifyAllosteric', 'www.ccpn.ac.uk_Fogh_2012-07-06-13:03:50_00007'),
('NMRS', 'TrialSet', 'name', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00013'),
('NMRS', 'TrialSet', 'regionWeights', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00011'),
('NMRS', 'TrialSet', 'trials', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00009'),
('NMRS', 'TrialSet', 'useInverseEffects', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00014'),
('NMRS', 'TrialSet', 'useVolume', 'www.ccpn.ac.uk_Fogh_2012-05-21-18:09:12_00015'),
('NMRX', 'NmrExpPrototype', 'priority', 'www.ccpn.ac.uk_Fogh_2010-05-12-17:41:37_00001'),
('REFD', None, None, 'www.ccpn.ac.uk_Fogh_2012-03-28-17:19:49_00001'),
('REFD', 'RefDataStore', None, 'www.ccpn.ac.uk_Fogh_2012-03-28-17:19:49_00002'),
('REFD', 'RefDataStore', 'name', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00020'),
('REFD', 'RefDataStore', 'refNmrSpectra', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00017'),
('REFD', 'RefNmrSpectrum', None, 'www.ccpn.ac.uk_Fogh_2012-03-29-14:18:52_00001'),
('REFD', 'RefNmrSpectrum', 'concentration', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00053'),
('REFD', 'RefNmrSpectrum', 'dataSource', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00050'),
('REFD', 'RefNmrSpectrum', 'details', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00056'),
('REFD', 'RefNmrSpectrum', 'name', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00052'),
('REFD', 'RefNmrSpectrum', 'pH', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:38_00054'),
('REFD', 'RefNmrSpectrum', 'temperature', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00022'),
('REFD', 'RefNmrSpectrum', 'trialHits', 'www.ccpn.ac.uk_Fogh_2012-03-29-15:58:32_00002'),
('REFS', 'RefSampleComponentStore', 'refDataStores', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00018'),
('SAM', 'AbstractSample', 'solvent', 'www.ccpn.ac.uk_Fogh_2012-03-28-17:22:44_00014'),
('SAM', 'Sample', 'mixtures', 'www.ccpn.ac.uk_Fogh_2009-11-19-14:51:57_00002'),
('SAM', 'Solvent', None, 'www.ccpn.ac.uk_Fogh_2009-11-19-14:50:32_00005'),
('SYMM', 'MolSystemSymmetrySet', 'details', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:53:29_00002'),
('SYMM', 'MolSystemSymmetrySet', 'name', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:53:29_00001'),
('TAXO', 'NaturalSource', 'kingdom', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:21:01_00001'),
('TAXO', 'NaturalSource', 'superKingdom', 'www.ccpn.ac.uk_Fogh_2009-01-19-14:21:01_00002'),
('TEMP', 'FloatMatrixObject', None, 'www.ccpn.ac.uk_Fogh_2011-03-30-17:55:17_00001'),
('TEMP', 'FloatMatrixObject', 'data', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:03:29_00002'),
('TEMP', 'FloatMatrixObject', 'defaultValue', 'www.ccpn.ac.uk_Fogh_2011-03-30-18:03:29_00001'),
('VALD', 'MolSysAtomValidation', None, 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:06_00004'),
('VALD', 'MolSysAtomValidation', 'atoms', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00001'),
('VALD', 'MolSysChainValidation', None, 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:06_00002'),
('VALD', 'MolSysChainValidation', 'chains', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00003'),
('VALD', 'MolSysResidueValidation', None, 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:06_00003'),
('VALD', 'MolSysResidueValidation', 'residues', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00007'),
('VALD', 'MolSystemValidation', None, 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:06_00001'),
('VALD', 'MolSystemValidation', 'molSystems', 'www.ccpn.ac.uk_Fogh_2009-09-08-17:17:13_00005'),
('WMS', None, None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00001'),
('WMS', 'Project', None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00005'),
('WMS', 'Project', 'details', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00007'),
('WMS', 'Project', 'nmrCalcRunSerial', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:56_00005'),
('WMS', 'Project', 'nmrCalcStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:56_00004'),
('WMS', 'Project', 'projectVersions', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00004'),
('WMS', 'Project', 'rawFiles', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:56_00002'),
('WMS', 'ProjectVersion', None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00006'),
('WMS', 'ProjectVersion', 'createdByTask', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00005'),
('WMS', 'ProjectVersion', 'outputTasks', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00007'),
('WMS', 'ProjectVersion', 'status', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00015'),
('WMS', 'ProjectVersion', 'summary', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:56_00008'),
('WMS', 'RawFile', None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00004'),
('WMS', 'RawFile', 'details', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00020'),
('WMS', 'Task', None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00007'),
('WMS', 'Task', 'dateCompleted', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00009'),
('WMS', 'Task', 'dateStarted', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00008'),
('WMS', 'Task', 'generatedVersion', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00004'),
('WMS', 'Task', 'inputVersion', 'www.ccpn.ac.uk_Fogh_2009-03-09-12:00:17_00006'),
('WMS', 'Task', 'nmrCalcRunSerial', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00005'),
('WMS', 'Task', 'nmrCalcStoreName', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00004'),
('WMS', 'Task', 'operatorId', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00002'),
('WMS', 'Task', 'protocolName', 'www.ccpn.ac.uk_Fogh_2010-05-10-13:46:55_00001'),
('WMS', 'Task', 'status', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00002'),
('WMS', 'Task', 'summary', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00003'),
('WMS', 'WmsSegment', None, 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:52_00002'),
('WMS', 'WmsSegment', 'details', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00009'),
('WMS', 'WmsSegment', 'projects', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00031'),
('WMS', 'WmsSegment', 'tasks', 'www.ccpn.ac.uk_Fogh_2009-01-29-15:16:56_00033'),
('WMSP', None, None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00001'),
('WMSP', 'EnumValue', None, 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:22_00003'),
('WMSP', 'EnumValue', 'label', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00002'),
('WMSP', 'EnumValue', 'value', 'www.ccpn.ac.uk_Fogh_2011-05-26-12:12:14_00001'),
('WMSP', 'InterfaceLabel', None, 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:18_00002'),
('WMSP', 'InterfaceObject', None, 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:18_00003'),
('WMSP', 'InterfaceObject', 'col', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00012'),
('WMSP', 'InterfaceObject', 'colspan', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00014'),
('WMSP', 'InterfaceObject', 'interfaceGroup', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:20_00003'),
('WMSP', 'InterfaceObject', 'row', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00011'),
('WMSP', 'InterfaceObject', 'rowspan', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00013'),
('WMSP', 'InterfaceParameter', None, 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:22_00002'),
('WMSP', 'InterfaceParameter', 'defaultStrings', 'www.ccpn.ac.uk_Fogh_2011-10-14-11:18:11_00001'),
('WMSP', 'InterfaceParameter', 'enumValues', 'www.ccpn.ac.uk_Fogh_2011-05-26-12:12:14_00003'),
('WMSP', 'InterfaceParameter', 'hicard', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00005'),
('WMSP', 'InterfaceParameter', 'isEditable', 'www.ccpn.ac.uk_Fogh_2011-05-23-10:59:43_00001'),
('WMSP', 'InterfaceParameter', 'isOrdered', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00010'),
('WMSP', 'InterfaceParameter', 'locard', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00006'),
('WMSP', 'ParamType', None, 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:22_00006'),
('WMSP', 'ProtocolAccess', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00005'),
('WMSP', 'ProtocolAccess', 'password', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00018'),
('WMSP', 'ProtocolInterface', None, 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:22_00001'),
('WMSP', 'ProtocolInterface', 'details', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00029'),
('WMSP', 'ProtocolInterface', 'info', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00028'),
('WMSP', 'ProtocolInterface', 'interfaceLabels', 'www.ccpn.ac.uk_Fogh_2011-10-11-16:36:21_00002'),
('WMSP', 'ProtocolInterface', 'interfaceParameters', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00023'),
('WMSP', 'ProtocolParameter', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00003'),
('WMSP', 'ProtocolParameter', 'code', 'www.ccpn.ac.uk_Fogh_2010-05-20-10:42:03_00001'),
('WMSP', 'ProtocolParameter', 'container', 'www.ccpn.ac.uk_Fogh_2010-05-20-14:35:13_00001'),
('WMSP', 'ProtocolParameter', 'content', 'www.ccpn.ac.uk_Fogh_2010-05-20-14:35:13_00002'),
('WMSP', 'ProtocolParameter', 'defaultStrings', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00008'),
('WMSP', 'ProtocolParameter', 'hicard', 'www.ccpn.ac.uk_Fogh_2011-06-09-13:36:54_00002'),
('WMSP', 'ProtocolParameter', 'interfaceParameters', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00031'),
('WMSP', 'ProtocolParameter', 'ioRole', 'www.ccpn.ac.uk_Fogh_2010-05-20-14:35:13_00005'),
('WMSP', 'ProtocolParameter', 'locard', 'www.ccpn.ac.uk_Fogh_2011-06-09-13:36:54_00001'),
('WMSP', 'ProtocolParameter', 'name', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00009'),
('WMSP', 'ProtocolService', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00004'),
('WMSP', 'ProtocolService', 'login', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00014'),
('WMSP', 'ProtocolService', 'protocolAccesss', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00011'),
('WMSP', 'ProtocolService', 'result', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00017'),
('WMSP', 'ProtocolService', 'run', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00015'),
('WMSP', 'ProtocolService', 'status', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00016'),
('WMSP', 'ProtocolService', 'url', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00013'),
('WMSP', 'WmsProtocol', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00002'),
('WMSP', 'WmsProtocol', 'annealProtocolCode', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:53:29_00005'),
('WMSP', 'WmsProtocol', 'annealProtocolStoreName', 'www.ccpn.ac.uk_Fogh_2011-08-05-11:53:29_00004'),
('WMSP', 'WmsProtocol', 'details', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00015'),
('WMSP', 'WmsProtocol', 'protocolInterfaces', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00036'),
('WMSP', 'WmsProtocol', 'protocolParameters', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00011'),
('WMSP', 'WmsProtocol', 'protocolServices', 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:57_00013'),
('WMSP', 'WmsProtocol', 'softwareName', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00038'),
('WMSP', 'WmsProtocol', 'softwareVersion', 'www.ccpn.ac.uk_Fogh_2011-03-22-17:23:24_00039'),
('WMSQ', None, None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00006'),
('WMSQ', 'AbstractQuery', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00008'),
('WMSQ', 'AbstractQuery', 'userName', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00021'),
('WMSQ', 'ProjectQuery', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00009'),
('WMSQ', 'ProjectQuery', 'projectResults', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00024'),
('WMSQ', 'ProjectResult', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00012'),
('WMSQ', 'ProjectVersionQuery', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00010'),
('WMSQ', 'ProjectVersionQuery', 'projectVersionResults', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00033'),
('WMSQ', 'ProjectVersionResult', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00013'),
('WMSQ', 'TaskQuery', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00011'),
('WMSQ', 'TaskQuery', 'taskResults', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00043'),
('WMSQ', 'TaskResult', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00014'),
('WMSQ', 'WmsQueryStore', None, 'www.ccpn.ac.uk_Fogh_2010-05-06-12:26:54_00007'),
('WMSQ', 'WmsQueryStore', 'projectQueries', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00052'),
('WMSQ', 'WmsQueryStore', 'projectVersionQueries', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00054'),
('WMSQ', 'WmsQueryStore', 'taskQueries', 'www.ccpn.ac.uk_Fogh_2010-05-06-13:30:17_00056'),
]
# Class elements that exist in both models but that require handcode for
# transfer. E.g. elements that go from derived to non-derived.
# Note that old derivation functions cannot be relied on to work during
# data transfer.
# (prefix, typeName, elemName, newGuid, elemType)
neutraliseElements = [
('CHEM', 'Stereochemistry', 'stereoClass', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:25_00007'),
]
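# Illustrative sketch (hypothetical helper, not part of the generated data):
# transfer code can test whether an element needs handcoded handling by guid;
# the set name below is an assumption, not part of the CCPN API.
_neutralisedGuids = set(tt[3] for tt in neutraliseElements)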
# Differences between equivalent classElements and AbstractDataTypes:
# name changes.
# (prefix, typeName, elemName, newName, newGuid)
renames = [
('ACCO', 'AccessObject', 'dataObjects', 'dataObject', 'www.ccpn.ac.uk_Fogh_2006-12-31-09:03:01_00015'),
('ENTR', 'Entry', 'spectrometerListDetails', 'experimentListDetails', 'www.ccpn.ac.uk_Fogh_2008-09-26-14:12:30_00007'),
('NMR', 'Experiment', 't1rhoList', 't1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00019'),
('NMR', 'NmrProject', 'structureAnalysiss', 'structureAnalyses', 'www.ccpn.ac.uk_Fogh_2008-03-06-18:40:31_00001'),
('NMR', 'T1rhoList', None, 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:48_00020'),
('NMR', 'T1rhoList', 'coherenceType', 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00023'),
('NMR', 'T1rhoList', 'experiments', 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00020'),
('NMR', 'T1rhoList', 'measurements', 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00018'),
('NMR', 'T1rhoList', 'sf', 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00022'),
('NMR', 'T1rhoList', 'unit', 'T1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00021'),
]
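# Illustrative sketch (hypothetical helper, not part of the generated data):
# the renames table is naturally consumed as a lookup keyed on the old
# (prefix, typeName, elemName) triple; the dict name below is an assumption.
_renameByOldKey = dict(
  ((prefix, typeName, elemName), (newName, newGuid))
  for (prefix, typeName, elemName, newName, newGuid) in renames
)
# e.g. _renameByOldKey[('NMR', 'Experiment', 't1rhoList')] yields
# ('t1RhoList', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:13_00019')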
# ValueType changes
# Change types are: 'ignore' (do nothing) and 'delay' (available for calculation).
# (prefix, typeName, elemName, action, newGuid, elemMap, valueTypeGuid)
typeChanges = [
('ANAL', 'AnalysisProject', 'printWinFileName', 'delay', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:51_00015', {'eType': 'cplx', 'tag': 'ANAL.AnalysisProject.printWinFileName', 'type': 'attr', 'name': 'printWinFileName'}, 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00035'),
('ANAP', 'Macro', 'path', 'delay', 'www.ccpn.ac.uk_Fogh_2006-08-17-15:11:12_00001', {'eType': 'cplx', 'tag': 'ANAP.Macro.path', 'type': 'attr', 'name': 'path'}, 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00035'),
('NMR', 'PeakDimComponent', 'scalingFactor', 'delay', 'www.ccpn.ac.uk_Fogh_2006-10-25-11:33:28_00005', {'proc': 'direct', 'tag': 'NMR.PeakDimComponent.scalingFactor', 'type': 'attr', 'name': 'scalingFactor'}, 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00032'),
('NMR', 'ResonanceGroup', 'details', 'delay', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:20:11_00004', {'eType': 'cplx', 'tag': 'NMR.ResonanceGroup.details', 'type': 'attr', 'name': 'details'}, 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00035'),
('NMRC', 'FixedResonance', 'name', 'delay', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:11_00011', {'eType': 'cplx', 'tag': 'NMRC.FixedResonance.name', 'type': 'attr', 'name': 'name'}, 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00035'),
('STER', 'RefStereochemistry', 'details', 'ignore', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:28_00023', {'eType': 'cplx', 'tag': 'STER.RefStereochemistry.details', 'type': 'attr', 'name': 'details'}, None),
('TEMP', 'MultiTypeValue', 'textValue', 'ignore', 'www.ccpn.ac.uk_Fogh_2007-11-13-15:55:55_00005', {'eType': 'cplx', 'tag': 'TEMP.MultiTypeValue.textValue', 'type': 'attr', 'name': 'textValue'}, None),
]
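# Hedged illustration (hypothetical consumer code, not part of these
# compatibility tables): a data-transfer script could dispatch on the 'action'
# column of typeChanges like so:
#
#   for prefix, typeName, elemName, action, newGuid, elemMap, vtGuid in typeChanges:
#       if action == 'ignore':
#           continue   # do nothing with the old value
#       elif action == 'delay':
#           pass       # keep the value available for later recalculation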
# Different elements with matching qualifiedNames
# (element.qName, differentTags, oldGuid, newGuid
nameMatches = [
]
# Differences for matching elements,
# excluding those where only names and/or valueTypes differ
# (oldElem.qName, newElem.name, oldGuid, newGuid, differentTags
allDiffs = [
('ccp.lims.Holder.Holder.holderCategories', 'holderCategories', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:27_00006', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:27_00006', set(['locard'])),
('ccp.lims.Sample.AbstractSample.sampleCategories', 'sampleCategories', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:22:46_00003', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:22:46_00003', set(['locard'])),
('ccp.molecule.ChemComp.Stereochemistry.refStereochemistry', 'refStereochemistry', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:25_00003', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:25_00003', set(['isDerived', 'changeability', 'locard'])),
('ccp.molecule.MolStructure.Atom', 'Atom', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:55_00004', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:55_00004', set(['destructorCodeStubs', 'constructorCodeStubs', 'keyNames'])),
('ccp.molecule.MolStructure.Coord', 'Coord', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:55_00001', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:55_00001', set(['isDerived', 'keyNames'])),
('ccp.molecule.MolStructure.Coord.altLocationCode', 'altLocationCode', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00042', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00042', set(['isDerived', 'documentation', 'changeability'])),
('ccp.molecule.MolStructure.Coord.bFactor', 'bFactor', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00046', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00046', set(['isDerived'])),
('ccp.molecule.MolStructure.Coord.occupancy', 'occupancy', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00047', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00047', set(['isDerived'])),
('ccp.molecule.MolStructure.Coord.x', 'x', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00043', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00043', set(['defaultValue', 'isDerived', 'locard'])),
('ccp.molecule.MolStructure.Coord.y', 'y', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00044', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00044', set(['defaultValue', 'isDerived', 'locard'])),
('ccp.molecule.MolStructure.Coord.z', 'z', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00045', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:36_00045', set(['defaultValue', 'isDerived', 'locard'])),
('ccp.molecule.Stereochemistry.RefStereochemistry.values', 'values', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:28_00022', 'www.ccpn.ac.uk_Fogh_2006-08-16-18:23:28_00022', set(['locard'])),
('ccp.nmr.Nmr.NoeValueType', 'NoeValueType', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:50_00001', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:50_00001', set(['enumeration'])),
('ccp.nmr.Nmr.PeakDim.realValue', 'realValue', 'www.ccpn.ac.uk_Fogh_2006-10-25-11:32:25_00001', 'www.ccpn.ac.uk_Fogh_2006-10-25-11:32:25_00001', set(['isDerived', 'documentation'])),
('ccp.nmr.NmrEntry.DataBaseName', 'DataBaseName', 'www.ccpn.ac.uk_Fogh_2008-07-11-16:03:02_00001', 'www.ccpn.ac.uk_Fogh_2008-07-11-16:03:02_00001', set(['enumeration'])),
('ccp.nmr.NmrExpPrototype.ExpTransferType', 'ExpTransferType', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:45_00029', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:45_00029', set(['documentation', 'enumeration'])),
('ccpnmr.Analysis.AnalysisProject.printWinScaling', 'printWinScaling', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:52_00013', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:52_00013', set(['defaultValue'])),
('ccpnmr.Analysis.AnalysisProject.printWinTickPlacement', 'printWinTickPlacement', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:52_00010', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:52_00010', set(['locard'])),
('ccpnmr.Analysis.AxisPanel', 'AxisPanel', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00007', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00007', set(['parentRole'])),
('ccpnmr.Analysis.PeakDrawMethod', 'PeakDrawMethod', 'www.ccpn.ac.uk_Fogh_2006-10-03-11:26:03_00002', 'www.ccpn.ac.uk_Fogh_2006-10-03-11:26:03_00002', set(['enumeration'])),
('ccpnmr.Analysis.PeakFindVolumeMethod', 'PeakFindVolumeMethod', 'www.ccpn.ac.uk_Fogh_2006-10-03-11:26:03_00003', 'www.ccpn.ac.uk_Fogh_2006-10-03-11:26:03_00003', set(['enumeration'])),
('ccpnmr.Analysis.PopupOption', 'PopupOption', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:50_00002', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:50_00002', set(['supertype', 'supertypes'])),
('ccpnmr.Analysis.SlicePanel', 'SlicePanel', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00008', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00008', set(['parentRole'])),
('ccpnmr.Analysis.SpectrumWindow.aspectRatio', 'aspectRatio', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00025', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00025', set(['container'])),
('ccpnmr.Analysis.SpectrumWindow.sliceRange', 'sliceRange', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00028', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00028', set(['container'])),
('ccpnmr.Analysis.SpectrumWindowView', 'SpectrumWindowView', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00003', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:22_00003', set(['parentRole'])),
('ccpnmr.Analysis.SymbolStyle', 'SymbolStyle', 'www.ccpn.ac.uk_Fogh_2008-05-05-18:37:53_00002', 'www.ccpn.ac.uk_Fogh_2008-05-05-18:37:53_00002', set(['enumeration'])),
('memops.AccessControl.AccessObject.permissions', 'permissions', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00001', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:27_00001', set(['hierarchy'])),
('memops.AccessControl.Permission', 'Permission', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:54_00018', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:54_00018', set(['parentRole', 'keyNames'])),
('memops.AccessControl.Permission.accessObject', 'accessObject', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:26_00042', 'www.ccpn.ac.uk_Fogh_2006-08-17-14:16:26_00042', set(['hierarchy', 'aggregation'])),
('memops.Implementation.PathString', 'PathString', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:54_00003', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:54_00003', set(['length'])),
('memops.Implementation.RgbColor', 'RgbColor', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:50_00007', 'www.ccpn.ac.uk_Fogh_2008-05-05-15:12:50_00007', set(['supertype', 'supertypes', 'documentation'])),
('memops.Implementation.StorageFormat', 'StorageFormat', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00054', 'www.ccpn.ac.uk_Fogh_2006-08-16-14:22:53_00054', set(['enumeration'])),
]
| [
"[email protected]"
] | |
6b886ff3e8ceaf9a22c3d83ecd4644649ed20e2b | d6411d6d766adf97490b5229780952a23a3ec93e | /exportToBox2D.py | 520e5c8b2251d0a95da09748a244b69b76da98c3 | [] | no_license | NCCA/Box2DExport | 22db2dfa4d934a3cb19cb8ca3dc6e98d4c8f5f70 | a81d2a6c02aae0cc45ff573fb8ab625c9cd5454d | refs/heads/main | 2022-11-18T20:23:52.118572 | 2022-11-16T17:03:02 | 2022-11-16T17:03:02 | 24,465,700 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,426 | py | import maya.OpenMaya as OM
import maya.OpenMayaAnim as OMA
import maya.OpenMayaMPx as OMX
import maya.cmds as cmds
import sys, math
structure="""
typedef struct
{
std::string name;
float tx;
float ty;
float width;
float height;
b2BodyType type;
}Body; \n
"""
def exportBox2D() :
# basicFilter = "*.b2d"
# file=cmds.fileDialog2(caption="Please select file to save",fileFilter=basicFilter, dialogStyle=2)
file="test.bd2"
if file !="" :
dagIt = OM.MItDag(OM.MItDag.kDepthFirst, OM.MFn.kTransform)
object = OM.MObject
ofile=open(file[0],'w')
ofile.write(structure)
ofile.write('\n\nBody bodies[]={\n')
numBodies=0
while not dagIt.isDone():
object = dagIt.currentItem()
depNode = OM.MFnDependencyNode(object)
if object.apiTypeStr() =="kTransform" :
fn = OM.MFnTransform(object)
child = fn.child(0)
if child.apiTypeStr()=="kMesh" :
name=fn.name()
ofile.write('\t{ "%s",' %(name) )
x=cmds.getAttr("%s.translateX" %(name))
ofile.write('%sf,' %(x))
y=cmds.getAttr("%s.translateY" %(name))
ofile.write('%sf,' %(y))
width=cmds.getAttr("%s.scaleX" %(name))
ofile.write('%sf,' %(width))
height=cmds.getAttr("%s.scaleY" %(name))
ofile.write('%sf,' %(height))
type=cmds.getAttr("%s.Box2D" %(name))
ofile.write('%s },\n' %(type))
numBodies=numBodies+1
dagIt.next()
ofile.write("};\n")
ofile.close()
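# Illustrative output only (hypothetical scene; values are not from a real
# export): for a single cube named "ground", the generated C snippet would
# look roughly like
#
#   Body bodies[]={
#       { "ground", 0.0f, -1.0f, 20.0f, 1.0f, 0 },
#   };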
exportBox2D() | [
"[email protected]"
] | |
7c579b4d5629b63b895546455d32a4745a7ba2ee | 35351364eef7f058b358141aca6f3b74717841b8 | /src/taxpasta/infrastructure/application/kraken2/kraken2_profile_standardisation_service.py | 3d1268f8a406dc1afb07d7f990d8d53b7e29b3f8 | [
"Apache-2.0"
] | permissive | taxprofiler/taxpasta | 75f59c4bb234be9e93418d7eeaadfd73865e0df3 | 98713deaeec2e92b2f020860d264bccc9a25dbd1 | refs/heads/dev | 2023-08-31T15:04:17.971556 | 2023-08-24T20:01:50 | 2023-08-24T20:01:50 | 499,589,621 | 21 | 8 | Apache-2.0 | 2023-09-10T07:49:26 | 2022-06-03T17:06:44 | Python | UTF-8 | Python | false | false | 1,871 | py | # Copyright (c) 2022 Moritz E. Beber
# Copyright (c) 2022 Maxime Borry
# Copyright (c) 2022 James A. Fellows Yates
# Copyright (c) 2022 Sofia Stamouli.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provide a standardisation service for kraken2 profiles."""
import pandera as pa
from pandera.typing import DataFrame
from taxpasta.application.service import ProfileStandardisationService
from taxpasta.domain.model import StandardProfile
from .kraken2_profile import Kraken2Profile
class Kraken2ProfileStandardisationService(ProfileStandardisationService):
"""Define a standardisation service for kraken2 profiles."""
@classmethod
@pa.check_types(lazy=True)
def transform(
cls, profile: DataFrame[Kraken2Profile]
) -> DataFrame[StandardProfile]:
"""
Tidy up and standardize a given kraken2 profile.
Args:
profile: A taxonomic profile generated by kraken2.
Returns:
A standardized profile.
"""
return (
profile[[Kraken2Profile.taxonomy_id, Kraken2Profile.direct_assigned_reads]]
.copy()
.rename(
columns={
Kraken2Profile.taxonomy_id: StandardProfile.taxonomy_id,
Kraken2Profile.direct_assigned_reads: StandardProfile.count,
}
)
)
| [
"[email protected]"
] | |
0275e902be4106106025a6572c63ae75e2419353 | 21b5ad37b812ed78799d4efc1649579cc83d32fb | /career_advice/migrations/0005_merge_20200412_0918.py | da2433e841795a5ffac4c98c6ade0117d9040f76 | [] | no_license | SaifulAbir/django-js-api | b6f18c319f8109884e71095ad49e08e50485bb25 | fbf174b9cde2e7d25b4898f511df9c6f96d406cf | refs/heads/master | 2023-02-12T16:09:21.508702 | 2021-01-14T09:05:15 | 2021-01-14T09:05:15 | 329,713,528 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 285 | py | # Generated by Django 3.0.3 on 2020-04-12 09:18
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('career_advice', '0004_auto_20200406_0429'),
('career_advice', '0004_auto_20200408_0830'),
]
operations = [
]
| [
"[email protected]"
] | |
5f5e3209d2fceeaecaa1b2c52f3cb5efe5bf924d | cdd33a31d5b57a4a02803dded5e96a815fbb06d7 | /examples/dagster_examples_tests/test_examples.py | 502cc68abe72b5555628b27e6a1738166a9790cd | [
"Apache-2.0"
] | permissive | david-alexander-white/dagster | 4f177c167150316a5056901aa2522ab778d1d163 | 1c341500bb2380e14873b59b7e25503270188bda | refs/heads/master | 2020-12-07T04:40:02.676080 | 2020-01-06T17:37:40 | 2020-01-07T22:19:01 | 232,633,648 | 1 | 0 | Apache-2.0 | 2020-01-08T18:42:28 | 2020-01-08T18:42:27 | null | UTF-8 | Python | false | false | 679 | py | from __future__ import print_function
from click.testing import CliRunner
from dagster.cli.pipeline import execute_list_command, pipeline_list_command
from dagster.utils import script_relative_path
def no_print(_):
return None
def test_list_command():
runner = CliRunner()
execute_list_command(
{
'repository_yaml': script_relative_path('../repository.yaml'),
'python_file': None,
'module_name': None,
'fn_name': None,
},
no_print,
)
result = runner.invoke(
pipeline_list_command, ['-y', script_relative_path('../repository.yaml')]
)
assert result.exit_code == 0
| [
"[email protected]"
] | |
40816b4509834099df0a36e835300937d0875954 | 63e0b2a87237df482f559e428c068fb0bdae3786 | /python/tts_aksk_demo.py | 49a804c8c915b5d56eab9d7353895a66a8047692 | [
"Apache-2.0"
] | permissive | liwenxiang/ais-sdk | b547a16de630073e7552aad7425405d1b91d1a7e | 76240abc49795e914988f3cafb6d08f60dbdcb4c | refs/heads/master | 2020-05-13T21:54:56.771333 | 2019-04-04T10:06:58 | 2019-04-04T10:06:58 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 813 | py | # -*- coding:utf-8 -*-
from ais_sdk.utils import decode_to_wave_file
from ais_sdk.tts import tts_aksk
import json
if __name__ == '__main__':
#
    # access text to speech, post data using AK/SK credentials
#
app_key = '*************'
app_secret = '************'
# call interface use the default config
result = tts_aksk(app_key, app_secret, '语音合成为你的业务增加交互的能力.')
result_obj = json.loads(result)
decode_to_wave_file(result_obj['result']['data'], 'data/tts_use_aksk_default_config.wav')
# call interface use the specific config
result = tts_aksk(app_key, app_secret, '这里是语音合成的测试。', 'xiaoyu', '0', '16k')
result_obj = json.loads(result)
decode_to_wave_file(result_obj['result']['data'], 'data/tts_use_aksk_specific_config.wav') | [
"[email protected]"
] | |
c8b14009abb9b4e6502cc36537e863c28dc908b6 | 8058a6999c702ba9331cc75459cc97a597364448 | /myapp/feedapp/apps.py | 8f3e6c06d44c730276e0bd61a767e9c43089d378 | [] | no_license | shivaconceptsolution/django-project-sample-new | eb587a6002a9350c28fcf31c1209c74de090010d | 1efcd14b3d73c70e898e10f43d37a57281510d18 | refs/heads/master | 2020-09-10T08:02:07.635998 | 2019-11-14T12:41:23 | 2019-11-14T12:41:23 | 221,694,562 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 94 | py | from django.apps import AppConfig
class FeedappConfig(AppConfig):
name = 'feedapp'
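# Hedged usage note (standard Django wiring, assumed): reference this config
# from settings, e.g. INSTALLED_APPS = [..., 'feedapp.apps.FeedappConfig']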
| [
"[email protected]"
] | |
82c96800c7360a392e36d9f829b797033956880b | d3efc82dfa61fb82e47c82d52c838b38b076084c | /Autocase_Result/MEDIUM/YW_ZXBMM_SZXJ_097.py | 3725e08c3c5e6aebcbf3e46bfbad955ec13ddc7d | [] | no_license | nantongzyg/xtp_test | 58ce9f328f62a3ea5904e6ed907a169ef2df9258 | ca9ab5cee03d7a2f457a95fb0f4762013caa5f9f | refs/heads/master | 2022-11-30T08:57:45.345460 | 2020-07-30T01:43:30 | 2020-07-30T01:43:30 | 280,388,441 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,073 | py | #!/usr/bin/python
# -*- encoding: utf-8 -*-
import sys
sys.path.append("/home/yhl2/workspace/xtp_test/xtp/api")
from xtp_test_case import *
sys.path.append("/home/yhl2/workspace/xtp_test/service")
from ServiceConfig import *
from mainService import *
from QueryStkPriceQty import *
from log import *
sys.path.append("/home/yhl2/workspace/xtp_test/mysql")
from CaseParmInsertMysql import *
sys.path.append("/home/yhl2/workspace/xtp_test/utils")
from QueryOrderErrorMsg import queryOrderErrorMsg
class YW_ZXBMM_SZXJ_097(xtp_test_case):
# YW_ZXBMM_SZXJ_097
def test_YW_ZXBMM_SZXJ_097(self):
        title = '深圳A股股票交易日限价委托卖-错误的数量(数量>100万)'  # Shenzhen A-share trading-day limit-price sell order - invalid quantity (quantity > 1,000,000)
        # Define the expected results for the current test case
        # Expected states: initial, unfilled, partially filled, fully filled, partial-cancel reported, partially cancelled, reported pending cancel, cancelled, rejected (废单), cancel rejected, internal cancel
        # xtp_ID and cancel_xtpID default to 0 and do not need to be changed
case_goal = {
'期望状态': '废单',
'errorID': 11000107,
'errorMSG': queryOrderErrorMsg(11000107),
'是否生成报单': '是',
'是否是撤废': '否',
'xtp_ID': 0,
'cancel_xtpID': 0,
}
logger.warning(title)
        # Define the order parameter information ------------------------------------------
        # Parameters: ticker, market, security type, security status, trading status, side (B=buy, S=sell), expected state, Api
        stkparm = QueryStkPriceQty('999999', '2', '1', '2', '0', 'S', case_goal['期望状态'], Api)
        # If fetching the order parameters fails, the test case fails
if stkparm['返回结果'] is False:
rs = {
'用例测试结果': stkparm['返回结果'],
'测试错误原因': '获取下单参数失败,' + stkparm['错误原因'],
}
self.assertEqual(rs['用例测试结果'], True)
else:
wt_reqs = {
'business_type': Api.const.XTP_BUSINESS_TYPE['XTP_BUSINESS_TYPE_CASH'],
'order_client_id':2,
'market': Api.const.XTP_MARKET_TYPE['XTP_MKT_SZ_A'],
'ticker': stkparm['证券代码'],
'side': Api.const.XTP_SIDE_TYPE['XTP_SIDE_SELL'],
'price_type': Api.const.XTP_PRICE_TYPE['XTP_PRICE_LIMIT'],
'price': stkparm['随机中间价'],
'quantity': 1000100,
'position_effect': Api.const.XTP_POSITION_EFFECT_TYPE['XTP_POSITION_EFFECT_INIT']
}
ParmIni(Api, case_goal['期望状态'], wt_reqs['price_type'])
CaseParmInsertMysql(case_goal, wt_reqs)
rs = serviceTest(Api, case_goal, wt_reqs)
            # Log reads: "execution result is <result>, <error source>, <error reason>"
            logger.warning('执行结果为' + str(rs['用例测试结果']) + ','
                           + str(rs['用例错误源']) + ',' + str(rs['用例错误原因']))
self.assertEqual(rs['用例测试结果'], True) # 0
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
51f1669223491d79b767c54eee33c490e7696ab8 | a06b1f68a43622c21b1dbdd8680f21d588a45219 | /theory/espim/2D/plate/FSDT/transverse_shear_edge_based/stiffmatrices.py | 560948f4d2b1b61c7f3a8349af531c9f91bf3667 | [
"BSD-2-Clause"
] | permissive | i5misswrong/meshless | 6eac7e7ddbe51160ee37358ce36525b26b6c6843 | 27f9729050cedec2d7c1a716104d068608827c0f | refs/heads/master | 2021-01-15T22:51:33.502229 | 2017-07-05T13:59:17 | 2017-07-05T13:59:17 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,528 | py | import sympy
from sympy import Matrix
from meshless.sympytools import print_as_sparse, print_as_array, print_as_full
sympy.var('nx1, ny1')
sympy.var('nx2, ny2')
sympy.var('nx3, ny3')
sympy.var('nx4, ny4')
sympy.var('f11, f12, f13, f14')
sympy.var('f21, f22, f23, f24')
sympy.var('f31, f32, f33, f34')
sympy.var('f41, f42, f43, f44')
sympy.var('A11, A12, A16, A22, A26, A66')
sympy.var('B11, B12, B16, B22, B26, B66')
sympy.var('D11, D12, D16, D22, D26, D66')
sympy.var('E44, E45, E55')
sympy.var('le1, le2, le3, le4, Ac')
su1 = Matrix([[f11, 0, 0, 0, 0, f12, 0, 0, 0, 0, f13, 0, 0, 0, 0, f14, 0, 0, 0, 0]])
sv1 = Matrix([[0, f11, 0, 0, 0, 0, f12, 0, 0, 0, 0, f13, 0, 0, 0, 0, f14, 0, 0, 0]])
sw1 = Matrix([[0, 0, f11, 0, 0, 0, 0, f12, 0, 0, 0, 0, f13, 0, 0, 0, 0, f14, 0, 0]])
sphix1 = Matrix([[0, 0, 0, f11, 0, 0, 0, 0, f12, 0, 0, 0, 0, f13, 0, 0, 0, 0, f14, 0]])
sphiy1 = Matrix([[0, 0, 0, 0, f11, 0, 0, 0, 0, f12, 0, 0, 0, 0, f13, 0, 0, 0, 0, f14]])
su2 = Matrix([[f21, 0, 0, 0, 0, f22, 0, 0, 0, 0, f23, 0, 0, 0, 0, f24, 0, 0, 0, 0]])
sv2 = Matrix([[0, f21, 0, 0, 0, 0, f22, 0, 0, 0, 0, f23, 0, 0, 0, 0, f24, 0, 0, 0]])
sw2 = Matrix([[0, 0, f21, 0, 0, 0, 0, f22, 0, 0, 0, 0, f23, 0, 0, 0, 0, f24, 0, 0]])
sphix2 = Matrix([[0, 0, 0, f21, 0, 0, 0, 0, f22, 0, 0, 0, 0, f23, 0, 0, 0, 0, f24, 0]])
sphiy2 = Matrix([[0, 0, 0, 0, f21, 0, 0, 0, 0, f22, 0, 0, 0, 0, f23, 0, 0, 0, 0, f24]])
su3 = Matrix([[f31, 0, 0, 0, 0, f32, 0, 0, 0, 0, f33, 0, 0, 0, 0, f34, 0, 0, 0, 0]])
sv3 = Matrix([[0, f31, 0, 0, 0, 0, f32, 0, 0, 0, 0, f33, 0, 0, 0, 0, f34, 0, 0, 0]])
sw3 = Matrix([[0, 0, f31, 0, 0, 0, 0, f32, 0, 0, 0, 0, f33, 0, 0, 0, 0, f34, 0, 0]])
sphix3 = Matrix([[0, 0, 0, f31, 0, 0, 0, 0, f32, 0, 0, 0, 0, f33, 0, 0, 0, 0, f34, 0]])
sphiy3 = Matrix([[0, 0, 0, 0, f31, 0, 0, 0, 0, f32, 0, 0, 0, 0, f33, 0, 0, 0, 0, f34]])
su4 = Matrix([[f41, 0, 0, 0, 0, f42, 0, 0, 0, 0, f43, 0, 0, 0, 0, f44, 0, 0, 0, 0]])
sv4 = Matrix([[0, f41, 0, 0, 0, 0, f42, 0, 0, 0, 0, f43, 0, 0, 0, 0, f44, 0, 0, 0]])
sw4 = Matrix([[0, 0, f41, 0, 0, 0, 0, f42, 0, 0, 0, 0, f43, 0, 0, 0, 0, f44, 0, 0]])
sphix4 = Matrix([[0, 0, 0, f41, 0, 0, 0, 0, f42, 0, 0, 0, 0, f43, 0, 0, 0, 0, f44, 0]])
sphiy4 = Matrix([[0, 0, 0, 0, f41, 0, 0, 0, 0, f42, 0, 0, 0, 0, f43, 0, 0, 0, 0, f44]])
A = Matrix([[A11, A12, A16],
[A12, A22, A26],
[A16, A26, A66]])
B = Matrix([[B11, B12, B16],
[B12, B22, B26],
[B16, B26, B66]])
D = Matrix([[D11, D12, D16],
[D12, D22, D26],
[D16, D26, D66]])
E = Matrix([[E44, E45],
[E45, E55]])
# membrane
Bm = 1/Ac * (
le1*Matrix([nx1*su1,
ny1*sv1,
ny1*su1 + nx1*sv1])
+ le2*Matrix([nx2*su2,
ny2*sv2,
ny2*su2 + nx2*sv2])
+ le3*Matrix([nx3*su3,
ny3*sv3,
ny3*su3 + nx3*sv3])
+ le4*Matrix([nx4*su4,
ny4*sv4,
ny4*su4 + nx4*sv4])
)
# bending
Bb = 1/Ac * (
le1*Matrix([nx1*sphix1,
ny1*sphiy1,
ny1*sphix1 + nx1*sphiy1])
+ le2*Matrix([nx2*sphix2,
ny2*sphiy2,
ny2*sphix2 + nx2*sphiy2])
+ le3*Matrix([nx3*sphix3,
ny3*sphiy3,
ny3*sphix3 + nx3*sphiy3])
+ le4*Matrix([nx4*sphix4,
ny4*sphiy4,
ny4*sphix4 + nx4*sphiy4])
)
K = Ac*(Bm.transpose() * A * Bm
+ Bm.transpose() * B * Bb
+ Bb.transpose() * B * Bm
+ Bb.transpose() * D * Bb)
print_as_full(K, 'k0', dofpernode=5)
# transverse shear terms
sympy.var('a1, b1, c1, d1, Ac1')
sympy.var('a2, b2, c2, d2, Ac2')
# Tria1: mid1 -> node1 -> node2
# Tria2: node1 -> mid2 -> node2
#mid 1
Tria1Bs1 = 1/(2*Ac1) * Matrix([
[0, 0, b1-d1, Ac1, 0],
[0, 0, c1-a1, 0, Ac1]])
#node 1
Tria1Bs2 = 1/(2*Ac1) * Matrix([
[0, 0, d1, a1*d1/2, b1*d1/2],
[0, 0, -c1, -a1*c1/2, -b1*c1/2]])
#node 2
Tria1Bs3 = 1/(2*Ac1) * Matrix([
[0, 0, -b1, -b1*c1/2, -b1*d1/2],
[0, 0, a1, a1*c1/2, a1*d1/2]])
#node 1
Tria2Bs1 = 1/(2*Ac2) * Matrix([
[0, 0, b2-d2, Ac2, 0],
[0, 0, c2-a2, 0, Ac2]])
#mid 2
Tria2Bs2 = 1/(2*Ac2) * Matrix([
[0, 0, d2, a2*d2/2, b2*d2/2],
[0, 0, -c2, -a2*c2/2, -b2*c2/2]])
#node 2
Tria2Bs3 = 1/(2*Ac2) * Matrix([
[0, 0, -b2, -b2*c2/2, -b2*d2/2],
[0, 0, a2, a2*c2/2, a2*d2/2]])
ZERO = Tria1Bs1*0
#node 1 , node 2 , other 1 , other 2
BsTria1 = Matrix([Tria1Bs2.T + 1/3*Tria1Bs1.T, Tria1Bs3.T + 1/3*Tria1Bs1.T, 1/3*Tria1Bs1.T, ZERO.T ]).T
BsTria2 = Matrix([Tria2Bs1.T + 1/3*Tria2Bs2.T, Tria2Bs3.T + 1/3*Tria2Bs2.T, ZERO.T , 1/3*Tria2Bs2.T]).T
Bs = 1/Ac*(Ac1*BsTria1 + Ac2*BsTria2)
K = Ac*Bs.transpose()*E*Bs
print_as_full(K, 'k0s_interior_edge', dofpernode=5)
#mid 1
Tria1Bs1 = 1/(2*Ac) * Matrix([
[0, 0, b1-d1, Ac, 0],
[0, 0, c1-a1, 0, Ac]])
#node 1
Tria1Bs2 = 1/(2*Ac) * Matrix([
[0, 0, d1, a1*d1/2, b1*d1/2],
[0, 0, -c1, -a1*c1/2, -b1*c1/2]])
#node 2
Tria1Bs3 = 1/(2*Ac) * Matrix([
[0, 0, -b1, -b1*c1/2, -b1*d1/2],
[0, 0, a1, a1*c1/2, a1*d1/2]])
#node 1 , node 2 , other 1
BsTria1 = Matrix([Tria1Bs2.T + 1/3*Tria1Bs1.T, Tria1Bs3.T + 1/3*Tria1Bs1.T, 1/3*Tria1Bs1.T]).T
Bs = BsTria1
K = Ac*Bs.transpose()*E*Bs
print_as_full(K, 'k0s_boundary_edge', dofpernode=5)
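# Hedged note (assumption about meshless.sympytools, not verified here):
# print_as_full() emits the dense symbolic matrix as array-assignment code, so
# running this script prints three pasteable blocks. A minimal check of the
# same helper on a toy matrix would look like:
#
#   print_as_full(Matrix([[A11, 0], [0, A22]]), 'k_demo', dofpernode=1)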
| [
"[email protected]"
] | |
afdbe16747b31d11aa6ffd23da8bdb456945b5a7 | c36b028acbcb8c7416c13e011dd1f6fef3825a6b | /treehand.py | 12c71618d86a403dd284c85d8930b837919e52bd | [] | no_license | pglen/dbgui | 32184e1bd27dad001e52d93bb5a3feb088168921 | 790a9dd3fe9d30399550faef246d2e83467636bd | refs/heads/master | 2023-01-21T08:46:05.598868 | 2023-01-10T15:21:34 | 2023-01-10T15:21:34 | 153,726,394 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,690 | py | #!/usr/bin/env python
from __future__ import print_function
import os, sys, getopt, signal
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk
from gi.repository import Gdk
from gi.repository import GObject
from gi.repository import GLib
class TreeHand():
def __init__(self, tree_sel_row):
self.treestore = None
self.tree = self.create_tree(self)
self.tree.set_headers_visible(False)
self.tree.get_selection().set_mode(Gtk.SelectionMode.MULTIPLE)
self.stree = Gtk.ScrolledWindow()
self.stree.add(self.tree)
self.tree.connect("cursor-changed", tree_sel_row)
# Tree handlers
def start_tree(self):
if not self.treestore:
self.treestore = Gtk.TreeStore(str)
# Delete previous contents
try:
while True:
root = self.treestore.get_iter_first()
self.treestore.remove(root)
except:
#print( sys.exc_info())
pass
piter = self.treestore.append(None, ["Loading .."])
self.treestore.append(piter, ["None .."])
# -------------------------------------------------------------------------
def create_tree(self, match, text = None):
self.start_tree()
tv = Gtk.TreeView(self.treestore)
tv.set_enable_search(True)
cell = Gtk.CellRendererText()
tvcolumn = Gtk.TreeViewColumn()
tvcolumn.pack_start(cell, True)
tvcolumn.add_attribute(cell, 'text', 0)
tv.append_column(tvcolumn)
return tv
def update_treestore(self, text):
#print( "was", was)
# Delete previous contents
try:
while True:
root = self.treestore.get_iter_first()
self.treestore.remove(root)
except:
pass
#print( sys.exc_info() )
if not text:
self.treestore.append(None, ["No Match",])
return
cnt = 0; piter2 = None; next = False
try:
for line in text:
piter = self.treestore.append(None, [line])
if next:
next = False; piter2 = piter
#if cnt == was:
# next = True
cnt += 1
except:
pass
#print( sys.exc_info())
if piter2:
self.tree.set_cursor(self.treestore.get_path(piter2))
else:
root = self.treestore.get_iter_first()
self.tree.set_cursor(self.treestore.get_path(root))
def append_treestore(self, text):
piter = self.treestore.append(None, [text])
# EOF
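# Hedged usage sketch (assumed GTK wiring, not from the original project):
#
#   def on_sel(treeview):
#       model, paths = treeview.get_selection().get_selected_rows()
#       for path in paths:
#           print(model[path][0])
#
#   th = TreeHand(on_sel)
#   th.update_treestore(["alpha", "beta", "gamma"])
#   win = Gtk.Window(); win.add(th.stree); win.show_all()
#   Gtk.main()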
| [
"[email protected]"
] | |
b2eec97a9e55d61e6b24331daca7712bdc299e93 | b15d2787a1eeb56dfa700480364337216d2b1eb9 | /accelbyte_py_sdk/api/ugc/operations/public_channel/public_create_channel.py | ede3403280ff18ce2acd0694c96810b63b78a3ae | [
"MIT"
] | permissive | AccelByte/accelbyte-python-sdk | dedf3b8a592beef5fcf86b4245678ee3277f953d | 539c617c7e6938892fa49f95585b2a45c97a59e0 | refs/heads/main | 2023-08-24T14:38:04.370340 | 2023-08-22T01:08:03 | 2023-08-22T01:08:03 | 410,735,805 | 2 | 1 | MIT | 2022-08-02T03:54:11 | 2021-09-27T04:00:10 | Python | UTF-8 | Python | false | false | 8,118 | py | # Copyright (c) 2021 AccelByte Inc. All Rights Reserved.
# This is licensed software from AccelByte Inc, for limitations
# and restrictions contact your company contract manager.
#
# Code generated. DO NOT EDIT!
# template file: ags_py_codegen
# pylint: disable=duplicate-code
# pylint: disable=line-too-long
# pylint: disable=missing-function-docstring
# pylint: disable=missing-module-docstring
# pylint: disable=too-many-arguments
# pylint: disable=too-many-branches
# pylint: disable=too-many-instance-attributes
# pylint: disable=too-many-lines
# pylint: disable=too-many-locals
# pylint: disable=too-many-public-methods
# pylint: disable=too-many-return-statements
# pylint: disable=too-many-statements
# pylint: disable=unused-import
# AccelByte Gaming Services Ugc Service (2.11.3)
from __future__ import annotations
from typing import Any, Dict, List, Optional, Tuple, Union
from .....core import Operation
from .....core import HeaderStr
from .....core import HttpResponse
from ...models import ModelsChannelResponse
from ...models import ModelsPublicChannelRequest
from ...models import ResponseError
class PublicCreateChannel(Operation):
"""Create Channel (PublicCreateChannel)
Required permission NAMESPACE:{namespace}:USER:{userId}:CHANNEL [CREATE]
Required Permission(s):
- NAMESPACE:{namespace}:USER:{userId}:CHANNEL [CREATE]
Properties:
url: /ugc/v1/public/namespaces/{namespace}/users/{userId}/channels
method: POST
tags: ["Public Channel"]
consumes: ["application/json", "application/octet-stream"]
produces: ["application/json"]
securities: [BEARER_AUTH]
body: (body) REQUIRED ModelsPublicChannelRequest in body
namespace: (namespace) REQUIRED str in path
user_id: (userId) REQUIRED str in path
Responses:
201: Created - ModelsChannelResponse (Created)
400: Bad Request - ResponseError (Bad Request)
401: Unauthorized - ResponseError (Unauthorized)
500: Internal Server Error - ResponseError (Internal Server Error)
"""
# region fields
_url: str = "/ugc/v1/public/namespaces/{namespace}/users/{userId}/channels"
_method: str = "POST"
_consumes: List[str] = ["application/json", "application/octet-stream"]
_produces: List[str] = ["application/json"]
_securities: List[List[str]] = [["BEARER_AUTH"]]
_location_query: str = None
body: ModelsPublicChannelRequest # REQUIRED in [body]
namespace: str # REQUIRED in [path]
user_id: str # REQUIRED in [path]
# endregion fields
# region properties
@property
def url(self) -> str:
return self._url
@property
def method(self) -> str:
return self._method
@property
def consumes(self) -> List[str]:
return self._consumes
@property
def produces(self) -> List[str]:
return self._produces
@property
def securities(self) -> List[List[str]]:
return self._securities
@property
def location_query(self) -> str:
return self._location_query
# endregion properties
# region get methods
# endregion get methods
# region get_x_params methods
def get_all_params(self) -> dict:
return {
"body": self.get_body_params(),
"path": self.get_path_params(),
}
def get_body_params(self) -> Any:
if not hasattr(self, "body") or self.body is None:
return None
return self.body.to_dict()
def get_path_params(self) -> dict:
result = {}
if hasattr(self, "namespace"):
result["namespace"] = self.namespace
if hasattr(self, "user_id"):
result["userId"] = self.user_id
return result
# endregion get_x_params methods
# region is/has methods
# endregion is/has methods
# region with_x methods
def with_body(self, value: ModelsPublicChannelRequest) -> PublicCreateChannel:
self.body = value
return self
def with_namespace(self, value: str) -> PublicCreateChannel:
self.namespace = value
return self
def with_user_id(self, value: str) -> PublicCreateChannel:
self.user_id = value
return self
# endregion with_x methods
# region to methods
def to_dict(self, include_empty: bool = False) -> dict:
result: dict = {}
if hasattr(self, "body") and self.body:
result["body"] = self.body.to_dict(include_empty=include_empty)
elif include_empty:
result["body"] = ModelsPublicChannelRequest()
if hasattr(self, "namespace") and self.namespace:
result["namespace"] = str(self.namespace)
elif include_empty:
result["namespace"] = ""
if hasattr(self, "user_id") and self.user_id:
result["userId"] = str(self.user_id)
elif include_empty:
result["userId"] = ""
return result
# endregion to methods
# region response methods
# noinspection PyMethodMayBeStatic
def parse_response(
self, code: int, content_type: str, content: Any
) -> Tuple[
Union[None, ModelsChannelResponse], Union[None, HttpResponse, ResponseError]
]:
"""Parse the given response.
201: Created - ModelsChannelResponse (Created)
400: Bad Request - ResponseError (Bad Request)
401: Unauthorized - ResponseError (Unauthorized)
500: Internal Server Error - ResponseError (Internal Server Error)
---: HttpResponse (Undocumented Response)
---: HttpResponse (Unexpected Content-Type Error)
---: HttpResponse (Unhandled Error)
"""
pre_processed_response, error = self.pre_process_response(
code=code, content_type=content_type, content=content
)
if error is not None:
return None, None if error.is_no_content() else error
code, content_type, content = pre_processed_response
if code == 201:
return ModelsChannelResponse.create_from_dict(content), None
if code == 400:
return None, ResponseError.create_from_dict(content)
if code == 401:
return None, ResponseError.create_from_dict(content)
if code == 500:
return None, ResponseError.create_from_dict(content)
return self.handle_undocumented_response(
code=code, content_type=content_type, content=content
)
# endregion response methods
# region static methods
@classmethod
def create(
cls, body: ModelsPublicChannelRequest, namespace: str, user_id: str, **kwargs
) -> PublicCreateChannel:
instance = cls()
instance.body = body
instance.namespace = namespace
instance.user_id = user_id
return instance
@classmethod
def create_from_dict(
cls, dict_: dict, include_empty: bool = False
) -> PublicCreateChannel:
instance = cls()
if "body" in dict_ and dict_["body"] is not None:
instance.body = ModelsPublicChannelRequest.create_from_dict(
dict_["body"], include_empty=include_empty
)
elif include_empty:
instance.body = ModelsPublicChannelRequest()
if "namespace" in dict_ and dict_["namespace"] is not None:
instance.namespace = str(dict_["namespace"])
elif include_empty:
instance.namespace = ""
if "userId" in dict_ and dict_["userId"] is not None:
instance.user_id = str(dict_["userId"])
elif include_empty:
instance.user_id = ""
return instance
@staticmethod
def get_field_info() -> Dict[str, str]:
return {
"body": "body",
"namespace": "namespace",
"userId": "user_id",
}
@staticmethod
def get_required_map() -> Dict[str, bool]:
return {
"body": True,
"namespace": True,
"userId": True,
}
# endregion static methods
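# Hedged usage sketch (request-body fields are assumed, not taken from the
# generated model): construct the operation with the class factory and hand it
# to the SDK's request runner.
#
#   op = PublicCreateChannel.create(
#       body=my_channel_request,   # a ModelsPublicChannelRequest instance
#       namespace="game",
#       user_id="user-123",
#   )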
| [
"[email protected]"
] | |
81cdfcc0b542f443cf22c5b6650f06961619aecb | 4979df3343d7b99a9a826bd1cb946ae79fac260c | /tests/core/test_compiler.py | 4c31895384c81437789919e69b48a45af242ea1f | [
"BSD-3-Clause"
] | permissive | e-calder/enaml | 753ff329fb8a2192bddbe7166581ed530fb270be | 8f02a3c1a80c0a6930508551c7de1d345095173d | refs/heads/master | 2021-07-30T01:18:29.222672 | 2021-07-27T08:51:50 | 2021-07-27T08:51:50 | 206,089,494 | 0 | 0 | NOASSERTION | 2019-09-03T13:52:44 | 2019-09-03T13:52:44 | null | UTF-8 | Python | false | false | 1,467 | py | #------------------------------------------------------------------------------
# Copyright (c) 2020, Nucleic Development Team.
#
# Distributed under the terms of the Modified BSD License.
#
# The full license is in the file COPYING.txt, distributed with this software.
#------------------------------------------------------------------------------
import traceback as tb
from textwrap import dedent
import pytest
from utils import compile_source
def test_validate_declarative_1():
""" Test that we reject children that are not type in enamldef.
This also serves to test the good working of try_squash_raise.
"""
source = dedent("""\
from enaml.widgets.api import *
a = 1
enamldef Main(Window):
a:
pass
""")
with pytest.raises(TypeError) as exc:
Main = compile_source(source, 'Main')
ftb = "\n".join(tb.format_tb(exc.tb))
assert " validate_declarative" not in ftb
def test_validate_declarative_2():
""" Test that we reject children that are not declarative in enamldef.
    This also serves to verify that try_squash_raise works correctly.
"""
source = dedent("""\
from enaml.widgets.api import *
class A:
pass
enamldef Main(Window):
A:
pass
""")
with pytest.raises(TypeError) as exc:
Main = compile_source(source, 'Main')
ftb = "\n".join(tb.format_tb(exc.tb))
assert " validate_declarative" not in ftb
| [
"[email protected]"
] | |
4aae37809f53da42d03a976e5a6283a33bae61c8 | 334d0a4652c44d0c313e11b6dcf8fb89829c6dbe | /checkov/terraform/checks/resource/azure/AzureDefenderOnContainerRegistry.py | 4a4d616fcf8b8cf44c547f290b395ec7d758874c | [
"Apache-2.0"
] | permissive | schosterbarak/checkov | 4131e03b88ae91d82b2fa211f17e370a6f881157 | ea6d697de4de2083c8f6a7aa9ceceffd6b621b58 | refs/heads/master | 2022-05-22T18:12:40.994315 | 2022-04-28T07:44:05 | 2022-04-28T07:59:17 | 233,451,426 | 0 | 0 | Apache-2.0 | 2020-03-23T12:12:23 | 2020-01-12T20:07:15 | Python | UTF-8 | Python | false | false | 994 | py | from checkov.common.models.enums import CheckCategories, CheckResult
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from typing import List
class AzureDefenderOnContainerRegistry(BaseResourceCheck):
def __init__(self):
name = "Ensure that Azure Defender is set to On for Container Registries"
id = "CKV_AZURE_86"
supported_resources = ['azurerm_security_center_subscription_pricing']
categories = [CheckCategories.GENERAL_SECURITY]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
def scan_resource_conf(self, conf):
return CheckResult.PASSED if conf.get('resource_type', [None])[0] != 'ContainerRegistry' \
or conf.get('tier', [None])[0] == 'Standard' else CheckResult.FAILED
def get_evaluated_keys(self) -> List[str]:
return ['resource_type', 'tier']
check = AzureDefenderOnContainerRegistry()
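# Hedged illustration (hand-built conf dicts mirroring checkov's parsed HCL
# shape, as assumed here): a Standard-tier ContainerRegistry pricing resource
# passes the check, while a Free tier fails.
#
#   passing = {"resource_type": ["ContainerRegistry"], "tier": ["Standard"]}
#   failing = {"resource_type": ["ContainerRegistry"], "tier": ["Free"]}
#   assert check.scan_resource_conf(passing) == CheckResult.PASSED
#   assert check.scan_resource_conf(failing) == CheckResult.FAILED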
| [
"[email protected]"
] | |
54a8323ed3240105110fca78f0cb928d7777c030 | 2234300b2316bc0e7b9fcc28de567c62c98a55b5 | /setup.py | b91c43d560eae77a6cfb609534438a62739b253a | [
"MIT"
] | permissive | PierrePaul/ABToast | 9f96249c1f3987421c1a68c81af084bd1a914d85 | edf65f0e86aace18a33a624c13c8ce936c5940eb | refs/heads/master | 2020-07-21T03:08:14.356334 | 2016-11-14T19:00:44 | 2016-11-14T19:00:44 | 73,740,655 | 0 | 0 | null | 2016-11-14T19:41:40 | 2016-11-14T19:41:40 | null | UTF-8 | Python | false | false | 468 | py | # _*_ coding: utf-8 _*_
from distutils.core import setup
from setuptools import find_packages
setup(
name='django-abtoast',
version='1.0.3',
author='Hiten Sharma',
author_email='[email protected]',
packages=find_packages(),
url='https://github.com/htadg/ABToast',
license='MIT License',
description='ABToast is an A/B Testing app that is developed in django.',
long_description=open('README.md').read(),
zip_safe=False,
)
| [
"[email protected]"
] | |
3e73f0b579944c014d3fa2bc2a0b32c898ef649a | e6dab5aa1754ff13755a1f74a28a201681ab7e1c | /.parts/lib/django-1.2/django/contrib/gis/db/backends/postgis/adapter.py | a262c444b710587602c0dbc34d4b43ead0fc3cac | [] | no_license | ronkagan/Euler_1 | 67679203a9510147320f7c6513eefd391630703e | 022633cc298475c4f3fd0c6e2bde4f4728713995 | refs/heads/master | 2021-01-06T20:45:52.901025 | 2014-09-06T22:34:16 | 2014-09-06T22:34:16 | 23,744,842 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 115 | py | /home/action/.parts/packages/googleappengine/1.9.4/lib/django-1.2/django/contrib/gis/db/backends/postgis/adapter.py | [
"[email protected]"
] | |
894cf0ded218d4c3ec26f8fe45a2c3b61bbc23e9 | 27cd4886e5d08cca23bf36e24339ff1155b7db10 | /generators/splash/BagModules/adc_sar_templates_fdsoi/capdac_7b.py | ff152c65df1d3c87f87af13dd87f33d543d0924c | [
"BSD-3-Clause",
"BSD-2-Clause"
] | permissive | ucb-art/laygo | 8539accac6e9888122e8e0afd160d294ffb56bfc | 8f62ec1971480cb27cb592421fd97f590379cff9 | refs/heads/master | 2021-01-11T08:49:24.306674 | 2020-06-18T15:01:50 | 2020-06-18T15:01:50 | 194,750,788 | 24 | 9 | null | null | null | null | UTF-8 | Python | false | false | 3,759 | py | # -*- coding: utf-8 -*-
########################################################################################################################
#
# Copyright (c) 2014, Regents of the University of California
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are permitted provided that the
# following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following
# disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the
# following disclaimer in the documentation and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
########################################################################################################################
import os
import pkg_resources
from bag.design import Module
yaml_file = pkg_resources.resource_filename(__name__, os.path.join('netlist_info', 'capdac_7b.yaml'))
class adc_sar_templates__capdac_7b(Module):
"""Module for library adc_sar_templates cell capdac_7b.
Fill in high level description here.
"""
def __init__(self, bag_config, parent=None, prj=None, **kwargs):
Module.__init__(self, bag_config, yaml_file, parent=parent, prj=prj, **kwargs)
def design(self):
"""To be overridden by subclasses to design this module.
This method should fill in values for all parameters in
self.parameters. To design instances of this module, you can
call their design() method or any other ways you coded.
To modify schematic structure, call:
rename_pin()
delete_instance()
replace_instance_master()
reconnect_instance_terminal()
restore_instance()
array_instance()
"""
pass
def get_layout_params(self, **kwargs):
"""Returns a dictionary with layout parameters.
        This method computes the layout parameters used to generate the implementation's
layout. Subclasses should override this method if you need to run post-extraction
layout.
Parameters
----------
kwargs :
any extra parameters you need to generate the layout parameters dictionary.
Usually you specify layout-specific parameters here, like metal layers of
input/output, customizable wire sizes, and so on.
Returns
-------
params : dict[str, any]
the layout parameters dictionary.
"""
return {}
def get_layout_pin_mapping(self):
"""Returns the layout pin mapping dictionary.
This method returns a dictionary used to rename the layout pins, in case they are different
than the schematic pins.
Returns
-------
pin_mapping : dict[str, str]
a dictionary from layout pin names to schematic pin names.
"""
return {} | [
"[email protected]"
] | |
b30ddcdc45696a58718f254483b7b0596091e699 | 334bb5c9d948287d8746e81f0438ac5f3ef4c7c8 | /examples/full-screen/simple-demos/colorcolumn.py | b81bcabe0e43a48b686372e51b64a9271c64eedf | [
"BSD-3-Clause",
"Python-2.0"
] | permissive | davidtavarez/python-prompt-toolkit | fc6629694cfdaa227c5c908f7cdb0b73b9eedd1a | ceeed2bb4cb8467cefc112987121b3afd37d773a | refs/heads/master | 2020-04-02T23:39:50.099817 | 2018-10-25T20:50:02 | 2018-10-25T20:50:02 | 154,874,871 | 1 | 0 | BSD-3-Clause | 2018-10-26T18:08:46 | 2018-10-26T18:08:46 | null | UTF-8 | Python | false | false | 1,906 | py | #!/usr/bin/env python
"""
Colorcolumn example.
"""
from __future__ import unicode_literals
from prompt_toolkit.application import Application
from prompt_toolkit.buffer import Buffer
from prompt_toolkit.key_binding import KeyBindings
from prompt_toolkit.layout.containers import HSplit, Window, ColorColumn
from prompt_toolkit.layout.controls import FormattedTextControl, BufferControl
from prompt_toolkit.layout.layout import Layout
LIPSUM = """
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Maecenas
quis interdum enim. Nam viverra, mauris et blandit malesuada, ante est bibendum
mauris, ac dignissim dui tellus quis ligula. Aenean condimentum leo at
dignissim placerat. In vel dictum ex, vulputate accumsan mi. Donec ut quam
placerat massa tempor elementum. Sed tristique mauris ac suscipit euismod. Ut
tempus vehicula augue non venenatis. Mauris aliquam velit turpis, nec congue
risus aliquam sit amet. Pellentesque blandit scelerisque felis, faucibus
consequat ante. Curabitur tempor tortor a imperdiet tincidunt. Nam sed justo
sit amet odio bibendum congue. Quisque varius ligula nec ligula gravida, sed
convallis augue faucibus. Nunc ornare pharetra bibendum. Praesent blandit ex
quis sodales maximus."""
# Create text buffers.
buff = Buffer()
buff.text = LIPSUM
# 1. The layout
color_columns = [
ColorColumn(50),
ColorColumn(80, style='bg:#ff0000'),
ColorColumn(10, style='bg:#ff0000'),
]
body = HSplit([
Window(FormattedTextControl('Press "q" to quit.'), height=1, style='reverse'),
Window(BufferControl(buffer=buff), colorcolumns=color_columns),
])
# 2. Key bindings
kb = KeyBindings()
@kb.add('q')
def _(event):
" Quit application. "
event.app.exit()
# 3. The `Application`
application = Application(
layout=Layout(body),
key_bindings=kb,
full_screen=True)
def run():
application.run()
if __name__ == '__main__':
run()
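# Hedged variant (same prompt_toolkit API as above; positions assumed to be
# 0-based column indices): an extra guide line at column 79 could be added
# before building the Window, e.g.
#
#   color_columns.append(ColorColumn(79, style='bg:#444444'))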
| [
"[email protected]"
] | |
aa7a716385760f45fec52d6192485a83cb9b1531 | f9369134d8d12e4b542e5529d4283abbda76c07e | /BSTconstruction.py | d890ae7ee8b26bdb2eaca0fc823aa50b9a8cac78 | [] | no_license | yash921/leetcode-questions | 10fdaae874075ddb20331ccbf39dd82d10a9bb11 | f3fa3d3b5843a21bb86f91711105ae2751373e9c | refs/heads/main | 2023-03-27T01:28:13.986142 | 2021-03-20T06:23:40 | 2021-03-20T06:23:40 | 349,645,174 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,017 | py | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sun Sep 6 18:50:47 2020
@author: yash
"""
class BST:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
# Average: O(log(n)) time | O(1) space
# Worst: O(n) time | O(1) space
def insert(self, value):
currentNode = self
while True:
if value < currentNode.value:
if currentNode.left is None:
currentNode.left = BST(value)
break
else:
currentNode = currentNode.left
else:
if currentNode.right is None:
currentNode.right = BST(value)
break
else:
currentNode = currentNode.right
return self
# Average: O(log(n)) time | O(1) space
# Worst: O(n) time | O(1) space
def contains(self, value):
currentNode = self
while currentNode is not None:
if value < currentNode.value:
currentNode = currentNode.left
elif value > currentNode.value:
currentNode = currentNode.right
else:
return True
return False
def findClosestValue(tree, target):
return findClosestValueInBstHelper(tree, target, float("inf"))
def findClosestValueInBstHelper(tree, target, closest):
if tree is None:
return closest
if abs(target - closest) > abs(target - tree.value):
closest = tree.value
if target < tree.value:
return findClosestValueInBstHelper(tree.left, target, closest)
elif target > tree.value:
return findClosestValueInBstHelper(tree.right, target, closest)
else:
return closest
myTree = BST(1)
myTree.left = BST(2)   # note: wiring children by hand like this breaks the BST invariant (2 > 1 on the left)
myTree.right = BST(3)
# myTree.insert(2)
# myTree.insert(3)
print(findClosestValue(myTree,4))
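# Hedged alternative (same BST API as above): build the tree with insert()
# instead of wiring children by hand, then query it.
#
#   bst = BST(10)
#   for v in (5, 15, 13, 22):
#       bst.insert(v)
#   print(bst.contains(13))           # True
#   print(findClosestValue(bst, 12))  # 13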
| [
"[email protected]"
] | |
6c99e2dfd5e7bbd0c8e994553a6fa3f87d091723 | 9b6f65a28af4c6befdd015d1416d6257138c0219 | /alpha/event/migrations/0028_auto__add_featuredevent.py | 39908750c0b7119ca35eee4be62b82574ac96f0e | [] | no_license | dany431/cityfusion | 5beec53131898e539a892249fa711fb3086fb53c | 4e67464db69cfa21c965e4eb8796a5c727d5a443 | refs/heads/master | 2016-08-11T13:20:17.966539 | 2016-01-13T11:17:12 | 2016-01-13T11:17:12 | 49,567,611 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 15,642 | py | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'FeaturedEvent'
db.create_table(u'event_featuredevent', (
(u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('event', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['event.Event'])),
('start_time', self.gf('django.db.models.fields.DateTimeField')()),
('end_time', self.gf('django.db.models.fields.DateTimeField')()),
('views', self.gf('django.db.models.fields.IntegerField')(default=0)),
('clicks', self.gf('django.db.models.fields.IntegerField')(default=0)),
('cost_currency', self.gf('djmoney.models.fields.CurrencyField')(default='CAD', max_length=3)),
('cost', self.gf('djmoney.models.fields.MoneyField')(default='0.0', max_digits=10, decimal_places=2, default_currency='CAD')),
))
db.send_create_signal(u'event', ['FeaturedEvent'])
def backwards(self, orm):
# Deleting model 'FeaturedEvent'
db.delete_table(u'event_featuredevent')
models = {
u'auth.group': {
'Meta': {'object_name': 'Group'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
u'auth.permission': {
'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
u'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
u'cities.city': {
'Meta': {'object_name': 'City'},
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Country']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.contrib.gis.db.models.fields.PointField', [], {}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'name_std': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'population': ('django.db.models.fields.IntegerField', [], {}),
'region': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Region']", 'null': 'True', 'blank': 'True'}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'subregion': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Subregion']", 'null': 'True', 'blank': 'True'})
},
u'cities.country': {
'Meta': {'ordering': "['name']", 'object_name': 'Country'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '2', 'db_index': 'True'}),
'continent': ('django.db.models.fields.CharField', [], {'max_length': '2'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'population': ('django.db.models.fields.IntegerField', [], {}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'tld': ('django.db.models.fields.CharField', [], {'max_length': '5'})
},
u'cities.region': {
'Meta': {'object_name': 'Region'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Country']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'name_std': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
u'cities.subregion': {
'Meta': {'object_name': 'Subregion'},
'code': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Country']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'name_std': ('django.db.models.fields.CharField', [], {'max_length': '200', 'db_index': 'True'}),
'region': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Region']"}),
'slug': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
u'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'event.auditevent': {
'Meta': {'object_name': 'AuditEvent', '_ormbases': [u'event.Event']},
u'event_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['event.Event']", 'unique': 'True', 'primary_key': 'True'}),
'phrases': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['event.AuditPhrase']", 'symmetrical': 'False'})
},
u'event.auditphrase': {
'Meta': {'object_name': 'AuditPhrase'},
'active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'phrase': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
u'event.auditsingleevent': {
'Meta': {'object_name': 'AuditSingleEvent'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'phrases': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['event.AuditPhrase']", 'symmetrical': 'False'})
},
u'event.canadianvenue': {
'Meta': {'object_name': 'CanadianVenue', '_ormbases': [u'event.Venue']},
'postal_code': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'province': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
u'venue_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': u"orm['event.Venue']", 'unique': 'True', 'primary_key': 'True'})
},
u'event.event': {
'Meta': {'object_name': 'Event'},
'audited': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'authentication_key': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2013, 5, 19, 0, 0)', 'auto_now_add': 'True', 'blank': 'True'}),
'cropping': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'email': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'featured': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'featured_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.contrib.gis.db.models.fields.PointField', [], {}),
'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2013, 5, 19, 0, 0)', 'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '250'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']", 'null': 'True', 'blank': 'True'}),
'picture': ('django.db.models.fields.files.ImageField', [], {'max_length': '100', 'null': 'True', 'blank': 'True'}),
'price': ('django.db.models.fields.CharField', [], {'default': "'Free'", 'max_length': '40', 'blank': 'True'}),
'search_index': ('djorm_pgfulltext.fields.VectorField', [], {'default': "''", 'null': 'True', 'db_index': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '255'}),
'tickets': ('django.db.models.fields.CharField', [], {'max_length': '250', 'null': 'True', 'blank': 'True'}),
'venue': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['event.Venue']", 'null': 'True', 'blank': 'True'}),
'viewed_times': ('django.db.models.fields.IntegerField', [], {'default': '0', 'null': 'True', 'blank': 'True'}),
'website': ('django.db.models.fields.URLField', [], {'default': "''", 'max_length': '200', 'null': 'True', 'blank': 'True'})
},
u'event.fakeauditevent': {
'Meta': {'object_name': 'FakeAuditEvent', 'db_table': "u'event_auditevent'", 'managed': 'False'},
'event_ptr_id': ('django.db.models.fields.PositiveIntegerField', [], {'primary_key': 'True', 'db_column': "'event_ptr_id'"})
},
u'event.featuredevent': {
'Meta': {'object_name': 'FeaturedEvent'},
'clicks': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'cost': ('djmoney.models.fields.MoneyField', [], {'default': "'0.0'", 'max_digits': '10', 'decimal_places': '2', 'default_currency': "'CAD'"}),
'cost_currency': ('djmoney.models.fields.CurrencyField', [], {'default': "'CAD'", 'max_length': '3'}),
'end_time': ('django.db.models.fields.DateTimeField', [], {}),
'event': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['event.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'start_time': ('django.db.models.fields.DateTimeField', [], {}),
'views': ('django.db.models.fields.IntegerField', [], {'default': '0'})
},
u'event.reminder': {
'Meta': {'object_name': 'Reminder'},
'date': ('django.db.models.fields.DateTimeField', [], {}),
'email': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'event': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
u'event.singleevent': {
'Meta': {'object_name': 'SingleEvent'},
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'end_time': ('django.db.models.fields.DateTimeField', [], {}),
'event': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'single_events'", 'to': u"orm['event.Event']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'search_index': ('djorm_pgfulltext.fields.VectorField', [], {'default': "''", 'null': 'True', 'db_index': 'True'}),
'start_time': ('django.db.models.fields.DateTimeField', [], {})
},
u'event.venue': {
'Meta': {'object_name': 'Venue'},
'city': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.City']"}),
'country': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['cities.Country']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'location': ('django.contrib.gis.db.models.fields.PointField', [], {}),
'name': ('django.db.models.fields.CharField', [], {'default': "'Default Venue'", 'max_length': '250'}),
'street': ('django.db.models.fields.CharField', [], {'max_length': '250', 'blank': 'True'})
},
u'taggit.tag': {
'Meta': {'object_name': 'Tag'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '100'})
},
u'taggit.taggeditem': {
'Meta': {'object_name': 'TaggedItem'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_tagged_items'", 'to': u"orm['contenttypes.ContentType']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "u'taggit_taggeditem_items'", 'to': u"orm['taggit.Tag']"})
}
}
complete_apps = ['event'] | [
"[email protected]"
] | |
25e6bd086f9b240c5225f9b752696fa71aab17ae | 5a281cb78335e06c631181720546f6876005d4e5 | /cloudkitty-9.0.0/cloudkitty/rating/noop.py | bcd23ab7f2e824e0c756b0c453b0e82d1fcdd194 | [
"Apache-2.0"
] | permissive | scottwedge/OpenStack-Stein | d25b2a5bb54a714fc23f0ff0c11fb1fdacad85e8 | 7077d1f602031dace92916f14e36b124f474de15 | refs/heads/master | 2021-03-22T16:07:19.561504 | 2020-03-15T01:31:10 | 2020-03-15T01:31:10 | 247,380,811 | 0 | 0 | Apache-2.0 | 2020-03-15T01:24:15 | 2020-03-15T01:24:15 | null | UTF-8 | Python | false | false | 1,398 | py | # -*- coding: utf-8 -*-
# Copyright 2014 Objectif Libre
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# @author: Stéphane Albert
#
import decimal
from cloudkitty import rating
class Noop(rating.RatingProcessorBase):
module_name = "noop"
description = 'Dummy test module.'
@property
def enabled(self):
"""Check if the module is enabled
:returns: bool if module is enabled
"""
return True
@property
def priority(self):
return 1
def reload_config(self):
pass
def process(self, data):
for cur_data in data:
cur_usage = cur_data['usage']
for service in cur_usage:
for entry in cur_usage[service]:
if 'rating' not in entry:
entry['rating'] = {'price': decimal.Decimal(0)}
return data
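# Illustration (an assumption for this sketch, not part of the module): the
# collector payload is expected to look roughly like the structure below, so
# any usage entry missing a 'rating' key gets a zero price attached:
#
#   data = [{'usage': {'compute': [{'vol': {'qty': 1}}]}}]
#   data = Noop().process(data)
#   # data[0]['usage']['compute'][0]['rating'] == {'price': decimal.Decimal(0)}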
| [
"Wayne [email protected]"
] | Wayne [email protected] |
7acbd3f06c296e7b9c504e668609619e1dc8f852 | 1aed14713ddc1a3cea120cb4c8d0d9f79cf62a77 | /test_classification.py | 2ecaf428745589ce698cc0d20d2a4930f9687c77 | [
"MIT"
] | permissive | dazzag24/ResumeParser | 16aa577548f6b300eab758ffb94aacce920b9566 | 06a105f587dd20dd47a9ac81c37e5f4b47d83d9f | refs/heads/master | 2020-04-12T06:55:17.931441 | 2018-12-19T14:21:10 | 2018-12-19T14:21:10 | 162,351,604 | 0 | 0 | MIT | 2018-12-18T22:17:24 | 2018-12-18T22:17:24 | null | UTF-8 | Python | false | false | 7,218 | py | # from glob import glob
# import os
# import pandas as pd
# import matplotlib.pyplot as plt
# import numpy as np
# import json
# from collections import defaultdict
# base_json = 'dataset/resume_dataset.json'
# def pop_annot(raw_line):
# in_line = defaultdict(list, **raw_line)
# if 'annotation' in in_line:
# labels = in_line['annotation']
# for c_lab in labels:
# if len(c_lab['label'])>0:
# in_line[c_lab['label'][0]] += c_lab['points']
# return in_line
# with open(base_json, 'r') as f:
# # data is jsonl and so we parse it line-by-line
# resume_data = [json.loads(f_line) for f_line in f.readlines()]
# resume_df = pd.DataFrame([pop_annot(line) for line in resume_data])
# resume_df['length'] = resume_df['content'].map(len)
# # resume_df['length'].hist()
# # print(resume_df.sample(3))
# def extract_higlights(raw_line):
# in_line = defaultdict(list, **raw_line)
# if 'annotation' in in_line:
# labels = in_line['annotation']
# for c_lab in labels:
# if len(c_lab['label'])>0:
# in_line['highlight'] += [dict(category = c_lab['label'][0], **cpts) for cpts in c_lab['points']]
# return in_line
# resume_hl_df = pd.DataFrame([extract_higlights(line) for line in resume_data])
# resume_hl_df['length'] = resume_hl_df['content'].map(len)
# # resume_hl_df['length'].hist()
# # resume_hl_df.sample(3)
# from string import ascii_lowercase, digits
# valid_chars = ascii_lowercase+digits+'@., '
# focus_col = 'highlight'
# focus_df = resume_hl_df[['content', focus_col, 'length']].copy().dropna()
# # clean up the text but maintain the length
# focus_df['kosher_content'] = resume_df['content'].str.lower().map(lambda c_text: ''.join([c if c in valid_chars else ' ' for c in c_text]))
# # print(focus_col, 'with', focus_df.shape[0], 'complete results')
# # print('First result')
# for _, c_row in focus_df.query('length<2000').sample(1, random_state = 20).iterrows():
# # print(len(c_row['content']))
# for yoe in c_row[focus_col]:
# s,e = yoe['start'], yoe['end']
# print(yoe)
# # print(c_row['content'][s:e+1])
############################################ NOTE ########################################################
#
# Creates NER training data in Spacy format from JSON downloaded from Dataturks.
#
# Outputs the Spacy training data which can be used for Spacy training.
#
############################################################################################################
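#
# A sketch of the expected input, inferred from the parser below (not from
# official Dataturks documentation): each line of the JSON file is one
# document, for example:
#
#   {"content": "John Smith, Python developer ...",
#    "annotation": [{"label": ["Name"],
#                    "points": [{"start": 0, "end": 9, "text": "John Smith"}]}]}
#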
import json
import random
import logging
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_fscore_support
from spacy.gold import GoldParse
from spacy.scorer import Scorer
from sklearn.metrics import accuracy_score
def convert_dataturks_to_spacy(dataturks_JSON_FilePath):
try:
training_data = []
lines=[]
with open(dataturks_JSON_FilePath, 'r') as f:
lines = f.readlines()
for line in lines:
data = json.loads(line)
text = data['content']
entities = []
for annotation in data['annotation']:
#only a single point in text annotation.
point = annotation['points'][0]
labels = annotation['label']
# handle both list of labels or a single label.
if not isinstance(labels, list):
labels = [labels]
for label in labels:
#dataturks indices are both inclusive [start, end] but spacy is not [start, end)
entities.append((point['start'], point['end'] + 1 ,label))
training_data.append((text, {"entities" : entities}))
return training_data
except Exception as e:
logging.exception("Unable to process " + dataturks_JSON_FilePath + "\n" + "error = " + str(e))
return None
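# Each resulting TRAIN_DATA entry follows spaCy's training format, e.g. an
# illustrative sketch: ("John Smith, Python developer ...",
# {"entities": [(0, 10, "Name")]}) -- the end offset is exclusive for spaCy.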
import spacy
################### Train Spacy NER.###########
def train_spacy():
TRAIN_DATA = convert_dataturks_to_spacy("dataset/resume_dataset.json")
nlp = spacy.blank('en') # create blank Language class
# create the built-in pipeline components and add them to the pipeline
# nlp.create_pipe works for built-ins that are registered with spaCy
if 'ner' not in nlp.pipe_names:
ner = nlp.create_pipe('ner')
nlp.add_pipe(ner, last=True)
# add labels
for _, annotations in TRAIN_DATA:
for ent in annotations.get('entities'):
ner.add_label(ent[2])
# get names of other pipes to disable them during training
other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'ner']
with nlp.disable_pipes(*other_pipes): # only train NER
optimizer = nlp.begin_training()
for itn in range(10):
print("Statring iteration " + str(itn))
random.shuffle(TRAIN_DATA)
losses = {}
for text, annotations in TRAIN_DATA:
nlp.update(
[text], # batch of texts
[annotations], # batch of annotations
drop=0.2, # dropout - make it harder to memorise data
sgd=optimizer, # callable to update weights
losses=losses)
print(losses)
#test the model and evaluate it
examples = convert_dataturks_to_spacy("dataset/resume_dataset_test.json")
tp=0
tr=0
tf=0
ta=0
c=0
for text,annot in examples:
f=open("resume"+str(c)+".txt","w")
doc_to_test=nlp(text)
d={}
for ent in doc_to_test.ents:
d[ent.label_]=[]
for ent in doc_to_test.ents:
d[ent.label_].append(ent.text)
for i in set(d.keys()):
f.write("\n\n")
f.write(i +":"+"\n")
for j in set(d[i]):
f.write(j.replace('\n','')+"\n")
        f.close()
        d={}
for ent in doc_to_test.ents:
d[ent.label_]=[0,0,0,0,0,0]
for ent in doc_to_test.ents:
doc_gold_text= nlp.make_doc(text)
gold = GoldParse(doc_gold_text, entities=annot.get("entities"))
y_true = [ent.label_ if ent.label_ in x else 'Not '+ent.label_ for x in gold.ner]
y_pred = [x.ent_type_ if x.ent_type_ ==ent.label_ else 'Not '+ent.label_ for x in doc_to_test]
if(d[ent.label_][0]==0):
#f.write("For Entity "+ent.label_+"\n")
#f.write(classification_report(y_true, y_pred)+"\n")
(p,r,f,s)= precision_recall_fscore_support(y_true,y_pred,average='weighted')
a=accuracy_score(y_true,y_pred)
d[ent.label_][0]=1
d[ent.label_][1]+=p
d[ent.label_][2]+=r
d[ent.label_][3]+=f
d[ent.label_][4]+=a
d[ent.label_][5]+=1
c+=1
for i in d:
print("\n For Entity "+i+"\n")
print("Accuracy : "+str((d[i][4]/d[i][5])*100)+"%")
print("Precision : "+str(d[i][1]/d[i][5]))
print("Recall : "+str(d[i][2]/d[i][5]))
print("F-score : "+str(d[i][3]/d[i][5]))
train_spacy() | [
"[email protected]"
] | |
d3551bbf0e9586456be42b46aba252c03480d773 | 39b0d9c6df77671f540c619aff170441f953202a | /PYTHON LIBRARY/SUB_3/linecache_getline.py | a046ab4287da9bd8bc46a9515c5c9027ab555e45 | [] | no_license | yeboahd24/Python201 | e7d65333f343d9978efff6bf86ce0447d3a40d70 | 484e66a52d4e706b8478473347732e23998c93c5 | refs/heads/main | 2023-02-06T10:24:25.429718 | 2020-12-26T01:08:04 | 2020-12-26T01:08:04 | 306,487,550 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 326 | py | import linecache
from linecache_data import *
filename = make_tempfile()
# Pick out the same line from source and cache.
# (Notice that linecache counts from 1.)
print('SOURCE:')
print('{!r}'.format(lorem.split('\n')[4]))
print()
print('CACHE:')
print('{!r}'.format(linecache.getline(filename, 5)))
cleanup(filename) | [
"[email protected]"
] | |
986ea4fb769b25b6b526c57c670df265c47eca64 | 7172ed9a83a2d3d9a61918bbb9db89a4641f862a | /tests/test_resource_list.py | 89ea147c8912fc597230c055b7c325d2c07f16f3 | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | giorgiobasile/resync | e7bb49661b32a7789248eabf3a640c327c37b343 | c4734648a6ca93e985164450b85f387a349adee2 | refs/heads/master | 2021-01-13T13:39:49.865944 | 2016-05-06T22:33:37 | 2016-05-06T22:33:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,643 | py | import unittest
try: #python2
# Must try this first as io also exists in python2
    # but there it is the wrong module!
import StringIO as io
except ImportError: #python3
import io
import re
from resync.resource import Resource
from resync.resource_list import ResourceList, ResourceListDupeError
from resync.sitemap import SitemapParseError
class TestResourceList(unittest.TestCase):
def test01_same(self):
src = ResourceList()
src.add( Resource('a',timestamp=1) )
src.add( Resource('b',timestamp=2) )
dst = ResourceList()
dst.add( Resource('a',timestamp=1) )
dst.add( Resource('b',timestamp=2) )
( same, changed, deleted, added ) = dst.compare(src)
self.assertEqual( len(same), 2, "2 things unchanged" )
i = iter(same)
self.assertEqual( next(i).uri, 'a', "first was a" )
self.assertEqual( next(i).uri, 'b', "second was b" )
self.assertEqual( len(changed), 0, "nothing changed" )
self.assertEqual( len(deleted), 0, "nothing deleted" )
self.assertEqual( len(added), 0, "nothing added" )
def test02_changed(self):
src = ResourceList()
src.add( Resource('a',timestamp=1) )
src.add( Resource('b',timestamp=2) )
dst = ResourceList()
dst.add( Resource('a',timestamp=3) )
dst.add( Resource('b',timestamp=4) )
( same, changed, deleted, added ) = dst.compare(src)
self.assertEqual( len(same), 0, "0 things unchanged" )
self.assertEqual( len(changed), 2, "2 things changed" )
i = iter(changed)
self.assertEqual( next(i).uri, 'a', "first was a" )
self.assertEqual( next(i).uri, 'b', "second was b" )
self.assertEqual( len(deleted), 0, "nothing deleted" )
self.assertEqual( len(added), 0, "nothing added" )
def test03_deleted(self):
src = ResourceList()
src.add( Resource('a',timestamp=1) )
src.add( Resource('b',timestamp=2) )
dst = ResourceList()
dst.add( Resource('a',timestamp=1) )
dst.add( Resource('b',timestamp=2) )
dst.add( Resource('c',timestamp=3) )
dst.add( Resource('d',timestamp=4) )
( same, changed, deleted, added ) = dst.compare(src)
self.assertEqual( len(same), 2, "2 things unchanged" )
self.assertEqual( len(changed), 0, "nothing changed" )
self.assertEqual( len(deleted), 2, "c and d deleted" )
i = iter(deleted)
self.assertEqual( next(i).uri, 'c', "first was c" )
self.assertEqual( next(i).uri, 'd', "second was d" )
self.assertEqual( len(added), 0, "nothing added" )
def test04_added(self):
src = ResourceList()
src.add( Resource('a',timestamp=1) )
src.add( Resource('b',timestamp=2) )
src.add( Resource('c',timestamp=3) )
src.add( Resource('d',timestamp=4) )
dst = ResourceList()
dst.add( Resource('a',timestamp=1) )
dst.add( Resource('c',timestamp=3) )
( same, changed, deleted, added ) = dst.compare(src)
self.assertEqual( len(same), 2, "2 things unchanged" )
self.assertEqual( len(changed), 0, "nothing changed" )
self.assertEqual( len(deleted), 0, "nothing deleted" )
self.assertEqual( len(added), 2, "b and d added" )
i = iter(added)
self.assertEqual( next(i).uri, 'b', "first was b" )
self.assertEqual( next(i).uri, 'd', "second was d" )
def test05_add(self):
r1 = Resource(uri='a',length=1)
r2 = Resource(uri='b',length=2)
i = ResourceList()
i.add(r1)
self.assertRaises( ResourceListDupeError, i.add, r1)
i.add(r2)
self.assertRaises( ResourceListDupeError, i.add, r2)
# allow dupes
r1d = Resource(uri='a',length=10)
i.add(r1d,replace=True)
self.assertEqual( len(i), 2 )
self.assertEqual( i.resources['a'].length, 10 )
def test06_add_iterable(self):
r1 = Resource(uri='a',length=1)
r2 = Resource(uri='b',length=2)
i = ResourceList()
i.add( [r1,r2] )
self.assertRaises( ResourceListDupeError, i.add, r1)
self.assertRaises( ResourceListDupeError, i.add, r2)
# allow dupes
r1d = Resource(uri='a',length=10)
i.add( [r1d] ,replace=True)
self.assertEqual( len(i), 2 )
self.assertEqual( i.resources['a'].length, 10 )
def test07_has_md5(self):
r1 = Resource(uri='a')
r2 = Resource(uri='b')
i = ResourceList()
self.assertFalse( i.has_md5() )
i.add(r1)
i.add(r2)
self.assertFalse( i.has_md5() )
r1.md5="aabbcc"
self.assertTrue( i.has_md5() )
def test08_iter(self):
i = ResourceList()
i.add( Resource('a',timestamp=1) )
i.add( Resource('b',timestamp=2) )
i.add( Resource('c',timestamp=3) )
i.add( Resource('d',timestamp=4) )
resources=[]
for r in i:
resources.append(r)
self.assertEqual(len(resources), 4)
self.assertEqual( resources[0].uri, 'a')
self.assertEqual( resources[3].uri, 'd')
def test20_as_xml(self):
rl = ResourceList()
rl.add( Resource('a',timestamp=1) )
rl.add( Resource('b',timestamp=2) )
xml = rl.as_xml()
self.assertTrue( re.search(r'<rs:md .*capability="resourcelist"', xml), 'XML has capability' )
self.assertTrue( re.search(r'<url><loc>a</loc><lastmod>1970-01-01T00:00:01Z</lastmod></url>', xml), 'XML has resource a' )
def test30_parse(self):
xml='<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:rs="http://www.openarchives.org/rs/terms/">\
<rs:md at="2013-08-07" capability="resourcelist" completed="2013-08-08" />\
<url><loc>/tmp/rs_test/src/file_a</loc><lastmod>2012-03-14T18:37:36Z</lastmod><rs:md change="updated" length="12" /></url>\
<url><loc>/tmp/rs_test/src/file_b</loc><lastmod>2012-03-14T18:37:36Z</lastmod><rs:md length="32" /></url>\
</urlset>'
rl=ResourceList()
rl.parse(fh=io.StringIO(xml))
self.assertEqual( len(rl.resources), 2, 'got 2 resources')
self.assertEqual( rl.md['capability'], 'resourcelist', 'capability set' )
self.assertEqual( rl.md_at, '2013-08-07' )
self.assertEqual( rl.md_completed, '2013-08-08' )
def test31_parse_no_capability(self):
xml='<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\
<url><loc>http://example.com/res1</loc><lastmod>2012-03-14T18:37:36Z</lastmod></url>\
</urlset>'
rl=ResourceList()
rl.parse(fh=io.StringIO(xml))
self.assertEqual( len(rl.resources), 1, 'got 1 resource')
self.assertEqual( rl.md['capability'], 'resourcelist', 'capability set by reading routine' )
self.assertFalse( 'from' in rl.md )
def test32_parse_bad_capability(self):
# the <rs:md capability="bad_capability".. should give error
xml='<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n\
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:rs="http://www.openarchives.org/rs/terms/">\
<rs:md capability="bad_capability" from="2013-01-01"/>\
<url><loc>http://example.com/bad_res_1</loc><lastmod>2012-03-14T18:37:36Z</lastmod></url>\
</urlset>'
rl=ResourceList()
self.assertRaises( SitemapParseError, rl.parse, fh=io.StringIO(xml) )
if __name__ == '__main__':
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestResourceList)
unittest.TextTestRunner().run(suite)
| [
"[email protected]"
] | |
ba713802b829240da77702cbc404256e8622c024 | aa1972e6978d5f983c48578bdf3b51e311cb4396 | /mas_nitro-python-1.0/sample/system_version.py | f30d7ea2bffb3a8239bf8a2d6c1e2d8bf5d973eb | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference"
] | permissive | MayankTahil/nitro-ide | 3d7ddfd13ff6510d6709bdeaef37c187b9f22f38 | 50054929214a35a7bb19ed10c4905fffa37c3451 | refs/heads/master | 2020-12-03T02:27:03.672953 | 2017-07-05T18:09:09 | 2017-07-05T18:09:09 | 95,933,896 | 2 | 5 | null | 2017-07-05T16:51:29 | 2017-07-01T01:03:20 | HTML | UTF-8 | Python | false | false | 3,023 | py | #!/usr/bin/env python
'''
* Copyright (c) 2008-2015 Citrix Systems, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
'''
import sys
from massrc.com.citrix.mas.nitro.exception.nitro_exception import nitro_exception
from massrc.com.citrix.mas.nitro.resource.config.mps.mps import mps
from massrc.com.citrix.mas.nitro.service.nitro_service import nitro_service
class system_version :
    def __init__(self):
        # Connection attributes; populated from the command-line args in main().
        self.ip = ""
        self.username = ""
        self.password = ""
@staticmethod
def main(cls, args_):
if(len(args_) < 3):
print("Usage: run.bat <ip> <username> <password>")
return
config = system_version()
config.ip = args_[1]
config.username = args_[2]
config.password = args_[3]
try :
client = nitro_service(config.ip,"http","v1")
client.set_credential(config.username,config.password)
client.timeout = 1800
client.login()
config.run_sample(client)
client.logout()
except nitro_exception as e:
print("Exception::errorcode="+str(e.errorcode)+",message="+ e.message)
except Exception as e:
print("Exception::message="+str(e.args))
return
def run_sample(self, client) :
self.get_mps(client)
def get_mps(self,client) :
try:
result = mps()
simplelist = mps.get(client,result)
print "--------------"
print "Response Came :"
print "--------------"
for item in simplelist :
print "Product : "+ item.product+ " | Session Build : " +item.build_number
        except nitro_exception as e:
            print("--------------")
            print("Exception :")
            print("--------------")
            print("ErrorCode : " + str(e.errorcode))
            print("Message : " + e.message)
except Exception as e:
raise e
#
# Main thread of execution
#
if __name__ == '__main__':
try:
        print(len(sys.argv))
if len(sys.argv) < 3:
sys.exit()
else:
ipaddress=sys.argv[1]
username=sys.argv[2]
password=sys.argv[3]
system_version().main(system_version(),sys.argv)
except SystemExit:
print("Exception::Usage: Sample.py <directory path of Nitro.py> <nsip> <username> <password>")
| [
"[email protected]"
] | |
3e8bd385f8e17649c7021b1e65085eb5bb3cf686 | d41c15b9c68ab2ee70740044d25d620e6b90a09e | /app/mod_cmd/commands/login.py | c37dcd065386da1d5e8778f595de2e190332b205 | [
"Apache-2.0"
] | permissive | jaycode/Arthur.workspace | 9093b54cda983d2e8b6745b894403b5fa1282b56 | 7a581104141ee5f556e058b1276b4087a2921dfc | refs/heads/master | 2021-01-10T10:36:35.599700 | 2016-03-21T19:37:49 | 2016-03-21T19:37:49 | 55,436,635 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 198 | py | """Login
"""
def run(project=None, args=[], **kwargs):
    """(todo) Login
    login [username]
    Args:
        username: Do I need to explain this?
    """
    # Login logic is still a todo: return None as the follow-up instruction so
    # callers get a well-defined value instead of a NameError.
    instruction = None
    return [project, instruction]
"[email protected]"
] | |
b2db1c9425158d5ccb537f011f9110822c097969 | 184f13269249b08e5b62444ece10af8a3a35c9a5 | /migrate_create_parse_website_testing_front_apps_api_cypress/004_cypress_fov_suite_2e2/004_2_fov_html_js_python/web_scraping_python/GOOD_003_META_web_scraping_beautifulsoup.py | 865145dc48f2bca5ef5628c7996f32d174353be7 | [
"MIT"
] | permissive | bflaven/BlogArticlesExamples | 3decf588098897b104d429054b8e44efec796557 | aca40bb33f1ad4e140ddd67d6bb39bdd029ef266 | refs/heads/master | 2023-09-04T16:57:57.498673 | 2023-09-01T13:14:12 | 2023-09-01T13:14:12 | 42,390,873 | 9 | 4 | MIT | 2023-03-02T22:39:06 | 2015-09-13T09:48:34 | HTML | UTF-8 | Python | false | false | 23,422 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
[env]
# Conda Environment
# NO CONDA ENV
conda create --name po_launcher_e2e_cypress python=3.9.13
conda info --envs
source activate po_launcher_e2e_cypress
source activate parse_website
conda deactivate
# if needed to remove
conda env remove -n [NAME_OF_THE_CONDA_ENVIRONMENT]
# update conda
conda update -n base -c defaults conda
# to export requirements
pip freeze > po_launcher_e2e_cypress.txt
# to install
pip install -r po_launcher_e2e_cypress.txt
# update conda
conda update -n base -c defaults conda
[path]
cd /Users/brunoflaven/Documents/03_git/BlogArticlesExamples/migrate_create_parse_website_testing_front_apps_api_cypress/004_cypress_fov_suite_2e2/004_2_fov_html_js_python/web_scraping_python
[file]
python GOOD_003_META_web_scraping_beautifulsoup.py
"""
import time
from bs4 import BeautifulSoup
import requests
import sys
from datetime import datetime
print("\n--- 1. Grabing the site")
with requests.Session() as se:
se.headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.110 Safari/537.36",
"Accept-Encoding": "gzip, deflate",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"Accept-Language": "en"
}
# F24_EN
# urls = [
# 'https://www.france24.com/en/',
# 'https://www.france24.com/en/france/',
# 'https://www.france24.com/en/africa/',
# 'https://www.france24.com/en/middle-east/',
# 'https://www.france24.com/en/americas/',
# 'https://www.france24.com/en/europe/',
# 'https://www.france24.com/en/asia-pacific/',
# ]
# F24_ES
# urls = [
# 'https://www.france24.com/es/',
# 'https://www.france24.com/es/am%C3%A9rica-latina/',
# 'https://www.france24.com/es/eeuu-canad%C3%A1/',
# 'https://www.france24.com/es/europa/',
# 'https://www.france24.com/es/francia/',
# 'https://www.france24.com/es/asia-pac%C3%ADfico/',
# 'https://www.france24.com/es/medio-oriente/',
# 'https://www.france24.com/es/%C3%A1frica/',
# ]
# F24_AR
# urls = [
# 'https://www.france24.com/ar/',
# 'https://www.france24.com/ar/%D9%81%D8%B1%D9%86%D8%B3%D8%A7/',
# 'https://www.france24.com/ar/%D8%A7%D9%84%D9%85%D8%BA%D8%A7%D8%B1%D8%A8%D9%8A%D8%A9/',
# 'https://www.france24.com/ar/%D8%A7%D9%84%D8%B4%D8%B1%D9%82-%D8%A7%D9%84%D8%A3%D9%88%D8%B3%D8%B7/',
# 'https://www.france24.com/ar/%D8%A3%D9%81%D8%B1%D9%8A%D9%82%D9%8A%D8%A7/',
# 'https://www.france24.com/ar/%D8%A3%D9%88%D8%B1%D9%88%D8%A8%D8%A7/',
# 'https://www.france24.com/ar/%D8%A3%D9%85%D8%B1%D9%8A%D9%83%D8%A7/',
# 'https://www.france24.com/ar/%D8%A2%D8%B3%D9%8A%D8%A7/',
# ]
"""
['url', 'slug',''], // keep it empty
"""
# *** RFI ***
# RFI_EN
# urls = [
# 'https://www.rfi.fr/en/',
# 'https://www.rfi.fr/en/france/',
# 'https://www.rfi.fr/en/africa/',
# 'https://www.rfi.fr/en/international/',
# 'https://www.rfi.fr/en/science-technology/',
# 'https://www.rfi.fr/en/culture/',
# 'https://www.rfi.fr/en/sport/',
# ]
# *** START FORM HERE ***
# RFI_TW (DONE)
# urls = [
# 'https://www.rfi.fr/tw/',
# 'https://www.rfi.fr/tw/%E4%B8%AD%E5%9C%8B/',
# 'https://www.rfi.fr/tw/%E6%B3%95%E5%9C%8B/',
# 'https://www.rfi.fr/tw/%E6%B8%AF%E6%BE%B3%E5%8F%B0/',
# 'https://www.rfi.fr/tw/%E4%BA%9E%E6%B4%B2/',
# 'https://www.rfi.fr/tw/%E7%BE%8E%E6%B4%B2/',
# ]
# RFI_CN (DONE)
# urls = [
# 'https://www.rfi.fr/cn/',
# 'https://www.rfi.fr/cn/%E4%B8%AD%E5%9B%BD/',
# 'https://www.rfi.fr/cn/%E6%B3%95%E5%9B%BD/',
# 'https://www.rfi.fr/cn/%E6%B8%AF%E6%BE%B3%E5%8F%B0/',
# 'https://www.rfi.fr/cn/%E4%BA%9A%E6%B4%B2',
# 'https://www.rfi.fr/cn/%E7%BE%8E%E6%B4%B2',
# ]
# RFI_ES (DONE)
# urls = [
# 'https://www.rfi.fr/es/',
# 'https://www.rfi.fr/es/am%C3%A9ricas/',
# 'https://www.rfi.fr/es/francia/',
# 'https://www.rfi.fr/es/europa/',
# 'https://www.rfi.fr/es/medioambiente/',
# 'https://www.rfi.fr/es/salud/',
# 'https://www.rfi.fr/es/cultura/',
# 'https://www.rfi.fr/es/econom%C3%ADa/',
# 'https://www.rfi.fr/es/deportes/',
# ]
# RFI_FA
# urls = [
# 'https://www.rfi.fr/fa/',
# 'https://www.rfi.fr/fa/%D8%AC%D8%A7%D9%85%D8%B9%D9%87-%D9%88-%D9%85%D8%AD%DB%8C%D8%B7-%D8%B2%DB%8C%D8%B3%D8%AA/',
# 'https://www.rfi.fr/fa/%D9%81%D8%B1%D9%87%D9%86%DA%AF-%D9%88-%D8%B2%D9%86%D8%AF%DA%AF%DB%8C/',
# 'https://www.rfi.fr/fa/%D8%A7%DB%8C%D8%B1%D8%A7%D9%86/',
# 'https://www.rfi.fr/fa/%D8%A7%D9%81%D8%BA%D8%A7%D9%86%D8%B3%D8%AA%D8%A7%D9%86/',
# 'https://www.rfi.fr/fa/%D8%AC%D9%87%D8%A7%D9%86/',
# ]
# RFI_HA
urls = [
'https://www.rfi.fr/ha/',
'https://www.rfi.fr/ha/duniya/',
'https://www.rfi.fr/ha/afrika/',
'https://www.rfi.fr/ha/najeriya/',
'https://www.rfi.fr/ha/nijar/',
'https://www.rfi.fr/ha/wasanni/',
]
# RFI_KM
# urls = [
# 'https://www.rfi.fr/km/',
# 'https://www.rfi.fr/km/%E1%9E%80%E1%9E%98%E1%9F%92%E1%9E%96%E1%9E%BB%E1%9E%87%E1%9E%B6/',
# 'https://www.rfi.fr/km/%E1%9E%A2%E1%9E%B6%E1%9E%9F%E1%9F%8A%E1%9E%B8/',
# 'https://www.rfi.fr/km/%E1%9E%A2%E1%9E%BA%E1%9E%9A%E1%9F%89%E1%9E%BB%E1%9E%94/',
# 'https://www.rfi.fr/km/%E1%9E%A2%E1%9E%B6%E1%9E%98%E1%9F%81%E1%9E%9A%E1%9E%B7%E1%9E%80/',
# 'https://www.rfi.fr/km/%E1%9E%A2%E1%9E%B6%E1%9E%A0%E1%9F%92%E1%9E%9C%E1%9F%92%E1%9E%9A%E1%9E%B7%E1%9E%80/',
# 'https://www.rfi.fr/km/%E1%9E%8A%E1%9E%BE%E1%9E%98%E1%9E%94%E1%9E%BC%E1%9E%96%E1%9F%8C%E1%9E%B6/',
# ]
# RFI_SW
# urls = [
# 'https://www.rfi.fr/sw/',
# 'https://www.rfi.fr/sw/afrika/',
# 'https://www.rfi.fr/sw/eac/',
# 'https://www.rfi.fr/sw/siasa-uchumi/',
# 'https://www.rfi.fr/sw/makala/',
# 'https://www.rfi.fr/sw/michezo/',
# ]
# RFI_MA
# urls = [
# 'https://www.rfi.fr/ma/',
# 'https://www.rfi.fr/ma/farafinna/',
# 'https://www.rfi.fr/ma/duni%C9%B2a/',
# 'https://www.rfi.fr/ma/s%C9%94r%C9%94/',
# 'https://www.rfi.fr/ma/k%C9%9Bn%C9%9Bya/',
# 'https://www.rfi.fr/ma/lamini/',
# 'https://www.rfi.fr/ma/kalan/',
# 'https://www.rfi.fr/ma/musow/',
# 'https://www.rfi.fr/ma/sigida/',
# ]
# RFI_FF
# urls = [
# 'https://www.rfi.fr/ff/',
# 'https://www.rfi.fr/ff/afrik/',
# 'https://www.rfi.fr/ff/winndere/',
# 'https://www.rfi.fr/ff/faggudu/',
# 'https://www.rfi.fr/ff/cellal/',
# 'https://www.rfi.fr/ff/taariindi/',
# 'https://www.rfi.fr/ff/needi/',
# 'https://www.rfi.fr/ff/rew%C9%93e/',
# 'https://www.rfi.fr/ff/renndo/',
# ]
# RFI_PT
# urls = [
# 'https://www.rfi.fr/pt/',
# 'https://www.rfi.fr/pt/%C3%A1frica-lus%C3%B3fona/',
# 'https://www.rfi.fr/pt/fran%C3%A7a/',
# 'https://www.rfi.fr/pt/%C3%A1frica/',
# 'https://www.rfi.fr/pt/internacional/',
# ]
# RFI_BR
# urls = [
# 'https://www.rfi.fr/br/',
# 'https://www.rfi.fr/br/fran%C3%A7a/',
# 'https://www.rfi.fr/br/brasil/',
# 'https://www.rfi.fr/br/am%C3%A9ricas/',
# 'https://www.rfi.fr/br/mundo/',
# 'https://www.rfi.fr/br/cultura/',
# 'https://www.rfi.fr/br/ci%C3%AAncias/',
# ]
# RFI_RU
# urls = [
# 'https://www.rfi.fr/ru/',
# 'https://www.rfi.fr/ru/%D1%80%D0%BE%D1%81%D1%81%D0%B8%D1%8F/',
# 'https://www.rfi.fr/ru/%D1%84%D1%80%D0%B0%D0%BD%D1%86%D0%B8%D1%8F/',
# 'https://www.rfi.fr/ru/%D0%B5%D0%B2%D1%80%D0%BE%D0%BF%D0%B0/',
# 'https://www.rfi.fr/ru/%D0%BA%D1%83%D0%BB%D1%8C%D1%82%D1%83%D1%80a-%D1%81%D1%82%D0%B8%D0%BB%D1%8C-%D0%B6%D0%B8%D0%B7%D0%BD%D0%B8/',
# 'https://www.rfi.fr/ru/%D1%83%D0%BA%D1%80%D0%B0%D0%B8%D0%BD%D0%B0/',
# 'https://www.rfi.fr/ru/%D1%81%D0%BF%D0%BE%D1%80%D1%82/',
# ]
# RFI_UK
# urls = [
# 'https://www.rfi.fr/uk/',
# 'https://www.rfi.fr/uk/y%D0%BA%D1%80%D0%B0%D1%97%D0%BD%D0%B0/',
# 'https://www.rfi.fr/uk/%D1%94%D0%B2%D1%80%D0%BE%D0%BF%D0%B0/',
# 'https://www.rfi.fr/uk/%D0%BC%D1%96%D0%B6%D0%BD%D0%B0%D1%80%D0%BE%D0%B4%D0%BD%D1%96-%D0%BD%D0%BE%D0%B2%D0%B8%D0%BD%D0%B8/',
# 'https://www.rfi.fr/uk/%D1%83%D0%BA%D1%80%D0%B0%D1%97%D0%BD%D1%86%D1%96-%D0%B7%D0%B0-%D0%BA%D0%BE%D1%80%D0%B4%D0%BE%D0%BD%D0%BE%D0%BC/',
# ]
# RFI_VI
# urls = [
# 'https://www.rfi.fr/vi/',
# 'https://www.rfi.fr/vi/vi%E1%BB%87t-nam/',
# 'https://www.rfi.fr/vi/ch%C3%A2u-%C3%A1/',
# 'https://www.rfi.fr/vi/ph%C3%A1p/',
# 'https://www.rfi.fr/vi/qu%E1%BB%91c-t%E1%BA%BF/',
# 'https://www.rfi.fr/vi/ph%C3%A2n-t%C3%ADch/',
# 'https://www.rfi.fr/vi/%C4%91i%E1%BB%83m-b%C3%A1o/',
# ]
# RFI_FR
# urls = [
# 'https://www.rfi.fr/fr/',
# 'https://www.rfi.fr/fr/afrique/',
# 'https://www.rfi.fr/fr/am%C3%A9riques/',
# 'https://www.rfi.fr/fr/asie-pacifique/',
# 'https://www.rfi.fr/fr/europe/',
# 'https://www.rfi.fr/fr/moyen-orient/',
# 'https://www.rfi.fr/fr/france/',
# 'https://www.rfi.fr/fr/monde/',
# ]
# MCD_AR
# urls = [
# 'https://www.mc-doualiya.com/',
# 'https://www.mc-doualiya.com//%D8%A7%D9%84%D8%B4%D8%B1%D9%82-%D8%A7%D9%84%D8%A3%D9%88%D8%B3%D8%B7/',
# 'https://www.mc-doualiya.com//%D8%A7%D9%84%D9%85%D8%BA%D8%B1%D8%A8-%D8%A7%D9%84%D8%B9%D8%B1%D8%A8%D9%8A/',
# 'https://www.mc-doualiya.com//%D9%81%D8%B1%D9%86%D8%B3%D8%A7/',
# 'https://www.mc-doualiya.com//%D8%A3%D9%81%D8%B1%D9%8A%D9%82%D9%8A%D8%A7/',
# 'https://www.mc-doualiya.com//%D8%A3%D9%88%D8%B1%D9%88%D8%A8%D8%A7/',
# 'https://www.mc-doualiya.com//%D8%A3%D9%85%D8%B1%D9%8A%D9%83%D8%A7/',
# 'https://www.mc-doualiya.com//%D8%A2%D8%B3%D9%8A%D8%A7-%D8%A7%D9%84%D9%87%D8%A7%D8%AF%D8%A6/',
# ]
# *** FLE ***
# FLE_FR
# urls = [
# 'https://francaisfacile.rfi.fr/fr/',
# 'https://francaisfacile.rfi.fr/fr/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/fr/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/fr/exercices/',
# 'https://francaisfacile.rfi.fr/fr/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/fr/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/fr/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/fr/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/fr/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/fr/enseigner/',
# 'https://francaisfacile.rfi.fr/fr/exercices/a1/',
# 'https://francaisfacile.rfi.fr/fr/exercices/a2/',
# 'https://francaisfacile.rfi.fr/fr/exercices/b1/',
# 'https://francaisfacile.rfi.fr/fr/exercices/b2/',
# 'https://francaisfacile.rfi.fr/fr/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/fr/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
# FLE_ES
# urls = [
# 'https://francaisfacile.rfi.fr/es/',
# 'https://francaisfacile.rfi.fr/es/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/es/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/es/exercices/',
# 'https://francaisfacile.rfi.fr/es/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/es/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/es/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/es/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/es/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/es/enseigner/',
# 'https://francaisfacile.rfi.fr/es/exercices/a1/',
# 'https://francaisfacile.rfi.fr/es/exercices/a2/',
# 'https://francaisfacile.rfi.fr/es/exercices/b1/',
# 'https://francaisfacile.rfi.fr/es/exercices/b2/',
# 'https://francaisfacile.rfi.fr/es/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/es/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
#FLE_RU
# urls = [
# 'https://francaisfacile.rfi.fr/ru/',
# 'https://francaisfacile.rfi.fr/ru/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/ru/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/ru/exercices/',
# 'https://francaisfacile.rfi.fr/ru/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/ru/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/ru/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/ru/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/ru/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/ru/enseigner/',
# 'https://francaisfacile.rfi.fr/ru/exercices/a1/',
# 'https://francaisfacile.rfi.fr/ru/exercices/a2/',
# 'https://francaisfacile.rfi.fr/ru/exercices/b1/',
# 'https://francaisfacile.rfi.fr/ru/exercices/b2/',
# 'https://francaisfacile.rfi.fr/ru/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/ru/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
#FLE_AR
# urls = [
# 'https://francaisfacile.rfi.fr/ar/',
# 'https://francaisfacile.rfi.fr/ar/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/ar/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/ar/exercices/',
# 'https://francaisfacile.rfi.fr/ar/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/ar/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/ar/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/ar/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/ar/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/ar/enseigner/',
# 'https://francaisfacile.rfi.fr/ar/exercices/a1/',
# 'https://francaisfacile.rfi.fr/ar/exercices/a2/',
# 'https://francaisfacile.rfi.fr/ar/exercices/b1/',
# 'https://francaisfacile.rfi.fr/ar/exercices/b2/',
# 'https://francaisfacile.rfi.fr/ar/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/ar/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
#FLE_EN
# urls = [
# 'https://francaisfacile.rfi.fr/en/',
# 'https://francaisfacile.rfi.fr/en/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/en/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/en/exercices/',
# 'https://francaisfacile.rfi.fr/en/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/en/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/en/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/en/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/en/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/en/enseigner/',
# 'https://francaisfacile.rfi.fr/en/exercices/a1/',
# 'https://francaisfacile.rfi.fr/en/exercices/a2/',
# 'https://francaisfacile.rfi.fr/en/exercices/b1/',
# 'https://francaisfacile.rfi.fr/en/exercices/b2/',
# 'https://francaisfacile.rfi.fr/en/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/en/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
#FLE_BR
# urls = [
# 'https://francaisfacile.rfi.fr/br/',
# 'https://francaisfacile.rfi.fr/br/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/br/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/br/exercices/',
# 'https://francaisfacile.rfi.fr/br/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/br/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/br/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/br/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/br/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/br/enseigner/',
# 'https://francaisfacile.rfi.fr/br/exercices/a1/',
# 'https://francaisfacile.rfi.fr/br/exercices/a2/',
# 'https://francaisfacile.rfi.fr/br/exercices/b1/',
# 'https://francaisfacile.rfi.fr/br/exercices/b2/',
# 'https://francaisfacile.rfi.fr/br/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/br/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
# FLE_CN
# urls = [
# 'https://francaisfacile.rfi.fr/cn/',
# 'https://francaisfacile.rfi.fr/cn/podcasts/journal-en-fran%C3%A7ais-facile/',
# 'https://francaisfacile.rfi.fr/cn/podcasts/s%C3%A9lection/',
# 'https://francaisfacile.rfi.fr/cn/exercices/',
# 'https://francaisfacile.rfi.fr/cn/dipl%C3%B4mes-tests/',
# 'https://francaisfacile.rfi.fr/cn/tester-son-niveau/',
# 'https://francaisfacile.rfi.fr/cn/comprendre-actualit%C3%A9-fran%C3%A7ais/',
# 'https://francaisfacile.rfi.fr/cn/communiquer-quotidien/',
# 'https://francaisfacile.rfi.fr/cn/r%C3%A9viser/',
# 'https://francaisfacile.rfi.fr/cn/enseigner/',
# 'https://francaisfacile.rfi.fr/cn/exercices/a1/',
# 'https://francaisfacile.rfi.fr/cn/exercices/a2/',
# 'https://francaisfacile.rfi.fr/cn/exercices/b1/',
# 'https://francaisfacile.rfi.fr/cn/exercices/b2/',
# 'https://francaisfacile.rfi.fr/cn/exercices/c1c2/',
# 'https://francaisfacile.rfi.fr/cn/podcasts/les-mots-de-l-actualit%C3%A9/',
# ]
# *** OBS ***
# OBS_FR
# urls = [
# 'https://observers.france24.com/fr/',
# 'https://observers.france24.com/fr/tous-les-articles/',
# 'https://observers.france24.com/fr/tag/europe/',
# 'https://observers.france24.com/fr/tag/environnement/',
# 'https://observers.france24.com/fr/tag/droits-de-l-homme/',
# ]
#OBS_EN
# urls = [
# 'https://observers.france24.com/en/',
# 'https://observers.france24.com/en/all-articles/',
# 'https://observers.france24.com/en/tag/africa/',
# 'https://observers.france24.com/en/tag/environment/',
# 'https://observers.france24.com/en/tag/human-rights/',
# ]
#OBS_AR
# urls = [
# 'https://observers.france24.com/ar/',
# 'https://observers.france24.com/ar/%D8%AA%D8%A7%D8%BA/%D8%A8%D9%8A%D8%A6%D8%A9/',
# 'https://observers.france24.com/ar/%D9%87%D9%84-%D8%AA%D9%88%D8%AF-%D8%A7%D9%84%D9%85%D8%B3%D8%A7%D9%87%D9%85%D8%A9',
# ]
#OBS_FA
# urls = [
# 'https://observers.rfi.fr/fa/',
# 'https://observers.rfi.fr/fa/%DA%AF%D8%B1%D9%88%D9%87-%D9%88%D8%A7%DA%98%D9%87/%D8%B4%D8%A7%DB%8C%D8%B9%D9%87/',
# 'https://observers.rfi.fr/fa/%D8%B9%D8%B6%D9%88-%D9%86%D8%A7%D8%B8%D8%B1%D8%A7%D9%86-%D8%B4%D9%88%DB%8C%D8%AF',
# ]
    for url in urls:
        # Fetch the page through the shared session (custom headers set above)
        # and pause briefly between requests.
        html = se.get(url)
        time.sleep(2)
        # Parse the fetched page with BeautifulSoup.
        soup = BeautifulSoup(html.text, 'html.parser')
# Get the page slug from the URL
slug = url.split('/')[-2]
# Create an array to store the metadata
metadata = [
['url', 'slug', slug]
]
# Find all the metadata tags on the page
meta_tags = soup.find_all('meta')
# Loop through each meta tag and extract its properties
for tag in meta_tags:
# tag_type = tag.get('property') or tag.get('name')
tag_type_property = tag.get('property')
tag_type_name = tag.get('name')
tag_content = tag.get('content')
if tag_type_property and tag_content:
metadata.append(['property_tag',
tag_type_property, tag_content])
if tag_type_name and tag_content:
metadata.append(['name_tag', tag_type_name, tag_content])
# Find the title tag and extract its text
title_tag = soup.find('title')
if title_tag:
metadata.append(['title_tag', 'title', title_tag.text])
print('\n //--- '+url)
# Print the metadata array
print(metadata)
""" chat GPT query
# query
In python, with BeautifulSoup, can you write a parsing script that grab in the source page: the slug value, the tag title value and all the tags metadata values e.g og:image, og:locale, description, twitter:card... etc and store it in an array like the below
page_asie_pacifique: [
['slug','asie-pacifique/'],
// title
['title_tag','title', 'XXX_TEXT'],
// property
['property_tag','og:image', 'XXX_TEXT'],
['property_tag','og:locale', 'XXX_TEXT'],
['property_tag', 'og:site_name', 'XXX_TEXT'],
['property_tag', 'og:type', 'website', 'XXX_TEXT'],
['property_tag', 'og:title', 'XXX_TEXT'],
['property_tag', 'og:description', "XXX_TEXT"],
['property_tag', 'og:url', 'XXX_TEXT'],
['property_tag', 'fb:app_id', 'XXX_TEXT'],
['property_tag', 'fb:pages', 'XXX_TEXT'],
// name
['name_tag', 'description', "XXX_TEXT"],
['name_tag', 'twitter:card', 'summary'],
['name_tag', 'twitter:site', 'XXX_TEXT'],
['name_tag', 'twitter:creator', 'XXX_TEXT'],
['name_tag', 'keywords', 'XXX_TEXT'],
['name_tag', 'apple-mobile-web-app-title', 'XXX_TEXT'],
['name_tag', 'application-name', 'XXX_TEXT'],
['name_tag', 'msapplication-TileColor', 'XXX_TEXT'],
['name_tag', 'theme-color', 'XXX_TEXT'],
['name_tag', 'apple-mobile-web-app-capable', 'XXX_TEXT']
]
# OUTPUT FROM ChatGPT
Yes, it's possible to write a parsing script using BeautifulSoup in Python to extract the metadata from a webpage and store it in an array in the format you specified. Here's an example script that demonstrates how to do it:
from bs4 import BeautifulSoup
import requests
# URL of the webpage to parse
url = 'https://example.com/'
# Make a GET request to the webpage and get its content
response = requests.get(url)
content = response.content
# Parse the content using BeautifulSoup
soup = BeautifulSoup(content, 'html.parser')
# Get the page slug from the URL
slug = url.split('/')[-2]
# Create an array to store the metadata
metadata = [
['slug', slug]
]
# Find all the metadata tags on the page
meta_tags = soup.find_all('meta')
# Loop through each meta tag and extract its properties
for tag in meta_tags:
tag_type = tag.get('property') or tag.get('name')
tag_content = tag.get('content')
if tag_type and tag_content:
metadata.append([tag.name+'_tag', tag_type, tag_content])
# Find the title tag and extract its text
title_tag = soup.find('title')
if title_tag:
metadata.append(['title_tag', 'title', title_tag.text])
# Print the metadata array
print(metadata)
This script first makes a GET request to the webpage and gets its content. Then it uses BeautifulSoup to parse the HTML content and extract the metadata tags on the page. For each metadata tag, it checks its property or name attribute to determine its type and extracts its content attribute. It stores this information in the metadata array in the format you specified.
The script also finds the title tag on the page and extracts its text. It stores this information in the metadata array as well.
Finally, the script prints the metadata array. You can modify this script to save the metadata array to a file or database for further processing.
# TRY_1 See https://stackoverflow.com/questions/36768068/get-meta-tag-content-property-with-beautifulsoup-and-python
title = soup.find("meta", property="og:title")
url = soup.find("meta", property="og:url")
print(title["content"] if title else "No meta title given")
print(url["content"] if url else "No meta url given")
"""
# DEPOT
# urls = [
# 'http://fashiontoast.com/',
# 'http://becauseimaddicted.net/',
# 'http://www.lefashion.com/',
# 'http://www.seaofshoes.com/',
# ]
# urls = [
# 'https://www.rfi.fr/fr/'
# ]
| [
"[email protected]"
] | |
40f6e5873c826d97e8e486bb641cd1a516f4eb39 | 8b1c47d1ee06bfd2642305bc8c6723ccbe7f9f0d | /pdot.py | 8daa9401b64244dc5827d08408e8b9371a91b5da | [] | no_license | tribeiro/DRATools | 6e594e3e0a497be9aa439c0cc76aa4923c7a5def | 8b73bd50e8cde0ab4225df495970c4199f5f1f1b | refs/heads/master | 2016-08-03T17:44:20.505742 | 2016-02-05T15:44:20 | 2016-02-05T15:44:20 | 6,463,425 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,562 | py | #! /usr/bin/env python
'''
Calculates Pdot for an O-C (observed minus calculated) series of eclipse times, assuming Pdot is constant over time.
'''
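# Sketch of the relation being iterated below (this script's notation,
# assuming a strictly constant Pdot): the calculated eclipse time after E
# cycles satisfies the implicit equation
#
#   DT = P0*E + Pdot*DT*E
#
# and matching the observed offset DT0 gives Pdot = DT0 / (DT*E). Since
# DT ~ P0*E, this is comparable to the familiar quadratic ephemeris term,
# O-C proportional to Pdot*P0*E**2 (conventions differ by a factor of 1/2).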
import sys,os
import numpy as np
import pylab as py
######################################################################
def main(argv):
'''
    Main function. Reads input parameters and runs the iterative procedure.
    Run with -h to get help on input parameters.
    Defaults to CAL87 data from Ribeiro & Lopes de Oliveira (2014).
'''
from optparse import OptionParser
parser = OptionParser()
parser.add_option( '--HJD0',
help='Reference time of first eclipse in\
HJD. Ephemeris.',
type='float',
default=2450111.5144)
parser.add_option( '--P0',
help='Reference orbital period in days.',
type='float',
default=0.44267714)
parser.add_option( '--DT0',
help='Measured difference between observed\
and calculated eclipse time.',
type='float',
default=0.)
parser.add_option( '--E',
help='Number of observed cycle, with \
respect to HJD0, the reference first\
eclipse.',
type='float',
default=5997)
parser.add_option( '--sigmaDT0',
help='Uncertainty in the determination\
of DT0.',
type='float',
default=0.)
opt,args = parser.parse_args(argv)
print '''
PDOT.PY - calculates Pdot for given O-C between eclipses,
considers Pdot is constant.
c - Tiago Ribeiro - UFS - 2013
'''
print 'HJD0 = %f'%opt.HJD0
print 'P0 = %f'%opt.P0
print 'DT0 = %f'%opt.DT0
print 'E = %f'%opt.E
# First iteration. Considers T0' = T0 to obtain DT
    DT = opt.P0 * opt.E  # elapsed time after E cycles, assuming constant P0
    Pdot = opt.DT0 / DT / opt.E  # first estimate of Pdot
print 'Pdot = %e'%(Pdot)
difPdot = 1.0
print '---------------------------------------------------'
print '|Pdot | difPdot | relDif |'
while (difPdot/Pdot > 1e-10):
DT = opt.P0 * opt.E + Pdot * DT * opt.E # Calculated time of eclipse
oldPdot = Pdot
Pdot = opt.DT0 / DT / opt.E # calculated Pdot
difPdot = np.abs(Pdot - oldPdot)
print '|%14.8e|%18.10e|%15.8e|'%(Pdot,difPdot,difPdot/Pdot)
print '---------------------------------------------------'
if opt.sigmaDT0 > 0:
sDT0 = opt.sigmaDT0
sigmaPdot = np.sqrt( (sDT0 / opt.E / DT)**2. + (sDT0 * opt.DT0 / opt.E / DT**2.)**2. )
print 'Pdot = %e +/- %e'%(Pdot,sigmaPdot)
else:
print 'Pdot = %e'%(Pdot)
######################################################################
if __name__ == '__main__':
main(sys.argv)
###################################################################### | [
"[email protected]"
] | |
568d1d5d130d876efb8e9236f37533fd74439534 | feba3c32aac7f17d8fbaf6ef7bb4d229844f8247 | /machine_learning/clustering/spectral_clustering/SpectralClustering/main.py | 4478a211b938139541a188e35d40ad5479bc2d81 | [] | no_license | lisunshine1234/mlp-algorithm-python | d48aa1336ae7c4925a0e30f4f09fa6de21f83d0e | 898359a10f65f16e94f3bb27cc61f3837806ca68 | refs/heads/master | 2023-05-01T11:11:47.465491 | 2021-05-24T13:53:40 | 2021-05-24T13:53:40 | 291,934,886 | 0 | 0 | null | 2021-05-24T13:42:15 | 2020-09-01T08:00:17 | Python | UTF-8 | Python | false | false | 5,872 | py | import numpy as np
import run as r
'''
[id]
144
[name]
SpectralClustering
[input]
x_train training set, the training data set, 2-D array, required, fixed
y_train test set, the test data set, 2-D array, required, fixed
n_clusters number of clusters, default 8, the dimension of the projection subspace, optional integer, integer/string, optional, fixed
eigen_solver eigen_solver, default None, the eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. On very large, sparse problems it may be faster, but it can also lead to instability, one of 'lobpcg', 'amg', 'arpack', string, optional, fixed
n_components number of components, default None, the number of eigenvectors to use for the spectral embedding, optional integer, integer/string, optional, fixed
random_state random seed, default None, a pseudo-random number generator used for the K-Means initialization of the lobpcg eigenvector decomposition when eigen_solver='amg'. Use an int for deterministic randomness, optional integer, integer, optional, fixed
n_init number of random initializations, default 10, the number of times the k-means algorithm will be run with different centroid seeds. The final result is the best output of n_init consecutive runs in terms of inertia, optional integer, integer, optional, fixed
gamma gamma, default 1., the kernel coefficient for the rbf, poly, sigmoid, laplacian and chi2 kernels. Ignored when affinity='nearest_neighbors', optional float, float, optional, fixed
affinity affinity, default rbf, how to construct the affinity matrix. -'nearest_neighbors': construct the affinity matrix by computing a graph of nearest neighbors. -'rbf': construct the affinity matrix using a radial basis function (RBF) kernel. -'precomputed': interpret 'X' as a precomputed affinity matrix. -'precomputed_nearest_neighbors': interpret 'X' as a sparse graph of precomputed nearest neighbors, and construct the affinity matrix by selecting the 'n_neighbors' nearest neighbors, optional, 'rbf', string, optional, fixed
n_neighbors number of neighbors, default 10, the number of neighbors to use when constructing the affinity matrix via the nearest-neighbors method. Ignored when affinity='rbf', optional integer, integer/string, optional, fixed
eigen_tol eigen_tol, default 0.0, the stopping criterion for the eigendecomposition of the Laplacian matrix when using 'arpack', optional float, float, optional, fixed
assign_labels label assignment strategy, default kmeans, the strategy for assigning labels in the embedding space. There are two ways to assign labels after the Laplacian embedding: k-means is a popular choice, but it can be sensitive to initialization; discretization is another approach that is less sensitive to random initialization, one of 'kmeans', 'discretize', string, optional, fixed
degree degree, default 3, the degree of the polynomial kernel. Ignored by other kernels, optional float, float, optional, fixed
coef0 coef0, default 1, the zero coefficient for polynomial and sigmoid kernels. Ignored by other kernels, optional float, float, optional, fixed
kernel_params kernel parameters, default None, parameters (keyword arguments) and values for a kernel passed as a callable object. Ignored by other kernels, optional string, dict, string, optional, fixed
n_jobs number of CPUs, default None, the number of parallel jobs to run. 'None' means 1, optional integer, integer, optional, fixed
[output]
affinity_matrix_ affinity matrix, the affinity matrix used for clustering, 2-D array
labels_ labels_, the label of each point, 1-D array
[outline]
Apply clustering to a projection of the normalized Laplacian.
[describe]
Apply clustering to a projection of the normalized Laplacian.
In practice, spectral clustering is very useful when the structure of the individual clusters is highly non-convex, or, more generally, when a measure of the center and spread of a cluster is not a suitable description of the complete cluster, for instance when clusters are nested circles on the 2D plane.
If the affinity is the adjacency matrix of a graph, this method can be used to find normalized graph cuts.
When 'fit' is called, an affinity matrix is constructed using either a kernel function, such as the Gaussian (aka RBF) kernel with Euclidean distance 'd(X, X)':: np.exp(-gamma * d(X, X) ** 2), or a k-nearest-neighbors connectivity matrix.
Alternatively, with 'precomputed', a user-provided affinity matrix can be used.
'''
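# Illustrative sketch (kept as comments so nothing runs on import; assumes
# scikit-learn is installed): the rbf affinity described above can be
# reproduced directly, e.g.
#
#   from sklearn.metrics.pairwise import euclidean_distances
#   X = np.array([[0., 0.], [0., 1.], [5., 5.]])
#   affinity = np.exp(-1.0 * euclidean_distances(X) ** 2)  # gamma = 1.0
#
# Nearby points (rows 0 and 1) get affinities near exp(-1) ~ 0.37, while the
# distant point (row 2) contributes entries that are effectively zero.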
def main(x_train, y_train,
n_clusters=8, eigen_solver=None, n_components=None, random_state=None, n_init=10, gamma=1., affinity='rbf', n_neighbors=10, eigen_tol=0.0,
assign_labels='kmeans', degree=3, coef0=1, kernel_params=None, n_jobs=None
):
if type(x_train) is str:
x_train = eval(x_train)
if type(y_train) is str:
y_train = eval(y_train)
if type(n_clusters) is str:
n_clusters = eval(n_clusters)
if type(n_components) is str:
n_components = eval(n_components)
if type(random_state) is str:
random_state = eval(random_state)
if type(n_init) is str:
n_init = eval(n_init)
if type(gamma) is str:
gamma = eval(gamma)
if type(n_neighbors) is str:
n_neighbors = eval(n_neighbors)
if type(eigen_tol) is str:
eigen_tol = eval(eigen_tol)
if type(degree) is str:
degree = eval(degree)
if type(coef0) is str:
coef0 = eval(coef0)
if type(kernel_params) is str:
kernel_params = eval(kernel_params)
if type(n_jobs) is str:
n_jobs = eval(n_jobs)
return r.run(x_train=x_train, y_train=y_train, n_clusters=n_clusters,
eigen_solver=eigen_solver,
n_components=n_components,
random_state=random_state,
n_init=n_init,
gamma=gamma,
affinity=affinity,
n_neighbors=n_neighbors,
eigen_tol=eigen_tol,
assign_labels=assign_labels,
degree=degree,
coef0=coef0,
kernel_params=kernel_params,
n_jobs=n_jobs)
if __name__ == '__main__':
import numpy as np
import json
array = np.loadtxt('D:\\123_2.csv', delimiter=',')
array = array[0:20, :]
y = array[:, -1].tolist()
x = np.delete(array, -1, axis=1).tolist()
array = array.tolist()
back = main(x, y)
print(back)
for i in back:
print(i + ":" + str(back[i]))
json.dumps(back) | [
"[email protected]"
] | |
b5e44acdf849d67dd76df15fee9528740e2d4810 | f9d564f1aa83eca45872dab7fbaa26dd48210d08 | /huaweicloud-sdk-cloudide/huaweicloudsdkcloudide/v2/model/show_price_response.py | 319556bbd2c00ca7611e5913f4d058e565a6f5bc | [
"Apache-2.0"
] | permissive | huaweicloud/huaweicloud-sdk-python-v3 | cde6d849ce5b1de05ac5ebfd6153f27803837d84 | f69344c1dadb79067746ddf9bfde4bddc18d5ecf | refs/heads/master | 2023-09-01T19:29:43.013318 | 2023-08-31T08:28:59 | 2023-08-31T08:28:59 | 262,207,814 | 103 | 44 | NOASSERTION | 2023-06-22T14:50:48 | 2020-05-08T02:28:43 | Python | UTF-8 | Python | false | false | 4,007 | py | # coding: utf-8
import six
from huaweicloudsdkcore.sdk_response import SdkResponse
from huaweicloudsdkcore.utils.http_utils import sanitize_for_serialization
class ShowPriceResponse(SdkResponse):
"""
Attributes:
openapi_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
sensitive_list = []
openapi_types = {
'prices': 'list[ResourcePrice]',
'status': 'str'
}
attribute_map = {
'prices': 'prices',
'status': 'status'
}
def __init__(self, prices=None, status=None):
"""ShowPriceResponse
The model defined in huaweicloud sdk
        :param prices: List of technology stack prices
:type prices: list[:class:`huaweicloudsdkcloudide.v2.ResourcePrice`]
        :param status: Status
:type status: str
"""
super(ShowPriceResponse, self).__init__()
self._prices = None
self._status = None
self.discriminator = None
if prices is not None:
self.prices = prices
if status is not None:
self.status = status
@property
def prices(self):
"""Gets the prices of this ShowPriceResponse.
        List of technology stack prices
:return: The prices of this ShowPriceResponse.
:rtype: list[:class:`huaweicloudsdkcloudide.v2.ResourcePrice`]
"""
return self._prices
@prices.setter
def prices(self, prices):
"""Sets the prices of this ShowPriceResponse.
        List of technology stack prices
:param prices: The prices of this ShowPriceResponse.
:type prices: list[:class:`huaweicloudsdkcloudide.v2.ResourcePrice`]
"""
self._prices = prices
@property
def status(self):
"""Gets the status of this ShowPriceResponse.
        Status
:return: The status of this ShowPriceResponse.
:rtype: str
"""
return self._status
@status.setter
def status(self, status):
"""Sets the status of this ShowPriceResponse.
        Status
:param status: The status of this ShowPriceResponse.
:type status: str
"""
self._status = status
def to_dict(self):
"""Returns the model properties as a dict"""
result = {}
for attr, _ in six.iteritems(self.openapi_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
if attr in self.sensitive_list:
result[attr] = "****"
else:
result[attr] = value
return result
def to_str(self):
"""Returns the string representation of the model"""
import simplejson as json
if six.PY2:
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
return json.dumps(sanitize_for_serialization(self), ensure_ascii=False)
def __repr__(self):
"""For `print`"""
return self.to_str()
def __eq__(self, other):
"""Returns true if both objects are equal"""
if not isinstance(other, ShowPriceResponse):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""Returns true if both objects are not equal"""
return not self == other
| [
"[email protected]"
] | |
af8323190f0fb2a1c3ac5e3730d69a87178e772e | 7c7f5141dac7fd6d2bb5d9b1b154c37fe8ccd140 | /pandapower/test/__init__.py | a9e365de3222c2cc7631b9833645df8d98cac140 | [
"MIT"
] | permissive | lucassm/cigre-montecarlo | a6c2f5508d917a1dcbe657d34ea136448026a999 | fd354b9c3ade460b46687ba312f51212dad17151 | refs/heads/master | 2021-01-19T13:27:24.768686 | 2017-02-18T20:49:50 | 2017-02-18T20:49:50 | 82,392,136 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 92 | py | from .toolbox import *
from .result_test_network_generator import *
from .conftest import *
| [
"[email protected]"
] | |
ba15ea6b6a8de11aca7e0fee71fc5594b4862c2b | c4a0669126f2fbf757ac3b33a8279ef32305bbd7 | /Python Crash Course/Chapter 13/13.1 Stars/main.py | d13a05bea7ad0b6f82c5294c2a451c874c0f822f | [] | no_license | ezeutno/PycharmProject | 822b5a7da05729c5241a03b7413548a34b12e4a5 | bdb87599885287d2d7cd5cd703b62197563722b8 | refs/heads/master | 2021-07-18T20:55:08.605486 | 2017-10-24T03:14:10 | 2017-10-24T03:14:10 | 105,782,136 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 470 | py | import pygame
from settings import Settings
from pygame.sprite import Group
import game_fuction as gf
def main():
pygame.init()
ai_settings = Settings()
screen = pygame.display.set_mode((ai_settings.screen_width, ai_settings.screen_height))
pygame.display.set_caption('Stars')
stars = Group()
gf.create_multilayer(ai_settings, screen, stars)
while True:
gf.check_events()
gf.update_screen(ai_settings, screen, stars)
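        # Sketch of what the game_fuction helpers are assumed to do (their
        # module is not shown here): check_events() presumably polls
        # pygame.event.get() and exits on pygame.QUIT, e.g.
        #
        #   for event in pygame.event.get():
        #       if event.type == pygame.QUIT:
        #           raise SystemExit
        #
        # while update_screen() would redraw the background, call
        # stars.draw(screen), and flip the display.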
main() | [
"[email protected]"
] | |
e0493a95982145e381c01ff67d16292c6a0349f0 | d191a04a3ded41175ea84ae88ebddb4f262b7fb1 | /Company test/nvidia.py | c4986c711e53efb20d3e34f2e58950269a5d26df | [] | no_license | YLyeliang/now_leet_code_practice | ae4aea945bae72ec08b11e57a8f8a3e81e704a54 | 204d770e095aec43800a9771fe88dd553463d2f7 | refs/heads/master | 2022-06-13T20:22:51.266813 | 2022-05-24T05:29:32 | 2022-05-24T05:29:32 | 205,753,056 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 363 | py | x=int(input())
y=int(input())
z=int(input())
A=[]
B=[]
for _ in range(x):
A.append(list(map(int,input().split())))
for _ in range(y):
B.append(list(map(int,input().split())))
C=[['']*z for _ in range(x)]
for i in range(x):
for j in range(z):
C[i][j]=str(sum([A[i][k]*B[k][j] for k in range(y)]))
for l in range(x):
print(" ".join(C[l])) | [
"[email protected]"
] | |
8e5a1d8fbec942a0e35a39675415f7e7b2f42cd4 | 3a18b8ba06a58231f4ecb2c1a231722cdf862e6b | /python_code/dillonsCode/websites_folder/todo_tutorial/venv/lib/python3.8/site-packages/werkzeug/middleware/proxy_fix.py | 2a7af0cdfe43370cd52976eac8ac734c1585167c | [] | no_license | dillonallen92/codeStorage | 98dd7f5a8ecb062e37313a1323aacd362ffc44c7 | 23351e0b3348de922283f6494762db9f291579d6 | refs/heads/master | 2023-07-07T10:32:40.428607 | 2023-06-26T23:26:47 | 2023-06-26T23:28:13 | 141,781,205 | 0 | 1 | null | 2023-02-12T00:21:58 | 2018-07-21T04:30:51 | Mathematica | UTF-8 | Python | false | false | 7,161 | py | """
X-Forwarded-For Proxy Fix
=========================
This module provides a middleware that adjusts the WSGI environ based on
``X-Forwarded-`` headers that proxies in front of an application may
set.
When an application is running behind a proxy server, WSGI may see the
request as coming from that server rather than the real client. Proxies
set various headers to track where the request actually came from.
This middleware should only be used if the application is actually
behind such a proxy, and should be configured with the number of proxies
that are chained in front of it. Not all proxies set all the headers.
Since incoming headers can be faked, you must set how many proxies are
setting each header so the middleware knows what to trust.
.. autoclass:: ProxyFix
:copyright: 2007 Pallets
:license: BSD-3-Clause
"""
import typing as t
from ..http import parse_list_header
if t.TYPE_CHECKING:
from _typeshed.wsgi import StartResponse
from _typeshed.wsgi import WSGIApplication
from _typeshed.wsgi import WSGIEnvironment
class ProxyFix:
"""Adjust the WSGI environ based on ``X-Forwarded-`` that proxies in
front of the application may set.
- ``X-Forwarded-For`` sets ``REMOTE_ADDR``.
- ``X-Forwarded-Proto`` sets ``wsgi.url_scheme``.
- ``X-Forwarded-Host`` sets ``HTTP_HOST``, ``SERVER_NAME``, and
``SERVER_PORT``.
- ``X-Forwarded-Port`` sets ``HTTP_HOST`` and ``SERVER_PORT``.
- ``X-Forwarded-Prefix`` sets ``SCRIPT_NAME``.
You must tell the middleware how many proxies set each header so it
knows what values to trust. It is a security issue to trust values
that came from the client rather than a proxy.
The original values of the headers are stored in the WSGI
environ as ``werkzeug.proxy_fix.orig``, a dict.
:param app: The WSGI application to wrap.
:param x_for: Number of values to trust for ``X-Forwarded-For``.
:param x_proto: Number of values to trust for ``X-Forwarded-Proto``.
:param x_host: Number of values to trust for ``X-Forwarded-Host``.
:param x_port: Number of values to trust for ``X-Forwarded-Port``.
:param x_prefix: Number of values to trust for
``X-Forwarded-Prefix``.
.. code-block:: python
from werkzeug.middleware.proxy_fix import ProxyFix
# App is behind one proxy that sets the -For and -Host headers.
app = ProxyFix(app, x_for=1, x_host=1)
.. versionchanged:: 1.0
Deprecated code has been removed:
* The ``num_proxies`` argument and attribute.
* The ``get_remote_addr`` method.
* The environ keys ``orig_remote_addr``,
``orig_wsgi_url_scheme``, and ``orig_http_host``.
.. versionchanged:: 0.15
All headers support multiple values. The ``num_proxies``
argument is deprecated. Each header is configured with a
separate number of trusted proxies.
.. versionchanged:: 0.15
Original WSGI environ values are stored in the
``werkzeug.proxy_fix.orig`` dict. ``orig_remote_addr``,
``orig_wsgi_url_scheme``, and ``orig_http_host`` are deprecated
and will be removed in 1.0.
.. versionchanged:: 0.15
Support ``X-Forwarded-Port`` and ``X-Forwarded-Prefix``.
.. versionchanged:: 0.15
``X-Forwarded-Host`` and ``X-Forwarded-Port`` modify
``SERVER_NAME`` and ``SERVER_PORT``.
"""
def __init__(
self,
app: "WSGIApplication",
x_for: int = 1,
x_proto: int = 1,
x_host: int = 0,
x_port: int = 0,
x_prefix: int = 0,
) -> None:
self.app = app
self.x_for = x_for
self.x_proto = x_proto
self.x_host = x_host
self.x_port = x_port
self.x_prefix = x_prefix
def _get_real_value(self, trusted: int, value: t.Optional[str]) -> t.Optional[str]:
"""Get the real value from a list header based on the configured
number of trusted proxies.
:param trusted: Number of values to trust in the header.
:param value: Comma separated list header value to parse.
:return: The real value, or ``None`` if there are fewer values
than the number of trusted proxies.
.. versionchanged:: 1.0
Renamed from ``_get_trusted_comma``.
.. versionadded:: 0.15
"""
if not (trusted and value):
return None
values = parse_list_header(value)
if len(values) >= trusted:
return values[-trusted]
return None
def __call__(
self, environ: "WSGIEnvironment", start_response: "StartResponse"
) -> t.Iterable[bytes]:
"""Modify the WSGI environ based on the various ``Forwarded``
headers before calling the wrapped application. Store the
original environ values in ``werkzeug.proxy_fix.orig_{key}``.
"""
environ_get = environ.get
orig_remote_addr = environ_get("REMOTE_ADDR")
orig_wsgi_url_scheme = environ_get("wsgi.url_scheme")
orig_http_host = environ_get("HTTP_HOST")
environ.update(
{
"werkzeug.proxy_fix.orig": {
"REMOTE_ADDR": orig_remote_addr,
"wsgi.url_scheme": orig_wsgi_url_scheme,
"HTTP_HOST": orig_http_host,
"SERVER_NAME": environ_get("SERVER_NAME"),
"SERVER_PORT": environ_get("SERVER_PORT"),
"SCRIPT_NAME": environ_get("SCRIPT_NAME"),
}
}
)
x_for = self._get_real_value(self.x_for, environ_get("HTTP_X_FORWARDED_FOR"))
if x_for:
environ["REMOTE_ADDR"] = x_for
x_proto = self._get_real_value(
self.x_proto, environ_get("HTTP_X_FORWARDED_PROTO")
)
if x_proto:
environ["wsgi.url_scheme"] = x_proto
x_host = self._get_real_value(self.x_host, environ_get("HTTP_X_FORWARDED_HOST"))
if x_host:
environ["HTTP_HOST"] = environ["SERVER_NAME"] = x_host
# "]" to check for IPv6 address without port
if ":" in x_host and not x_host.endswith("]"):
environ["SERVER_NAME"], environ["SERVER_PORT"] = x_host.rsplit(":", 1)
x_port = self._get_real_value(self.x_port, environ_get("HTTP_X_FORWARDED_PORT"))
if x_port:
host = environ.get("HTTP_HOST")
if host:
# "]" to check for IPv6 address without port
if ":" in host and not host.endswith("]"):
host = host.rsplit(":", 1)[0]
environ["HTTP_HOST"] = f"{host}:{x_port}"
environ["SERVER_PORT"] = x_port
x_prefix = self._get_real_value(
self.x_prefix, environ_get("HTTP_X_FORWARDED_PREFIX")
)
if x_prefix:
environ["SCRIPT_NAME"] = x_prefix
return self.app(environ, start_response)
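if __name__ == "__main__":  # exercisable via `python -m werkzeug.middleware.proxy_fix`
    # A hedged usage sketch (demo_app and this environ are illustrative, not
    # part of werkzeug): one trusted proxy sets X-Forwarded-For and -Proto.
    def demo_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [environ["REMOTE_ADDR"].encode()]

    wrapped = ProxyFix(demo_app, x_for=1, x_proto=1)
    environ = {
        "REMOTE_ADDR": "10.0.0.1",  # the proxy, not the real client
        "wsgi.url_scheme": "http",
        "HTTP_X_FORWARDED_FOR": "203.0.113.7",
        "HTTP_X_FORWARDED_PROTO": "https",
    }
    print(wrapped(environ, lambda status, headers: None))     # [b'203.0.113.7']
    print(environ["wsgi.url_scheme"])                         # 'https'
    print(environ["werkzeug.proxy_fix.orig"]["REMOTE_ADDR"])  # '10.0.0.1'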
| [
"[email protected]"
] | |
d69b9f745e9ad17b53c5384804fdd18190e526a7 | f82bfba767a44bc15557eb2d2ae2558c83cfb0e1 | /catkin_ws/src/tracking/src/diagnose.py | 808ad7f138ea91084f29763cd4647e031bf82b90 | [] | no_license | championway/argbot | 4d89a233541a38cd8c8293c55f981b78aad276b6 | a2c49a4a9df28675063aeb8d8ff6768f424526a1 | refs/heads/master | 2020-04-04T14:39:04.156209 | 2020-01-01T17:06:54 | 2020-01-01T17:06:54 | 156,006,637 | 8 | 2 | null | 2019-03-30T12:36:06 | 2018-11-03T17:24:24 | Jupyter Notebook | UTF-8 | Python | false | false | 4,534 | py | #!/usr/bin/env python
import numpy as np
import cv2
import roslib
import rospy
import tf
import struct
import math
import time
from sensor_msgs.msg import Image, LaserScan
from sensor_msgs.msg import CameraInfo, CompressedImage
from geometry_msgs.msg import PoseArray, Pose, PoseStamped, Point
from visualization_msgs.msg import Marker, MarkerArray
from nav_msgs.msg import OccupancyGrid, MapMetaData, Odometry
import rospkg
from cv_bridge import CvBridge, CvBridgeError
from dynamic_reconfigure.server import Server
from control.cfg import pos_PIDConfig, ang_PIDConfig
from duckiepond_vehicle.msg import UsvDrive
from std_srvs.srv import SetBool, SetBoolResponse
from PID import PID_control
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
from ssd import build_ssd
from matplotlib import pyplot as plt
class Diagnose():
def __init__(self):
self.node_name = rospy.get_name()
rospy.loginfo("[%s] Initializing " %(self.node_name))
self.frame_id = 'odom'
#self.image_sub = rospy.Subscriber("/BRIAN/camera_node/image/compressed", Image, self.img_cb, queue_size=1)
self.image_sub = rospy.Subscriber("/BRIAN/camera_node/image/compressed", CompressedImage, self.img_cb, queue_size=1, buff_size = 2**24)
self.pub_cmd = rospy.Publisher("/MONICA/cmd_drive", UsvDrive, queue_size = 1)
self.pub_goal = rospy.Publisher("/goal_point", Marker, queue_size = 1)
self.image_pub = rospy.Publisher("/predict_img", Image, queue_size = 1)
self.station_keeping_srv = rospy.Service("/station_keeping", SetBool, self.station_keeping_cb)
self.pos_control = PID_control("Position_tracking")
self.ang_control = PID_control("Angular_tracking")
self.ang_station_control = PID_control("Angular_station")
self.pos_station_control = PID_control("Position_station")
self.pos_srv = Server(pos_PIDConfig, self.pos_pid_cb, "Position_tracking")
self.ang_srv = Server(ang_PIDConfig, self.ang_pid_cb, "Angular_tracking")
self.pos_station_srv = Server(pos_PIDConfig, self.pos_station_pid_cb, "Angular_station")
self.ang_station_srv = Server(ang_PIDConfig, self.ang_station_pid_cb, "Position_station")
        self.initialize_PID()
        self.bridge = CvBridge()         # used by predict() to publish the annotated frame
        self.is_station_keeping = False  # read in tracking_control(); toggled via the service
        # Assumed detector settings (placeholder values, not from the original source):
        self.predict_prob = 0.5          # detection confidence threshold
        self.ROBOT_NUM = 10              # number of SSD detections scanned per frame
        # self.net (the SSD model from build_ssd) and the PID/service callbacks
        # referenced above are expected to be defined elsewhere in this package.
def img_cb(self, msg):
        try:
            np_arr = np.frombuffer(msg.data, np.uint8)  # np.fromstring is deprecated for binary data
            cv_image = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
            #cv_image = self.bridge.imgmsg_to_cv2(msg, "bgr8")
        except CvBridgeError as e:
            print(e)
            return  # the frame could not be decoded; skip it
(rows, cols, channels) = cv_image.shape
self.width = cols
self.height = rows
predict = self.predict(cv_image)
if predict is None:
return
angle, dis = predict[0], predict[1]
self.tracking_control(angle, dis)
def tracking_control(self, goal_angle, goal_distance):
if self.is_station_keeping:
rospy.loginfo("Station Keeping")
pos_output, ang_output = self.station_keeping(goal_distance, goal_angle)
else:
pos_output, ang_output = self.control(goal_distance, goal_angle)
cmd_msg = UsvDrive()
cmd_msg.left = self.cmd_constarin(pos_output + ang_output)
cmd_msg.right = self.cmd_constarin(pos_output - ang_output)
self.pub_cmd.publish(cmd_msg)
#self.publish_goal(self.goal)
def predict(self, img):
# Image Preprocessing (vgg use BGR image as training input)
image = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
x = cv2.resize(image, (300, 300)).astype(np.float32)
x -= (104.0, 117.0, 123.0)
x = x.astype(np.float32)
x = x[:, :, ::-1].copy()
x = torch.from_numpy(x).permute(2, 0, 1)
#SSD Prediction
xx = Variable(x.unsqueeze(0))
if torch.cuda.is_available():
xx = xx.cuda()
y = self.net(xx)
scale = torch.Tensor(img.shape[1::-1]).repeat(2)
detections = y.data
max_prob = 0
coords = None
for i in range(self.ROBOT_NUM):
if detections[0, 1, i, 0].numpy() > self.predict_prob and detections[0, 1, i, 0].numpy() > max_prob:
max_prob = detections[0, 1, i, 0].numpy()
score = detections[0, 1, i, 0]
pt = (detections[0, 1, i,1:]*scale).cpu().numpy()
coords = (pt[0], pt[1]), pt[2]-pt[0]+1, pt[3]-pt[1]+1
if coords is None:
return None
angle, dis, center = self.BBx2AngDis(coords)
cv2.circle(img, (int(center[0]), int(center[1])), 10, (0,0,255), -1)
cv2.rectangle(img, (int(coords[0][0]), int(coords[0][1])),\
(int(coords[0][0] + coords[1]), int(coords[0][1] + coords[2])),(0,0,255),5)
try:
img = self.draw_cmd(img, dis, angle)
self.image_pub.publish(self.bridge.cv2_to_imgmsg(img, "bgr8"))
except CvBridgeError as e:
            print(e)
        return angle, dis  # img_cb() unpacks predict()[0] and [1]; without this it received None
if __name__ == '__main__':
rospy.init_node('diagnose')
foo = Diagnose()
rospy.spin() | [
"[email protected]"
] | |
08c90ebe0b76a7df4692e61b1223d169fa34fbe2 | e23a4f57ce5474d468258e5e63b9e23fb6011188 | /095_os_and_sys/_exercises/exercises/Programming_Python/04_File and Directory Tools/04_012_os.open mode flags.py | 2336e6e88475b4379afd36620f42eff0ea5e9cbc | [] | no_license | syurskyi/Python_Topics | 52851ecce000cb751a3b986408efe32f0b4c0835 | be331826b490b73f0a176e6abed86ef68ff2dd2b | refs/heads/master | 2023-06-08T19:29:16.214395 | 2023-05-29T17:09:11 | 2023-05-29T17:09:11 | 220,583,118 | 3 | 2 | null | 2023-02-16T03:08:10 | 2019-11-09T02:58:47 | Python | UTF-8 | Python | false | false | 1,564 | py | # _______ __
# fdfile _ __.o... _'C:\temp\spam.txt', __.O_R.. | __.O_B..
# __.r... ? 20
# # b'Hello stdio file\r\nHe'
# # ######################################################################################################################
#
# __.ls.. ? 0 0 # go back to start of file
# __.r... ? 100 # binary mode retains "\r\n"
# # b'Hello stdio file\r\nHello descriptor file\n'
# # ######################################################################################################################
#
# __.ls.. ?, 0, 0
# __.w.. ?, b'HELLO') # overwrite first 5 bytes
# # 5
# # C:\temp> type spam.txt
# # HELLO stdio file
# # Hello descriptor file
# # ######################################################################################################################
#
# file _ o... _'C:\temp\spam.txt' ___ # same but with open/objects
# ?.r... 20
# # b'HELLO stdio file\r\nHe'
# # ######################################################################################################################
#
# ?.se.. 0
# ?.r... 100
# # b'HELLO stdio file\r\nHello descriptor file\n'
# # ######################################################################################################################
#
# ?.se.. 0
# ?.w.. _'Jello'
# # 5
# # ######################################################################################################################
#
# ?.se.. 0
# ?.r...
# # b'Jello stdio file\r\nHello descriptor file\n'
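#
# A possible de-blanked reconstruction (an assumption - the exercise above
# deliberately masks the names; Windows-only, since os.O_BINARY and the
# C:\temp path only exist there):
#
# import os
# fdfile = os.open(r'C:\temp\spam.txt', os.O_RDWR | os.O_BINARY)
# os.read(fdfile, 20)                 # b'Hello stdio file\r\nHe'
# os.lseek(fdfile, 0, 0)              # go back to start of file
# os.read(fdfile, 100)                # binary mode retains "\r\n"
# os.lseek(fdfile, 0, 0)
# os.write(fdfile, b'HELLO')          # overwrite first 5 bytes -> returns 5
# file = open(r'C:\temp\spam.txt', 'rb+')   # same but with open()/file objects
# file.read(20)                       # b'HELLO stdio file\r\nHe'
# file.seek(0)
# file.read(100)
# file.seek(0)
# file.write(b'Jello')                # -> 5
# file.seek(0)
# file.read()                         # b'Jello stdio file\r\nHello descriptor file\n'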
# # ###################################################################################################################### | [
"[email protected]"
] | |
62ab7743a9f1074052811f9d7645e054d1514cc3 | e3fc83e77e218f7b8df4b14b0753fd65afd4b923 | /downloaded_kernels/house_sales/converted_notebooks/kernel_121.py | 36d0e7d411d81763309cb78183e37fd1f9b83211 | [
"MIT"
] | permissive | jupste/wranglesearch | 982684fdaa7914af59758880fdc3a4ff3346477f | a6978fae73eee8ece6f1db09f2f38cf92f03b3ad | refs/heads/master | 2023-06-18T04:46:34.474046 | 2021-07-15T23:43:24 | 2021-07-15T23:43:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,981 | py | #!/usr/bin/env python
# coding: utf-8
# In[ ]:
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
plt.figure(figsize=(20, 5))
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn import linear_model
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
import seaborn as sns
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import RFE
def performance_metric(y_true, y_predict, normalize=True):
score = r2_score(y_true, y_predict)
return score
data = pd.read_csv("../input/kc_house_data.csv", encoding = "ISO-8859-1")
Y = data["price"]
X = data[["bedrooms", "bathrooms", "sqft_living", "sqft_lot", "floors", "waterfront", "view", "grade", "sqft_above", "sqft_basement", "yr_built", "yr_renovated", "zipcode", "lat", "long"]]
colnames = X.columns
#ranking columns
ranks = {}
def ranking(ranks, names, order=1):
minmax = MinMaxScaler()
ranks = minmax.fit_transform(order*np.array([ranks]).T).T[0]
ranks = map(lambda x: round(x,2), ranks)
return dict(zip(names, ranks))
for i, col in enumerate(X.columns):
    # one row of 15 subplots, one per feature, hence subplot(1, 15, ...)
    plt.subplot(1, 15, i+1)
x = X[col]
y = Y
plt.plot(x, y, 'o')
# Create regression line
plt.plot(np.unique(x), np.poly1d(np.polyfit(x, y, 1))(np.unique(x)))
plt.title(col)
plt.xlabel(col)
plt.ylabel('prices')
#Splitting the datasets
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=10)
#Models
#Decision Tree Regressor
DTR = tree.DecisionTreeRegressor()
DTR = DTR.fit(X_train,y_train)
ranks["DTR"] = ranking(np.abs(DTR.feature_importances_), colnames)
Y_target_DTR = DTR.predict(X_test)
#Decision Tree Classifier
DTC = DecisionTreeClassifier(max_depth=None, min_samples_split=2, random_state=0)
DTC = DTC.fit(X_train, y_train)
ranks["DTC"] = ranking(np.abs(DTC.feature_importances_), colnames)
Y_target_DTC = DTC.predict(X_test)
#LARS Lasso
LARS_L = linear_model.LassoLars(alpha=.4)
LARS_L = LARS_L.fit(X_train, y_train)
ranks["LARS_L"] = ranking(np.abs(LARS_L.coef_), colnames)
Y_target_lars_l = LARS_L.predict(X_test)
#Bayesian Ridge
BR = linear_model.BayesianRidge()
BR = BR.fit(X_train, y_train)
ranks["BR"] = ranking(np.abs(BR.coef_), colnames)
Y_target_BR = BR.predict(X_test)
#Random Forest Regressor
RFR = RandomForestRegressor(n_jobs=-1, n_estimators=50, verbose=0)
RFR = RFR.fit(X_train,y_train)
ranks["RFR"] = ranking(RFR.feature_importances_, colnames);
#print(ranks["RFR"])
Y_target_RFR = RFR.predict(X_test)
#Recursive Feature Elimination on Random Forest Regressor
RFE_RFR = RFE(RFR, n_features_to_select=10, step = 1)
RFE_RFR.fit(X_train,y_train)
Y_target_RFE_RFR = RFE_RFR.predict(X_test)
#Extra Trees Classifier
ETC = ExtraTreesClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
ETC = ETC.fit(X_train, y_train)
ranks["ETC"] = ranking(np.abs(ETC.feature_importances_), colnames)
Y_target_ETC = ETC.predict(X_test)
#Recursive Feature Elimination on Decision Tree Regressor
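# NOTE: the next line rebinds the name RFE from the imported class to an
# instance; that is safe here only because no further RFE(...) calls follow.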
RFE = RFE(DTR, n_features_to_select=10, step=1)
RFE.fit(X_train,y_train)
Y_target_RFE = RFE.predict(X_test)
#Ranking inputs
r = {}
for name in colnames:
r[name] = round(np.mean([ranks[method][name]
for method in ranks.keys()]), 2)
methods = sorted(ranks.keys())
ranks["Mean"] = r
methods.append("Mean")
print("\t%s" % "\t".join(methods))
for name in colnames:
print("%s\t%s" % (name, "\t".join(map(str,
[ranks[method][name] for method in methods]))))
#seaborn plot
#create dataframe
meanplot = pd.DataFrame(list(r.items()), columns= ['Feature','Mean Ranking'])
meanplot = meanplot.sort_values('Mean Ranking', ascending=False)
#plot proper
sns.factorplot(x="Mean Ranking", y="Feature", data = meanplot, kind="bar",
size=14, aspect=1.9, palette='coolwarm')
#R2 metrics for each model
print("\nR2 score, Decision Tree Regressor:")
print(performance_metric(y_test, Y_target_DTR))
print("\nR2 score, Decision Tree Classifier:")
print(performance_metric(y_test, Y_target_DTC))
print("\nR2 score, LARS Lasso:")
print(performance_metric(y_test, Y_target_lars_l))
print("\nR2 score, Bayesian Ridge:")
print(performance_metric(y_test, Y_target_BR))
print("\nR2 score, Random Forest Regressor:")
print(performance_metric(y_test, Y_target_RFR))
print("\nR2 score, Recursive Feature Eliminition on Random Forest Regressor:")
print(performance_metric(y_test, Y_target_RFE_RFR))
print("\nR2 score, Extra Trees Classifier:")
print(performance_metric(y_test, Y_target_ETC))
print("\nR2 score, Recursive Feature Eliminition on Decision Tree Regressor:")
print(performance_metric(y_test, Y_target_RFE))
# In[ ]:
| [
"[email protected]"
] | |
c2c571d1543df3e9ae04c706f4659fbe4e3352ec | 9a181799f7b87aace15f0db9afedd861259a48c2 | /At least 1 edge between any 2 vertexes in directed graph.py | 65caf712b4e55b9f6fc994149d45f68c162b02bd | [] | no_license | D-Katt/Coding-examples | 77bea4cf1099019b12bbafd967c1c017adf4e9b8 | 81e8b47857513b7961cab4c09b8c27c20b8b8081 | refs/heads/master | 2021-12-25T05:01:05.026469 | 2021-12-17T13:43:57 | 2021-12-17T13:43:57 | 226,685,637 | 4 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,675 | py | # Ориентированный граф называется полуполным, если между любой парой
# его различных вершин есть хотя бы одно ребро. Для заданного списком ребер
# графа проверьте, является ли он полуполным.
# Сначала вводятся числа n ( 1 <= n <= 100) – количество вершин в графе
# и m ( 1 <= m <= n(n - 1)) – количество ребер. Затем следует m пар чисел –
# ребра графа. Номера вершин начинаются с 0.
# Выведите «YES», если граф является полуполным, и «NO» в противном случае.
from itertools import combinations
n, m = (int(s) for s in input().split())
# n - количество вершин, m - количество ребер
Graph = [[0] * n for _ in range(n)] # Заготовка под матрицу смежности
for i in range(m): # Считываем ребра попарно
a, b = (int(s) for s in input().split())
Graph[a][b] += 1
vertexes = [i for i in range(n)] # Список всех вершин для последующего перебора комбинаций
pairs = combinations(vertexes, 2) # Список комбинаций всех вершин
for a, b in pairs:
if Graph[a][b] + Graph[b][a] < 1: # Проверяем наличие хотя бы одной связи
print("NO") # в каждой комбинации вершин.
exit()
print("YES")
| [
"[email protected]"
] | |
6e0790e32d260b990eddc5aac28a8e17cb474c33 | 6b2a8dd202fdce77c971c412717e305e1caaac51 | /solutions_5630113748090880_0/Python/musicman3320/argus.py | 5443e0aafb386bdc58fc19821a2d13ddc2e2698f | [] | no_license | alexandraback/datacollection | 0bc67a9ace00abbc843f4912562f3a064992e0e9 | 076a7bc7693f3abf07bfdbdac838cb4ef65ccfcf | refs/heads/master | 2021-01-24T18:27:24.417992 | 2017-05-23T09:23:38 | 2017-05-23T09:23:38 | 84,313,442 | 2 | 4 | null | null | null | null | UTF-8 | Python | false | false | 593 | py | fin = open('B-small-attempt0.in', 'r')
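# Tally every height across the 2*N-1 lines of each test case; the heights
# that occur an odd number of times form the answer, in increasing order.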
fout = open('B-small-attempt0.out','w')
numtests = int(fin.readline().rstrip())
for test in range(numtests):
N = int(fin.readline().rstrip())
heightCounts = [0]*2501
for i in range(2*N-1):
page = [int(h) for h in str(fin.readline().rstrip()).split(" ")]
for h in page:
heightCounts[h] = heightCounts[h] + 1
result = []
for h in range(len(heightCounts)):
if heightCounts[h] % 2 == 1:
result = result + [str(h)]
outstr = "Case #" + str(test+1) + ": " + str(' '.join(result)) + "\n"
# print outstr.rstrip()
fout.write(outstr)
| [
"[email protected]"
] | |
98ae99df2b35e5c0d48d6310c4502d8027b57ff4 | 4314b77d958174db744ae29cd914d24435246cd0 | /sparse_ll.py | 5bf449c07b0c7a2b6a45ecffee23624d0ea43e35 | [] | no_license | danieljtait/solid-sniffle | 87aa262a2281be831c4408e6e06871d8affeb65a | 31abbe1387468100708e7d13baa166f97e094af8 | refs/heads/master | 2021-01-11T00:39:22.628372 | 2016-10-17T08:22:00 | 2016-10-17T08:22:00 | 70,501,791 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,648 | py |
import numpy as np
import matplotlib.pyplot as plt
from main import init_dbw_example,potential,Hamiltonian,pStationary,HJacobi_integrator
from main import pTransition
from scipy.interpolate import UnivariateSpline
X = np.loadtxt('dbwData1.txt',delimiter=',')
T = 0.5
tt = np.linspace(0.,T,50)
# For now fix diffusion constant
D2 = .5
z = np.sort(X)
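# objFunc(par): negative log-likelihood of the observed transitions
# X[i] -> X[i+1] under the double-well potential U(x) = x**4 - 2*par*x**2.
# Transition densities are built at the two representative start points in
# xRep and blended linearly for starting values that fall between them.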
def objFunc(par):
# Set up the necessary functions
def f(x):
return -(-x**4 + 2*par*x**2)
def fgrad(x):
return -(4*x*(par-x**2))
U = potential(f,fgrad)
H = Hamiltonian(U,lambda x: D2)
eps = 0.0
def func(x,p):
return eps*(p-H.seperatrix(x))**2
H.set_add_term(func)
Pst = None
rootPar = np.sqrt(par)
#xRep = [-0.63,0.63]
xRep = [z[20],z[-20]]
try :
val = 0.
J = HJacobi_integrator(H,Pst)
pT1 = pTransition(J)
pT1.make(xRep[0],tt)
pT2 = pTransition(J)
pT2.make(xRep[1],tt)
xx = J.xx
"""
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(xx,pT1(xx))
ax.plot(xx,pT2(xx))
"""
val = 0.
for i in range(X.size-1):
x = X[i]
xT = X[i+1]
if x < xRep[0] :
val += np.log(pT1(xT))
elif x > xRep[1] :
val += np.log(pT2(xT))
else:
w = abs(x-xRep[0])/(xRep[1]-xRep[0])
val += np.log( w*pT1(xT) + (1-w)*pT2(xT) )
return -val
    except Exception:  # was a bare except, which also swallowed SystemExit/KeyboardInterrupt
        return np.inf  # the integration failed for this parameter; penalise it
print z[20],z[-20]
ll = []
pars = np.linspace(0.6,0.81,15)
for p in pars:
ll.append(objFunc(p))
ll=np.array(ll)
print pars[np.where(ll == ll.min())[0]]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(pars,ll)
from scipy.optimize import minimize
res = minimize(objFunc,[0.7],method='Nelder-Mead',options={ 'disp': True , 'xatol' :1e-2})
print res
plt.show()
| [
"[email protected]"
] | |
07a3b814581a863498fc66da22f05128fbf8aa7d | 59b3dce3c770e70b2406cc1dd623a2b1f68b8394 | /python_3/lessons/Properties/src/test_teacher.py | f4f1839acbf9975d7e2d0791f8b5ab9b6c999217 | [] | no_license | patrickbeeson/python-classes | 04ed7b54fc4e1152a191eeb35d42adc214b08e39 | b5041e71badd1ca2c013828e3b2910fb02e9728f | refs/heads/master | 2020-05-20T07:17:36.693960 | 2015-01-23T14:41:46 | 2015-01-23T14:41:46 | 29,736,517 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,391 | py | import unittest
from teacher import Teacher
class TestTeacher(unittest.TestCase):
def setUp(self):
self.teacher = Teacher('steve',
'holden',
'63',
['Python 3-1','Python 3-2','Python 3-3'],
5)
def test_get(self):
self.assertEqual(self.teacher.first_name, 'Steve')
self.assertEqual(self.teacher.last_name, 'Holden')
self.assertEqual(self.teacher.age, 63)
self.assertEqual(self.teacher.classes, ['Python 3-1','Python 3-2','Python 3-3'])
self.assertEqual(self.teacher.grade, 'Fifth')
self.teacher.description = 'curmudgeon'
self.assertEqual(self.teacher.description, 'curmudgeon')
def test_set(self):
self.teacher.age = 21
self.assertEqual(self.teacher._age, 21)
self.assertEqual(self.teacher.age, 21)
self.assertRaises(ValueError, self.setAgeWrong)
def setAgeWrong(self):
self.teacher.age = 'twentyone'
def test_delete(self):
del self.teacher.grade
self.assertEqual(self.teacher.age, 64)
self.assertRaises(AttributeError, self.accessGrade)
def accessGrade(self):
return self.teacher.grade
if __name__ == "__main__":
unittest.main()
| [
"[email protected]"
] | |
6794f14678710d8ace89c78a28304ab8181c1c25 | d0002c42833f416d13c2452e3aaf31e34e474231 | /Multibox-FHD-Skin-4ATV/usr/lib/enigma2/python/Components/Renderer/AMBCicontrol.py | 2e7cb609b6a9bb4740eb067d30931f6eb66d2c69 | [] | no_license | stein17/Skins-for-openATV | b146b9d62a1c3149b02af09253a225db43783768 | ad67a0336e8cdba54bf6c5fda42cb12e2b820b05 | refs/heads/master | 2023-08-14T21:31:18.530737 | 2022-08-29T00:35:44 | 2022-08-29T00:35:44 | 94,653,168 | 12 | 18 | null | 2022-04-26T06:02:50 | 2017-06-17T22:47:47 | Python | UTF-8 | Python | false | false | 1,832 | py | #by Nikolasi and jbleyel
from Components.Renderer.Renderer import Renderer
from enigma import ePixmap, eDVBCI_UI, eDVBCIInterfaces, eEnv
from Tools.Directories import fileExists
from Components.Converter.Poll import Poll
class AMBCicontrol(Renderer, Poll):
searchPaths = [eEnv.resolve('${datadir}/enigma2/Multibox/%s/')]
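	# '%s' in the search path above is filled with self.path (a skin
	# attribute) inside findPicon() below.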
def __init__(self):
Poll.__init__(self)
Renderer.__init__(self)
self.path = 'module'
self.slot = 0
self.nameCache = { }
self.pngname = ""
def applySkin(self, desktop, parent):
attribs = []
for (attrib, value,) in self.skinAttributes:
if attrib == 'path':
self.path = value
elif attrib == 'slot':
self.slot = int(value)
else:
attribs.append((attrib, value))
self.skinAttributes = attribs
return Renderer.applySkin(self, desktop, parent)
GUI_WIDGET = ePixmap
def changed(self, what):
self.poll_interval = 1000
self.poll_enabled = True
if self.instance:
text = "nomodule"
pngname = ''
if what[0] != self.CHANGED_CLEAR:
service = self.source.service
if service:
NUM_CI=eDVBCIInterfaces.getInstance().getNumOfSlots()
if NUM_CI > 0:
state = eDVBCI_UI.getInstance().getState(self.slot)
if state != -1:
if state == 0:
text = "nomodule"
elif state == 1:
text = "initmodule"
elif state == 2:
text = "ready"
pngname = self.nameCache.get(text, "")
if pngname == "":
pngname = self.findPicon(text)
if pngname != "":
self.nameCache[text] = pngname
else:
return
if self.pngname != pngname:
self.instance.setPixmapFromFile(pngname)
self.pngname = pngname
def findPicon(self, serviceName):
for path in self.searchPaths:
pngname = (path % self.path) + serviceName + ".png"
if fileExists(pngname):
return pngname
return ""
| [
"[email protected]"
] | |
f697416ba21ad9c5bf52caaad7472d1afcf3e15f | fc678a0a5ede80f593a29ea8f43911236ed1b862 | /575-DistributeCandies.py | 4fa220fe6385d42dd5572d2b3e9528ca85753fa5 | [] | no_license | dq-code/leetcode | 4be0b1b154f8467aa0c07e08b5e0b6bd93863e62 | 14dcf9029486283b5e4685d95ebfe9979ade03c3 | refs/heads/master | 2020-12-13T15:57:30.171516 | 2017-11-07T17:43:19 | 2017-11-07T17:43:19 | 35,846,262 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 207 | py | class Solution(object):
def distributeCandies(self, candies):
"""
:type candies: List[int]
:rtype: int
"""
        return min(len(candies) // 2, len(set(candies)))  # floor division matches the declared int return
| [
"[email protected]"
] | |
f0f572dd9a8b601278b6a14b38b8ab2ede39f5d8 | e60487a8f5aad5aab16e671dcd00f0e64379961b | /python_stack/Algos/list_comprehension/interview2.py | 88cd08b39166faf1fc9acacbb6ec43f53c1757b6 | [] | no_license | reenadangi/python | 4fde31737e5745bc5650d015e3fa4354ce9e87a9 | 568221ba417dda3be7f2ef1d2f393a7dea6ccb74 | refs/heads/master | 2021-08-18T08:25:40.774877 | 2021-03-27T22:20:17 | 2021-03-27T22:20:17 | 247,536,946 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 685 | py | # LIST COMPREHENSIONS
# 1. return a list of numbers between (1-100)
# 2. List - I want n for each n in nums[1,2,3,4,5,6,7,8,9,10] -(even/odd)
# 3. List of dictionaries - return a list of brand and model for year > 2000
# cars=[
# {'brand':'Ford','model':'Mustang','year':1964},
# {'brand':'Ford','model':'Ranger','year':1960},
# {'brand':'Audi','model':'A8','year':2008},
# {'brand':'BMW','model':'X7','year':2007}
# ]
# 4. Creating a dictionary with list comprehensions
# brands=['Ford','Audi','BMW']
# cars=['Ranger','A8','X7']
# {'Ford':'Ranger','Audi':'A8','BMW':'X7'}
# I want a dict 'brand':'car' for each brand and car
# [expression for item in iterable if condition]
# (sketch solutions below)
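# Sketch solutions (my own additions, not part of the original prompts):
nums = list(range(1, 101))                                         # 1.
parity = ["even" if n % 2 == 0 else "odd" for n in range(1, 11)]   # 2.
cars_data = [
    {'brand': 'Ford', 'model': 'Mustang', 'year': 1964},
    {'brand': 'Ford', 'model': 'Ranger', 'year': 1960},
    {'brand': 'Audi', 'model': 'A8', 'year': 2008},
    {'brand': 'BMW', 'model': 'X7', 'year': 2007},
]
newer = [(c['brand'], c['model']) for c in cars_data if c['year'] > 2000]  # 3.
brand_to_car = {b: c for b, c in zip(['Ford', 'Audi', 'BMW'],
                                     ['Ranger', 'A8', 'X7'])}              # 4.
print(nums[:5], parity, newer, brand_to_car)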
| [
"[email protected]"
] | |
c1f8bb3be62c97ced5dc256c82611d9584b30bc9 | 960b3a17a4011264a001304e64bfb76d669b8ac5 | /mstrio/modeling/schema/__init__.py | 01d0f898ac0d5d563da16c9addf1e2a2610c83f7 | [
"Apache-2.0"
] | permissive | MicroStrategy/mstrio-py | 012d55df782a56dab3a32e0217b9cbfd0b59b8dd | c6cea33b15bcd876ded4de25138b3f5e5165cd6d | refs/heads/master | 2023-08-08T17:12:07.714614 | 2023-08-03T12:30:11 | 2023-08-03T12:30:11 | 138,627,591 | 84 | 60 | Apache-2.0 | 2023-07-31T06:43:33 | 2018-06-25T17:23:55 | Python | UTF-8 | Python | false | false | 323 | py | # flake8: noqa
from .attribute import *
from .fact import *
from .helpers import *
from .schema_management import (
SchemaLockStatus,
SchemaLockType,
SchemaManagement,
SchemaTask,
SchemaTaskStatus,
SchemaUpdateType,
)
from .table import *
from .transformation import *
from .user_hierarchy import *
| [
"[email protected]"
] | |
a1224e592c6abcd25d41d21b5503c12b326683c4 | 8e24e8bba2dd476f9fe612226d24891ef81429b7 | /geeksforgeeks/python/python_all/170_8.py | 8e5e0f599ffbaa4d32d8dfe5afc7b1f9031f152f | [] | no_license | qmnguyenw/python_py4e | fb56c6dc91c49149031a11ca52c9037dc80d5dcf | 84f37412bd43a3b357a17df9ff8811eba16bba6e | refs/heads/master | 2023-06-01T07:58:13.996965 | 2021-06-15T08:39:26 | 2021-06-15T08:39:26 | 349,059,725 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,392 | py | Python | Check if list is strictly increasing
Testing whether a sequence is monotonic comes up constantly in mathematics,
and since mathematics and computer science run in parallel, checking that a
list is strictly increasing is a useful operation to have at hand; the same
ideas extend directly to strictly decreasing lists. Let's discuss certain
ways to perform this test.
**Method #1 : Using all() + zip()**
all() checks that every element fed to it is truthy. zip() pairs the list
with a copy of itself shifted by one element, so each element can be
compared with its successor.
# Python3 code to demonstrate
# to check for strictly increasing list
# using zip() + all()
# initializing list
test_list = [1, 4, 5, 7, 8, 10]
# printing original lists
print ("Original list : " + str(test_list))
# using zip() + all()
# to check for strictly increasing list
res = all(i < j for i, j in zip(test_list,
test_list[1:]))
# printing result
print ("Is list strictly increasing ? : " + str(res))
**Output:**
Original list : [1, 4, 5, 7, 8, 10]
Is list strictly increasing ? : True
**Method #2 : Using reduce() + lambda**
reduce() coupled with a lambda can also perform this monotonicity check: the
lambda compares each element with its successor, and reduce() folds that
comparison across the list, collapsing to a sentinel value as soon as any
pair fails. Note that on Python 3, reduce must be imported from functools.
# Python3 code to demonstrate
# to check for strictly increasing list
# using reduce() + lambda
from functools import reduce
# initializing list
test_list = [1, 4, 5, 7, 8, 10]
# printing original lists
print ("Original list : " + str(test_list))
# using reduce() + lambda
# to check for strictly increasing list
# (9999 acts as a failure sentinel, so it must exceed every list element)
res = reduce(lambda i, j: j if i < j else 9999, test_list) != 9999
# printing result
print ("Is list strictly increasing ? : " + str(res))
**Output:**
Original list : [1, 4, 5, 7, 8, 10]
Is list strictly increasing ? : True
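As a side note (an addition, not in the original article): on Python 3.10+,
itertools.pairwise yields the same adjacent pairs directly:

    from itertools import pairwise
    res = all(i < j for i, j in pairwise(test_list))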
| [
"[email protected]"
] |