Dataset schema (column name, type, observed range):

| Column | Type | Min | Max |
|---|---|---|---|
| blob_id | stringlengths | 40 | 40 |
| directory_id | stringlengths | 40 | 40 |
| path | stringlengths | 3 | 616 |
| content_id | stringlengths | 40 | 40 |
| detected_licenses | sequencelengths | 0 | 112 |
| license_type | stringclasses | 2 values | |
| repo_name | stringlengths | 5 | 115 |
| snapshot_id | stringlengths | 40 | 40 |
| revision_id | stringlengths | 40 | 40 |
| branch_name | stringclasses | 777 values | |
| visit_date | timestamp[us] | 2015-08-06 10:31:46 | 2023-09-06 10:44:38 |
| revision_date | timestamp[us] | 1970-01-01 02:38:32 | 2037-05-03 13:00:00 |
| committer_date | timestamp[us] | 1970-01-01 02:38:32 | 2023-09-06 01:08:06 |
| github_id | int64 ⌀ | 4.92k | 681M |
| star_events_count | int64 | 0 | 209k |
| fork_events_count | int64 | 0 | 110k |
| gha_license_id | stringclasses | 22 values | |
| gha_event_created_at | timestamp[us] ⌀ | 2012-06-04 01:52:49 | 2023-09-14 21:59:50 |
| gha_created_at | timestamp[us] ⌀ | 2008-05-22 07:58:19 | 2023-08-21 12:35:19 |
| gha_language | stringclasses | 149 values | |
| src_encoding | stringclasses | 26 values | |
| language | stringclasses | 1 value | |
| is_vendor | bool | 2 classes | |
| is_generated | bool | 2 classes | |
| length_bytes | int64 | 3 | 10.2M |
| extension | stringclasses | 188 values | |
| content | stringlengths | 3 | 10.2M |
| authors | sequencelengths | 1 | 1 |
| author_id | stringlengths | 1 | 132 |

⌀ = column also contains null values.
2a9ec919aa12c00f7699703cd9f9a960cd3df308 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03696/s927043333.py | 8af9a814da95dcba853bbf295beefd9db08be572 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 131 | py | n = input()
s = input()
ss = s
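# Repeatedly deleting matched '()' pairs leaves only unmatched brackets of
# the shape ')))...(((', so prepending one '(' per leftover ')' and
# appending one ')' per leftover '(' yields a balanced sequence containing s.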
for i in range(50):
s = s.replace('()','')
l = s.count(')')
r = s.count('(')
print('('*l+ss+')'*r) | [
"[email protected]"
] | |
d1d76cd6e4cb3c8a367de07f3c437ae79778c91d | f8d3f814067415485bb439d7fe92dc2bbe22a048 | /models/research/differential_privacy/pate/ICLR2018/smooth_sensitivity_table.py | 4ea0a7f5a9b072437eded0fb6fc00d919ef95d9e | [
"Apache-2.0"
] | permissive | gmonkman/python | 2f9ab8f159c01f6235c86cb0cd52062cd3fdedd3 | 9123aa6baf538b662143b9098d963d55165e8409 | refs/heads/master | 2023-04-09T15:53:29.746676 | 2022-11-26T20:35:21 | 2022-11-26T20:35:21 | 60,254,898 | 0 | 2 | null | 2023-03-24T22:58:39 | 2016-06-02T10:25:27 | Python | UTF-8 | Python | false | false | 14,228 | py | # Copyright 2017 The 'Scalable Private Learning with PATE' Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Performs privacy analysis of GNMax with smooth sensitivity.
A script in support of the paper "Scalable Private Learning with PATE" by
Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar,
Ulfar Erlingsson (https://arxiv.org/abs/1802.08908).
Several flavors of the GNMax algorithm can be analyzed.
- Plain GNMax (argmax w/ Gaussian noise) is assumed when arguments threshold
and sigma2 are missing.
- Confident GNMax (thresholding + argmax w/ Gaussian noise) is used when
threshold, sigma1, and sigma2 are given.
- Interactive GNMax (two- or multi-round) is triggered by specifying
baseline_file, which provides baseline values for votes selection in Step 1.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import os
import sys
sys.path.append('..') # Main modules reside in the parent directory.
from absl import app
from absl import flags
import numpy as np
import core as pate
import smooth_sensitivity as pate_ss
FLAGS = flags.FLAGS
flags.DEFINE_string('counts_file', None, 'Counts file.')
flags.DEFINE_string('baseline_file', None, 'File with baseline scores.')
flags.DEFINE_boolean('data_independent', False,
'Force data-independent bounds.')
flags.DEFINE_float('threshold', None, 'Threshold for step 1 (selection).')
flags.DEFINE_float('sigma1', None, 'Sigma for step 1 (selection).')
flags.DEFINE_float('sigma2', None, 'Sigma for step 2 (argmax).')
flags.DEFINE_integer('queries', None, 'Number of queries made by the student.')
flags.DEFINE_float('delta', 1e-8, 'Target delta.')
flags.DEFINE_float(
'order', None,
'Fixes a Renyi DP order (if unspecified, finds an optimal order from a '
'hardcoded list).')
flags.DEFINE_integer(
'teachers', None,
'Number of teachers (if unspecified, derived from the counts file).')
flags.mark_flag_as_required('counts_file')
flags.mark_flag_as_required('sigma2')
def _check_conditions(sigma, num_classes, orders):
"""Symbolic-numeric verification of conditions C5 and C6.
The conditions on the beta function are verified by constructing the beta
function symbolically, and then checking that its derivative (computed
symbolically) is non-negative within the interval of conjectured monotonicity.
The last check is performed numerically.
"""
print('Checking conditions C5 and C6 for all orders.')
sys.stdout.flush()
conditions_hold = True
for order in orders:
cond5, cond6 = pate_ss.check_conditions(sigma, num_classes, order)
conditions_hold &= cond5 and cond6
if not cond5:
print('Condition C5 does not hold for order =', order)
elif not cond6:
print('Condition C6 does not hold for order =', order)
if conditions_hold:
print('Conditions C5-C6 hold for all orders.')
sys.stdout.flush()
return conditions_hold
def _compute_rdp(votes, baseline, threshold, sigma1, sigma2, delta, orders,
data_ind):
"""Computes the (data-dependent) RDP curve for Confident GNMax."""
rdp_cum = np.zeros(len(orders))
rdp_sqrd_cum = np.zeros(len(orders))
answered = 0
for i, v in enumerate(votes):
if threshold is None:
logq_step1 = 0 # No thresholding, always proceed to step 2.
rdp_step1 = np.zeros(len(orders))
else:
logq_step1 = pate.compute_logpr_answered(threshold, sigma1,
v - baseline[i,])
if data_ind:
rdp_step1 = pate.compute_rdp_data_independent_threshold(sigma1, orders)
else:
rdp_step1 = pate.compute_rdp_threshold(logq_step1, sigma1, orders)
if data_ind:
rdp_step2 = pate.rdp_data_independent_gaussian(sigma2, orders)
else:
logq_step2 = pate.compute_logq_gaussian(v, sigma2)
rdp_step2 = pate.rdp_gaussian(logq_step2, sigma2, orders)
q_step1 = np.exp(logq_step1)
rdp = rdp_step1 + rdp_step2 * q_step1
# The expression below evaluates
# E[(cost_of_step_1 + Bernoulli(pr_of_step_2) * cost_of_step_2)^2]
rdp_sqrd = (
rdp_step1**2 + 2 * rdp_step1 * q_step1 * rdp_step2 +
q_step1 * rdp_step2**2)
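    # Expansion: with B ~ Bernoulli(q_step1) independent of the two costs,
    # E[(rdp_step1 + B * rdp_step2)^2]
    #     = rdp_step1^2 + 2 * rdp_step1 * q_step1 * rdp_step2
    #       + q_step1 * rdp_step2^2,
    # since E[B] = E[B^2] = q_step1.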
rdp_sqrd_cum += rdp_sqrd
rdp_cum += rdp
answered += q_step1
if ((i + 1) % 1000 == 0) or (i == votes.shape[0] - 1):
rdp_var = rdp_sqrd_cum / i - (
rdp_cum / i)**2 # Ignore Bessel's correction.
eps_total, order_opt = pate.compute_eps_from_delta(orders, rdp_cum, delta)
order_opt_idx = np.searchsorted(orders, order_opt)
eps_std = ((i + 1) * rdp_var[order_opt_idx])**.5 # Std of the sum.
print(
'queries = {}, E[answered] = {:.2f}, E[eps] = {:.3f} (std = {:.5f}) '
'at order = {:.2f} (contribution from delta = {:.3f})'.format(
i + 1, answered, eps_total, eps_std, order_opt,
-math.log(delta) / (order_opt - 1)))
sys.stdout.flush()
_, order_opt = pate.compute_eps_from_delta(orders, rdp_cum, delta)
return order_opt
def _find_optimal_smooth_sensitivity_parameters(
votes, baseline, num_teachers, threshold, sigma1, sigma2, delta, ind_step1,
ind_step2, order):
"""Optimizes smooth sensitivity parameters by minimizing a cost function.
The cost function is
exact_eps + cost of GNSS + two stds of noise,
  which captures the upper bound of the confidence interval of the sanitized
privacy budget.
Since optimization is done with full view of sensitive data, the results
cannot be released.
"""
rdp_cum = 0
answered_cum = 0
ls_cum = 0
# Define a plausible range for the beta values.
betas = np.arange(.3 / order, .495 / order, .01 / order)
cost_delta = math.log(1 / delta) / (order - 1)
for i, v in enumerate(votes):
if threshold is None:
log_pr_answered = 0
rdp1 = 0
ls_step1 = np.zeros(num_teachers)
else:
log_pr_answered = pate.compute_logpr_answered(threshold, sigma1,
v - baseline[i,])
if ind_step1: # apply data-independent bound for step 1 (thresholding).
rdp1 = pate.compute_rdp_data_independent_threshold(sigma1, order)
ls_step1 = np.zeros(num_teachers)
else:
rdp1 = pate.compute_rdp_threshold(log_pr_answered, sigma1, order)
ls_step1 = pate_ss.compute_local_sensitivity_bounds_threshold(
v - baseline[i,], num_teachers, threshold, sigma1, order)
pr_answered = math.exp(log_pr_answered)
answered_cum += pr_answered
if ind_step2: # apply data-independent bound for step 2 (GNMax).
rdp2 = pate.rdp_data_independent_gaussian(sigma2, order)
ls_step2 = np.zeros(num_teachers)
else:
logq_step2 = pate.compute_logq_gaussian(v, sigma2)
rdp2 = pate.rdp_gaussian(logq_step2, sigma2, order)
# Compute smooth sensitivity.
ls_step2 = pate_ss.compute_local_sensitivity_bounds_gnmax(
v, num_teachers, sigma2, order)
rdp_cum += rdp1 + pr_answered * rdp2
ls_cum += ls_step1 + pr_answered * ls_step2 # Expected local sensitivity.
if ind_step1 and ind_step2:
# Data-independent bounds.
cost_opt, beta_opt, ss_opt, sigma_ss_opt = None, 0., 0., np.inf
else:
# Data-dependent bounds.
cost_opt, beta_opt, ss_opt, sigma_ss_opt = np.inf, None, None, None
for beta in betas:
ss = pate_ss.compute_discounted_max(beta, ls_cum)
# Solution to the minimization problem:
# min_sigma {order * exp(2 * beta)/ sigma^2 + 2 * ss * sigma}
sigma_ss = ((order * math.exp(2 * beta)) / ss)**(1 / 3)
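      # Setting the derivative of f(sigma) = order * exp(2 * beta) / sigma^2
      # + 2 * ss * sigma to zero gives sigma^3 = order * exp(2 * beta) / ss,
      # i.e. the cube root computed above.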
cost_ss = pate_ss.compute_rdp_of_smooth_sensitivity_gaussian(
beta, sigma_ss, order)
# Cost captures exact_eps + cost of releasing SS + two stds of noise.
cost = rdp_cum + cost_ss + 2 * ss * sigma_ss
if cost < cost_opt:
cost_opt, beta_opt, ss_opt, sigma_ss_opt = cost, beta, ss, sigma_ss
if ((i + 1) % 100 == 0) or (i == votes.shape[0] - 1):
eps_before_ss = rdp_cum + cost_delta
eps_with_ss = (
eps_before_ss + pate_ss.compute_rdp_of_smooth_sensitivity_gaussian(
beta_opt, sigma_ss_opt, order))
print('{}: E[answered queries] = {:.1f}, RDP at {} goes from {:.3f} to '
'{:.3f} +/- {:.3f} (ss = {:.4}, beta = {:.4f}, sigma_ss = {:.3f})'.
format(i + 1, answered_cum, order, eps_before_ss, eps_with_ss,
ss_opt * sigma_ss_opt, ss_opt, beta_opt, sigma_ss_opt))
sys.stdout.flush()
# Return optimal parameters for the last iteration.
return beta_opt, ss_opt, sigma_ss_opt
####################
# HELPER FUNCTIONS #
####################
def _load_votes(counts_file, baseline_file, queries):
counts_file_expanded = os.path.expanduser(counts_file)
print('Reading raw votes from ' + counts_file_expanded)
sys.stdout.flush()
votes = np.load(counts_file_expanded)
print('Shape of the votes matrix = {}'.format(votes.shape))
if baseline_file is not None:
baseline_file_expanded = os.path.expanduser(baseline_file)
print('Reading baseline values from ' + baseline_file_expanded)
sys.stdout.flush()
baseline = np.load(baseline_file_expanded)
if votes.shape != baseline.shape:
raise ValueError(
'Counts file and baseline file must have the same shape. Got {} and '
'{} instead.'.format(votes.shape, baseline.shape))
else:
baseline = np.zeros_like(votes)
if queries is not None:
if votes.shape[0] < queries:
raise ValueError('Expect {} rows, got {} in {}'.format(
queries, votes.shape[0], counts_file))
# Truncate the votes matrix to the number of queries made.
votes = votes[:queries,]
baseline = baseline[:queries,]
else:
print('Process all {} input rows. (Use --queries flag to truncate.)'.format(
votes.shape[0]))
return votes, baseline
def _count_teachers(votes):
s = np.sum(votes, axis=1)
num_teachers = int(max(s))
if min(s) != num_teachers:
raise ValueError(
'Matrix of votes is malformed: the number of votes is not the same '
'across rows.')
return num_teachers
def _is_data_ind_step1(num_teachers, threshold, sigma1, orders):
if threshold is None:
return True
return np.all(
pate.is_data_independent_always_opt_threshold(num_teachers, threshold,
sigma1, orders))
def _is_data_ind_step2(num_teachers, num_classes, sigma, orders):
return np.all(
pate.is_data_independent_always_opt_gaussian(num_teachers, num_classes,
sigma, orders))
def main(argv):
del argv # Unused.
if (FLAGS.threshold is None) != (FLAGS.sigma1 is None):
raise ValueError(
'--threshold flag and --sigma1 flag must be present or absent '
'simultaneously.')
if FLAGS.order is None:
# Long list of orders.
orders = np.concatenate((np.arange(2, 100 + 1, .5),
np.logspace(np.log10(100), np.log10(500),
num=100)))
# Short list of orders.
# orders = np.round(
# np.concatenate((np.arange(2, 50 + 1, 1),
# np.logspace(np.log10(50), np.log10(1000), num=20))))
else:
orders = np.array([FLAGS.order])
votes, baseline = _load_votes(FLAGS.counts_file, FLAGS.baseline_file,
FLAGS.queries)
if FLAGS.teachers is None:
num_teachers = _count_teachers(votes)
else:
num_teachers = FLAGS.teachers
num_classes = votes.shape[1]
order = _compute_rdp(votes, baseline, FLAGS.threshold, FLAGS.sigma1,
FLAGS.sigma2, FLAGS.delta, orders,
FLAGS.data_independent)
ind_step1 = _is_data_ind_step1(num_teachers, FLAGS.threshold, FLAGS.sigma1,
order)
ind_step2 = _is_data_ind_step2(num_teachers, num_classes, FLAGS.sigma2, order)
if FLAGS.data_independent or (ind_step1 and ind_step2):
print('Nothing to do here, all analyses are data-independent.')
return
if not _check_conditions(FLAGS.sigma2, num_classes, [order]):
return # Quit early: sufficient conditions for correctness fail to hold.
beta_opt, ss_opt, sigma_ss_opt = _find_optimal_smooth_sensitivity_parameters(
votes, baseline, num_teachers, FLAGS.threshold, FLAGS.sigma1,
FLAGS.sigma2, FLAGS.delta, ind_step1, ind_step2, order)
print('Optimal beta = {:.4f}, E[SS_beta] = {:.4}, sigma_ss = {:.2f}'.format(
beta_opt, ss_opt, sigma_ss_opt))
if __name__ == '__main__':
app.run(main)
| [
"[email protected]"
] | |
4f1483114b60a70f6e62341fea4736adc8e4abbf | 35f9695acb95029f2dd87a2cc214b0b34935de17 | /update.py | 9f4da1693c181b6386fe9b140233603190f5e838 | [
"BSD-3-Clause"
] | permissive | dragonix11/aurdiff | b5382e7fd38f4d2c370ad157fcaf18d8ba48c0d9 | b4ffdb2afcd30cac7cf24ca42fab0f0cdc7130e0 | refs/heads/master | 2021-01-18T18:43:29.145163 | 2013-11-10T10:19:26 | 2013-11-10T10:19:26 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,337 | py | # Copyright 2013 Virgil Dupras (http://www.hardcoded.net)
#
# This software is licensed under the "BSD" License as described in the "LICENSE" file,
# which should be included with this package.
import os.path as op
from urllib.request import urlopen
import subprocess
import json
from bs4 import BeautifulSoup
HERE = op.dirname(__file__)
AUR_FOLDER = op.join(HERE, 'aur')
BASE_URL = 'https://aur.archlinux.org'
def get_pkg_list():
result = [] # (name, version)
URL = BASE_URL + '/packages/?SB=a&SO=d&O=0&PP=250'
with urlopen(URL) as fp:
contents = fp.read()
soup = BeautifulSoup(contents)
table = soup('table', class_='results')[0]
rows = table.tbody('tr')
for row in rows:
# Strangely enough, when querying through urlopen, we don't have the checkbox column. Is
# this column added through JS?
pair = (row('td')[1].text, row('td')[2].text)
result.append(pair)
return result
def download_pkgbuild(pkgname):
URL = '%s/packages/%s/' % (BASE_URL, pkgname)
with urlopen(URL) as fp:
contents = fp.read()
soup = BeautifulSoup(contents)
pkgbuild_url = BASE_URL + soup('div', id='actionlist')[0].ul('li')[0].a['href']
with urlopen(pkgbuild_url) as fp:
contents = fp.read()
with open(op.join(AUR_FOLDER, pkgname), 'wb') as fp:
fp.write(contents)
def main():
json_path = op.join(HERE, 'lastupdate.json')
with open(json_path, 'rt') as fp:
info = json.load(fp)
lastname = info['name']
lastversion = info['version']
pkglist = get_pkg_list()
if (lastname, lastversion) in pkglist:
index = pkglist.index((lastname, lastversion))
pkglist = pkglist[:index]
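    # The listing fetched above is sorted newest-first (SO=d in the query
    # string), so every entry before the previously recorded (name, version)
    # pair is a package not yet processed.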
if not pkglist:
print("Nothing to update")
return
for name, version in reversed(pkglist):
print("Updating %s to %s" % (name, version))
download_pkgbuild(name)
subprocess.call(['git', 'add', op.join(AUR_FOLDER, name)])
lastname, lastversion = pkglist[0]
info = {'name': lastname, 'version': lastversion}
with open(json_path, 'wt') as fp:
json.dump(info, fp)
subprocess.call(['git', 'add', json_path])
commit_msg = "Updated %d packages" % len(pkglist)
subprocess.call(['git', 'commit', '-m', commit_msg])
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
2c2f2040fde54d6c77504275f121070bd8c62399 | f67469cba32f16399ef2e65d2731c5eae36a53b3 | /config/settings/base.py | 0fc9bcb527d5fe44323aef0ac7bfc93b4daf6ca8 | [
"MIT"
] | permissive | ZedeLab/WhatsUpAddis-BE | 1f3fecc9c5705eca500f13aa5a831fcf81fb5faf | 702d14eff969673ce88dbd6f4cad690cbb580c30 | refs/heads/master | 2023-04-06T03:53:57.537028 | 2018-11-30T21:54:24 | 2018-11-30T21:54:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,476 | py | """
Django settings for config project.
Generated by 'django-admin startproject' using Django 2.0.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
from decouple import config
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config('SECRET_KEY')
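# python-decouple's config() reads SECRET_KEY (and the DB_* values used
# below) from an .env file or the process environment, keeping secrets out
# of version control.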
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
# Third Party apps
INSTALLED_APPS += [
'authtools',
]
# Project apps
INSTALLED_APPS += [
'accounts',
'core',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'config.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates'), ],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'config.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': config('DB_NAME'),
'USER': config('DB_USER'),
'PASSWORD': config('DB_PASSWORD'),
'HOST': config('DB_HOST'),
'PORT': '',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Africa/Addis_Ababa'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
# Custom Auth User Model
AUTH_USER_MODEL = 'accounts.User'
| [
"[email protected]"
] | |
f9d5e2852c279495088d3720a464eafb1411bdd6 | 0607fa7255fa47608407b3cfd819e83d55ba9eab | /InvenTree/stock/views.py | 26a957a64403ad18198d94f4949cfe424beec2bd | [
"MIT"
] | permissive | IsThisNameGoodEnough/InvenTree | f7d71aa8c33f69654b2bb4d3827d4a60290df8ad | fa789036e0ae7d56ced3c9e1f2d2ff596983a365 | refs/heads/master | 2020-07-26T02:31:34.316571 | 2019-09-13T14:14:45 | 2019-09-13T14:14:45 | 208,505,299 | 0 | 0 | MIT | 2019-09-14T21:20:24 | 2019-09-14T21:20:24 | null | UTF-8 | Python | false | false | 31,264 | py | """
Django views for interacting with Stock app
"""
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.core.exceptions import ValidationError
from django.views.generic.edit import FormMixin
from django.views.generic import DetailView, ListView
from django.forms.models import model_to_dict
from django.forms import HiddenInput
from django.urls import reverse
from django.utils.translation import ugettext as _
from InvenTree.views import AjaxView
from InvenTree.views import AjaxUpdateView, AjaxDeleteView, AjaxCreateView
from InvenTree.views import QRCodeView
from InvenTree.status_codes import StockStatus
from InvenTree.helpers import str2bool, DownloadFile, GetExportFormats
from InvenTree.helpers import ExtractSerialNumbers
from datetime import datetime
import tablib
from company.models import Company
from part.models import Part
from .models import StockItem, StockLocation, StockItemTracking
from .forms import EditStockLocationForm
from .forms import CreateStockItemForm
from .forms import EditStockItemForm
from .forms import AdjustStockForm
from .forms import TrackingEntryForm
from .forms import SerializeStockForm
from .forms import ExportOptionsForm
class StockIndex(ListView):
""" StockIndex view loads all StockLocation and StockItem object
"""
model = StockItem
template_name = 'stock/location.html'
    context_object_name = 'locations'
def get_context_data(self, **kwargs):
context = super(StockIndex, self).get_context_data(**kwargs).copy()
# Return all top-level locations
locations = StockLocation.objects.filter(parent=None)
context['locations'] = locations
context['items'] = StockItem.objects.all()
return context
class StockLocationDetail(DetailView):
"""
Detailed view of a single StockLocation object
"""
context_object_name = 'location'
template_name = 'stock/location.html'
queryset = StockLocation.objects.all()
model = StockLocation
class StockItemDetail(DetailView):
"""
Detailed view of a single StockItem object
"""
context_object_name = 'item'
template_name = 'stock/item.html'
queryset = StockItem.objects.all()
model = StockItem
class StockLocationEdit(AjaxUpdateView):
"""
View for editing details of a StockLocation.
This view is used with the EditStockLocationForm to deliver a modal form to the web view
"""
model = StockLocation
form_class = EditStockLocationForm
context_object_name = 'location'
ajax_template_name = 'modal_form.html'
ajax_form_title = 'Edit Stock Location'
def get_form(self):
""" Customize form data for StockLocation editing.
Limit the choices for 'parent' field to those which make sense.
"""
form = super(AjaxUpdateView, self).get_form()
location = self.get_object()
# Remove any invalid choices for the 'parent' field
parent_choices = StockLocation.objects.all()
parent_choices = parent_choices.exclude(id__in=location.getUniqueChildren())
form.fields['parent'].queryset = parent_choices
return form
class StockLocationQRCode(QRCodeView):
""" View for displaying a QR code for a StockLocation object """
ajax_form_title = "Stock Location QR code"
def get_qr_data(self):
""" Generate QR code data for the StockLocation """
try:
loc = StockLocation.objects.get(id=self.pk)
return loc.format_barcode()
except StockLocation.DoesNotExist:
return None
class StockExportOptions(AjaxView):
""" Form for selecting StockExport options """
model = StockLocation
ajax_form_title = 'Stock Export Options'
form_class = ExportOptionsForm
def post(self, request, *args, **kwargs):
self.request = request
fmt = request.POST.get('file_format', 'csv').lower()
cascade = str2bool(request.POST.get('include_sublocations', False))
# Format a URL to redirect to
url = reverse('stock-export')
url += '?format=' + fmt
url += '&cascade=' + str(cascade)
data = {
'form_valid': True,
'format': fmt,
'cascade': cascade
}
return self.renderJsonResponse(self.request, self.form_class(), data=data)
def get(self, request, *args, **kwargs):
return self.renderJsonResponse(request, self.form_class())
class StockExport(AjaxView):
""" Export stock data from a particular location.
Returns a file containing stock information for that location.
"""
model = StockItem
def get(self, request, *args, **kwargs):
export_format = request.GET.get('format', 'csv').lower()
# Check if a particular location was specified
loc_id = request.GET.get('location', None)
location = None
if loc_id:
try:
location = StockLocation.objects.get(pk=loc_id)
except (ValueError, StockLocation.DoesNotExist):
pass
# Check if a particular supplier was specified
sup_id = request.GET.get('supplier', None)
supplier = None
if sup_id:
try:
supplier = Company.objects.get(pk=sup_id)
except (ValueError, Company.DoesNotExist):
pass
# Check if a particular part was specified
part_id = request.GET.get('part', None)
part = None
if part_id:
try:
part = Part.objects.get(pk=part_id)
except (ValueError, Part.DoesNotExist):
pass
if export_format not in GetExportFormats():
export_format = 'csv'
filename = 'InvenTree_Stocktake_{date}.{fmt}'.format(
date=datetime.now().strftime("%d-%b-%Y"),
fmt=export_format
)
if location:
            # Check if locations should be cascading
cascade = str2bool(request.GET.get('cascade', True))
stock_items = location.get_stock_items(cascade)
else:
cascade = True
stock_items = StockItem.objects.all()
if part:
stock_items = stock_items.filter(part=part)
if supplier:
stock_items = stock_items.filter(supplier_part__supplier=supplier)
# Filter out stock items that are not 'in stock'
stock_items = stock_items.filter(customer=None)
stock_items = stock_items.filter(belongs_to=None)
# Pre-fetch related fields to reduce DB queries
stock_items = stock_items.prefetch_related('part', 'supplier_part__supplier', 'location', 'purchase_order', 'build')
# Column headers
headers = [
_('Stock ID'),
_('Part ID'),
_('Part'),
_('Supplier Part ID'),
_('Supplier ID'),
_('Supplier'),
_('Location ID'),
_('Location'),
_('Quantity'),
_('Batch'),
_('Serial'),
_('Status'),
_('Notes'),
_('Review Needed'),
_('Last Updated'),
_('Last Stocktake'),
_('Purchase Order ID'),
_('Build ID'),
]
data = tablib.Dataset(headers=headers)
for item in stock_items:
line = []
line.append(item.pk)
line.append(item.part.pk)
line.append(item.part.full_name)
if item.supplier_part:
line.append(item.supplier_part.pk)
line.append(item.supplier_part.supplier.pk)
line.append(item.supplier_part.supplier.name)
else:
line.append('')
line.append('')
line.append('')
if item.location:
line.append(item.location.pk)
line.append(item.location.name)
else:
line.append('')
line.append('')
line.append(item.quantity)
line.append(item.batch)
line.append(item.serial)
line.append(StockStatus.label(item.status))
line.append(item.notes)
line.append(item.review_needed)
line.append(item.updated)
line.append(item.stocktake_date)
if item.purchase_order:
line.append(item.purchase_order.pk)
else:
line.append('')
if item.build:
line.append(item.build.pk)
else:
line.append('')
data.append(line)
filedata = data.export(export_format)
return DownloadFile(filedata, filename)
class StockItemQRCode(QRCodeView):
""" View for displaying a QR code for a StockItem object """
ajax_form_title = "Stock Item QR Code"
def get_qr_data(self):
""" Generate QR code data for the StockItem """
try:
item = StockItem.objects.get(id=self.pk)
return item.format_barcode()
except StockItem.DoesNotExist:
return None
class StockAdjust(AjaxView, FormMixin):
""" View for enacting simple stock adjustments:
- Take items from stock
- Add items to stock
- Count items
- Move stock
"""
ajax_template_name = 'stock/stock_adjust.html'
ajax_form_title = 'Adjust Stock'
form_class = AdjustStockForm
stock_items = []
def get_GET_items(self):
""" Return list of stock items initally requested using GET.
Items can be retrieved by:
a) List of stock ID - stock[]=1,2,3,4,5
b) Parent part - part=3
c) Parent location - location=78
d) Single item - item=2
"""
# Start with all 'in stock' items
items = StockItem.objects.filter(customer=None, belongs_to=None)
# Client provides a list of individual stock items
if 'stock[]' in self.request.GET:
items = items.filter(id__in=self.request.GET.getlist('stock[]'))
# Client provides a PART reference
elif 'part' in self.request.GET:
items = items.filter(part=self.request.GET.get('part'))
# Client provides a LOCATION reference
elif 'location' in self.request.GET:
items = items.filter(location=self.request.GET.get('location'))
# Client provides a single StockItem lookup
elif 'item' in self.request.GET:
items = [StockItem.objects.get(id=self.request.GET.get('item'))]
# Unsupported query (no items)
else:
items = []
for item in items:
# Initialize quantity to zero for addition/removal
if self.stock_action in ['take', 'add']:
item.new_quantity = 0
# Initialize quantity at full amount for counting or moving
else:
item.new_quantity = item.quantity
return items
def get_POST_items(self):
""" Return list of stock items sent back by client on a POST request """
items = []
for item in self.request.POST:
if item.startswith('stock-id-'):
pk = item.replace('stock-id-', '')
q = self.request.POST[item]
try:
stock_item = StockItem.objects.get(pk=pk)
except StockItem.DoesNotExist:
continue
stock_item.new_quantity = q
items.append(stock_item)
return items
def get_context_data(self):
context = super().get_context_data()
context['stock_items'] = self.stock_items
context['stock_action'] = self.stock_action
context['stock_action_title'] = self.stock_action.capitalize()
return context
def get_form(self):
form = super().get_form()
if not self.stock_action == 'move':
form.fields.pop('destination')
form.fields.pop('set_loc')
return form
def get(self, request, *args, **kwargs):
self.request = request
# Action
self.stock_action = request.GET.get('action', '').lower()
# Pick a default action...
if self.stock_action not in ['move', 'count', 'take', 'add']:
self.stock_action = 'count'
# Choose the form title based on the action
titles = {
'move': 'Move Stock',
'count': 'Count Stock',
'take': 'Remove Stock',
'add': 'Add Stock'
}
self.ajax_form_title = titles[self.stock_action]
# Save list of items!
self.stock_items = self.get_GET_items()
return self.renderJsonResponse(request, self.get_form())
def post(self, request, *args, **kwargs):
self.request = request
self.stock_action = request.POST.get('stock_action', 'invalid').lower()
# Update list of stock items
self.stock_items = self.get_POST_items()
form = self.get_form()
valid = form.is_valid()
for item in self.stock_items:
try:
item.new_quantity = int(item.new_quantity)
except ValueError:
item.error = _('Must enter integer value')
valid = False
continue
if item.new_quantity < 0:
item.error = _('Quantity must be positive')
valid = False
continue
if self.stock_action in ['move', 'take']:
if item.new_quantity > item.quantity:
item.error = _('Quantity must not exceed {x}'.format(x=item.quantity))
valid = False
continue
confirmed = str2bool(request.POST.get('confirm'))
if not confirmed:
valid = False
form.errors['confirm'] = [_('Confirm stock adjustment')]
data = {
'form_valid': valid,
}
if valid:
result = self.do_action()
data['success'] = result
# Special case - Single Stock Item
# If we deplete the stock item, we MUST redirect to a new view
single_item = len(self.stock_items) == 1
if result and single_item:
# Was the entire stock taken?
item = self.stock_items[0]
if item.quantity == 0:
# Instruct the form to redirect
data['url'] = reverse('stock-index')
return self.renderJsonResponse(request, form, data=data)
def do_action(self):
""" Perform stock adjustment action """
if self.stock_action == 'move':
destination = None
set_default_loc = str2bool(self.request.POST.get('set_loc', False))
try:
destination = StockLocation.objects.get(id=self.request.POST.get('destination'))
except StockLocation.DoesNotExist:
pass
except ValueError:
pass
return self.do_move(destination, set_default_loc)
elif self.stock_action == 'add':
return self.do_add()
elif self.stock_action == 'take':
return self.do_take()
elif self.stock_action == 'count':
return self.do_count()
else:
return 'No action performed'
def do_add(self):
count = 0
note = self.request.POST['note']
for item in self.stock_items:
if item.new_quantity <= 0:
continue
item.add_stock(item.new_quantity, self.request.user, notes=note)
count += 1
return _("Added stock to {n} items".format(n=count))
def do_take(self):
count = 0
note = self.request.POST['note']
for item in self.stock_items:
if item.new_quantity <= 0:
continue
item.take_stock(item.new_quantity, self.request.user, notes=note)
count += 1
return _("Removed stock from {n} items".format(n=count))
def do_count(self):
count = 0
note = self.request.POST['note']
for item in self.stock_items:
item.stocktake(item.new_quantity, self.request.user, notes=note)
count += 1
return _("Counted stock for {n} items".format(n=count))
def do_move(self, destination, set_loc=None):
""" Perform actual stock movement """
count = 0
note = self.request.POST['note']
for item in self.stock_items:
# Avoid moving zero quantity
if item.new_quantity <= 0:
continue
# If we wish to set the destination location to the default one
if set_loc:
item.part.default_location = destination
item.part.save()
# Do not move to the same location (unless the quantity is different)
if destination == item.location and item.new_quantity == item.quantity:
continue
item.move(destination, note, self.request.user, quantity=int(item.new_quantity))
count += 1
if count == 0:
return _('No items were moved')
else:
return _('Moved {n} items to {dest}'.format(
n=count,
dest=destination.pathstring))
class StockItemEdit(AjaxUpdateView):
"""
View for editing details of a single StockItem
"""
model = StockItem
form_class = EditStockItemForm
context_object_name = 'item'
ajax_template_name = 'modal_form.html'
ajax_form_title = 'Edit Stock Item'
def get_form(self):
""" Get form for StockItem editing.
Limit the choices for supplier_part
"""
form = super(AjaxUpdateView, self).get_form()
item = self.get_object()
# If the part cannot be purchased, hide the supplier_part field
if not item.part.purchaseable:
form.fields['supplier_part'].widget = HiddenInput()
else:
query = form.fields['supplier_part'].queryset
query = query.filter(part=item.part.id)
form.fields['supplier_part'].queryset = query
if not item.part.trackable:
form.fields.pop('serial')
return form
class StockLocationCreate(AjaxCreateView):
"""
View for creating a new StockLocation
A parent location (another StockLocation object) can be passed as a query parameter
"""
model = StockLocation
form_class = EditStockLocationForm
context_object_name = 'location'
ajax_template_name = 'modal_form.html'
ajax_form_title = 'Create new Stock Location'
def get_initial(self):
initials = super(StockLocationCreate, self).get_initial().copy()
loc_id = self.request.GET.get('location', None)
if loc_id:
try:
initials['parent'] = StockLocation.objects.get(pk=loc_id)
except StockLocation.DoesNotExist:
pass
return initials
class StockItemSerialize(AjaxUpdateView):
""" View for manually serializing a StockItem """
model = StockItem
ajax_template_name = 'stock/item_serialize.html'
ajax_form_title = 'Serialize Stock'
form_class = SerializeStockForm
def get_initial(self):
initials = super().get_initial().copy()
item = self.get_object()
initials['quantity'] = item.quantity
initials['destination'] = item.location.pk
return initials
def get(self, request, *args, **kwargs):
return super().get(request, *args, **kwargs)
def post(self, request, *args, **kwargs):
form = self.get_form()
item = self.get_object()
quantity = request.POST.get('quantity', 0)
serials = request.POST.get('serial_numbers', '')
dest_id = request.POST.get('destination', None)
notes = request.POST.get('note', '')
user = request.user
valid = True
try:
destination = StockLocation.objects.get(pk=dest_id)
except (ValueError, StockLocation.DoesNotExist):
destination = None
try:
numbers = ExtractSerialNumbers(serials, quantity)
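            # ExtractSerialNumbers presumably expands a user-supplied spec
            # such as "1,2,5-10" into `quantity` integers (an assumption based
            # on its use here); bad input raises the ValidationError below.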
except ValidationError as e:
form.errors['serial_numbers'] = e.messages
valid = False
numbers = []
if valid:
try:
item.serializeStock(quantity, numbers, user, notes=notes, location=destination)
except ValidationError as e:
messages = e.message_dict
for k in messages.keys():
if k in ['quantity', 'destination', 'serial_numbers']:
form.errors[k] = messages[k]
else:
form.non_field_errors = messages[k]
valid = False
data = {
'form_valid': valid,
}
return self.renderJsonResponse(request, form, data=data)
class StockItemCreate(AjaxCreateView):
"""
View for creating a new StockItem
Parameters can be pre-filled by passing query items:
- part: The part of which the new StockItem is an instance
- location: The location of the new StockItem
If the parent part is a "tracked" part, provide an option to create uniquely serialized items
rather than a bulk quantity of stock items
"""
model = StockItem
form_class = CreateStockItemForm
context_object_name = 'item'
ajax_template_name = 'modal_form.html'
ajax_form_title = 'Create new Stock Item'
def get_form(self):
""" Get form for StockItem creation.
Overrides the default get_form() method to intelligently limit
ForeignKey choices based on other selections
"""
form = super().get_form()
# If the user has selected a Part, limit choices for SupplierPart
if form['part'].value():
part_id = form['part'].value()
try:
part = Part.objects.get(id=part_id)
# Hide the 'part' field (as a valid part is selected)
form.fields['part'].widget = HiddenInput()
# trackable parts get special consideration
if part.trackable:
form.fields['delete_on_deplete'].widget = HiddenInput()
form.fields['delete_on_deplete'].initial = False
else:
form.fields.pop('serial_numbers')
# If the part is NOT purchaseable, hide the supplier_part field
if not part.purchaseable:
form.fields['supplier_part'].widget = HiddenInput()
else:
# Pre-select the allowable SupplierPart options
parts = form.fields['supplier_part'].queryset
parts = parts.filter(part=part.id)
form.fields['supplier_part'].queryset = parts
# If there is one (and only one) supplier part available, pre-select it
all_parts = parts.all()
if len(all_parts) == 1:
# TODO - This does NOT work for some reason? Ref build.views.BuildItemCreate
form.fields['supplier_part'].initial = all_parts[0].id
except Part.DoesNotExist:
pass
# Otherwise if the user has selected a SupplierPart, we know what Part they meant!
elif form['supplier_part'].value() is not None:
pass
return form
def get_initial(self):
""" Provide initial data to create a new StockItem object
"""
# Is the client attempting to copy an existing stock item?
item_to_copy = self.request.GET.get('copy', None)
if item_to_copy:
try:
original = StockItem.objects.get(pk=item_to_copy)
initials = model_to_dict(original)
self.ajax_form_title = "Copy Stock Item"
except StockItem.DoesNotExist:
initials = super(StockItemCreate, self).get_initial().copy()
else:
initials = super(StockItemCreate, self).get_initial().copy()
part_id = self.request.GET.get('part', None)
loc_id = self.request.GET.get('location', None)
# Part field has been specified
if part_id:
try:
part = Part.objects.get(pk=part_id)
initials['part'] = part
initials['location'] = part.get_default_location()
initials['supplier_part'] = part.default_supplier
except Part.DoesNotExist:
pass
# Location has been specified
if loc_id:
try:
initials['location'] = StockLocation.objects.get(pk=loc_id)
except StockLocation.DoesNotExist:
pass
return initials
def post(self, request, *args, **kwargs):
""" Handle POST of StockItemCreate form.
        - Manage serial-number validation for tracked parts
"""
form = self.get_form()
valid = form.is_valid()
if valid:
part_id = form['part'].value()
try:
part = Part.objects.get(id=part_id)
quantity = int(form['quantity'].value())
except (Part.DoesNotExist, ValueError):
part = None
quantity = 1
valid = False
if part is None:
form.errors['part'] = [_('Invalid part selection')]
else:
                # A trackable part must provide serial numbers
if part.trackable:
sn = request.POST.get('serial_numbers', '')
sn = str(sn).strip()
# If user has specified a range of serial numbers
if len(sn) > 0:
try:
serials = ExtractSerialNumbers(sn, quantity)
existing = []
for serial in serials:
if not StockItem.check_serial_number(part, serial):
existing.append(serial)
if len(existing) > 0:
exists = ",".join([str(x) for x in existing])
form.errors['serial_numbers'] = [_('The following serial numbers already exist: ({sn})'.format(sn=exists))]
valid = False
# At this point we have a list of serial numbers which we know are valid,
# and do not currently exist
form.clean()
data = form.cleaned_data
for serial in serials:
# Create a new stock item for each serial number
item = StockItem(
part=part,
quantity=1,
serial=serial,
supplier_part=data.get('supplier_part'),
location=data.get('location'),
batch=data.get('batch'),
delete_on_deplete=False,
status=data.get('status'),
notes=data.get('notes'),
URL=data.get('URL'),
)
item.save()
except ValidationError as e:
form.errors['serial_numbers'] = e.messages
valid = False
else:
# For non-serialized items, simply save the form.
# We need to call _post_clean() here because it is prevented in the form implementation
form.clean()
form._post_clean()
form.save()
data = {
'form_valid': valid,
}
return self.renderJsonResponse(request, form, data=data)
class StockLocationDelete(AjaxDeleteView):
"""
View to delete a StockLocation
Presents a deletion confirmation form to the user
"""
model = StockLocation
success_url = '/stock'
ajax_template_name = 'stock/location_delete.html'
context_object_name = 'location'
ajax_form_title = 'Delete Stock Location'
class StockItemDelete(AjaxDeleteView):
"""
View to delete a StockItem
Presents a deletion confirmation form to the user
"""
model = StockItem
success_url = '/stock/'
ajax_template_name = 'stock/item_delete.html'
context_object_name = 'item'
ajax_form_title = 'Delete Stock Item'
class StockItemTrackingDelete(AjaxDeleteView):
"""
View to delete a StockItemTracking object
Presents a deletion confirmation form to the user
"""
model = StockItemTracking
ajax_template_name = 'stock/tracking_delete.html'
ajax_form_title = 'Delete Stock Tracking Entry'
class StockTrackingIndex(ListView):
"""
StockTrackingIndex provides a page to display StockItemTracking objects
"""
model = StockItemTracking
template_name = 'stock/tracking.html'
context_object_name = 'items'
class StockItemTrackingEdit(AjaxUpdateView):
""" View for editing a StockItemTracking object """
model = StockItemTracking
ajax_form_title = 'Edit Stock Tracking Entry'
form_class = TrackingEntryForm
class StockItemTrackingCreate(AjaxCreateView):
""" View for creating a new StockItemTracking object.
"""
model = StockItemTracking
ajax_form_title = "Add Stock Tracking Entry"
form_class = TrackingEntryForm
def post(self, request, *args, **kwargs):
self.request = request
self.form = self.get_form()
valid = False
if self.form.is_valid():
stock_id = self.kwargs['pk']
if stock_id:
try:
stock_item = StockItem.objects.get(id=stock_id)
# Save new tracking information
tracking = self.form.save(commit=False)
tracking.item = stock_item
tracking.user = self.request.user
tracking.quantity = stock_item.quantity
tracking.date = datetime.now().date()
tracking.system = False
tracking.save()
valid = True
except (StockItem.DoesNotExist, ValueError):
pass
data = {
'form_valid': valid
}
return self.renderJsonResponse(request, self.form, data=data)
| [
"[email protected]"
] | |
7c31f7d64627e78b8f3832a823a1b501d4c78424 | 48e1cf7a4a39df57a38246da1f67f3f4dc8f2020 | /With_Sql/main/buy_product_form.py | 59abb15f507001600bb719ae5117f96c258b1c99 | [] | no_license | mdarifulislamroni21/django_website | 601d1d0e5422419895363968a0f4d1c50dfd9daa | 28562e3e8f07ddac49057eba07411f05b08918ff | refs/heads/master | 2023-06-23T22:30:32.317834 | 2021-07-23T04:25:37 | 2021-07-23T04:25:37 | 388,668,153 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 353 | py | from django import forms
from main.models import user_details
# Create a new form
class user_details_connect(forms.ModelForm):
class Meta:
model=user_details
fields="__all__"
widgets={
'buydate':forms.TextInput(attrs={'type':'date'}),
'product':forms.TextInput(attrs={'placeholder':'T-Shart'})
}
| [
"[email protected]"
] | |
7faecae779bde3911e044903901c3c9069dfa309 | 4fdaee9f2612a8c429991a2042dffcee80e7a641 | /rootfs/qboxhd/rootfs/usr/local/lib/enigma2/python/Plugins/SystemPlugins/NetworkBrowser/__init__.py | dc8dd57b592a450dadd83765c112f3da212d8dfb | [] | no_license | OpenSH4/qboxhd | 841072db3b0eaecdcac116b5f96268d47115cdec | 91dd37a5311b5c53fb088ab0ce902ee49552ece0 | refs/heads/master | 2020-09-07T17:55:36.114816 | 2012-01-08T21:33:02 | 2012-01-08T21:33:02 | 220,866,062 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,186 | py | # -*- coding: ISO-8859-1 -*-
#===============================================================================
# NetworkBrowser and MountManager Plugin by acid-burn
# netscan lib by Nix_niX
# for further License informations see the corresponding License files
# or SourceCodes
#
#===============================================================================
from Components.Language import language
from Tools.Directories import resolveFilename, SCOPE_PLUGINS, SCOPE_LANGUAGE
import os,gettext
PluginLanguageDomain = "NetworkBrowser"
PluginLanguagePath = "SystemPlugins/NetworkBrowser/po"
def localeInit():
lang = language.getLanguage()[:2] # getLanguage returns e.g. "fi_FI" for "language_country"
os.environ["LANGUAGE"] = lang # Enigma doesn't set this (or LC_ALL, LC_MESSAGES, LANG). gettext needs it!
print "[NetworkBrowser] set language to ", lang
gettext.bindtextdomain(PluginLanguageDomain, resolveFilename(SCOPE_PLUGINS, PluginLanguagePath))
def _(txt):
t = gettext.dgettext(PluginLanguageDomain, txt)
if t == txt:
print "[NetworkBrowser] fallback to default translation for", txt
t = gettext.gettext(txt)
return t
localeInit()
language.addCallback(localeInit)
| [
"duopaguilar@0410bcea-ab32-4fec-9f21-c18eae94034e"
] | duopaguilar@0410bcea-ab32-4fec-9f21-c18eae94034e |
c6fe6443ef899c8ed96248028a7916f6110ff9bd | 8e2fa36281924fd28327a49d83f9855c6ff0c619 | /photod-backend/photod/api/urls.py | 254dfe2cbae9365ac31db41010e200dc82289133 | [
"MIT"
] | permissive | basilfx/Photod | 10a7ba9200b7337f01bd2f3ad857caa5b6797b7c | 0056a07e39b61e2f3b1c94f1309917dd8b24b654 | refs/heads/master | 2023-08-23T17:16:48.646493 | 2017-08-15T18:37:59 | 2017-08-15T18:37:59 | 98,336,863 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 282 | py | from django.conf.urls import url
from django.conf import settings
from django.views.decorators.csrf import csrf_exempt
from graphene_django.views import GraphQLView
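# The single GraphQL endpoint is exempted from CSRF checks so that
# non-browser clients can POST queries directly; the interactive GraphiQL
# explorer is only served while settings.DEBUG is True.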
urlpatterns = [
url(r'^graphql', csrf_exempt(
GraphQLView.as_view(graphiql=settings.DEBUG)
)),
]
| [
"[email protected]"
] | |
cca1348bd5f51fdecc2aa6f2960bc326a23a307d | 2b3e9b32a38f4992c529de56b4baa51e1a674c4e | /ccui/attachments/models.py | 787f5715353a1411f0f00595a423e5a97c5b968d | [] | no_license | camd/caseconductor-ui | 2c4f63fd6c20ee421012d8770b3b873c1b4f4232 | deb6b22ed417740bf947e86938710bd5fa2ee2e7 | refs/heads/master | 2021-01-18T05:36:22.647236 | 2011-10-10T14:48:29 | 2011-10-10T14:48:29 | 2,447,257 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,181 | py | from ..core.api import RemoteObject, ListObject, fields, Named
from ..static.fields import StaticData
class Attachment(Named, RemoteObject):
name = fields.Field()
description = fields.Field()
url = fields.Field()
size = fields.Field()
attachmentType = StaticData("ATTACHMENTTYPE", "attachmentTypeId")
def __unicode__(self):
return self.name
def delete(self):
# to delete an attachment from its canonical URL requires providing an
# entityID and entityType; simpler to delete it via the entity we got
# it from.
source = getattr(self, "linked_from", None)
if source is None:
raise ValueError("Cannot delete attachment without source context.")
return super(Attachment, self).delete(
url=source._location + "/attachments/" + self.id)
class AttachmentList(ListObject):
entryclass = Attachment
api_name = "attachments"
default_url = "attachments"
entries = fields.List(fields.Object(Attachment))
def __iter__(self):
for att in super(AttachmentList, self).__iter__():
att.linked_from = self.linked_from
yield att
| [
"[email protected]"
] | |
d98640284ad3872cd54920e573886ae38180dbb4 | a9c0daa4a7b9a4d7341afcab270c5b5debb8c13f | /env/bin/easy_install | d376d9adfa89d50b9a0aa01da0885b7db2560bbc | [] | no_license | phamcong/alienator-plf | bad8c4e003fd189c43243b31ef2b975b6f154754 | ea65628af66fbca51f2248ceb4ba93f858dbddce | refs/heads/master | 2022-11-26T01:28:38.286261 | 2017-11-07T15:12:08 | 2017-11-07T15:12:08 | 109,412,097 | 0 | 1 | null | 2020-07-25T23:43:17 | 2017-11-03T15:30:22 | JavaScript | UTF-8 | Python | false | false | 283 | #!/Users/cuongpham/Data/Coding/ALIENNOR/aliennor-plf/env/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from setuptools.command.easy_install import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(main())
| [
"[email protected]"
] | ||
f68897246e241e33ad3e985789afccfa8577f59c | 29d9a947fc954e5d182cb2d8292f96c0d27cdde0 | /apps/xrayuploader/advanced_search.py | cb2331799ae21142d69e1f13dec228b6e0b9caf3 | [
"Apache-2.0"
] | permissive | slogan621/tscharts | 43ee950a76323a8b99a8ab88743f2e5e5a1e6f42 | 034e661c3f4739f1dc04b3aef096e6bf6cf6e8d3 | refs/heads/master | 2023-08-10T19:04:50.317820 | 2023-07-29T21:03:27 | 2023-07-29T21:03:27 | 55,322,748 | 25 | 10 | Apache-2.0 | 2023-07-08T20:32:54 | 2016-04-03T00:35:37 | Python | UTF-8 | Python | false | false | 2,491 | py | # advanced_search.py
import wx
from pubsub import pub
class AdvancedSearch(wx.Panel):
def __init__(self, parent):
super().__init__(parent)
self.main_sizer = wx.BoxSizer(wx.VERTICAL)
self.free_text = wx.TextCtrl(self)
self.ui_helper('Free text search:', self.free_text)
self.nasa_center = wx.TextCtrl(self)
self.ui_helper('NASA Center:', self.nasa_center)
self.description = wx.TextCtrl(self)
self.ui_helper('Description:', self.description)
self.description_508 = wx.TextCtrl(self)
self.ui_helper('Description 508:', self.description_508)
self.keywords = wx.TextCtrl(self)
self.ui_helper('Keywords (separate with commas):',
self.keywords)
self.location = wx.TextCtrl(self)
self.ui_helper('Location:', self.location)
self.nasa_id = wx.TextCtrl(self)
self.ui_helper('NASA ID:', self.nasa_id)
self.photographer = wx.TextCtrl(self)
self.ui_helper('Photographer:', self.photographer)
self.secondary_creator = wx.TextCtrl(self)
self.ui_helper('Secondary photographer:', self.secondary_creator)
self.title = wx.TextCtrl(self)
self.ui_helper('Title:', self.title)
search = wx.Button(self, label='Search')
search.Bind(wx.EVT_BUTTON, self.on_search)
self.main_sizer.Add(search, 0, wx.ALL | wx.CENTER, 5)
self.SetSizer(self.main_sizer)
def ui_helper(self, label, textctrl):
sizer = wx.BoxSizer()
lbl = wx.StaticText(self, label=label, size=(150, -1))
sizer.Add(lbl, 0, wx.ALL, 5)
sizer.Add(textctrl, 1, wx.ALL | wx.EXPAND, 5)
self.main_sizer.Add(sizer, 0, wx.EXPAND)
def on_search(self, event):
query = {'q': self.free_text.GetValue(),
'media_type': 'image',
'center': self.nasa_center.GetValue(),
'description': self.description.GetValue(),
'description_508': self.description_508.GetValue(),
'keywords': self.keywords.GetValue(),
'location': self.location.GetValue(),
'nasa_id': self.nasa_id.GetValue(),
'photographer': self.photographer.GetValue(),
'secondary_creator': self.secondary_creator.GetValue(),
'title': self.title.GetValue()}
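        # pypubsub delivery is fire-and-forget: these topics only reach
        # handlers that were registered elsewhere with pub.subscribe(),
        # e.g. pub.subscribe(some_handler, 'search_results').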
pub.sendMessage('update_ui')
pub.sendMessage('search_results', query=query) | [
"[email protected]"
] | |
41753a82068da0014a137032c5c1e406e6f0b79c | e7c03b71f26c463b2670c52cd2fddbc198e3c8cb | /apps/webhooks/signals.py | ce67cc1fc3c8fd8ae1badfe76adf4620eba30505 | [] | no_license | nerosketch/djing2 | 71cc96f4829fc047d788dd7d8a94f1035e9740f9 | 1fbb0941f26389cbfdc8015527ab0d426c2e2c01 | refs/heads/master | 2023-01-13T15:12:50.492646 | 2022-11-18T11:24:21 | 2022-11-18T11:24:21 | 196,469,351 | 7 | 3 | null | 2020-02-29T19:38:37 | 2019-07-11T21:50:34 | Python | UTF-8 | Python | false | false | 2,516 | py | import sys
from typing import Optional, Any
from django.dispatch import receiver
from django.db.models.signals import (
post_save, post_delete,
pre_save, pre_delete
)
from rest_framework.serializers import ModelSerializer
from webhooks.models import HookObserverNotificationTypes
from webhooks.tasks import send_update2observers_task
def _model_instance_to_dict(instance, model_class) -> dict:
class _model_serializer(ModelSerializer):
class Meta:
model = model_class
fields = '__all__'
ser = _model_serializer(instance=instance)
return ser.data
def receiver_no_test(*args, **kwargs):
def _wrapper(fn):
if 'test' in sys.argv:
return fn
return receiver(*args, **kwargs)(fn)
return _wrapper
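# Net effect: under "manage.py test" the decorated handler is returned
# without being connected, so no webhook notifications fire during tests.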
def _send2task(notify_type: HookObserverNotificationTypes, instance: Optional[Any], sender):
app_label_str = sender._meta.app_label
model_str = sender._meta.object_name
if instance:
instance_data = _model_instance_to_dict(
instance=instance,
model_class=sender
)
else:
instance_data = None
send_update2observers_task.delay(
notification_type=notify_type.value,
app_label=app_label_str,
model_str=model_str,
data=instance_data,
)
@receiver_no_test(post_save)
def _post_save_signal_handler(sender, instance, **kwargs):
_send2task(
notify_type=HookObserverNotificationTypes.MODEL_POST_SAVE,
instance=instance,
sender=sender
)
@receiver_no_test(post_delete)
def _post_del_signal_handler(sender, instance, **kwargs):
_send2task(
notify_type=HookObserverNotificationTypes.MODEL_POST_DELETE,
instance=instance,
sender=sender
)
# @receiver(post_init)
# def _post_init_signal_handler(sender, **kwargs):
# print('_post_init_signal_handler', sender, kwargs)
@receiver_no_test(pre_save)
def _pre_save_signal_handler(sender, instance, **kwargs):
_send2task(
notify_type=HookObserverNotificationTypes.MODEL_PRE_SAVE,
instance=instance if instance else None,
sender=sender
)
@receiver_no_test(pre_delete)
def _pre_del_signal_handler(sender, instance, **kwargs):
_send2task(
notify_type=HookObserverNotificationTypes.MODEL_PRE_DELETE,
instance=instance,
sender=sender
)
# @receiver(pre_init)
# def _pre_init_signal_handler(sender, **kwargs):
# print('_pre_init_signal_handler', sender, kwargs)
| [
"[email protected]"
] | |
3f2430aec993d53a12bf2b286d2c6e954df75aa9 | 07bd1848e35bbb75ef4d23f1982af618aa176852 | /chap08/list0801.py | 61d7b443504731e273a6c346de2ce224320e8385 | [] | no_license | kimurakousuke/MeiKaiPython | c0b56be8fcb79b39b0c8364e71e2da76eab613fe | 674f6001060f56cf55e3d7336e6e4ca5f135beaf | refs/heads/master | 2021-02-22T13:01:53.397290 | 2020-03-07T11:19:10 | 2020-03-07T11:19:10 | 245,377,717 | 1 | 0 | null | 2020-03-06T11:16:16 | 2020-03-06T09:22:53 | Python | UTF-8 | Python | false | false | 563 | py | # A list of tuples (every element of the list is a tuple)
students = [
(2012, '福冈太郎'),
(2013, '长崎龙一'),
(2011, '熊本纹三'),
]
print('students =', students)
print('students[0] =', students[0])
print('students[1] =', students[1])
print('students[2] =', students[2])
print('students[0][0] =', students[0][0])
print('students[0][1] =', students[0][1])
print('students[1][0] =', students[1][0])
print('students[1][1] =', students[1][1])
print('students[2][0] =', students[2][0])
print('students[2][1] =', students[2][1])
| [
"[email protected]"
] | |
96b99049010ef96d63f9812089561f63157f7727 | d9517b7c8bf044778763245b52461acd9e301399 | /parsers.py | b6a3a9cabdd014bbc08af4957996032163731acb | [] | no_license | IIKovalenko/otus_avito_bot | 300c7727167b5c84b8e33c8181631689ba7aa532 | abfc5ac723eb93b49eefa05ef16833b27388a7ef | refs/heads/master | 2020-06-02T12:47:14.104364 | 2018-01-17T18:43:43 | 2018-01-17T18:43:43 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 744 | py | import bs4
def parse_html_page(response_text):
return bs4.BeautifulSoup(response_text, 'html.parser')
def parse_avito_results_page_for_prices(soup):
descriptions = soup.find_all('div', {'class': 'description'})
prices_wrappers = [
d.find('div', {'class': 'about'}) for d in descriptions
]
raw_prices = [w.contents[0] for w in prices_wrappers]
return filter(None, [process_price(p) for p in raw_prices])
def parse_avito_results_page_for_first_photo(soup):
photo_wrapper = soup.find('a', {'class': 'photo-wrapper'})
return 'https:%s' % photo_wrapper.find('img')['src']
def process_price(raw_price):
# Drop the trailing 4-character currency suffix (presumably 'руб.')
# and the thousands-separator spaces, e.g. '1 200 руб.' -> 1200.
price = raw_price.strip()[:-4].replace(' ', '')
return int(price) if price != '' else None
| [
"[email protected]"
] | |
c6f76eb5503fb4a9a7c9ab83f8bd10aa3fbb381b | 94594cb9b7e48cf4a66d8564589a9d7981a89dac | /loader.py | 984fa3b4f768e8646d8c167cf6c091fc99ec178c | [] | no_license | xal9wiii4ik/parse_freelance_bot | 26d1d6a2c0257a320288fcc0ab87d3b6d327eb79 | 49db047b74888265f01036e467d084be5ce20fda | refs/heads/master | 2023-04-02T08:58:25.581354 | 2021-04-08T21:54:18 | 2021-04-08T21:54:18 | 324,841,063 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 651 | py | import asyncio
from aiogram import Bot, Dispatcher, types
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from data import config
from data.config import PARSER_DB
from utils.db_api.categories_command import CategoriesCommands
from utils.db_api.costs_commands import CostsCommands
from utils.db_api.payments_commands import PaymentsCommands
bot = Bot(token=config.BOT_TOKEN, parse_mode=types.ParseMode.HTML)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage, loop=asyncio.get_event_loop())
PAYMENTS_TABLE = PaymentsCommands(PARSER_DB)
CATEGORY_TABLE = CategoriesCommands(PARSER_DB)
COSTS_TABLE = CostsCommands(PARSER_DB)
| [
"[email protected]"
] | |
a2c44e6d2ae1253ca3397bc6d37ed63c68b22265 | 3f28b697f570ded0502de70c706200005ab62525 | /env/lib/python2.7/site-packages/scipy/spatial/distance.py | e436acec62f09cbc2243b94def99500b3a1e24a1 | [
"MIT"
] | permissive | Ram-Aditya/Healthcare-Data-Analytics | 5387e41ad8e56af474e10fa2d1c9d8a2847c5ead | d1a15d2cc067410f82a9ded25f7a782ef56b4729 | refs/heads/master | 2022-12-09T12:49:59.027010 | 2019-11-23T20:10:55 | 2019-11-23T20:10:55 | 223,639,339 | 0 | 1 | MIT | 2022-11-22T00:37:48 | 2019-11-23T19:06:20 | Jupyter Notebook | UTF-8 | Python | false | false | 74,146 | py | """
=====================================================
Distance computations (:mod:`scipy.spatial.distance`)
=====================================================
.. sectionauthor:: Damian Eads
Function Reference
------------------
Distance matrix computation from a collection of raw observation vectors
stored in a rectangular array.
.. autosummary::
:toctree: generated/
pdist -- pairwise distances between observation vectors.
cdist -- distances between two collections of observation vectors
squareform -- convert distance matrix to a condensed one and vice versa
Predicates for checking the validity of distance matrices, both
condensed and redundant. Also contained in this module are functions
for computing the number of observations in a distance matrix.
.. autosummary::
:toctree: generated/
is_valid_dm -- checks for a valid distance matrix
is_valid_y -- checks for a valid condensed distance matrix
num_obs_dm -- # of observations in a distance matrix
num_obs_y -- # of observations in a condensed distance matrix
Distance functions between two vectors ``u`` and ``v``. Computing
distances over a large collection of vectors is inefficient for these
functions. Use ``pdist`` for this purpose.
.. autosummary::
:toctree: generated/
braycurtis -- the Bray-Curtis distance.
canberra -- the Canberra distance.
chebyshev -- the Chebyshev distance.
cityblock -- the Manhattan distance.
correlation -- the Correlation distance.
cosine -- the Cosine distance.
dice -- the Dice dissimilarity (boolean).
euclidean -- the Euclidean distance.
hamming -- the Hamming distance (boolean).
jaccard -- the Jaccard distance (boolean).
kulsinski -- the Kulsinski distance (boolean).
mahalanobis -- the Mahalanobis distance.
matching -- the matching dissimilarity (boolean).
minkowski -- the Minkowski distance.
rogerstanimoto -- the Rogers-Tanimoto dissimilarity (boolean).
russellrao -- the Russell-Rao dissimilarity (boolean).
seuclidean -- the normalized Euclidean distance.
sokalmichener -- the Sokal-Michener dissimilarity (boolean).
sokalsneath -- the Sokal-Sneath dissimilarity (boolean).
sqeuclidean -- the squared Euclidean distance.
wminkowski -- the weighted Minkowski distance.
yule -- the Yule dissimilarity (boolean).
"""
# Copyright (C) Damian Eads, 2007-2008. New BSD License.
from __future__ import division, print_function, absolute_import
import warnings
import numpy as np
from scipy.lib.six import callable, string_types
from scipy.lib.six import xrange
from . import _distance_wrap
from ..linalg import norm
import collections
def _copy_array_if_base_present(a):
"""
Copies the array if its base points to a parent array.
"""
if a.base is not None:
return a.copy()
elif np.issubsctype(a, np.float32):
return np.array(a, dtype=np.double)
else:
return a
def _copy_arrays_if_base_present(T):
"""
Accepts a tuple of arrays T. Copies the array T[i] if its base array
points to an actual array. Otherwise, the reference is just copied.
This is useful if the arrays are being passed to a C function that
does not do proper striding.
"""
l = [_copy_array_if_base_present(a) for a in T]
return l
def _convert_to_bool(X):
if X.dtype != np.bool:
X = X.astype(np.bool)
if not X.flags.contiguous:
X = X.copy()
return X
def _convert_to_double(X):
if X.dtype != np.double:
X = X.astype(np.double)
if not X.flags.contiguous:
X = X.copy()
return X
def _validate_vector(u, dtype=None):
# XXX Is order='c' really necessary?
u = np.asarray(u, dtype=dtype, order='c').squeeze()
# Ensure values such as u=1 and u=[1] still return 1-D arrays.
u = np.atleast_1d(u)
if u.ndim > 1:
raise ValueError("Input vector should be 1-D.")
return u
def minkowski(u, v, p):
"""
Computes the Minkowski distance between two 1-D arrays.
The Minkowski distance between 1-D arrays `u` and `v`,
is defined as
.. math::
{||u-v||}_p = (\\sum{|u_i - v_i|^p})^{1/p}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
p : int
The order of the norm of the difference :math:`{||u-v||}_p`.
Returns
-------
d : double
The Minkowski distance between vectors `u` and `v`.
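Examples
--------
For p=1 and p=2 this reduces to the Manhattan and Euclidean
distances, respectively:
>>> from scipy.spatial import distance
>>> distance.minkowski([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 1)
2.0
>>> distance.minkowski([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2)
1.4142135623730951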
"""
u = _validate_vector(u)
v = _validate_vector(v)
if p < 1:
raise ValueError("p must be at least 1")
dist = norm(u - v, ord=p)
return dist
def wminkowski(u, v, p, w):
"""
Computes the weighted Minkowski distance between two 1-D arrays.
The weighted Minkowski distance between `u` and `v`, defined as
.. math::
\\left(\\sum{(w_i |u_i - v_i|^p)}\\right)^{1/p}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
p : int
The order of the norm of the difference :math:`{||u-v||}_p`.
w : (N,) array_like
The weight vector.
Returns
-------
wminkowski : double
The weighted Minkowski distance between vectors `u` and `v`.
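Examples
--------
With unit weights this coincides with the unweighted Minkowski
distance:
>>> from scipy.spatial import distance
>>> distance.wminkowski([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2, [1.0, 1.0, 1.0])
1.4142135623730951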
"""
u = _validate_vector(u)
v = _validate_vector(v)
w = _validate_vector(w)
if p < 1:
raise ValueError("p must be at least 1")
dist = norm(w * (u - v), ord=p)
return dist
def euclidean(u, v):
"""
Computes the Euclidean distance between two 1-D arrays.
The Euclidean distance between 1-D arrays `u` and `v`, is defined as
.. math::
{||u-v||}_2
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
euclidean : double
The Euclidean distance between vectors `u` and `v`.
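Examples
--------
The distance between two unit basis vectors:
>>> from scipy.spatial import distance
>>> distance.euclidean([1, 0, 0], [0, 1, 0])
1.4142135623730951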
"""
u = _validate_vector(u)
v = _validate_vector(v)
dist = norm(u - v)
return dist
def sqeuclidean(u, v):
"""
Computes the squared Euclidean distance between two 1-D arrays.
The squared Euclidean distance between `u` and `v` is defined as
.. math::
{||u-v||}_2^2.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
sqeuclidean : double
The squared Euclidean distance between vectors `u` and `v`.
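Examples
--------
The squared distance between two unit basis vectors:
>>> from scipy.spatial import distance
>>> distance.sqeuclidean([1, 0, 0], [0, 1, 0])
2.0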
"""
# Preserve float dtypes, but convert everything else to np.float64
# for stability.
utype, vtype = None, None
if not (hasattr(u, "dtype") and np.issubdtype(u.dtype, np.inexact)):
utype = np.float64
if not (hasattr(v, "dtype") and np.issubdtype(v.dtype, np.inexact)):
vtype = np.float64
u = _validate_vector(u, dtype=utype)
v = _validate_vector(v, dtype=vtype)
u_v = u - v
return np.dot(u_v, u_v)
def cosine(u, v):
"""
Computes the Cosine distance between 1-D arrays.
The Cosine distance between `u` and `v`, is defined as
.. math::
1 - \\frac{u \\cdot v}
{||u||_2 ||v||_2}.
where :math:`u \\cdot v` is the dot product of :math:`u` and
:math:`v`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
cosine : double
The Cosine distance between vectors `u` and `v`.
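Examples
--------
Orthogonal vectors are at distance 1, identical directions at 0:
>>> from scipy.spatial import distance
>>> distance.cosine([1, 0, 0], [0, 1, 0])
1.0
>>> distance.cosine([1, 0, 0], [1, 0, 0])
0.0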
"""
u = _validate_vector(u)
v = _validate_vector(v)
dist = 1.0 - np.dot(u, v) / (norm(u) * norm(v))
return dist
def correlation(u, v):
"""
Computes the correlation distance between two 1-D arrays.
The correlation distance between `u` and `v`, is
defined as
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{||(u - \\bar{u})||}_2 {||(v - \\bar{v})||}_2}
where :math:`\\bar{u}` is the mean of the elements of `u`
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
correlation : double
The correlation distance between 1-D array `u` and `v`.
"""
u = _validate_vector(u)
v = _validate_vector(v)
umu = u.mean()
vmu = v.mean()
um = u - umu
vm = v - vmu
dist = 1.0 - np.dot(um, vm) / (norm(um) * norm(vm))
return dist
def hamming(u, v):
"""
Computes the Hamming distance between two 1-D arrays.
The Hamming distance between 1-D arrays `u` and `v`, is simply the
proportion of disagreeing components in `u` and `v`. If `u` and `v` are
boolean vectors, the Hamming distance is
.. math::
\\frac{c_{01} + c_{10}}{n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
hamming : double
The Hamming distance between vectors `u` and `v`.
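Examples
--------
The fraction of positions at which the vectors disagree:
>>> from scipy.spatial import distance
>>> distance.hamming([1, 0, 0], [0, 1, 0])
0.6666666666666666
>>> distance.hamming([1, 0, 0], [1, 1, 0])
0.3333333333333333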
"""
u = _validate_vector(u)
v = _validate_vector(v)
return (u != v).mean()
def jaccard(u, v):
"""
Computes the Jaccard-Needham dissimilarity between two boolean 1-D arrays.
The Jaccard-Needham dissimilarity between 1-D boolean arrays `u` and `v`,
is defined as
.. math::
\\frac{c_{TF} + c_{FT}}
{c_{TT} + c_{FT} + c_{TF}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
jaccard : double
The Jaccard distance between vectors `u` and `v`.
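Examples
--------
Vectors with no common non-zero entries are at distance 1:
>>> from scipy.spatial import distance
>>> distance.jaccard([1, 0, 0], [0, 1, 0])
1.0
>>> distance.jaccard([1, 1, 0], [1, 0, 0])
0.5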
"""
u = _validate_vector(u)
v = _validate_vector(v)
dist = (np.double(np.bitwise_and((u != v),
np.bitwise_or(u != 0, v != 0)).sum())
/ np.double(np.bitwise_or(u != 0, v != 0).sum()))
return dist
def kulsinski(u, v):
"""
Computes the Kulsinski dissimilarity between two boolean 1-D arrays.
The Kulsinski dissimilarity between two boolean 1-D arrays `u` and `v`,
is defined as
.. math::
\\frac{c_{TF} + c_{FT} - c_{TT} + n}
{c_{FT} + c_{TF} + n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
kulsinski : double
The Kulsinski distance between vectors `u` and `v`.
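Examples
--------
For two short 0/1 vectors:
>>> from scipy.spatial import distance
>>> distance.kulsinski([1, 0, 0], [0, 1, 0])
1.0
>>> distance.kulsinski([1, 0, 0], [1, 1, 0])
0.75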
"""
u = _validate_vector(u)
v = _validate_vector(v)
n = float(len(u))
(nff, nft, ntf, ntt) = _nbool_correspond_all(u, v)
return (ntf + nft - ntt + n) / (ntf + nft + n)
def seuclidean(u, v, V):
"""
Returns the standardized Euclidean distance between two 1-D arrays.
The standardized Euclidean distance between `u` and `v`.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
V : (N,) array_like
`V` is an 1-D array of component variances. It is usually computed
among a larger collection vectors.
Returns
-------
seuclidean : double
The standardized Euclidean distance between vectors `u` and `v`.
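Examples
--------
With unit variances this reduces to the ordinary Euclidean distance:
>>> from scipy.spatial import distance
>>> distance.seuclidean([1, 0, 0], [0, 1, 0], [1, 1, 1])
1.4142135623730951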
"""
u = _validate_vector(u)
v = _validate_vector(v)
V = _validate_vector(V, dtype=np.float64)
if V.shape[0] != u.shape[0] or u.shape[0] != v.shape[0]:
raise TypeError('V must be a 1-D array of the same dimension '
'as u and v.')
return np.sqrt(((u - v) ** 2 / V).sum())
def cityblock(u, v):
"""
Computes the City Block (Manhattan) distance.
Computes the Manhattan distance between two 1-D arrays `u` and `v`,
which is defined as
.. math::
\\sum_i {\\left| u_i - v_i \\right|}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
cityblock : double
The City Block (Manhattan) distance between vectors `u` and `v`.
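Examples
--------
The sum of absolute coordinate differences:
>>> from scipy.spatial import distance
>>> distance.cityblock([1, 0, 0], [0, 1, 0])
2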
"""
u = _validate_vector(u)
v = _validate_vector(v)
return abs(u - v).sum()
def mahalanobis(u, v, VI):
"""
Computes the Mahalanobis distance between two 1-D arrays.
The Mahalanobis distance between 1-D arrays `u` and `v`, is defined as
.. math::
\\sqrt{ (u-v) V^{-1} (u-v)^T }
where ``V`` is the covariance matrix. Note that the argument `VI`
is the inverse of ``V``.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
VI : ndarray
The inverse of the covariance matrix.
Returns
-------
mahalanobis : double
The Mahalanobis distance between vectors `u` and `v`.
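Examples
--------
With the identity matrix as inverse covariance, this reduces to the
Euclidean distance:
>>> import numpy as np
>>> from scipy.spatial import distance
>>> distance.mahalanobis([1, 0, 0], [0, 1, 0], np.eye(3))
1.4142135623730951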
"""
u = _validate_vector(u)
v = _validate_vector(v)
VI = np.atleast_2d(VI)
delta = u - v
m = np.dot(np.dot(delta, VI), delta)
return np.sqrt(m)
def chebyshev(u, v):
"""
Computes the Chebyshev distance.
Computes the Chebyshev distance between two 1-D arrays `u` and `v`,
which is defined as
.. math::
\\max_i {|u_i-v_i|}.
Parameters
----------
u : (N,) array_like
Input vector.
v : (N,) array_like
Input vector.
Returns
-------
chebyshev : double
The Chebyshev distance between vectors `u` and `v`.
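Examples
--------
The largest absolute coordinate difference:
>>> from scipy.spatial import distance
>>> distance.chebyshev([2, 0, 0], [0, 1, 0])
2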
"""
u = _validate_vector(u)
v = _validate_vector(v)
return max(abs(u - v))
def braycurtis(u, v):
"""
Computes the Bray-Curtis distance between two 1-D arrays.
Bray-Curtis distance is defined as
.. math::
\\sum{|u_i-v_i|} / \\sum{|u_i+v_i|}
The Bray-Curtis distance is in the range [0, 1] if all coordinates are
positive, and is undefined if the inputs are of length zero.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
braycurtis : double
The Bray-Curtis distance between 1-D arrays `u` and `v`.
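Examples
--------
For vectors with non-negative entries the result lies in [0, 1]:
>>> from scipy.spatial import distance
>>> distance.braycurtis([1, 0, 0], [0, 1, 0])
1.0
>>> distance.braycurtis([1, 1, 0], [0, 1, 0])
0.3333333333333333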
"""
u = _validate_vector(u)
v = _validate_vector(v, dtype=np.float64)
return abs(u - v).sum() / abs(u + v).sum()
def canberra(u, v):
"""
Computes the Canberra distance between two 1-D arrays.
The Canberra distance is defined as
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}.
Parameters
----------
u : (N,) array_like
Input array.
v : (N,) array_like
Input array.
Returns
-------
canberra : double
The Canberra distance between vectors `u` and `v`.
Notes
-----
When `u[i]` and `v[i]` are 0 for given i, then the fraction 0/0 = 0 is
used in the calculation.
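Examples
--------
Here the shared zero component contributes a 0/0 term, which is
treated as 0:
>>> from scipy.spatial import distance
>>> distance.canberra([1, 0, 0], [0, 1, 0])
2.0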
"""
u = _validate_vector(u)
v = _validate_vector(v, dtype=np.float64)
olderr = np.seterr(invalid='ignore')
try:
d = np.nansum(abs(u - v) / (abs(u) + abs(v)))
finally:
np.seterr(**olderr)
return d
def _nbool_correspond_all(u, v):
if u.dtype != v.dtype:
raise TypeError("Arrays being compared must be of the same data type.")
if u.dtype == np.int or u.dtype == np.float_ or u.dtype == np.double:
not_u = 1.0 - u
not_v = 1.0 - v
nff = (not_u * not_v).sum()
nft = (not_u * v).sum()
ntf = (u * not_v).sum()
ntt = (u * v).sum()
elif u.dtype == np.bool:
not_u = ~u
not_v = ~v
nff = (not_u & not_v).sum()
nft = (not_u & v).sum()
ntf = (u & not_v).sum()
ntt = (u & v).sum()
else:
raise TypeError("Arrays being compared have unknown type.")
return (nff, nft, ntf, ntt)
def _nbool_correspond_ft_tf(u, v):
if u.dtype == np.int or u.dtype == np.float_ or u.dtype == np.double:
not_u = 1.0 - u
not_v = 1.0 - v
nft = (not_u * v).sum()
ntf = (u * not_v).sum()
else:
not_u = ~u
not_v = ~v
nft = (not_u & v).sum()
ntf = (u & not_v).sum()
return (nft, ntf)
def yule(u, v):
"""
Computes the Yule dissimilarity between two boolean 1-D arrays.
The Yule dissimilarity is defined as
.. math::
\\frac{R}{c_{TT} * c_{FF} + \\frac{R}{2}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2.0 * c_{TF} * c_{FT}`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
yule : double
The Yule dissimilarity between vectors `u` and `v`.
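Examples
--------
For two short 0/1 vectors:
>>> from scipy.spatial import distance
>>> distance.yule([1, 0, 1, 0], [0, 1, 1, 0])
1.0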
"""
u = _validate_vector(u)
v = _validate_vector(v)
(nff, nft, ntf, ntt) = _nbool_correspond_all(u, v)
return float(2.0 * ntf * nft) / float(ntt * nff + ntf * nft)
def matching(u, v):
"""
Computes the Matching dissimilarity between two boolean 1-D arrays.
The Matching dissimilarity between two boolean 1-D arrays
`u` and `v`, is defined as
.. math::
\\frac{c_{TF} + c_{FT}}{n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
matching : double
The Matching dissimilarity between vectors `u` and `v`.
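Examples
--------
The proportion of mismatching components:
>>> from scipy.spatial import distance
>>> distance.matching([1, 0, 0], [0, 1, 0])
0.6666666666666666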
"""
u = _validate_vector(u)
v = _validate_vector(v)
(nft, ntf) = _nbool_correspond_ft_tf(u, v)
return float(nft + ntf) / float(len(u))
def dice(u, v):
"""
Computes the Dice dissimilarity between two boolean 1-D arrays.
The Dice dissimilarity between `u` and `v`, is
.. math::
\\frac{c_{TF} + c_{FT}}
{2c_{TT} + c_{FT} + c_{TF}}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) ndarray, bool
Input 1-D array.
v : (N,) ndarray, bool
Input 1-D array.
Returns
-------
dice : double
The Dice dissimilarity between 1-D arrays `u` and `v`.
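Examples
--------
For two short 0/1 vectors:
>>> from scipy.spatial import distance
>>> distance.dice([1, 0, 0], [0, 1, 0])
1.0
>>> distance.dice([1, 1, 0], [1, 0, 0])
0.3333333333333333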
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == np.bool:
ntt = (u & v).sum()
else:
ntt = (u * v).sum()
(nft, ntf) = _nbool_correspond_ft_tf(u, v)
return float(ntf + nft) / float(2.0 * ntt + ntf + nft)
def rogerstanimoto(u, v):
"""
Computes the Rogers-Tanimoto dissimilarity between two boolean 1-D arrays.
The Rogers-Tanimoto dissimilarity between two boolean 1-D arrays
`u` and `v`, is defined as
.. math::
\\frac{R}
{c_{TT} + c_{FF} + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2(c_{TF} + c_{FT})`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
rogerstanimoto : double
The Rogers-Tanimoto dissimilarity between vectors
`u` and `v`.
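Examples
--------
For two short 0/1 vectors that disagree in two positions:
>>> from scipy.spatial import distance
>>> distance.rogerstanimoto([1, 0, 0], [0, 1, 0])
0.8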
"""
u = _validate_vector(u)
v = _validate_vector(v)
(nff, nft, ntf, ntt) = _nbool_correspond_all(u, v)
return float(2.0 * (ntf + nft)) / float(ntt + nff + (2.0 * (ntf + nft)))
def russellrao(u, v):
"""
Computes the Russell-Rao dissimilarity between two boolean 1-D arrays.
The Russell-Rao dissimilarity between two boolean 1-D arrays, `u` and
`v`, is defined as
.. math::
\\frac{n - c_{TT}}
{n}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
russellrao : double
The Russell-Rao dissimilarity between vectors `u` and `v`.
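Examples
--------
Only positions that are true in both vectors lower the dissimilarity:
>>> from scipy.spatial import distance
>>> distance.russellrao([1, 0, 0], [0, 1, 0])
1.0
>>> distance.russellrao([1, 0, 0], [1, 1, 0])
0.6666666666666666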
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == np.bool:
ntt = (u & v).sum()
else:
ntt = (u * v).sum()
return float(len(u) - ntt) / float(len(u))
def sokalmichener(u, v):
"""
Computes the Sokal-Michener dissimilarity between two boolean 1-D arrays.
The Sokal-Michener dissimilarity between boolean 1-D arrays `u` and `v`,
is defined as
.. math::
\\frac{R}
{S + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n`, :math:`R = 2 * (c_{TF} + c_{FT})` and
:math:`S = c_{FF} + c_{TT}`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
sokalmichener : double
The Sokal-Michener dissimilarity between vectors `u` and `v`.
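Examples
--------
For two short 0/1 vectors:
>>> from scipy.spatial import distance
>>> distance.sokalmichener([1, 0, 0], [0, 1, 0])
0.8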
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == np.bool:
ntt = (u & v).sum()
nff = (~u & ~v).sum()
else:
ntt = (u * v).sum()
nff = ((1.0 - u) * (1.0 - v)).sum()
(nft, ntf) = _nbool_correspond_ft_tf(u, v)
return float(2.0 * (ntf + nft)) / float(ntt + nff + 2.0 * (ntf + nft))
def sokalsneath(u, v):
"""
Computes the Sokal-Sneath dissimilarity between two boolean 1-D arrays.
The Sokal-Sneath dissimilarity between `u` and `v`,
.. math::
\\frac{R}
{c_{TT} + R}
where :math:`c_{ij}` is the number of occurrences of
:math:`\\mathtt{u[k]} = i` and :math:`\\mathtt{v[k]} = j` for
:math:`k < n` and :math:`R = 2(c_{TF} + c_{FT})`.
Parameters
----------
u : (N,) array_like, bool
Input array.
v : (N,) array_like, bool
Input array.
Returns
-------
sokalsneath : double
The Sokal-Sneath dissimilarity between vectors `u` and `v`.
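Examples
--------
For two short 0/1 vectors with no shared true positions:
>>> from scipy.spatial import distance
>>> distance.sokalsneath([1, 0, 0], [0, 1, 0])
1.0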
"""
u = _validate_vector(u)
v = _validate_vector(v)
if u.dtype == np.bool:
ntt = (u & v).sum()
else:
ntt = (u * v).sum()
(nft, ntf) = _nbool_correspond_ft_tf(u, v)
denom = ntt + 2.0 * (ntf + nft)
if denom == 0:
raise ValueError('Sokal-Sneath dissimilarity is not defined for '
'vectors that are entirely false.')
return float(2.0 * (ntf + nft)) / denom
def pdist(X, metric='euclidean', p=2, w=None, V=None, VI=None):
"""
Pairwise distances between observations in n-dimensional space.
The following are common calling conventions.
1. ``Y = pdist(X, 'euclidean')``
Computes the distance between m points using Euclidean distance
(2-norm) as the distance metric between the points. The points
are arranged as m n-dimensional row vectors in the matrix X.
2. ``Y = pdist(X, 'minkowski', p)``
Computes the distances using the Minkowski distance
:math:`||u-v||_p` (p-norm) where :math:`p \\geq 1`.
3. ``Y = pdist(X, 'cityblock')``
Computes the city block or Manhattan distance between the
points.
4. ``Y = pdist(X, 'seuclidean', V=None)``
Computes the standardized Euclidean distance. The standardized
Euclidean distance between two n-vectors ``u`` and ``v`` is
.. math::
\\sqrt{\\sum {(u_i-v_i)^2 / V[i]}}
V is the variance vector; V[i] is the variance computed over all
the i'th components of the points. If not passed, it is
automatically computed.
5. ``Y = pdist(X, 'sqeuclidean')``
Computes the squared Euclidean distance :math:`||u-v||_2^2` between
the vectors.
6. ``Y = pdist(X, 'cosine')``
Computes the cosine distance between vectors u and v,
.. math::
1 - \\frac{u \\cdot v}
{{||u||}_2 {||v||}_2}
where :math:`||*||_2` is the 2-norm of its argument ``*``, and
:math:`u \\cdot v` is the dot product of ``u`` and ``v``.
7. ``Y = pdist(X, 'correlation')``
Computes the correlation distance between vectors u and v. This is
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{||(u - \\bar{u})||}_2 {||(v - \\bar{v})||}_2}
where :math:`\\bar{v}` is the mean of the elements of vector v,
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
8. ``Y = pdist(X, 'hamming')``
Computes the normalized Hamming distance, or the proportion of
those vector elements between two n-vectors ``u`` and ``v``
which disagree. To save memory, the matrix ``X`` can be of type
boolean.
9. ``Y = pdist(X, 'jaccard')``
Computes the Jaccard distance between the points. Given two
vectors, ``u`` and ``v``, the Jaccard distance is the
proportion of those elements ``u[i]`` and ``v[i]`` that
disagree where at least one of them is non-zero.
10. ``Y = pdist(X, 'chebyshev')``
Computes the Chebyshev distance between the points. The
Chebyshev distance between two n-vectors ``u`` and ``v`` is the
maximum norm-1 distance between their respective elements. More
precisely, the distance is given by
.. math::
d(u,v) = \\max_i {|u_i-v_i|}
11. ``Y = pdist(X, 'canberra')``
Computes the Canberra distance between the points. The
Canberra distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}
12. ``Y = pdist(X, 'braycurtis')``
Computes the Bray-Curtis distance between the points. The
Bray-Curtis distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\frac{\\sum_i {|u_i-v_i|}}
{\\sum_i {|u_i+v_i|}}
13. ``Y = pdist(X, 'mahalanobis', VI=None)``
Computes the Mahalanobis distance between the points. The
Mahalanobis distance between two points ``u`` and ``v`` is
:math:`(u-v)(1/V)(u-v)^T` where :math:`(1/V)` (the ``VI``
variable) is the inverse covariance. If ``VI`` is not None,
``VI`` will be used as the inverse covariance matrix.
14. ``Y = pdist(X, 'yule')``
Computes the Yule distance between each pair of boolean
vectors. (see yule function documentation)
15. ``Y = pdist(X, 'matching')``
Computes the matching distance between each pair of boolean
vectors. (see matching function documentation)
16. ``Y = pdist(X, 'dice')``
Computes the Dice distance between each pair of boolean
vectors. (see dice function documentation)
17. ``Y = pdist(X, 'kulsinski')``
Computes the Kulsinski distance between each pair of
boolean vectors. (see kulsinski function documentation)
18. ``Y = pdist(X, 'rogerstanimoto')``
Computes the Rogers-Tanimoto distance between each pair of
boolean vectors. (see rogerstanimoto function documentation)
19. ``Y = pdist(X, 'russellrao')``
Computes the Russell-Rao distance between each pair of
boolean vectors. (see russellrao function documentation)
20. ``Y = pdist(X, 'sokalmichener')``
Computes the Sokal-Michener distance between each pair of
boolean vectors. (see sokalmichener function documentation)
21. ``Y = pdist(X, 'sokalsneath')``
Computes the Sokal-Sneath distance between each pair of
boolean vectors. (see sokalsneath function documentation)
22. ``Y = pdist(X, 'wminkowski')``
Computes the weighted Minkowski distance between each pair of
vectors. (see wminkowski function documentation)
23. ``Y = pdist(X, f)``
Computes the distance between all pairs of vectors in X
using the user supplied 2-arity function f. For example,
Euclidean distance between the vectors could be computed
as follows::
dm = pdist(X, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of
the distance functions defined in this library. For example,::
dm = pdist(X, sokalsneath)
would calculate the pair-wise distances between the vectors in
X using the Python function sokalsneath. This would result in
sokalsneath being called :math:`{n \\choose 2}` times, which
is inefficient. Instead, the optimized C version is more
efficient, and we call it using the following syntax.::
dm = pdist(X, 'sokalsneath')
Parameters
----------
X : ndarray
An m by n array of m original observations in an
n-dimensional space.
metric : string or function
The distance metric to use. The distance function can
be 'braycurtis', 'canberra', 'chebyshev', 'cityblock',
'correlation', 'cosine', 'dice', 'euclidean', 'hamming',
'jaccard', 'kulsinski', 'mahalanobis', 'matching',
'minkowski', 'rogerstanimoto', 'russellrao', 'seuclidean',
'sokalmichener', 'sokalsneath', 'sqeuclidean', 'wminkowski', 'yule'.
w : ndarray
The weight vector (for weighted Minkowski).
p : double
The p-norm to apply (for Minkowski, weighted and unweighted)
V : ndarray
The variance vector (for standardized Euclidean).
VI : ndarray
The inverse of the covariance matrix (for Mahalanobis).
Returns
-------
Y : ndarray
Returns a condensed distance matrix Y. For
each :math:`i` and :math:`j` (where :math:`i<j<m`), the
metric ``dist(u=X[i], v=X[j])`` is computed and stored in entry ``ij``.
See Also
--------
squareform : converts between condensed distance matrices and
square distance matrices.
Notes
-----
See ``squareform`` for information on how to calculate the index of
this entry or to convert the condensed distance matrix to a
redundant square matrix.
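Examples
--------
Condensed Euclidean distances between three points in the plane; the
entries correspond to the pairs (0, 1), (0, 2) and (1, 2):
>>> from scipy.spatial.distance import pdist
>>> X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]]
>>> Y = pdist(X, 'euclidean')
>>> Y.shape
(3,)
>>> Y[2]
1.4142135623730951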
"""
# 24. Y = pdist(X, 'test_Y')
#
# Computes the distance between all pairs of vectors in X
# using the distance metric Y but with a more succinct,
# verifiable, but less efficient implementation.
X = np.asarray(X, order='c')
# The C code doesn't do striding.
[X] = _copy_arrays_if_base_present([_convert_to_double(X)])
s = X.shape
if len(s) != 2:
raise ValueError('A 2-dimensional array must be passed.')
m, n = s
dm = np.zeros((m * (m - 1)) // 2, dtype=np.double)
wmink_names = ['wminkowski', 'wmi', 'wm', 'wpnorm']
if w is None and (metric == wminkowski or metric in wmink_names):
raise ValueError('weighted minkowski requires a weight '
'vector `w` to be given.')
if callable(metric):
if metric == minkowski:
def dfun(u, v):
return minkowski(u, v, p)
elif metric == wminkowski:
def dfun(u, v):
return wminkowski(u, v, p, w)
elif metric == seuclidean:
def dfun(u, v):
return seuclidean(u, v, V)
elif metric == mahalanobis:
def dfun(u, v):
return mahalanobis(u, v, VI)
else:
dfun = metric
k = 0
for i in xrange(0, m - 1):
for j in xrange(i + 1, m):
dm[k] = dfun(X[i], X[j])
k = k + 1
elif isinstance(metric, string_types):
mstr = metric.lower()
#if X.dtype != np.double and \
# (mstr != 'hamming' and mstr != 'jaccard'):
# TypeError('A double array must be passed.')
if mstr in set(['euclidean', 'euclid', 'eu', 'e']):
_distance_wrap.pdist_euclidean_wrap(_convert_to_double(X), dm)
elif mstr in set(['sqeuclidean', 'sqe', 'sqeuclid']):
_distance_wrap.pdist_sqeuclidean_wrap(_convert_to_double(X), dm)
elif mstr in set(['cityblock', 'cblock', 'cb', 'c']):
_distance_wrap.pdist_city_block_wrap(X, dm)
elif mstr in set(['hamming', 'hamm', 'ha', 'h']):
if X.dtype == np.bool:
_distance_wrap.pdist_hamming_bool_wrap(_convert_to_bool(X), dm)
else:
_distance_wrap.pdist_hamming_wrap(_convert_to_double(X), dm)
elif mstr in set(['jaccard', 'jacc', 'ja', 'j']):
if X.dtype == np.bool:
_distance_wrap.pdist_jaccard_bool_wrap(_convert_to_bool(X), dm)
else:
_distance_wrap.pdist_jaccard_wrap(_convert_to_double(X), dm)
elif mstr in set(['chebychev', 'chebyshev', 'cheby', 'cheb', 'ch']):
_distance_wrap.pdist_chebyshev_wrap(_convert_to_double(X), dm)
elif mstr in set(['minkowski', 'mi', 'm']):
_distance_wrap.pdist_minkowski_wrap(_convert_to_double(X), dm, p)
elif mstr in wmink_names:
_distance_wrap.pdist_weighted_minkowski_wrap(_convert_to_double(X),
dm, p, np.asarray(w))
elif mstr in set(['seuclidean', 'se', 's']):
if V is not None:
V = np.asarray(V, order='c')
if type(V) != np.ndarray:
raise TypeError('Variance vector V must be a numpy array')
if V.dtype != np.double:
raise TypeError('Variance vector V must contain doubles.')
if len(V.shape) != 1:
raise ValueError('Variance vector V must '
'be one-dimensional.')
if V.shape[0] != n:
raise ValueError('Variance vector V must be of the same '
'dimension as the vectors on which the distances '
'are computed.')
# The C code doesn't do striding.
[VV] = _copy_arrays_if_base_present([_convert_to_double(V)])
else:
VV = np.var(X, axis=0, ddof=1)
_distance_wrap.pdist_seuclidean_wrap(_convert_to_double(X), VV, dm)
# Need to test whether vectorized cosine works better.
# Find out: Is there a dot subtraction operator so I can
# subtract matrices in a similar way to multiplying them?
# Need to get rid of as much unnecessary C code as possible.
elif mstr in set(['cosine', 'cos']):
norms = np.sqrt(np.sum(X * X, axis=1))
_distance_wrap.pdist_cosine_wrap(_convert_to_double(X), dm, norms)
elif mstr in set(['old_cosine', 'old_cos']):
norms = np.sqrt(np.sum(X * X, axis=1))
nV = norms.reshape(m, 1)
# The numerator u * v
nm = np.dot(X, X.T)
# The denom. ||u||*||v||
de = np.dot(nV, nV.T)
dm = 1.0 - (nm / de)
dm[xrange(0, m), xrange(0, m)] = 0.0
dm = squareform(dm)
elif mstr in set(['correlation', 'co']):
X2 = X - X.mean(1)[:, np.newaxis]
#X2 = X - np.matlib.repmat(np.mean(X, axis=1).reshape(m, 1), 1, n)
norms = np.sqrt(np.sum(X2 * X2, axis=1))
_distance_wrap.pdist_cosine_wrap(_convert_to_double(X2),
_convert_to_double(dm),
_convert_to_double(norms))
elif mstr in set(['mahalanobis', 'mahal', 'mah']):
if VI is not None:
VI = _convert_to_double(np.asarray(VI, order='c'))
if type(VI) != np.ndarray:
raise TypeError('VI must be a numpy array.')
if VI.dtype != np.double:
raise TypeError('The array must contain 64-bit floats.')
[VI] = _copy_arrays_if_base_present([VI])
else:
V = np.cov(X.T)
VI = _convert_to_double(np.linalg.inv(V).T.copy())
# (u-v)V^(-1)(u-v)^T
_distance_wrap.pdist_mahalanobis_wrap(_convert_to_double(X),
VI, dm)
elif mstr == 'canberra':
_distance_wrap.pdist_canberra_wrap(_convert_to_double(X), dm)
elif mstr == 'braycurtis':
_distance_wrap.pdist_bray_curtis_wrap(_convert_to_double(X), dm)
elif mstr == 'yule':
_distance_wrap.pdist_yule_bool_wrap(_convert_to_bool(X), dm)
elif mstr == 'matching':
_distance_wrap.pdist_matching_bool_wrap(_convert_to_bool(X), dm)
elif mstr == 'kulsinski':
_distance_wrap.pdist_kulsinski_bool_wrap(_convert_to_bool(X), dm)
elif mstr == 'dice':
_distance_wrap.pdist_dice_bool_wrap(_convert_to_bool(X), dm)
elif mstr == 'rogerstanimoto':
_distance_wrap.pdist_rogerstanimoto_bool_wrap(_convert_to_bool(X),
dm)
elif mstr == 'russellrao':
_distance_wrap.pdist_russellrao_bool_wrap(_convert_to_bool(X), dm)
elif mstr == 'sokalmichener':
_distance_wrap.pdist_sokalmichener_bool_wrap(_convert_to_bool(X),
dm)
elif mstr == 'sokalsneath':
_distance_wrap.pdist_sokalsneath_bool_wrap(_convert_to_bool(X), dm)
elif metric == 'test_euclidean':
dm = pdist(X, euclidean)
elif metric == 'test_sqeuclidean':
dm = pdist(X, sqeuclidean)
elif metric == 'test_seuclidean':
if V is None:
V = np.var(X, axis=0, ddof=1)
else:
V = np.asarray(V, order='c')
dm = pdist(X, lambda u, v: seuclidean(u, v, V))
elif metric == 'test_braycurtis':
dm = pdist(X, braycurtis)
elif metric == 'test_mahalanobis':
if VI is None:
V = np.cov(X.T)
VI = np.linalg.inv(V)
else:
VI = np.asarray(VI, order='c')
[VI] = _copy_arrays_if_base_present([VI])
# (u-v)V^(-1)(u-v)^T
dm = pdist(X, (lambda u, v: mahalanobis(u, v, VI)))
elif metric == 'test_canberra':
dm = pdist(X, canberra)
elif metric == 'test_cityblock':
dm = pdist(X, cityblock)
elif metric == 'test_minkowski':
dm = pdist(X, minkowski, p=p)
elif metric == 'test_wminkowski':
dm = pdist(X, wminkowski, p=p, w=w)
elif metric == 'test_cosine':
dm = pdist(X, cosine)
elif metric == 'test_correlation':
dm = pdist(X, correlation)
elif metric == 'test_hamming':
dm = pdist(X, hamming)
elif metric == 'test_jaccard':
dm = pdist(X, jaccard)
elif metric == 'test_chebyshev' or metric == 'test_chebychev':
dm = pdist(X, chebyshev)
elif metric == 'test_yule':
dm = pdist(X, yule)
elif metric == 'test_matching':
dm = pdist(X, matching)
elif metric == 'test_dice':
dm = pdist(X, dice)
elif metric == 'test_kulsinski':
dm = pdist(X, kulsinski)
elif metric == 'test_rogerstanimoto':
dm = pdist(X, rogerstanimoto)
elif metric == 'test_russellrao':
dm = pdist(X, russellrao)
elif metric == 'test_sokalsneath':
dm = pdist(X, sokalsneath)
elif metric == 'test_sokalmichener':
dm = pdist(X, sokalmichener)
else:
raise ValueError('Unknown Distance Metric: %s' % mstr)
else:
raise TypeError('2nd argument metric must be a string identifier '
'or a function.')
return dm
def squareform(X, force="no", checks=True):
"""
Converts a vector-form distance vector to a square-form distance
matrix, and vice-versa.
Parameters
----------
X : ndarray
Either a condensed or redundant distance matrix.
force : str, optional
As with MATLAB(TM), if force is equal to 'tovector' or 'tomatrix',
the input will be treated as a distance matrix or distance vector
respectively.
checks : bool, optional
If `checks` is set to False, no checks will be made for matrix
symmetry nor zero diagonals. This is useful if it is known that
``X - X.T`` is small and ``diag(X)`` is close to zero.
These values are ignored anyway so they do not disrupt the
squareform transformation.
Returns
-------
Y : ndarray
If a condensed distance matrix is passed, a redundant one is
returned, or if a redundant one is passed, a condensed distance
matrix is returned.
Notes
-----
1. v = squareform(X)
Given a square d-by-d symmetric distance matrix X,
``v = squareform(X)`` returns a ``d * (d-1) / 2`` (i.e. binomial
coefficient :math:`{n \\choose 2}`) sized vector v.
``v[{n \\choose 2} - {n-i \\choose 2} + (j-i-1)]`` is the distance
between points i and j. If X is non-square or asymmetric, an error
is raised.
2. X = squareform(v)
Given a ``d * (d-1) / 2`` sized vector v for some integer d >= 2
encoding distances as described, ``X = squareform(v)`` returns a
d-by-d distance matrix X. The X[i, j] and X[j, i] values are set to
``v[{n \\choose 2} - {n-i \\choose 2} + (j-i-1)]`` and all
diagonal elements are zero.
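Examples
--------
A round trip for d = 3 (the condensed form has d * (d-1) / 2 = 3
entries):
>>> from scipy.spatial.distance import squareform
>>> M = squareform([1.0, 2.0, 3.0])
>>> M.shape
(3, 3)
>>> M[0, 2]
2.0
>>> squareform(M)[1]
2.0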
"""
X = _convert_to_double(np.asarray(X, order='c'))
if not np.issubsctype(X, np.double):
raise TypeError('A double array must be passed.')
s = X.shape
if force.lower() == 'tomatrix':
if len(s) != 1:
raise ValueError("Forcing 'tomatrix' but input X is not a "
"distance vector.")
elif force.lower() == 'tovector':
if len(s) != 2:
raise ValueError("Forcing 'tovector' but input X is not a "
"distance matrix.")
# X = squareform(v)
if len(s) == 1:
if X.shape[0] == 0:
return np.zeros((1, 1), dtype=np.double)
# Grab the closest value to the square root of the number
# of elements times 2 to see if the number of elements
# is indeed a binomial coefficient.
d = int(np.ceil(np.sqrt(X.shape[0] * 2)))
# Check that v is of valid dimensions.
if d * (d - 1) / 2 != int(s[0]):
raise ValueError('Incompatible vector size. It must be a binomial '
'coefficient n choose 2 for some integer n >= 2.')
# Allocate memory for the distance matrix.
M = np.zeros((d, d), dtype=np.double)
# The C code does not support strided arrays; it relies on the
# array dimensions, so copy the underlying buffer if needed.
[X] = _copy_arrays_if_base_present([X])
# Fill in the values of the distance matrix.
_distance_wrap.to_squareform_from_vector_wrap(M, X)
# Return the distance matrix.
M = M + M.transpose()
return M
elif len(s) == 2:
if s[0] != s[1]:
raise ValueError('The matrix argument must be square.')
if checks:
is_valid_dm(X, throw=True, name='X')
# One-side of the dimensions is set here.
d = s[0]
if d <= 1:
return np.array([], dtype=np.double)
# Create a vector.
v = np.zeros((d * (d - 1)) // 2, dtype=np.double)
# The C code does not support strided arrays; it relies on the
# array dimensions, so copy the underlying buffer if needed.
[X] = _copy_arrays_if_base_present([X])
# Convert the square-form matrix to a condensed vector.
_distance_wrap.to_vector_from_squareform_wrap(X, v)
return v
else:
raise ValueError(('The first argument must be one or two dimensional '
'array. A %d-dimensional array is not '
'permitted') % len(s))
def is_valid_dm(D, tol=0.0, throw=False, name="D", warning=False):
"""
Returns True if input array is a valid distance matrix.
Distance matrices must be 2-dimensional numpy arrays containing
doubles. They must have a zero-diagonal, and they must be symmetric.
Parameters
----------
D : ndarray
The candidate object to test for validity.
tol : float, optional
The distance matrix should be symmetric. `tol` is the maximum
difference between entries ``ij`` and ``ji`` for the distance
metric to be considered symmetric.
throw : bool, optional
An exception is thrown if the distance matrix passed is not valid.
name : str, optional
The name of the variable to be checked. This is useful if
throw is set to True so the offending variable can be identified
in the exception message when an exception is thrown.
warning : bool, optional
Instead of throwing an exception, a warning message is
raised.
Returns
-------
valid : bool
True if the variable `D` passed is a valid distance matrix.
Notes
-----
Small numerical differences in `D` and `D.T` and non-zeroness of
the diagonal are ignored if they are within the tolerance specified
by `tol`.
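Examples
--------
A square all-zero matrix is a (trivially) valid distance matrix:
>>> import numpy as np
>>> from scipy.spatial.distance import is_valid_dm
>>> is_valid_dm(np.zeros((3, 3)))
True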
"""
D = np.asarray(D, order='c')
valid = True
try:
s = D.shape
if D.dtype != np.double:
if name:
raise TypeError(('Distance matrix \'%s\' must contain doubles '
'(double).') % name)
else:
raise TypeError('Distance matrix must contain doubles '
'(double).')
if len(D.shape) != 2:
if name:
raise ValueError(('Distance matrix \'%s\' must have shape=2 '
'(i.e. be two-dimensional).') % name)
else:
raise ValueError('Distance matrix must have shape=2 (i.e. '
'be two-dimensional).')
if tol == 0.0:
if not (D == D.T).all():
if name:
raise ValueError(('Distance matrix \'%s\' must be '
'symmetric.') % name)
else:
raise ValueError('Distance matrix must be symmetric.')
if not (D[xrange(0, s[0]), xrange(0, s[0])] == 0).all():
if name:
raise ValueError(('Distance matrix \'%s\' diagonal must '
'be zero.') % name)
else:
raise ValueError('Distance matrix diagonal must be zero.')
else:
if not (D - D.T <= tol).all():
if name:
raise ValueError(('Distance matrix \'%s\' must be '
'symmetric within tolerance %5.5f.')
% (name, tol))
else:
raise ValueError('Distance matrix must be symmetric within'
' tolerance %5.5f.' % tol)
if not (D[xrange(0, s[0]), xrange(0, s[0])] <= tol).all():
if name:
raise ValueError(('Distance matrix \'%s\' diagonal must be'
' close to zero within tolerance %5.5f.')
% (name, tol))
else:
raise ValueError(('Distance matrix diagonal must be close '
'to zero within tolerance %5.5f.') % tol)
except Exception as e:
if throw:
raise
if warning:
warnings.warn(str(e))
valid = False
return valid
def is_valid_y(y, warning=False, throw=False, name=None):
"""
Returns True if the input array is a valid condensed distance matrix.
Condensed distance matrices must be 1-dimensional
numpy arrays containing doubles. Their length must be a binomial
coefficient :math:`{n \\choose 2}` for some positive integer n.
Parameters
----------
y : ndarray
The condensed distance matrix.
warning : bool, optional
Invokes a warning if the variable passed is not a valid
condensed distance matrix. The warning message explains why
the distance matrix is not valid. `name` is used when
referencing the offending variable.
throw : bool, optional
Throws an exception if the variable passed is not a valid
condensed distance matrix.
name : str, optional
Used when referencing the offending variable in the
warning or exception message.
"""
y = np.asarray(y, order='c')
valid = True
try:
if type(y) != np.ndarray:
if name:
raise TypeError(('\'%s\' passed as a condensed distance '
'matrix is not a numpy array.') % name)
else:
raise TypeError('Variable is not a numpy array.')
if y.dtype != np.double:
if name:
raise TypeError(('Condensed distance matrix \'%s\' must '
'contain doubles (double).') % name)
else:
raise TypeError('Condensed distance matrix must contain '
'doubles (double).')
if len(y.shape) != 1:
if name:
raise ValueError(('Condensed distance matrix \'%s\' must '
'have shape=1 (i.e. be one-dimensional).')
% name)
else:
raise ValueError('Condensed distance matrix must have shape=1 '
'(i.e. be one-dimensional).')
n = y.shape[0]
d = int(np.ceil(np.sqrt(n * 2)))
if (d * (d - 1) / 2) != n:
if name:
raise ValueError(('Length n of condensed distance matrix '
'\'%s\' must be a binomial coefficient, i.e. '
'there must be a k such that '
'(k choose 2)=n!') % name)
else:
raise ValueError('Length n of condensed distance matrix must '
'be a binomial coefficient, i.e. there must '
'be a k such that (k choose 2)=n!')
except Exception as e:
if throw:
raise
if warning:
warnings.warn(str(e))
valid = False
return valid
def num_obs_dm(d):
"""
Returns the number of original observations that correspond to a
square, redundant distance matrix.
Parameters
----------
d : ndarray
The target distance matrix.
Returns
-------
num_obs_dm : int
The number of observations in the redundant distance matrix.
"""
d = np.asarray(d, order='c')
is_valid_dm(d, tol=np.inf, throw=True, name='d')
return d.shape[0]
def num_obs_y(Y):
"""
Returns the number of original observations that correspond to a
condensed distance matrix.
Parameters
----------
Y : ndarray
Condensed distance matrix.
Returns
-------
n : int
The number of observations in the condensed distance matrix `Y`.
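Examples
--------
A condensed matrix of length 6 = (4 choose 2) corresponds to 4
observations:
>>> import numpy as np
>>> from scipy.spatial.distance import num_obs_y
>>> num_obs_y(np.zeros(6))
4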
"""
Y = np.asarray(Y, order='c')
is_valid_y(Y, throw=True, name='Y')
k = Y.shape[0]
if k == 0:
raise ValueError("The number of observations cannot be determined on "
"an empty distance matrix.")
d = int(np.ceil(np.sqrt(k * 2)))
if (d * (d - 1) / 2) != k:
raise ValueError("Invalid condensed distance matrix passed. Must be "
"some k where k=(n choose 2) for some n >= 2.")
return d
def cdist(XA, XB, metric='euclidean', p=2, V=None, VI=None, w=None):
"""
Computes distance between each pair of the two collections of inputs.
The following are common calling conventions:
1. ``Y = cdist(XA, XB, 'euclidean')``
Computes the distance between :math:`m` points using
Euclidean distance (2-norm) as the distance metric between the
points. The points are arranged as :math:`m`
:math:`n`-dimensional row vectors in the matrix X.
2. ``Y = cdist(XA, XB, 'minkowski', p)``
Computes the distances using the Minkowski distance
:math:`||u-v||_p` (:math:`p`-norm) where :math:`p \\geq 1`.
3. ``Y = cdist(XA, XB, 'cityblock')``
Computes the city block or Manhattan distance between the
points.
4. ``Y = cdist(XA, XB, 'seuclidean', V=None)``
Computes the standardized Euclidean distance. The standardized
Euclidean distance between two n-vectors ``u`` and ``v`` is
.. math::
\\sqrt{\\sum {(u_i-v_i)^2 / V[i]}}.
V is the variance vector; V[i] is the variance computed over all
the i'th components of the points. If not passed, it is
automatically computed.
5. ``Y = cdist(XA, XB, 'sqeuclidean')``
Computes the squared Euclidean distance :math:`||u-v||_2^2` between
the vectors.
6. ``Y = cdist(XA, XB, 'cosine')``
Computes the cosine distance between vectors u and v,
.. math::
1 - \\frac{u \\cdot v}
{{||u||}_2 {||v||}_2}
where :math:`||*||_2` is the 2-norm of its argument ``*``, and
:math:`u \\cdot v` is the dot product of :math:`u` and :math:`v`.
7. ``Y = cdist(XA, XB, 'correlation')``
Computes the correlation distance between vectors u and v. This is
.. math::
1 - \\frac{(u - \\bar{u}) \\cdot (v - \\bar{v})}
{{||(u - \\bar{u})||}_2 {||(v - \\bar{v})||}_2}
where :math:`\\bar{v}` is the mean of the elements of vector v,
and :math:`x \\cdot y` is the dot product of :math:`x` and :math:`y`.
8. ``Y = cdist(XA, XB, 'hamming')``
Computes the normalized Hamming distance, or the proportion of
those vector elements between two n-vectors ``u`` and ``v``
which disagree. To save memory, the matrix ``X`` can be of type
boolean.
9. ``Y = cdist(XA, XB, 'jaccard')``
Computes the Jaccard distance between the points. Given two
vectors, ``u`` and ``v``, the Jaccard distance is the
proportion of those elements ``u[i]`` and ``v[i]`` that
disagree where at least one of them is non-zero.
10. ``Y = cdist(XA, XB, 'chebyshev')``
Computes the Chebyshev distance between the points. The
Chebyshev distance between two n-vectors ``u`` and ``v`` is the
maximum norm-1 distance between their respective elements. More
precisely, the distance is given by
.. math::
d(u,v) = \\max_i {|u_i-v_i|}.
11. ``Y = cdist(XA, XB, 'canberra')``
Computes the Canberra distance between the points. The
Canberra distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\sum_i \\frac{|u_i-v_i|}
{|u_i|+|v_i|}.
12. ``Y = cdist(XA, XB, 'braycurtis')``
Computes the Bray-Curtis distance between the points. The
Bray-Curtis distance between two points ``u`` and ``v`` is
.. math::
d(u,v) = \\frac{\\sum_i (|u_i-v_i|)}
{\\sum_i (|u_i+v_i|)}
13. ``Y = cdist(XA, XB, 'mahalanobis', VI=None)``
Computes the Mahalanobis distance between the points. The
Mahalanobis distance between two points ``u`` and ``v`` is
:math:`(u-v)(1/V)(u-v)^T` where :math:`(1/V)` (the ``VI``
variable) is the inverse covariance. If ``VI`` is not None,
``VI`` will be used as the inverse covariance matrix.
14. ``Y = cdist(XA, XB, 'yule')``
Computes the Yule distance between the boolean
vectors. (see `yule` function documentation)
15. ``Y = cdist(XA, XB, 'matching')``
Computes the matching distance between the boolean
vectors. (see `matching` function documentation)
16. ``Y = cdist(XA, XB, 'dice')``
Computes the Dice distance between the boolean vectors. (see
`dice` function documentation)
17. ``Y = cdist(XA, XB, 'kulsinski')``
Computes the Kulsinski distance between the boolean
vectors. (see `kulsinski` function documentation)
18. ``Y = cdist(XA, XB, 'rogerstanimoto')``
Computes the Rogers-Tanimoto distance between the boolean
vectors. (see `rogerstanimoto` function documentation)
19. ``Y = cdist(XA, XB, 'russellrao')``
Computes the Russell-Rao distance between the boolean
vectors. (see `russellrao` function documentation)
20. ``Y = cdist(XA, XB, 'sokalmichener')``
Computes the Sokal-Michener distance between the boolean
vectors. (see `sokalmichener` function documentation)
21. ``Y = cdist(XA, XB, 'sokalsneath')``
Computes the Sokal-Sneath distance between the vectors. (see
`sokalsneath` function documentation)
22. ``Y = cdist(XA, XB, 'wminkowski')``
Computes the weighted Minkowski distance between the
vectors. (see `wminkowski` function documentation)
23. ``Y = cdist(XA, XB, f)``
Computes the distance between all pairs of vectors in X
using the user supplied 2-arity function f. For example,
Euclidean distance between the vectors could be computed
as follows::
dm = cdist(XA, XB, lambda u, v: np.sqrt(((u-v)**2).sum()))
Note that you should avoid passing a reference to one of
the distance functions defined in this library. For example,::
dm = cdist(XA, XB, sokalsneath)
would calculate the pair-wise distances between the vectors in
X using the Python function `sokalsneath`. This would result in
sokalsneath being called :math:`{n \\choose 2}` times, which
is inefficient. Instead, the optimized C version is more
efficient, and we call it using the following syntax::
dm = cdist(XA, XB, 'sokalsneath')
Parameters
----------
XA : ndarray
An :math:`m_A` by :math:`n` array of :math:`m_A`
original observations in an :math:`n`-dimensional space.
Inputs are converted to float type.
XB : ndarray
An :math:`m_B` by :math:`n` array of :math:`m_B`
original observations in an :math:`n`-dimensional space.
Inputs are converted to float type.
metric : str or callable, optional
The distance metric to use. If a string, the distance function can be
'braycurtis', 'canberra', 'chebyshev', 'cityblock', 'correlation',
'cosine', 'dice', 'euclidean', 'hamming', 'jaccard', 'kulsinski',
'mahalanobis', 'matching', 'minkowski', 'rogerstanimoto', 'russellrao',
'seuclidean', 'sokalmichener', 'sokalsneath', 'sqeuclidean',
'wminkowski', 'yule'.
w : ndarray, optional
The weight vector (for weighted Minkowski).
p : scalar, optional
The p-norm to apply (for Minkowski, weighted and unweighted)
V : ndarray, optional
The variance vector (for standardized Euclidean).
VI : ndarray, optional
The inverse of the covariance matrix (for Mahalanobis).
Returns
-------
Y : ndarray
A :math:`m_A` by :math:`m_B` distance matrix is returned.
For each :math:`i` and :math:`j`, the metric
``dist(u=XA[i], v=XB[j])`` is computed and stored in the
:math:`ij` th entry.
Raises
------
ValueError
An exception is thrown if `XA` and `XB` do not have
the same number of columns.
Examples
--------
Find the Euclidean distances between four 2-D coordinates:
>>> from scipy.spatial import distance
>>> coords = [(35.0456, -85.2672),
... (35.1174, -89.9711),
... (35.9728, -83.9422),
... (36.1667, -86.7833)]
>>> distance.cdist(coords, coords, 'euclidean')
array([[ 0. , 4.7044, 1.6172, 1.8856],
[ 4.7044, 0. , 6.0893, 3.3561],
[ 1.6172, 6.0893, 0. , 2.8477],
[ 1.8856, 3.3561, 2.8477, 0. ]])
Find the Manhattan distance from a 3-D point to the corners of the unit
cube:
>>> import numpy as np
>>> a = np.array([[0, 0, 0],
... [0, 0, 1],
... [0, 1, 0],
... [0, 1, 1],
... [1, 0, 0],
... [1, 0, 1],
... [1, 1, 0],
... [1, 1, 1]])
>>> b = np.array([[ 0.1, 0.2, 0.4]])
>>> distance.cdist(a, b, 'cityblock')
array([[ 0.7],
[ 0.9],
[ 1.3],
[ 1.5],
[ 1.5],
[ 1.7],
[ 2.1],
[ 2.3]])
"""
# 24. Y = cdist(XA, XB, 'test_Y')
#
# Computes the distance between all pairs of vectors in X
# using the distance metric Y but with a more succinct,
# verifiable, but less efficient implementation.
XA = np.asarray(XA, order='c')
XB = np.asarray(XB, order='c')
#if np.issubsctype(X, np.floating) and not np.issubsctype(X, np.double):
# raise TypeError('Floating point arrays must be 64-bit (got %r).' %
# (X.dtype.type,))
# The C code doesn't do striding.
[XA] = _copy_arrays_if_base_present([_convert_to_double(XA)])
[XB] = _copy_arrays_if_base_present([_convert_to_double(XB)])
s = XA.shape
sB = XB.shape
if len(s) != 2:
raise ValueError('XA must be a 2-dimensional array.')
if len(sB) != 2:
raise ValueError('XB must be a 2-dimensional array.')
if s[1] != sB[1]:
raise ValueError('XA and XB must have the same number of columns '
'(i.e. feature dimension.)')
mA = s[0]
mB = sB[0]
n = s[1]
dm = np.zeros((mA, mB), dtype=np.double)
if callable(metric):
if metric == minkowski:
for i in xrange(0, mA):
for j in xrange(0, mB):
dm[i, j] = minkowski(XA[i, :], XB[j, :], p)
elif metric == wminkowski:
for i in xrange(0, mA):
for j in xrange(0, mB):
dm[i, j] = wminkowski(XA[i, :], XB[j, :], p, w)
elif metric == seuclidean:
for i in xrange(0, mA):
for j in xrange(0, mB):
dm[i, j] = seuclidean(XA[i, :], XB[j, :], V)
elif metric == mahalanobis:
for i in xrange(0, mA):
for j in xrange(0, mB):
dm[i, j] = mahalanobis(XA[i, :], XB[j, :], VI)
else:
for i in xrange(0, mA):
for j in xrange(0, mB):
dm[i, j] = metric(XA[i, :], XB[j, :])
elif isinstance(metric, string_types):
mstr = metric.lower()
#if XA.dtype != np.double and \
# (mstr != 'hamming' and mstr != 'jaccard'):
# TypeError('A double array must be passed.')
if mstr in set(['euclidean', 'euclid', 'eu', 'e']):
_distance_wrap.cdist_euclidean_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['sqeuclidean', 'sqe', 'sqeuclid']):
_distance_wrap.cdist_sqeuclidean_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['cityblock', 'cblock', 'cb', 'c']):
_distance_wrap.cdist_city_block_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['hamming', 'hamm', 'ha', 'h']):
if XA.dtype == np.bool:
_distance_wrap.cdist_hamming_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB),
dm)
else:
_distance_wrap.cdist_hamming_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['jaccard', 'jacc', 'ja', 'j']):
if XA.dtype == np.bool:
_distance_wrap.cdist_jaccard_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB),
dm)
else:
_distance_wrap.cdist_jaccard_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['chebychev', 'chebyshev', 'cheby', 'cheb', 'ch']):
_distance_wrap.cdist_chebyshev_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr in set(['minkowski', 'mi', 'm', 'pnorm']):
_distance_wrap.cdist_minkowski_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm, p)
elif mstr in set(['wminkowski', 'wmi', 'wm', 'wpnorm']):
_distance_wrap.cdist_weighted_minkowski_wrap(_convert_to_double(XA),
_convert_to_double(XB),
dm, p,
_convert_to_double(w))
elif mstr in set(['seuclidean', 'se', 's']):
if V is not None:
V = np.asarray(V, order='c')
if type(V) != np.ndarray:
raise TypeError('Variance vector V must be a numpy array')
if V.dtype != np.double:
raise TypeError('Variance vector V must contain doubles.')
if len(V.shape) != 1:
raise ValueError('Variance vector V must be '
'one-dimensional.')
if V.shape[0] != n:
raise ValueError('Variance vector V must be of the same '
'dimension as the vectors on which the '
'distances are computed.')
# The C code doesn't do striding.
[VV] = _copy_arrays_if_base_present([_convert_to_double(V)])
else:
X = np.vstack([XA, XB])
VV = np.var(X, axis=0, ddof=1)
X = None
del X
_distance_wrap.cdist_seuclidean_wrap(_convert_to_double(XA),
_convert_to_double(XB), VV, dm)
# Need to test whether vectorized cosine works better.
# Find out: Is there a dot subtraction operator so I can
# subtract matrices in a similar way to multiplying them?
# Need to get rid of as much unnecessary C code as possible.
elif mstr in set(['cosine', 'cos']):
normsA = np.sqrt(np.sum(XA * XA, axis=1))
normsB = np.sqrt(np.sum(XB * XB, axis=1))
_distance_wrap.cdist_cosine_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm,
normsA,
normsB)
elif mstr in set(['correlation', 'co']):
XA2 = XA - XA.mean(1)[:, np.newaxis]
XB2 = XB - XB.mean(1)[:, np.newaxis]
#X2 = X - np.matlib.repmat(np.mean(X, axis=1).reshape(m, 1), 1, n)
normsA = np.sqrt(np.sum(XA2 * XA2, axis=1))
normsB = np.sqrt(np.sum(XB2 * XB2, axis=1))
_distance_wrap.cdist_cosine_wrap(_convert_to_double(XA2),
_convert_to_double(XB2),
_convert_to_double(dm),
_convert_to_double(normsA),
_convert_to_double(normsB))
elif mstr in set(['mahalanobis', 'mahal', 'mah']):
if VI is not None:
VI = _convert_to_double(np.asarray(VI, order='c'))
if type(VI) != np.ndarray:
raise TypeError('VI must be a numpy array.')
if VI.dtype != np.double:
raise TypeError('The array must contain 64-bit floats.')
[VI] = _copy_arrays_if_base_present([VI])
else:
X = np.vstack([XA, XB])
V = np.cov(X.T)
X = None
del X
VI = _convert_to_double(np.linalg.inv(V).T.copy())
# (u-v)V^(-1)(u-v)^T
_distance_wrap.cdist_mahalanobis_wrap(_convert_to_double(XA),
_convert_to_double(XB),
VI, dm)
elif mstr == 'canberra':
_distance_wrap.cdist_canberra_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr == 'braycurtis':
_distance_wrap.cdist_bray_curtis_wrap(_convert_to_double(XA),
_convert_to_double(XB), dm)
elif mstr == 'yule':
_distance_wrap.cdist_yule_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB), dm)
elif mstr == 'matching':
_distance_wrap.cdist_matching_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB), dm)
elif mstr == 'kulsinski':
_distance_wrap.cdist_kulsinski_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB), dm)
elif mstr == 'dice':
_distance_wrap.cdist_dice_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB), dm)
elif mstr == 'rogerstanimoto':
_distance_wrap.cdist_rogerstanimoto_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB),
dm)
elif mstr == 'russellrao':
_distance_wrap.cdist_russellrao_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB), dm)
elif mstr == 'sokalmichener':
_distance_wrap.cdist_sokalmichener_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB),
dm)
elif mstr == 'sokalsneath':
_distance_wrap.cdist_sokalsneath_bool_wrap(_convert_to_bool(XA),
_convert_to_bool(XB),
dm)
elif metric == 'test_euclidean':
dm = cdist(XA, XB, euclidean)
elif metric == 'test_seuclidean':
if V is None:
V = np.var(np.vstack([XA, XB]), axis=0, ddof=1)
else:
V = np.asarray(V, order='c')
dm = cdist(XA, XB, lambda u, v: seuclidean(u, v, V))
elif metric == 'test_sqeuclidean':
dm = cdist(XA, XB, lambda u, v: sqeuclidean(u, v))
elif metric == 'test_braycurtis':
dm = cdist(XA, XB, braycurtis)
elif metric == 'test_mahalanobis':
if VI is None:
X = np.vstack([XA, XB])
V = np.cov(X.T)
VI = np.linalg.inv(V)
X = None
del X
else:
VI = np.asarray(VI, order='c')
[VI] = _copy_arrays_if_base_present([VI])
# (u-v)V^(-1)(u-v)^T
dm = cdist(XA, XB, (lambda u, v: mahalanobis(u, v, VI)))
elif metric == 'test_canberra':
dm = cdist(XA, XB, canberra)
elif metric == 'test_cityblock':
dm = cdist(XA, XB, cityblock)
elif metric == 'test_minkowski':
dm = cdist(XA, XB, minkowski, p=p)
elif metric == 'test_wminkowski':
dm = cdist(XA, XB, wminkowski, p=p, w=w)
elif metric == 'test_cosine':
dm = cdist(XA, XB, cosine)
elif metric == 'test_correlation':
dm = cdist(XA, XB, correlation)
elif metric == 'test_hamming':
dm = cdist(XA, XB, hamming)
elif metric == 'test_jaccard':
dm = cdist(XA, XB, jaccard)
elif metric == 'test_chebyshev' or metric == 'test_chebychev':
dm = cdist(XA, XB, chebyshev)
elif metric == 'test_yule':
dm = cdist(XA, XB, yule)
elif metric == 'test_matching':
dm = cdist(XA, XB, matching)
elif metric == 'test_dice':
dm = cdist(XA, XB, dice)
elif metric == 'test_kulsinski':
dm = cdist(XA, XB, kulsinski)
elif metric == 'test_rogerstanimoto':
dm = cdist(XA, XB, rogerstanimoto)
elif metric == 'test_russellrao':
dm = cdist(XA, XB, russellrao)
elif metric == 'test_sokalsneath':
dm = cdist(XA, XB, sokalsneath)
elif metric == 'test_sokalmichener':
dm = cdist(XA, XB, sokalmichener)
else:
raise ValueError('Unknown Distance Metric: %s' % mstr)
else:
raise TypeError('2nd argument metric must be a string identifier '
'or a function.')
return dm
| [
"[email protected]"
] | |
45778cfc1bbbd390bda742a7966334f7c5947b65 | 695bfbc92a1474a29270d46c7b4ae2805240b077 | /ch-02/09-MultilayerNeuralNetwork-blind-1.py | 195b75302a420f7861e71ae399207bba4e5d6e5b | [] | no_license | paulhendricks/python-machine-learning | 801563275e05fb1f611e9114581c5ef2f7b58125 | 8e1cb6bc37067cc239eaee69fd8aa13ffa405b68 | refs/heads/master | 2021-01-19T04:25:30.002300 | 2016-06-13T15:25:31 | 2016-06-13T15:25:31 | 50,059,572 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,045 | py | import numpy as np
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0, 1, 1, 0]]).T
synapse_0 = 2 * np.random.random((2, 10)) - 1
synapse_1 = 2 * np.random.random((10, 20)) - 1
synapse_2 = 2 * np.random.random((20, 10)) - 1
synapse_3 = 2 * np.random.random((10, 1)) - 1
for _ in range(10000):
layer_1 = 1 / (1 + np.exp(-np.dot(X, synapse_0)))
layer_2 = 1 / (1 + np.exp(-np.dot(layer_1, synapse_1)))
layer_3 = 1 / (1 + np.exp(-np.dot(layer_2, synapse_2)))
layer_4 = 1 / (1 + np.exp(-np.dot(layer_3, synapse_3)))
layer_4_delta = (y - layer_4) * (layer_4 * (1 - layer_4))
layer_3_delta = np.dot(layer_4_delta, synapse_3.T) * (layer_3 * (1 - layer_3))
layer_2_delta = np.dot(layer_3_delta, synapse_2.T) * (layer_2 * (1 - layer_2))
layer_1_delta = np.dot(layer_2_delta, synapse_1.T) * (layer_1 * (1 - layer_1))
synapse_0 += np.dot(X.T, layer_1_delta)
synapse_1 += np.dot(layer_1.T, layer_2_delta)
synapse_2 += np.dot(layer_2.T, layer_3_delta)
synapse_3 += np.dot(layer_3.T, layer_4_delta)
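# Added sketch (not part of the original script): report the trained
# network's XOR predictions; layer_4 still holds the outputs from the
# final training pass above.
print("predictions:", layer_4.ravel().round(2), "targets:", y.ravel())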
| [
"[email protected]"
] | |
66691c5eaa3d77e4a81f63cbe55492db46fc75dd | 6af7cf75fd919f05b47d7516da4568f92abef8aa | /actstream/migrations/0001_initial.py | 802ad43280d127b600e43ed99013de043db04460 | [] | no_license | mauler/activity-stream | 43bdb3363eda1362d66e4abec74c17a282e03d5e | 72477ebec544c686e1691566399545642f4ff104 | refs/heads/master | 2020-04-10T00:31:55.970480 | 2015-03-10T13:28:51 | 2015-03-10T13:28:51 | 32,038,589 | 0 | 1 | null | 2015-03-11T20:06:21 | 2015-03-11T20:06:21 | Python | UTF-8 | Python | false | false | 2,361 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import jsonfield.fields
import django.utils.timezone
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('contenttypes', '0001_initial'),
('subs', '0002_profile'),
]
operations = [
migrations.CreateModel(
name='Action',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('verb', models.CharField(max_length=255)),
('description', models.TextField(null=True, blank=True)),
('action_object_object_id', models.CharField(max_length=255, null=True, blank=True)),
('timestamp', models.DateTimeField(default=django.utils.timezone.now)),
('public', models.BooleanField(default=True)),
('data', jsonfield.fields.JSONField(null=True, blank=True)),
('action_object_content_type', models.ForeignKey(related_name='action_object', blank=True, to='contenttypes.ContentType', null=True)),
('actor', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
('target', models.ForeignKey(to='subs.Post')),
],
options={
'ordering': ('-timestamp',),
},
bases=(models.Model,),
),
migrations.CreateModel(
name='Follow',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('actor_only', models.BooleanField(default=True, verbose_name=b'Only follow actions where the object is the target.')),
('started', models.DateTimeField(default=django.utils.timezone.now)),
('follow_object', models.ForeignKey(related_name='followed_by', to=settings.AUTH_USER_MODEL)),
('user', models.ForeignKey(to=settings.AUTH_USER_MODEL)),
],
options={
},
bases=(models.Model,),
),
migrations.AlterUniqueTogether(
name='follow',
unique_together=set([('user', 'follow_object')]),
),
]
| [
"[email protected]"
] | |
9eaed7819d886ec933a65429535fe29bef2129d8 | eda8b801aab469113be4053a7d492dde0da63373 | /metadata-ingestion/src/datahub/ingestion/source/metadata/business_glossary.py | 28df9565d4a868d1ebf6811b9ab727c2fd6b9e6f | [
"Apache-2.0",
"LicenseRef-scancode-unknown-license-reference",
"BSD-2-Clause",
"MIT"
] | permissive | zuizeze/datahub | 630c00934e7b76eeb3a08feda7a112dd745ee18b | 63a37450103eff3e94d798c36bab6e47fbca47dc | refs/heads/master | 2022-09-06T19:53:02.219574 | 2022-08-13T00:35:53 | 2022-08-13T00:35:53 | 242,064,645 | 0 | 0 | Apache-2.0 | 2020-02-21T05:49:26 | 2020-02-21T05:49:25 | null | UTF-8 | Python | false | false | 9,058 | py | import logging
from dataclasses import dataclass, field
from typing import Any, Dict, Iterable, List, Optional, Union
from pydantic import validator
from pydantic.fields import Field
import datahub.metadata.schema_classes as models
from datahub.configuration.common import ConfigModel
from datahub.configuration.config_loader import load_config_file
from datahub.emitter.mce_builder import get_sys_time, make_group_urn, make_user_urn
from datahub.ingestion.api.decorators import ( # SourceCapability,; capability,
SupportStatus,
config_class,
platform_name,
support_status,
)
from datahub.ingestion.api.source import Source, SourceReport
from datahub.ingestion.api.workunit import MetadataWorkUnit, UsageStatsWorkUnit
logger = logging.getLogger(__name__)
valid_status: models.StatusClass = models.StatusClass(removed=False)
auditStamp = models.AuditStampClass(
time=get_sys_time(), actor="urn:li:corpUser:restEmitter"
)
class Owners(ConfigModel):
users: Optional[List[str]]
groups: Optional[List[str]]
class GlossaryTermConfig(ConfigModel):
name: str
description: str
term_source: Optional[str]
source_ref: Optional[str]
source_url: Optional[str]
owners: Optional[Owners]
inherits: Optional[List[str]]
contains: Optional[List[str]]
custom_properties: Optional[Dict[str, str]]
class GlossaryNodeConfig(ConfigModel):
name: str
description: str
owners: Optional[Owners]
terms: Optional[List[GlossaryTermConfig]]
nodes: Optional[List["GlossaryNodeConfig"]]
GlossaryNodeConfig.update_forward_refs()
class DefaultConfig(ConfigModel):
"""Holds defaults for populating fields in glossary terms"""
source: str
owners: Owners
url: Optional[str] = None
source_type: Optional[str] = "INTERNAL"
class BusinessGlossarySourceConfig(ConfigModel):
file: str = Field(description="Path to business glossary file to ingest.")
class BusinessGlossaryConfig(DefaultConfig):
version: str
nodes: Optional[List[GlossaryNodeConfig]]
terms: Optional[List[GlossaryTermConfig]]
@validator("version")
def version_must_be_1(cls, v):
if v != "1":
raise ValueError("Only version 1 is supported")
def make_glossary_node_urn(path: List[str]) -> str:
return "urn:li:glossaryNode:" + ".".join(path)
def make_glossary_term_urn(path: List[str]) -> str:
return "urn:li:glossaryTerm:" + ".".join(path)
def get_owners(owners: Owners) -> models.OwnershipClass:
owners_meta: List[models.OwnerClass] = []
if owners.users is not None:
owners_meta = owners_meta + [
models.OwnerClass(
owner=make_user_urn(o),
type=models.OwnershipTypeClass.DEVELOPER,
)
for o in owners.users
]
if owners.groups is not None:
owners_meta = owners_meta + [
models.OwnerClass(
owner=make_group_urn(o),
type=models.OwnershipTypeClass.DEVELOPER,
)
for o in owners.groups
]
return models.OwnershipClass(owners=owners_meta)
def get_mces(
glossary: BusinessGlossaryConfig,
) -> List[models.MetadataChangeEventClass]:
events: List[models.MetadataChangeEventClass] = []
path: List[str] = []
root_owners = get_owners(glossary.owners)
if glossary.nodes:
for node in glossary.nodes:
events += get_mces_from_node(
node,
path + [node.name],
parentNode=None,
parentOwners=root_owners,
defaults=glossary,
)
if glossary.terms:
for term in glossary.terms:
events += get_mces_from_term(
term,
path + [term.name],
parentNode=None,
parentOwnership=root_owners,
defaults=glossary,
)
return events
def get_mce_from_snapshot(snapshot: Any) -> models.MetadataChangeEventClass:
return models.MetadataChangeEventClass(proposedSnapshot=snapshot)
def get_mces_from_node(
glossaryNode: GlossaryNodeConfig,
path: List[str],
parentNode: Optional[str],
parentOwners: models.OwnershipClass,
defaults: DefaultConfig,
) -> List[models.MetadataChangeEventClass]:
node_urn = make_glossary_node_urn(path)
node_info = models.GlossaryNodeInfoClass(
definition=glossaryNode.description,
parentNode=parentNode,
)
node_owners = parentOwners
if glossaryNode.owners is not None:
assert glossaryNode.owners is not None
node_owners = get_owners(glossaryNode.owners)
node_snapshot = models.GlossaryNodeSnapshotClass(
urn=node_urn,
aspects=[node_info, node_owners, valid_status],
)
mces = [get_mce_from_snapshot(node_snapshot)]
if glossaryNode.nodes:
for node in glossaryNode.nodes:
mces += get_mces_from_node(
node,
path + [node.name],
parentNode=node_urn,
parentOwners=node_owners,
defaults=defaults,
)
if glossaryNode.terms:
for term in glossaryNode.terms:
mces += get_mces_from_term(
term,
path + [term.name],
parentNode=node_urn,
parentOwnership=node_owners,
defaults=defaults,
)
return mces
def get_mces_from_term(
glossaryTerm: GlossaryTermConfig,
path: List[str],
parentNode: Optional[str],
parentOwnership: models.OwnershipClass,
defaults: DefaultConfig,
) -> List[models.MetadataChangeEventClass]:
term_urn = make_glossary_term_urn(path)
aspects: List[
Union[
models.GlossaryTermInfoClass,
models.GlossaryRelatedTermsClass,
models.OwnershipClass,
models.StatusClass,
models.GlossaryTermKeyClass,
models.BrowsePathsClass,
]
] = []
term_info = models.GlossaryTermInfoClass(
definition=glossaryTerm.description,
termSource=glossaryTerm.term_source # type: ignore
if glossaryTerm.term_source is not None
else defaults.source_type,
sourceRef=glossaryTerm.source_ref
if glossaryTerm.source_ref
else defaults.source,
sourceUrl=glossaryTerm.source_url if glossaryTerm.source_url else defaults.url,
parentNode=parentNode,
customProperties=glossaryTerm.custom_properties,
)
aspects.append(term_info)
isA_related = None
hasA_related = None
if glossaryTerm.inherits is not None:
assert glossaryTerm.inherits is not None
isA_related = [make_glossary_term_urn([term]) for term in glossaryTerm.inherits]
if glossaryTerm.contains is not None:
assert glossaryTerm.contains is not None
hasA_related = [
make_glossary_term_urn([term]) for term in glossaryTerm.contains
]
if isA_related is not None or hasA_related is not None:
relatedTerms = models.GlossaryRelatedTermsClass(
isRelatedTerms=isA_related, hasRelatedTerms=hasA_related
)
aspects.append(relatedTerms)
ownership: models.OwnershipClass = parentOwnership
if glossaryTerm.owners is not None:
assert glossaryTerm.owners is not None
ownership = get_owners(glossaryTerm.owners)
aspects.append(ownership)
term_browse = models.BrowsePathsClass(paths=["/" + "/".join(path)])
aspects.append(term_browse)
term_snapshot: models.GlossaryTermSnapshotClass = models.GlossaryTermSnapshotClass(
urn=term_urn,
aspects=aspects,
)
return [get_mce_from_snapshot(term_snapshot)]
@platform_name("Business Glossary")
@config_class(BusinessGlossarySourceConfig)
@support_status(SupportStatus.CERTIFIED)
@dataclass
class BusinessGlossaryFileSource(Source):
"""
This plugin pulls business glossary metadata from a yaml-formatted file. An example of one such file is located in the examples directory [here](../examples/bootstrap_data/business_glossary.yml).
"""
config: BusinessGlossarySourceConfig
report: SourceReport = field(default_factory=SourceReport)
@classmethod
def create(cls, config_dict, ctx):
config = BusinessGlossarySourceConfig.parse_obj(config_dict)
return cls(ctx, config)
def load_glossary_config(self, file_name: str) -> BusinessGlossaryConfig:
config = load_config_file(file_name)
glossary_cfg = BusinessGlossaryConfig.parse_obj(config)
return glossary_cfg
def get_workunits(self) -> Iterable[Union[MetadataWorkUnit, UsageStatsWorkUnit]]:
glossary_config = self.load_glossary_config(self.config.file)
for mce in get_mces(glossary_config):
wu = MetadataWorkUnit(f"{mce.proposedSnapshot.urn}", mce=mce)
self.report.report_workunit(wu)
yield wu
def get_report(self):
return self.report
def close(self):
pass
| [
"[email protected]"
] | |
2617fe025e7ddc018281f853dec2e238a062ff20 | ce76b3ef70b885d7c354b6ddb8447d111548e0f1 | /ask_hand_in_able_problem/number_and_person.py | 4b08ca79400b115b7e926ee6a490cd123528665b | [] | no_license | JingkaiTang/github-play | 9bdca4115eee94a7b5e4ae9d3d6052514729ff21 | 51b550425a91a97480714fe9bc63cb5112f6f729 | refs/heads/master | 2021-01-20T20:18:21.249162 | 2016-08-19T07:20:12 | 2016-08-19T07:20:12 | 60,834,519 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 201 | py |
#! /usr/bin/env python
def part_and_day(str_arg):
early_time(str_arg)
print('ask_hand')
def early_time(str_arg):
print(str_arg)
if __name__ == '__main__':
part_and_day('early_man')
| [
"[email protected]"
] | |
046d8c76bb32eecf5ba9ad872efb2f8333d28fd3 | 6478723d180a8ef39941ba04b80c1eca9f437323 | /244. Shortest Word Distance II.py | c3713fd9f7ca35dff4badc5c14b57690aff0adeb | [] | no_license | NiuNiu-jupiter/Leetcode | 2a49a365898ecca393cb1eb53a47f4501b25952d | e278ae6ded32f6a2d054ae11ad8fcc45e7bd0f86 | refs/heads/master | 2022-11-22T01:05:57.417538 | 2020-07-28T23:34:39 | 2020-07-28T23:34:39 | 182,104,119 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,465 | py | """
Design a class which receives a list of words in the constructor, and implements a method that takes two words word1 and word2 and return the shortest distance between these two words in the list. Your method will be called repeatedly many times with different parameters.
Example:
Assume that words = ["practice", "makes", "perfect", "coding", "makes"].
Input: word1 = “coding”, word2 = “practice”
Output: 3
Input: word1 = "makes", word2 = "coding"
Output: 1
Note:
You may assume that word1 does not equal to word2, and word1 and word2 are both in the list.
"""
class WordDistance:
def __init__(self, words: List[str]):
self.len = len(words)
self.dict = {}
for i, v in enumerate(words):
if not self.dict.get(v):
self.dict[v] = [i]
else:
self.dict[v].append(i)
def shortest(self, word1: str, word2: str) -> int:
l1 , l2 = self.dict[word1],self.dict[word2]
res = self.len
ptr1, ptr2 = 0 , 0
# O(m+n) time complexity
while ptr1 < len(l1) and ptr2 < len(l2):
res = min(res, abs( l1[ptr1] - l2[ptr2]))
if l1[ptr1] < l2[ptr2]:
ptr1 += 1
else:
ptr2 += 1
return res
# Your WordDistance object will be instantiated and called as such:
# obj = WordDistance(words)
# param_1 = obj.shortest(word1,word2)
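# Illustrative check against the example in the prompt above:
#   wd = WordDistance(["practice", "makes", "perfect", "coding", "makes"])
#   wd.shortest("coding", "practice")  # -> 3
#   wd.shortest("makes", "coding")     # -> 1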
| [
"[email protected]"
] | |
b0a397839d2484bb418a34d66f90b53b145a961e | 6b6f68f507746e3e39b0e8789af5d044e27d6b0a | /BinarySearch/0108_ConvertSortedArrayIntoBinarySearchTree_E.py | 9f2dbfc81f8e4a56f0487583f5e6e89bc92e38d1 | [] | no_license | PFZ86/LeetcodePractice | bb0012d8b3120451dda1745875836278d3362e45 | 6db9db1934bc0a8142124d8b56bf6c07bdf43d79 | refs/heads/master | 2021-08-28T08:43:27.343395 | 2021-08-17T20:38:32 | 2021-08-17T20:38:32 | 230,925,656 | 1 | 1 | null | 2021-08-17T20:38:32 | 2019-12-30T14:01:27 | Python | UTF-8 | Python | false | false | 876 | py | # https://leetcode.com/problems/convert-sorted-array-to-binary-search-tree/
# Definition for a binary tree node.
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
# Solution 1: binary search-like split
class Solution(object):
def sortedArrayToBST_helper(self, nums, start, end):
if start > end:
return None
mid = start + (end - start)/2
        node = TreeNode(nums[mid])
node.left = self.sortedArrayToBST_helper(nums, start, mid - 1)
node.right = self.sortedArrayToBST_helper(nums, mid + 1, end)
return node
def sortedArrayToBST(self, nums):
"""
:type nums: List[int]
:rtype: TreeNode
"""
return self.sortedArrayToBST_helper(nums, 0, len(nums) - 1)
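# Illustrative usage (assumes the TreeNode class sketched in the comment above):
#   root = Solution().sortedArrayToBST([-10, -3, 0, 5, 9])
#   root.val  # -> 0, since the middle element becomes the root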
| [
"[email protected]"
] | |
7df3bcbfa42d4a7f74ba3a7e9dbcecc69c3e55eb | f88e49b5aa336cea3bfd2fa46bb23048f6bfe875 | /gaussfold/aa/asparagine.py | ac313a65d071a28e58827c86e16e2d7054d3d2ea | [] | no_license | AntoinePassemiers/GDE-GaussFold | c50a05992447f1909b3357db40620e3ede3ffc16 | 323600a75a7b97286fd66d478111140b9496b076 | refs/heads/master | 2020-05-09T23:07:11.930093 | 2019-08-15T21:19:33 | 2019-08-15T21:19:33 | 181,492,718 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,583 | py | # -*- coding: utf-8 -*-
# asparagine.py
# author : Antoine Passemiers
from gaussfold.aa.amino_acid import AminoAcid
from gaussfold.atom import Bond, Group, Carbon, Hydrogen, Oxygen, Nitrogen
class Asparagine(AminoAcid):
def __init__(self, **kwargs):
AminoAcid.__init__(self, 'ASN', 'N', **kwargs)
# Add side chain atoms
self.COG_group = Group('COG')
self.ND2_group = Group('ND2')
self.CB = Carbon('CB')
self.add_atom(self.CB)
self.CG = Carbon('CG', q=0.38)
self.add_atom(self.CG)
self.COG_group.add(self.CG)
self.OD1 = Oxygen('OD1', q=-0.38)
self.add_atom(self.OD1)
self.COG_group.add(self.OD1)
self.ND2 = Nitrogen('ND2', q=-0.56)
self.add_atom(self.ND2)
self.ND2_group.add(self.ND2)
self.add_bond(Bond(self.CB, self.CA))
self.add_bond(Bond(self.CG, self.CB))
self.add_bond(Bond(self.OD1, self.CG, order=2))
self.add_bond(Bond(self.ND2, self.CG))
# Add hydrogens
self.H1 = Hydrogen('H1')
self.add_atom(self.H1)
self.add_bond(Bond(self.H1, self.CB))
self.H2 = Hydrogen('H2')
self.add_atom(self.H2)
self.add_bond(Bond(self.H2, self.CB))
self.HNA = Hydrogen('HNA', q=0.28)
self.add_atom(self.HNA)
self.add_bond(Bond(self.HNA, self.ND2))
self.ND2_group.add(self.HNA)
self.HNB = Hydrogen('HNB', q=0.28)
self.add_atom(self.HNB)
self.add_bond(Bond(self.HNB, self.ND2))
self.ND2_group.add(self.HNB)
| [
"[email protected]"
] | |
8005c05612bbec43b6ca81c5de5062dd6c6fc1ab | bc8d861814d85c240901c30ad4036611d961ffc0 | /src/dicttree/qt/log.py | 4acc51aa531ec8dafbd53ac0cffb84ad2f4eff12 | [] | no_license | chaoflow/dicttree.qt | 2178c306fcc9aea4c653c8e045c5172b702c8750 | 5652415db383ffa2ce62f068edeba1f2528354b2 | refs/heads/master | 2016-09-05T14:41:05.872310 | 2012-06-17T23:57:37 | 2012-06-17T23:57:37 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 89 | py | from dicttree.log import getLogger
logger = getLogger('qt')
getLogger = logger.getChild
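# The rebound getLogger returns children of the 'qt' logger, e.g.
# getLogger('tree') yields a logger named 'qt.tree' (standard
# logging.Logger.getChild semantics, assuming dicttree.log wraps logging).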
| [
"[email protected]"
] | |
09d0608d616c3fb1a0ec0cc817ff935fbc85a61b | 7ade5b48aa5b62594bcbd265030a2a7307ae3b2d | /kaggle_nuclei/full_image/preprocessor_gan_training.py | 4c13198bb3cd71db1428e4444639c55d1f2e7a1b | [] | no_license | SSS135/kaggle_dsbowl2018 | 9c5ca1b490bdbc806f8213c2f6564cbdf1c852d3 | 59d9eab05928178356e8bf744a67607c89f04761 | refs/heads/master | 2020-05-29T15:31:31.083189 | 2018-07-21T15:32:58 | 2018-07-21T15:32:58 | 189,217,873 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,003 | py | import copy
import math
import sys
import torch
import torch.nn.functional as F
import torch.utils.data
from optfn.cosine_annealing import CosineAnnealingRestartParam
from optfn.gadam import GAdam
from optfn.param_groups_getter import get_param_groups
from torch import nn
from torch.autograd import Variable
from torch.nn.modules.batchnorm import _BatchNorm
from tqdm import tqdm
from .dataset import NucleiDataset
from .iou import threshold_iou, iou
from .unet import UNet
def train_preprocessor_gan(train_data, epochs=15, pretrain_epochs=7, resnet=False):
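    """Adversarially train a UNet to predict SDF, contour and mask maps from
    nuclei images: a convolutional discriminator (GanD) scores real vs.
    generated map/image stacks, the generator is trained with a
    feature-matching loss plus SDF/contour regression losses, and the
    generator snapshot with the best smoothed thresholded IoU is returned."""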
dataset = NucleiDataset(train_data, supersample=1)
dataloader = torch.utils.data.DataLoader(dataset, shuffle=True, batch_size=4, pin_memory=True)
pad = dataloader.dataset.padding
# mask_channels = 2
# if resnet:
# gen_model = FPN(3).cuda()
# gen_optimizer = GAdam(get_param_groups(gen_model), lr=1e-4, betas=(0.9, 0.999), nesterov=0.0,
# weight_decay=5e-4, avg_sq_mode='tensor', amsgrad=False)
# else:
gen_model = UNet(3, 3).cuda()
gen_optimizer = GAdam(get_param_groups(gen_model), lr=1e-4, betas=(0.5, 0.999),
amsgrad=False, nesterov=0.5, weight_decay=1e-5, norm_weight_decay=False)
disc_model = GanD(6).cuda()
disc_optimizer = GAdam(get_param_groups(disc_model), lr=1e-4, betas=(0.5, 0.999),
amsgrad=False, nesterov=0.5, weight_decay=1e-5, norm_weight_decay=False)
gen_model.apply(weights_init)
disc_model.apply(weights_init)
gen_scheduler = CosineAnnealingRestartParam(gen_optimizer, len(dataloader), 2)
disc_scheduler = CosineAnnealingRestartParam(disc_optimizer, len(dataloader), 2)
best_model = gen_model
best_score = -math.inf
sys.stdout.flush()
one = Variable(torch.cuda.FloatTensor([0.95]))
zero = Variable(torch.cuda.FloatTensor([0.05]))
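    # 0.95 / 0.05 act as label-smoothed real/fake targets for the
    # discriminator's binary cross-entropy terms below.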
for epoch in range(epochs):
with tqdm(dataloader) as pbar:
if resnet:
gen_model.freeze_pretrained_layers(epoch < pretrain_epochs)
t_iou_ma, f_iou_ma, sdf_loss_ma, cont_loss_ma = 0, 0, 0, 0
for i, (img, mask, sdf, obj_info) in enumerate(pbar):
x_train = Variable(img.cuda())
sdf_train = Variable(sdf.cuda())
mask_train = mask.cuda()
x_train_unpad = x_train[:, :, pad:-pad, pad:-pad]
# mask_train = remove_missized_objects(mask_train, 0.1, 0.6)
mask_target = (mask_train > 0).float().clamp(min=0.05, max=0.95)
mask_target = Variable(mask_target)
# mask_target = (mask_train.data > 0).float() * 2 - 1
# mask_target = mask_target.mul_(3).add_(0.5 * mask_target.clone().normal_(0, 1))
# mask_target = F.sigmoid(Variable(mask_target))
# split_mask = split_labels(mask_train.data, mask_channels).float()
# split_mask = split_mask.sub_(0.5).mul_(6).add_(0.5 * split_mask.clone().normal_(0, 1))
# split_mask = F.softmax(Variable(split_mask), 1)
# split_mask = split_mask.clamp(min=0.05, max=0.95) + split_mask.clone().uniform_(-0.05, 0.05)
# split_mask /= split_mask.sum(1, keepdim=True)
# split_mask = Variable(split_mask)
cont_train = 1 - sdf_train.data ** 2
cont_train = (cont_train.clamp(0.9, 1) - 0.9) * 20 - 1
cont_train = Variable(cont_train)
# discriminator
disc_optimizer.zero_grad()
# for p in disc_model.parameters():
# p.data.clamp_(-0.01, 0.01)
real_input = torch.cat([mask_target, sdf_train, cont_train, x_train_unpad], 1)
real_d, real_features = disc_model(real_input)
loss_real = 0
# loss_real += -real_d.mean()
loss_real += F.binary_cross_entropy_with_logits(real_d, one.expand_as(real_d))
# loss_real += 0.5 * (1 - real_d.clamp(max=1)).pow_(2).mean()
loss_real.backward()
# gen_noise = Variable(x_train.data.new(x_train.shape[0], 1, *x_train.shape[2:]).normal_(0, 1))
# gen_input = torch.cat([x_train, gen_noise], 1)
out = gen_model(x_train)[:, :, pad:-pad, pad:-pad]
out_sdf, out_cont, out_mask = out[:, 0:1], out[:, 1:2], out[:, 2:3]
out_mask, out_sdf = F.sigmoid(out_mask), out_sdf.contiguous()
fake_input = torch.cat([out_mask, out_sdf, out_cont, x_train_unpad], 1)
fake_d, fake_features = disc_model(fake_input.detach())
loss_fake = 0
# loss_fake += fake_d.mean()
loss_fake += F.binary_cross_entropy_with_logits(fake_d, zero.expand_as(fake_d))
# loss_fake += 0.5 * (-1 - fake_d.clamp(min=-1)).pow_(2).mean()
loss_fake.backward()
disc_optimizer.step()
# generator
gen_optimizer.zero_grad()
gen_d, fake_features = disc_model(fake_input)
loss_gen = 0
# loss_gen += -gen_d.mean()
# loss_gen += F.binary_cross_entropy_with_logits(gen_d, one.expand_as(gen_d))
# loss_gen += 0.5 * (1 - gen_d.div(3).clamp(min=-1)).pow_(2).mean()
loss_gen += F.mse_loss(fake_features, real_features.detach())
# loss_gen += F.mse_loss(gen_d, real_d.detach())
sdf_loss = F.mse_loss(out_sdf, sdf_train)
cont_loss = F.mse_loss(out_cont, cont_train)
# mask_loss = soft_dice_loss(out_mask, mask_target)
loss = loss_gen + sdf_loss + cont_loss # sdf_loss.mean() + cont_loss.mean() #+ mask_loss
loss.backward()
gen_optimizer.step()
gen_scheduler.step()
disc_scheduler.step()
# flat_out_mask = (out_mask.max(1)[1] != out_mask.shape[1] - 1).float()
f_iou = iou(out_mask.data, mask_train)
t_iou = threshold_iou(f_iou)
bc = 1 - 0.99 ** (i + 1)
sdf_loss_ma = 0.99 * sdf_loss_ma + 0.01 * sdf_loss.data[0]
cont_loss_ma = 0.99 * cont_loss_ma + 0.01 * cont_loss.data[0]
t_iou_ma = 0.99 * t_iou_ma + 0.01 * t_iou.mean()
f_iou_ma = 0.99 * f_iou_ma + 0.01 * f_iou.mean()
pbar.set_postfix(epoch=epoch, T_IoU=t_iou_ma / bc, SDF=sdf_loss_ma / bc,
Cont=cont_loss_ma / bc, IOU=f_iou_ma / bc, refresh=False)
if t_iou_ma > best_score:
best_score = t_iou_ma
best_model = copy.deepcopy(gen_model)
return best_model
def remove_missized_objects(mask, min_size, max_size):
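    """Zero out labelled objects whose relative size (sqrt of the object's
    pixel count divided by sqrt of the mask area) falls outside
    [min_size, max_size]."""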
mask = mask.clone()
obj_count = mask.max()
mask_size = math.sqrt(mask.shape[2] * mask.shape[3])
for idx in range(1, obj_count + 1):
obj_mask = mask == idx
size = math.sqrt(obj_mask.sum()) / mask_size
if size < min_size or size > max_size:
mask[obj_mask] = 0
return mask
# custom weights initialization called on netG and netD
def weights_init(m):
# if isinstance(m, _ConvNd) or isinstance(m, nn.Linear):
# # m.weight.data.normal_(0.0, 0.02)
# if m.bias is not None:
# m.bias.data.fill_(0)
if isinstance(m, _BatchNorm):
m.weight.data.normal_(1.0, 0.05)
m.bias.data.fill_(0)
def split_labels(mask, num_split_channels):
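    """Scatter instance labels into num_split_channels binary channels: each
    object id is assigned to a random channel among the first
    num_split_channels - 1, and the last channel marks the background.
    Returns a byte tensor of shape (N, num_split_channels, H, W)."""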
assert num_split_channels >= 2
mask_label_count = mask.max()
split_mask = mask.new(mask.shape[0], num_split_channels, *mask.shape[2:]).byte().fill_(0)
if mask_label_count == 0:
return split_mask
c_idx = mask.new(mask_label_count).random_(num_split_channels - 1)
split_mask[:, -1] = (mask == 0).squeeze(1)
for mc in range(mask_label_count):
split_mask[:, c_idx[mc]] |= (mask == mc + 1).squeeze(1)
return split_mask
class GanD(nn.Module):
def __init__(self, nc=3, nf=64):
super().__init__()
self.head = nn.Sequential(
# input is (nc) x 64 x 64
nn.Conv2d(nc, nf, 4, 2, 1, bias=False),
nn.ReLU(inplace=True),
# state size. (ndf) x 32 x 32
nn.Conv2d(nf, nf * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(nf * 2),
nn.ReLU(inplace=True),
# state size. (ndf*2) x 16 x 16
nn.Conv2d(nf * 2, nf * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(nf * 4),
nn.ReLU(inplace=True),
# state size. (ndf*4) x 8 x 8
nn.Conv2d(nf * 4, nf * 8, 4, 2, 1, bias=False),
)
self.tail = nn.Sequential(
nn.BatchNorm2d(nf * 8),
nn.ReLU(inplace=True),
# state size. (ndf*8) x 4 x 4
nn.Conv2d(nf * 8, 1, 4, 1, 0, bias=False),
)
def forward(self, input):
features = self.head(input)
output = self.tail(features)
return output.view(output.shape[0], -1).mean(-1), features
class GanD_UNet(nn.Module):
def __init__(self, nc=3, nf=64):
super().__init__()
self.unet = UNet(nc, 1, f=nf)
# self.conv = nn.Conv2d(1, 1, 8, 4)
# self.head = nn.Sequential(
# nn.Linear(nf * 2, nf * 2),
# nn.BatchNorm2d(nf * 2),
# nn.ReLU(True),
# nn.Linear(nf * 2, 1),
# )
def forward(self, input):
features = self.unet(input)
# output = features
# while output.shape[2] >= self.conv.kernel_size[0]:
# output = self.conv(output)
output = features.view(input.shape[0], -1).mean()
# output = features.view(*features.shape[:2], -1)
# output = torch.cat([output.mean(-1), output.std(-1)], 1)
# output = self.head(output).view(-1)
        return output, features
| [
"[email protected]"
] | |
c813db17e0b517e6d6d3eb324d57cdbd6b573e6a | 95d0806ce766805beffba8144e4b83076b5f8b91 | /hongwai/exam/thermo+hongwai_v01_hongwai.py | 381646566d49c0ddec0464b85706d2008c0c75f5 | [] | no_license | chengw99/dell | 1f4d5a2f20f3e61208266dc4a0adc4e18cd44ff8 | 4f932c3f0d3deb545700a9616456fb0beeb733d0 | refs/heads/master | 2021-04-15T08:53:49.011550 | 2019-03-23T14:56:41 | 2019-03-23T14:56:41 | 104,961,142 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 667 | py | # -*- coding: utf-8 -*-
"""
Created on Thu Nov 15 14:34:03 2018
@author: DELL
"""
# 此程序根据 thermo+hongwai_v01_pic_coord.py 选出的数据范围 对红外数据进行面平均
# 因为每个时间红外数据选择的范围都不同,所以不能做批量处理
import numpy as np
import pandas as pd
ipath = r'E:\py_data\hongwai'
opath = r'E:\py_output'
data = pd.read_csv(ipath+'\\'+'one-facing south-23.csv',names=np.arange(480)) # 调整输入文件
result = data.ix[1555:1595,220:260] # 选取位置的范围 坐标+—20 取面平均
a = np.array(result)
q = a.sum()/((a.shape[0])*(a.shape[1]))
print(q)
#result.to_csv(opath+'\\'+'30cm-00.csv') | [
"[email protected]"
] | |
2c2bf0e08f45b674f996afac25eb10688eae8afa | b5e8dc7c21659ac33b6e242a298a44d30bfa3610 | /env-prod/Lib/site-packages/sklearn/feature_selection/_univariate_selection.py | 3ad7abb6f9804830d88331b85a864e2066932f54 | [] | no_license | rpuegue/CoursA61 | c168b44cd9835ad7524c97b2983305c56acd8096 | a4fc8f7504491eb94cb2f1d2bf6d16674901a0c5 | refs/heads/main | 2023-03-10T09:08:31.293546 | 2021-02-22T16:25:51 | 2021-02-22T16:25:51 | 339,574,678 | 0 | 0 | null | 2021-02-22T16:25:51 | 2021-02-17T00:57:41 | Python | UTF-8 | Python | false | false | 28,925 | py | """Univariate features selection."""
# Authors: V. Michel, B. Thirion, G. Varoquaux, A. Gramfort, E. Duchesnay.
# L. Buitinck, A. Joly
# License: BSD 3 clause
import numpy as np
import warnings
from scipy import special, stats
from scipy.sparse import issparse
from ..base import BaseEstimator
from ..preprocessing import LabelBinarizer
from ..utils import (as_float_array, check_array, check_X_y, safe_sqr,
safe_mask)
from ..utils.extmath import safe_sparse_dot, row_norms
from ..utils.validation import check_is_fitted
from ._base import SelectorMixin
def _clean_nans(scores):
"""
Fixes Issue #1240: NaNs can't be properly compared, so change them to the
smallest value of scores's dtype. -inf seems to be unreliable.
"""
# XXX where should this function be called? fit? scoring functions
# themselves?
scores = as_float_array(scores, copy=True)
scores[np.isnan(scores)] = np.finfo(scores.dtype).min
return scores
######################################################################
# Scoring functions
# The following function is a rewriting of scipy.stats.f_oneway
# Contrary to the scipy.stats.f_oneway implementation it does not
# copy the data while keeping the inputs unchanged.
def f_oneway(*args):
"""Performs a 1-way ANOVA.
The one-way ANOVA tests the null hypothesis that 2 or more groups have
the same population mean. The test is applied to samples from two or
more groups, possibly with differing sizes.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
*args : array_like, sparse matrices
sample1, sample2... The sample measurements should be given as
arguments.
Returns
-------
F-value : float
The computed F-value of the test.
p-value : float
The associated p-value from the F-distribution.
Notes
-----
The ANOVA test has important assumptions that must be satisfied in order
for the associated p-value to be valid.
1. The samples are independent
2. Each sample is from a normally distributed population
3. The population standard deviations of the groups are all equal. This
property is known as homoscedasticity.
If these assumptions are not true for a given set of data, it may still be
possible to use the Kruskal-Wallis H-test (`scipy.stats.kruskal`_) although
with some loss of power.
The algorithm is from Heiman[2], pp.394-7.
See ``scipy.stats.f_oneway`` that should give the same results while
being less efficient.
References
----------
.. [1] Lowry, Richard. "Concepts and Applications of Inferential
Statistics". Chapter 14.
http://faculty.vassar.edu/lowry/ch14pt1.html
.. [2] Heiman, G.W. Research Methods in Statistics. 2002.
"""
n_classes = len(args)
args = [as_float_array(a) for a in args]
n_samples_per_class = np.array([a.shape[0] for a in args])
n_samples = np.sum(n_samples_per_class)
ss_alldata = sum(safe_sqr(a).sum(axis=0) for a in args)
sums_args = [np.asarray(a.sum(axis=0)) for a in args]
square_of_sums_alldata = sum(sums_args) ** 2
square_of_sums_args = [s ** 2 for s in sums_args]
sstot = ss_alldata - square_of_sums_alldata / float(n_samples)
ssbn = 0.
for k, _ in enumerate(args):
ssbn += square_of_sums_args[k] / n_samples_per_class[k]
ssbn -= square_of_sums_alldata / float(n_samples)
sswn = sstot - ssbn
dfbn = n_classes - 1
dfwn = n_samples - n_classes
msb = ssbn / float(dfbn)
msw = sswn / float(dfwn)
constant_features_idx = np.where(msw == 0.)[0]
if (np.nonzero(msb)[0].size != msb.size and constant_features_idx.size):
warnings.warn("Features %s are constant." % constant_features_idx,
UserWarning)
f = msb / msw
# flatten matrix to vector in sparse case
f = np.asarray(f).ravel()
prob = special.fdtrc(dfbn, dfwn, f)
return f, prob
def f_classif(X, y):
"""Compute the ANOVA F-value for the provided sample.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
X : {array-like, sparse matrix} shape = [n_samples, n_features]
The set of regressors that will be tested sequentially.
y : array of shape(n_samples)
The data matrix.
Returns
-------
F : array, shape = [n_features,]
The set of F values.
pval : array, shape = [n_features,]
The set of p-values.
See also
--------
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
"""
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'])
args = [X[safe_mask(X, y == k)] for k in np.unique(y)]
return f_oneway(*args)
def _chisquare(f_obs, f_exp):
"""Fast replacement for scipy.stats.chisquare.
Version from https://github.com/scipy/scipy/pull/2525 with additional
optimizations.
"""
f_obs = np.asarray(f_obs, dtype=np.float64)
k = len(f_obs)
# Reuse f_obs for chi-squared statistics
chisq = f_obs
chisq -= f_exp
chisq **= 2
with np.errstate(invalid="ignore"):
chisq /= f_exp
chisq = chisq.sum(axis=0)
return chisq, special.chdtrc(k - 1, chisq)
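# Illustrative check: for f_obs = [[10.], [20.]] and f_exp = [[15.], [15.]]
# the statistic is (10-15)**2/15 + (20-15)**2/15 = 50/15 ~ 3.33 with one
# degree of freedom, matching scipy.stats.chisquare on the same inputs.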
def chi2(X, y):
"""Compute chi-squared stats between each non-negative feature and class.
This score can be used to select the n_features features with the
highest values for the test chi-squared statistic from X, which must
contain only non-negative features such as booleans or frequencies
(e.g., term counts in document classification), relative to the classes.
Recall that the chi-square test measures dependence between stochastic
variables, so using this function "weeds out" the features that are the
most likely to be independent of class and therefore irrelevant for
classification.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Sample vectors.
y : array-like of shape (n_samples,)
Target vector (class labels).
Returns
-------
chi2 : array, shape = (n_features,)
chi2 statistics of each feature.
pval : array, shape = (n_features,)
p-values of each feature.
Notes
-----
Complexity of this algorithm is O(n_classes * n_features).
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
f_regression: F-value between label/feature for regression tasks.
"""
# XXX: we might want to do some of the following in logspace instead for
# numerical stability.
X = check_array(X, accept_sparse='csr')
if np.any((X.data if issparse(X) else X) < 0):
raise ValueError("Input X must be non-negative.")
Y = LabelBinarizer().fit_transform(y)
if Y.shape[1] == 1:
Y = np.append(1 - Y, Y, axis=1)
observed = safe_sparse_dot(Y.T, X) # n_classes * n_features
feature_count = X.sum(axis=0).reshape(1, -1)
class_prob = Y.mean(axis=0).reshape(1, -1)
expected = np.dot(class_prob.T, feature_count)
return _chisquare(observed, expected)
def f_regression(X, y, center=True):
"""Univariate linear regression tests.
Linear model for testing the individual effect of each of many regressors.
This is a scoring function to be used in a feature selection procedure, not
a free standing feature selection procedure.
This is done in 2 steps:
1. The correlation between each regressor and the target is computed,
that is, ((X[:, i] - mean(X[:, i])) * (y - mean_y)) / (std(X[:, i]) *
std(y)).
2. It is converted to an F score then to a p-value.
For more on usage see the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
X : {array-like, sparse matrix} shape = (n_samples, n_features)
The set of regressors that will be tested sequentially.
y : array of shape(n_samples).
The data matrix
center : True, bool,
If true, X and y will be centered.
Returns
-------
F : array, shape=(n_features,)
F values of features.
pval : array, shape=(n_features,)
p-values of F-scores.
See also
--------
mutual_info_regression: Mutual information for a continuous target.
f_classif: ANOVA F-value between label/feature for classification tasks.
chi2: Chi-squared stats of non-negative features for classification tasks.
SelectKBest: Select features based on the k highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFdr: Select features based on an estimated false discovery rate.
SelectFwe: Select features based on family-wise error rate.
SelectPercentile: Select features based on percentile of the highest
scores.
"""
X, y = check_X_y(X, y, ['csr', 'csc', 'coo'], dtype=np.float64)
n_samples = X.shape[0]
# compute centered values
# note that E[(x - mean(x))*(y - mean(y))] = E[x*(y - mean(y))], so we
# need not center X
if center:
y = y - np.mean(y)
if issparse(X):
X_means = X.mean(axis=0).getA1()
else:
X_means = X.mean(axis=0)
# compute the scaled standard deviations via moments
X_norms = np.sqrt(row_norms(X.T, squared=True) -
n_samples * X_means ** 2)
else:
X_norms = row_norms(X.T)
# compute the correlation
corr = safe_sparse_dot(y, X)
corr /= X_norms
corr /= np.linalg.norm(y)
# convert to p-value
degrees_of_freedom = y.size - (2 if center else 1)
F = corr ** 2 / (1 - corr ** 2) * degrees_of_freedom
pv = stats.f.sf(F, 1, degrees_of_freedom)
return F, pv
######################################################################
# Base classes
class _BaseFilter(SelectorMixin, BaseEstimator):
"""Initialize the univariate feature selection.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues) or a single array with scores.
"""
def __init__(self, score_func):
self.score_func = score_func
def fit(self, X, y):
"""Run score function on (X, y) and get the appropriate features.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The training input samples.
y : array-like of shape (n_samples,)
The target values (class labels in classification, real numbers in
regression).
Returns
-------
self : object
"""
X, y = check_X_y(X, y, ['csr', 'csc'], multi_output=True)
if not callable(self.score_func):
raise TypeError("The score function should be a callable, %s (%s) "
"was passed."
% (self.score_func, type(self.score_func)))
self._check_params(X, y)
score_func_ret = self.score_func(X, y)
if isinstance(score_func_ret, (list, tuple)):
self.scores_, self.pvalues_ = score_func_ret
self.pvalues_ = np.asarray(self.pvalues_)
else:
self.scores_ = score_func_ret
self.pvalues_ = None
self.scores_ = np.asarray(self.scores_)
return self
def _check_params(self, X, y):
pass
######################################################################
# Specific filters
######################################################################
class SelectPercentile(_BaseFilter):
"""Select features according to a percentile of the highest scores.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues) or a single array with scores.
Default is f_classif (see below "See also"). The default function only
works with classification tasks.
percentile : int, optional, default=10
Percent of features to keep.
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores, None if `score_func` returned only scores.
Examples
--------
>>> from sklearn.datasets import load_digits
>>> from sklearn.feature_selection import SelectPercentile, chi2
>>> X, y = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> X_new = SelectPercentile(chi2, percentile=10).fit_transform(X, y)
>>> X_new.shape
(1797, 7)
Notes
-----
Ties between features with equal scores will be broken in an unspecified
way.
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
mutual_info_classif: Mutual information for a discrete target.
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
mutual_info_regression: Mutual information for a continuous target.
SelectKBest: Select features based on the k highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFdr: Select features based on an estimated false discovery rate.
SelectFwe: Select features based on family-wise error rate.
GenericUnivariateSelect: Univariate feature selector with configurable mode.
"""
def __init__(self, score_func=f_classif, percentile=10):
super().__init__(score_func)
self.percentile = percentile
def _check_params(self, X, y):
if not 0 <= self.percentile <= 100:
raise ValueError("percentile should be >=0, <=100; got %r"
% self.percentile)
def _get_support_mask(self):
check_is_fitted(self)
# Cater for NaNs
if self.percentile == 100:
return np.ones(len(self.scores_), dtype=np.bool)
elif self.percentile == 0:
return np.zeros(len(self.scores_), dtype=np.bool)
scores = _clean_nans(self.scores_)
threshold = np.percentile(scores, 100 - self.percentile)
mask = scores > threshold
ties = np.where(scores == threshold)[0]
if len(ties):
max_feats = int(len(scores) * self.percentile / 100)
kept_ties = ties[:max_feats - mask.sum()]
mask[kept_ties] = True
return mask
class SelectKBest(_BaseFilter):
"""Select features according to the k highest scores.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues) or a single array with scores.
Default is f_classif (see below "See also"). The default function only
works with classification tasks.
k : int or "all", optional, default=10
Number of top features to select.
The "all" option bypasses selection, for use in a parameter search.
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores, None if `score_func` returned only scores.
Examples
--------
>>> from sklearn.datasets import load_digits
>>> from sklearn.feature_selection import SelectKBest, chi2
>>> X, y = load_digits(return_X_y=True)
>>> X.shape
(1797, 64)
>>> X_new = SelectKBest(chi2, k=20).fit_transform(X, y)
>>> X_new.shape
(1797, 20)
Notes
-----
Ties between features with equal scores will be broken in an unspecified
way.
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
mutual_info_classif: Mutual information for a discrete target.
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
mutual_info_regression: Mutual information for a continuous target.
SelectPercentile: Select features based on percentile of the highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFdr: Select features based on an estimated false discovery rate.
SelectFwe: Select features based on family-wise error rate.
GenericUnivariateSelect: Univariate feature selector with configurable mode.
"""
def __init__(self, score_func=f_classif, k=10):
super().__init__(score_func)
self.k = k
def _check_params(self, X, y):
if not (self.k == "all" or 0 <= self.k <= X.shape[1]):
raise ValueError("k should be >=0, <= n_features = %d; got %r. "
"Use k='all' to return all features."
% (X.shape[1], self.k))
def _get_support_mask(self):
check_is_fitted(self)
if self.k == 'all':
return np.ones(self.scores_.shape, dtype=bool)
elif self.k == 0:
return np.zeros(self.scores_.shape, dtype=bool)
else:
scores = _clean_nans(self.scores_)
mask = np.zeros(scores.shape, dtype=bool)
# Request a stable sort. Mergesort takes more memory (~40MB per
# megafeature on x86-64).
mask[np.argsort(scores, kind="mergesort")[-self.k:]] = 1
return mask
class SelectFpr(_BaseFilter):
"""Filter: Select the pvalues below alpha based on a FPR test.
FPR test stands for False Positive Rate test. It controls the total
amount of false detections.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues).
Default is f_classif (see below "See also"). The default function only
works with classification tasks.
alpha : float, optional
The highest p-value for features to be kept.
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores.
Examples
--------
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFpr, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> X_new = SelectFpr(chi2, alpha=0.01).fit_transform(X, y)
>>> X_new.shape
(569, 16)
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
chi2: Chi-squared stats of non-negative features for classification tasks.
mutual_info_classif:
f_regression: F-value between label/feature for regression tasks.
mutual_info_regression: Mutual information between features and the target.
SelectPercentile: Select features based on percentile of the highest scores.
SelectKBest: Select features based on the k highest scores.
SelectFdr: Select features based on an estimated false discovery rate.
SelectFwe: Select features based on family-wise error rate.
GenericUnivariateSelect: Univariate feature selector with configurable mode.
"""
def __init__(self, score_func=f_classif, alpha=5e-2):
super().__init__(score_func)
self.alpha = alpha
def _get_support_mask(self):
check_is_fitted(self)
return self.pvalues_ < self.alpha
class SelectFdr(_BaseFilter):
"""Filter: Select the p-values for an estimated false discovery rate
This uses the Benjamini-Hochberg procedure. ``alpha`` is an upper bound
on the expected false discovery rate.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues).
Default is f_classif (see below "See also"). The default function only
works with classification tasks.
alpha : float, optional
The highest uncorrected p-value for features to keep.
Examples
--------
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFdr, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> X_new = SelectFdr(chi2, alpha=0.01).fit_transform(X, y)
>>> X_new.shape
(569, 16)
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores.
References
----------
https://en.wikipedia.org/wiki/False_discovery_rate
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
mutual_info_classif: Mutual information for a discrete target.
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
    mutual_info_regression: Mutual information for a continuous target.
SelectPercentile: Select features based on percentile of the highest scores.
SelectKBest: Select features based on the k highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFwe: Select features based on family-wise error rate.
GenericUnivariateSelect: Univariate feature selector with configurable mode.
"""
def __init__(self, score_func=f_classif, alpha=5e-2):
super().__init__(score_func)
self.alpha = alpha
def _get_support_mask(self):
check_is_fitted(self)
n_features = len(self.pvalues_)
sv = np.sort(self.pvalues_)
selected = sv[sv <= float(self.alpha) / n_features *
np.arange(1, n_features + 1)]
if selected.size == 0:
return np.zeros_like(self.pvalues_, dtype=bool)
return self.pvalues_ <= selected.max()
class SelectFwe(_BaseFilter):
"""Filter: Select the p-values corresponding to Family-wise error rate
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
(scores, pvalues).
Default is f_classif (see below "See also"). The default function only
works with classification tasks.
alpha : float, optional
The highest uncorrected p-value for features to keep.
Examples
--------
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import SelectFwe, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> X_new = SelectFwe(chi2, alpha=0.01).fit_transform(X, y)
>>> X_new.shape
(569, 15)
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores.
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
SelectPercentile: Select features based on percentile of the highest scores.
SelectKBest: Select features based on the k highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFdr: Select features based on an estimated false discovery rate.
GenericUnivariateSelect: Univariate feature selector with configurable mode.
"""
def __init__(self, score_func=f_classif, alpha=5e-2):
super().__init__(score_func)
self.alpha = alpha
def _get_support_mask(self):
check_is_fitted(self)
return (self.pvalues_ < self.alpha / len(self.pvalues_))
######################################################################
# Generic filter
######################################################################
# TODO this class should fit on either p-values or scores,
# depending on the mode.
class GenericUnivariateSelect(_BaseFilter):
"""Univariate feature selector with configurable strategy.
Read more in the :ref:`User Guide <univariate_feature_selection>`.
Parameters
----------
score_func : callable
Function taking two arrays X and y, and returning a pair of arrays
        (scores, pvalues). For modes 'percentile' or 'k_best' it can return
a single array scores.
mode : {'percentile', 'k_best', 'fpr', 'fdr', 'fwe'}
Feature selection mode.
param : float or int depending on the feature selection mode
Parameter of the corresponding mode.
Attributes
----------
scores_ : array-like of shape (n_features,)
Scores of features.
pvalues_ : array-like of shape (n_features,)
p-values of feature scores, None if `score_func` returned scores only.
Examples
--------
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.feature_selection import GenericUnivariateSelect, chi2
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X.shape
(569, 30)
>>> transformer = GenericUnivariateSelect(chi2, 'k_best', param=20)
>>> X_new = transformer.fit_transform(X, y)
>>> X_new.shape
(569, 20)
See also
--------
f_classif: ANOVA F-value between label/feature for classification tasks.
mutual_info_classif: Mutual information for a discrete target.
chi2: Chi-squared stats of non-negative features for classification tasks.
f_regression: F-value between label/feature for regression tasks.
mutual_info_regression: Mutual information for a continuous target.
SelectPercentile: Select features based on percentile of the highest scores.
SelectKBest: Select features based on the k highest scores.
SelectFpr: Select features based on a false positive rate test.
SelectFdr: Select features based on an estimated false discovery rate.
SelectFwe: Select features based on family-wise error rate.
"""
_selection_modes = {'percentile': SelectPercentile,
'k_best': SelectKBest,
'fpr': SelectFpr,
'fdr': SelectFdr,
'fwe': SelectFwe}
def __init__(self, score_func=f_classif, mode='percentile', param=1e-5):
super().__init__(score_func)
self.mode = mode
self.param = param
def _make_selector(self):
selector = self._selection_modes[self.mode](score_func=self.score_func)
# Now perform some acrobatics to set the right named parameter in
# the selector
possible_params = selector._get_param_names()
possible_params.remove('score_func')
selector.set_params(**{possible_params[0]: self.param})
return selector
def _check_params(self, X, y):
if self.mode not in self._selection_modes:
raise ValueError("The mode passed should be one of %s, %r,"
" (type %s) was passed."
% (self._selection_modes.keys(), self.mode,
type(self.mode)))
self._make_selector()._check_params(X, y)
def _get_support_mask(self):
check_is_fitted(self)
selector = self._make_selector()
selector.pvalues_ = self.pvalues_
selector.scores_ = self.scores_
return selector._get_support_mask()
| [
"[email protected]"
] | |
0737a2f6e17c65219c251686d16823aafc690950 | 414db33a43c50a500741784eea627ba98bb63e27 | /0x0B-python-input_output/3-write_file.py | ae0c2b1e364ed751046127fc160d45a253030029 | [] | no_license | rayraib/holbertonschool-higher_level_programming | 2308ea02bd7f97eae3643e3ce0a6489cc1ad9ff5 | 6b4196eb890ffcb91e541431da9f5f57c5b85d4e | refs/heads/master | 2021-09-14T09:12:26.664653 | 2018-05-11T03:23:12 | 2018-05-11T03:23:12 | 113,070,818 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 236 | py | #!/usr/bin/python3
'''write to file'''
def write_file(filename="", text=""):
    '''open file to write text and return the number of characters written'''
    with open(filename, 'w', encoding='utf-8') as f:
        return f.write(text)
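# Illustrative usage (hypothetical file name):
#   write_file("my_first_file.txt", "This School is so cool!\n")  # -> 24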
| [
"[email protected]"
] | |
ea2782e8c450f6c5a2dc9fb342bdfef834919c34 | 0a9949a7dbe5f7d70028b22779b3821c62eb6510 | /gt_order_mongodb/conf/source_getdata0417.py | 4d878aad59c7d787d1dd3cd6a30173596d185ecb | [] | no_license | 744996162/warehouse | ed34f251addb9438a783945b6eed5eabe18ef5a2 | 3efd299a59a0703a1a092c58a6f7dc2564b92e4d | refs/heads/master | 2020-06-04T22:10:14.727156 | 2015-07-03T09:40:09 | 2015-07-03T09:40:09 | 35,603,929 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,784 | py | # -*- coding: utf-8 -*-
__author__ = 'Administrator'
import sys
reload(sys)
import datetime
from dateutil.parser import parse
sys.setdefaultencoding('utf-8')
from source_mysql import *
sys.path.append('..')
from conf import *
import model
class GtOrderDao(Mysql):
def __init__(self, dbtype="local"):
Mysql.__init__(self, dbtype=dbtype)
def get_orders(self, sql, model=model.Model_Order):
results = self.get_all(sql)
model_list = []
if not results:
return []
for row in results:
o_model = model()
o_model.setVale(str(row[0]), str(row[1]), str(row[2]), str(row[3]), str(row[4]), str(row[5]), str(row[6]), str(row[7]), str(row[8]), str(row[9]), str(row[10]), str(row[11]))
model_list.append(o_model)
return model_list
class GtOrderSubDao(Mysql):
def __init__(self, dbtype="local"):
Mysql.__init__(self, dbtype=dbtype)
def get_orders(self, sql, model=model.Model_OrderSub):
results = self.get_all(sql)
model_list = []
if not results:
return []
for row in results:
o_model = model()
o_model.order_id = str(row[0])
o_model.uid = str(row[1])
o_model.account = str(row[2])
o_model.p_info = str(row[3])
o_model.depart_date = str(row[4])
o_model.train_no = str(row[5])
o_model.depart_name = str(row[6])
o_model.arrive_name = str(row[7])
o_model.name = str(row[8])
o_model.card_type = str(row[9])
o_model.card_no = str(row[10])
o_model.phone = str(row[11])
o_model.seat_name = str(row[12])
o_model.ticket_type = str(row[13])
o_model.status = str(row[14])
o_model.price = str(row[15])
o_model.create_time = str(row[16])
model_list.append(o_model)
return model_list
class QueryOrdersDao(GtOrderDao):
def __init__(self, dbtype="gtgj89"):
GtOrderDao.__init__(self,dbtype=dbtype)
self.tablename = 'user_order_history'
self.out_base_path = '/home/huolibi/data/gt_order_all/order/'
def get_orders(self, start_day, end_day=""):
if start_day == end_day or end_day == "":
sql = "select uid, p_info, account, order_date, i_status, depart_date, depart_name, arrive_name, ticket_count, train_no,amount,create_time " \
"from " + self.tablename + " " \
"where DATE_FORMAT(create_time,'%%Y%%m%%d')='%s' " % start_day
else:
sql = "select uid, p_info, account, order_date, i_status, depart_date, depart_name, arrive_name, ticket_count, train_no,amount,create_time " \
"from " + self.tablename + " " \
"where DATE_FORMAT(create_time,'%%Y%%m%%d')>='%s' " \
"and DATE_FORMAT(create_time,'%%Y%%m%%d')< '%s' " % (start_day, end_day)
print(sql)
model_result = super(QueryOrdersDao, self).get_orders(sql)
return model_result
def query_result_to_txt(self, start_day, end_day=""):
out_file_name = start_day + "_" + end_day
result_out_path = self.out_base_path + self.tablename + "_" + out_file_name
result_output=open(result_out_path, 'a')
model_result = self.get_orders(start_day, end_day)
for row in model_result:
out_str = row.getString()
result_output.write(out_str+'\n')
return result_out_path
class QueryOrdersSubDao(GtOrderSubDao):
def __init__(self, dbtype="gtgj89"):
GtOrderSubDao.__init__(self, dbtype=dbtype)
self.tablename = 'user_sub_order'
self.out_base_path = '/home/huolibi/data/gt_order_all/ordersub/'
def get_orders(self, start_day, end_day=""):
if start_day == end_day or end_day == "":
sql = "select order_id,uid,account,p_info,depart_date,train_no,depart_name,arrive_name,name,card_type,card_no,phone,seat_name,ticket_type,status,price,create_time " \
"from " + self.tablename + " " \
"where DATE_FORMAT(create_time,'%%Y%%m%%d')='%s' " % start_day
else:
sql = "select order_id,uid,account,p_info,depart_date,train_no,depart_name,arrive_name,name,card_type,card_no,phone,seat_name,ticket_type,status,price,create_time " \
"from " + self.tablename + " " \
"where DATE_FORMAT(create_time,'%%Y%%m%%d')>='%s' " \
"and DATE_FORMAT(create_time,'%%Y%%m%%d')< '%s' " % (start_day, end_day)
print(sql)
model_result = super(QueryOrdersSubDao, self).get_orders(sql)
return model_result
def query_result_to_txt(self, start_day, end_day=""):
out_file_name = start_day + "_" + end_day
result_out_path = self.out_base_path + self.tablename + "_" + out_file_name
result_output=open(result_out_path, 'a')
model_result = self.get_orders(start_day, end_day)
for row in model_result:
out_str = row.getString()
result_output.write(out_str+'\n')
return result_out_path
def getdata(start_date="20130701", end_date="20140101", delta=10):
    # Split [start_date, end_date) into consecutive windows of `delta` days.
    # (The parameter was originally misspelled "deata"; renamed throughout.)
    date_list = []
    s_day = parse(start_date)
    end_day = parse(end_date)
    days = (end_day - s_day).days
    print(s_day, end_day, days)
    for i in range(0, days, delta):
        day1 = (s_day + datetime.timedelta(days=i)).strftime('%Y%m%d')
        day2 = (s_day + datetime.timedelta(days=i + delta)).strftime('%Y%m%d')
        date_list.append([day1, day2])
    return date_list
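# Illustrative example (values worked out from the logic above, not taken
# from the original file):
#     getdata("20130701", "20130707", delta=3)
#     -> [['20130701', '20130704'], ['20130704', '20130707']]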
def test1():
t = QueryOrdersDao()
# tablename = t.tablename
sql ="select uid, p_info, account, order_date, i_status, depart_date, depart_name, arrive_name, ticket_count, train_no,amount,create_time from user_order "
# o_object = GtOrderDao()
results = t.query_result_to_txt("20140101", "20150101")
print(results)
def OrderOutTest():
    # NOTE: start_date falls after end_date here, so getdata() returns an
    # empty list and no files are written; the call is kept as written.
    date_list = getdata(start_date="20150105", end_date="20140215", delta=3)
    for date_arr in date_list:
        s_day = date_arr[0]
        end_day = date_arr[1]
        print(s_day, end_day)
        o_orderDao = QueryOrdersDao()
        results = o_orderDao.query_result_to_txt(s_day, end_day)
def OrderSubOutTest():
    date_list = getdata(start_date="20150214", end_date="20150315", delta=3)
    for date_arr in date_list:
        s_day = date_arr[0]
        end_day = date_arr[1]
        print(s_day, end_day)
        o_orderSubDao = QueryOrdersSubDao()
        results = o_orderSubDao.query_result_to_txt(s_day, end_day)
if __name__ == '__main__':
#o_orderSubDao = QueryOrdersSubDao()
#results = o_orderSubDao.query_result_to_txt("20130715", "20150101")
# OrderOutTest()
OrderSubOutTest()
pass
| [
"[email protected]"
] | |
fdce2f80d345ab41a55076388777658352217168 | 4d2531f7f4984109123bb70eb0ac3c8c08bb12e8 | /trash/faster_rcnn/.ipynb_checkpoints/faster_rcnn_r50_fpn_dcn_1x_trash-checkpoint.py | e845540dfb67a6500a02466dcf28428989daeb7f | [] | no_license | jhj9109/Segmentation_ObjectDetection | 0a3e8f92a90bed21a93b9c4cba7029330c1c4d2b | f40a0291b8513228be7475321ca59a1057f0aa27 | refs/heads/master | 2023-05-09T03:44:16.145448 | 2021-05-22T16:36:15 | 2021-05-22T16:36:15 | 369,830,033 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 246 | py | _base_ = 'faster_rcnn_r50_fpn_1x_trash.py'
model = dict(
backbone=dict(
dcn=dict(type='DCN',
deform_groups=1,
fallback_on_stride=False),
stage_with_dcn=(
False, True, True, True))) | [
"[email protected]"
] | |
8cc45ae5914c90e78e71724a0f2a83de0bfd00ab | ebac424304a4456193843472d3e91b5de79da514 | /order/views.py | 58924b80fb9fe0c21934ae0e514faa8fdca632ae | [] | no_license | haha479/bookstore | 1a0e7f73ed973ff06fd0c94cb92329baca39b527 | 3e85e2a62d4e9dde68c5f226c2e7558c6a06a03e | refs/heads/master | 2021-09-08T09:05:15.153693 | 2018-03-09T01:21:44 | 2018-03-09T01:21:44 | 115,857,026 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,927 | py | from django.shortcuts import render,redirect
from utils.decorators import login_required
from django.core.urlresolvers import reverse
from users.models import Address
from book.models import Books
from order.models import OrderInfo
from order.models import OrderGods
from django_redis import get_redis_connection
from django.db import transaction
from django.http import HttpResponse,JsonResponse
from datetime import datetime
import time
@login_required
def order_place(request):
    '''Render the order confirmation page.'''
    # Receive data
    books_ids = request.POST.getlist('books_ids')
    # Validate data
    if not all(books_ids):
        # Redirect back to the cart page
        return redirect(reverse('cart:show'))
    # The user's shipping addresses
    passport_id = request.session.get('passport_id')
    addrs = Address.objects.all()
    # Info for the books the user wants to buy
    books_li = []
    # Running totals for item count and price
    total_count = 0
    total_price = 0
    conn = get_redis_connection('default')
    cart_key = 'cart_%d' % passport_id
    for id in books_ids:
        # Fetch the book record by id
        book = Books.objects.get_books_by_id(books_id=id)
        # Get the purchase quantity for this book from redis
        count = conn.hget(cart_key, id)
        book.count = count
        # Compute the line subtotal (quantity * price)
        amount = int(book.count) * book.price
        book.amount = amount
        books_li.append(book)
        # Accumulate the total item count
        total_count += int(count)
        # Add this line's subtotal to the total price
        total_price += book.amount
    # Shipping fee and the amount actually payable
    transit_price = 10
    total_pay = total_price + transit_price
    # Join the ids into a comma-separated string, e.g. "1,2,3"
    books_ids = ','.join(books_ids)
    # Build the template context
    context = {
        'addrs': addrs,
        'books_li': books_li,
        'total_count': total_count,
        'total_price': total_price,
        'transit_price': transit_price,
        'total_pay': total_pay,
        'books_ids': books_ids,
    }
    # Render the template
    return render(request, 'order/place_order.html', context=context)
@transaction.atomic
def order_commit(request):
    '''Create an order from the submitted cart items.'''
    # Check that the user is logged in
    if not request.session.has_key('islogin'):
        return JsonResponse({'res': 0, 'errmsg': 'User not logged in'})
    # Receive data
    addr_id = request.POST.get('addr_id')
    pay_method = request.POST.get('pay_method')
    books_id = request.POST.get('books_ids')
    # Validate the data
    if not all([addr_id, pay_method, books_id]):
        return JsonResponse({'res': 1, 'errmsg': 'Incomplete data'})
    try:
        addr = Address.objects.get(id=addr_id)
    except Exception as e:
        # Bad address info
        return JsonResponse({'res': 2, 'errmsg': 'Invalid address info'})
    if int(pay_method) not in OrderInfo.PAY_METHOD_EMUM.values():
        return JsonResponse({'res': 3, 'errmsg': 'Unsupported payment method'})
    # Create the order
    # Assemble the order info
    passport_id = request.session.get('passport_id')
    # Order id: timestamp (YYYYmmddHHMMSS) + the user's id
    order_id = datetime.now().strftime('%Y%m%d%H%M%S') + str(passport_id)
    # Shipping fee
    transit_price = 10
    # Totals for the order's item count and price
    total_count = 0
    total_price = 0
    # Create a transaction savepoint
    sid = transaction.savepoint()
    try:
        # Insert one record into the order info table
        order = OrderInfo.objects.create(order_id=order_id,
                                         passport_id=passport_id,
                                         addr_id=addr_id,
                                         total_count=total_count,
                                         total_price=total_price,
                                         transit_price=transit_price,
                                         pay_method=pay_method)
        # Insert the order's line items into the order goods table
        books_ids = books_id.split(',')
        conn = get_redis_connection('default')
        cart_key = 'cart_%d' % passport_id
        # Iterate over the books the user is buying
        for id in books_ids:
            books = Books.objects.get_books_by_id(books_id=id)
            if books is None:
                transaction.savepoint_rollback(sid)
                return JsonResponse({'res': 4, 'errmsg': 'Invalid book info'})
            # Quantity the user is buying of this book
            count = conn.hget(cart_key, id)
            # Check the stock
            if int(count) > books.stock:
                transaction.savepoint_rollback(sid)
                return JsonResponse({'res': 5, 'errmsg': 'Insufficient stock'})
            # Create one order-goods record
            OrderGods.objects.create(order_id=order_id,
                                     books_id=id,
                                     count=count,
                                     price=books.price
                                     )
            # Increase the book's sales and decrease its stock
            # (the original incremented total_price here by mistake)
            books.sales += int(count)
            books.stock -= int(count)
            books.save()
            # Accumulate the total item count and total price
            total_count += int(count)
            total_price += int(count) * books.price
        # Update and persist the order's total item count and total price
        order.total_count = total_count
        order.total_price = total_price
        order.save()
    except Exception as e:
        # Database error: roll back to the savepoint
        transaction.savepoint_rollback(sid)
        return JsonResponse({'res': 7, 'errmsg': 'Server error'})
    # Remove the purchased items from the cart
    conn.hdel(cart_key, *books_ids)
    # Commit the savepoint
    transaction.savepoint_commit(sid)
    # Return the response
    return JsonResponse({'res': 6})
| [
"[email protected]"
] | |
11a91f1fed378f258c0163eab10cdf4961ce6f5d | 915c31ce84a826d225bcb1cc5f1e0323e712f6e4 | /calculate-testing-agreement.py | 6276ce7b87c1a1e58da5c8f6883613ea1968e06e | [
"Apache-2.0"
] | permissive | mac389/overdosed | 64162aaf8f57f7ca57bcc95678d0d18e231cda87 | 434255db4ea36581c9f94c7aa09ca6ca15169e8a | refs/heads/master | 2021-01-10T07:44:41.804936 | 2015-06-25T23:22:51 | 2015-06-25T23:22:51 | 36,990,551 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,157 | py | import csv
import numpy as np
from pprint import pprint
with open('testing-sample-MC-ratings.csv','rU') as my_rating_file:
my_ratings = [row for row in csv.reader(my_rating_file)]
#Automatic ratings
with open('testing-sample-w-ratings','rU') as automatic_rating_file:
automatic_ratings = [row for row in csv.reader(automatic_rating_file,delimiter='\t')]
my_rating_numbers = np.array([x[-1] for x in my_ratings]).astype(int)
automatic_rating_numbers = np.array([True if x[1]=='True' else False for x in automatic_ratings]).astype(int)
pprint(automatic_rating_numbers)
#me, them
yes_yes = len([i for i in xrange(len(my_rating_numbers)) if my_rating_numbers[i] == 1 and automatic_rating_numbers[i] == 1])
yes_no = len([i for i in xrange(len(my_rating_numbers)) if my_rating_numbers[i] == 1 and automatic_rating_numbers[i] == 0])
no_yes = len([i for i in xrange(len(my_rating_numbers)) if my_rating_numbers[i] == 0 and automatic_rating_numbers[i] == 1])
no_no = len([i for i in xrange(len(my_rating_numbers)) if my_rating_numbers[i] == 0 and automatic_rating_numbers[i] == 0])
print ' Them'
print 'Me',yes_yes,yes_no
print 'Me',no_yes, no_no | [
"[email protected]"
] | |
57cf2702270e99daab0cd0f773f9b28777c1fff8 | b4ecc9c5a74f11958e7a49999d0299e7bb883d2e | /postgres-database/dbFunctions.py | 34dd98379303cca9135af6e25108f79e95ad7f9a | [] | no_license | teja0508/AcronymLookup | 6edea8ab9bc27824b961563f5bf968b499490094 | ea5b812c41f138b5dccabbe2c474e2da0f85ce9e | refs/heads/main | 2022-12-20T08:00:30.161858 | 2020-10-18T06:01:32 | 2020-10-18T06:01:32 | 305,030,809 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,255 | py | # dbFunctions.py [Rachel Gardner]
#
# This file defines the AcronymsDatabase class, which
# interfaces with the PostgreSQL backend to store acronyms, definitions
# and their contexts.
import psycopg2
import json
from collections import Counter
class AcronymDatabase:
def __init__(self):
conn = psycopg2.connect(database="acronyms", user="postgres", password="Chandra@12", host="localhost")
self.conn = conn
self.cur = conn.cursor()
def addAcronym(self, acronym):
self.cur.execute("INSERT INTO acronyms (acronym) VALUES (%s) RETURNING aid", (acronym,))
return self.cur.fetchone()[0]
def getAcronym(self, acronym):
self.cur.execute("SELECT aid FROM acronyms WHERE acronym=%s", (acronym,))
result = self.cur.fetchone()
return result[0] if result else None
def addDefinition(self, definition, context, url, aID = False):
self.cur.execute("INSERT INTO definitions (definition, context, url) VALUES (%s, %s, %s) RETURNING did", (definition,context, url))
dID = self.cur.fetchone()[0]
# if acronym exists, link this definition to existing acronym
if (aID):
self.cur.execute("INSERT INTO acronyms_definitions (aid, did) VALUES (%s, %s)", (aID, dID))
return dID
def addTrueDefinition(self, acronym, truedef, url):
self.cur.execute("SELECT true_definition FROM true_definitions WHERE acronym=%s AND url=%s", (acronym,url))
result = self.cur.fetchone()
if(not result): result = None
else: result=result[0]
if(result is None):
self.cur.execute("INSERT INTO true_definitions (acronym, true_definition, url) VALUES (%s, %s, %s)", (acronym,truedef,url))
def getTrueDefinition(self, acronym, url):
self.cur.execute("SELECT true_definition FROM true_definitions WHERE acronym=%s AND url=%s", (acronym,url))
result = self.cur.fetchone()
return result[0] if result else None
def addContext(self, context):
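        # NOTE: this one-argument version is shadowed by the two-argument
        # addContext() defined further down, so it is never reachable
        # through the class.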
self.cur.execute("INSERT INTO context (context) VALUES (%s) RETURNING cid", (context,))
return self.cur.fetchone()[0]
def acronymHasDefinition(self,aID, definition):
self.cur.execute("SELECT definitions.DID from definitions JOIN acronyms_definitions ON acronyms_definitions.DID = definitions.DID WHERE definitions.definition = %s AND acronyms_definitions.AID = %s", (definition, aID))
result = self.cur.fetchone()
return result[0] if result else None
def addContext(self,definition_id, context):
newContextJSON = json.dumps(context)
self.cur.execute("UPDATE context SET context=%s FROM definitions WHERE DID=%s", (newContextJSON,definition_id))
def updateContext(self, definition_id, context):
self.cur.execute("SELECT context FROM definitions JOIN context ON definitions.CID = context.CID WHERE DID = %s LIMIT 1;", (definition_id,))
oldContextJSON = self.cur.fetchone()[0]
oldContext = Counter(json.loads(oldContextJSON))
newContext = oldContext + context
newContextJSON = json.dumps(newContext)
self.cur.execute("UPDATE context SET context=%s FROM definitions WHERE DID=%s", (newContextJSON,definition_id))
def getContextAcronymList(self):
self.cur.execute("SELECT did, context, definition FROM definitions")
result = self.cur.fetchall()
ret = []
for elem in result:
did = str(elem[0])
self.cur.execute("SELECT aid FROM acronyms_definitions WHERE did=%s" ,(did,))
aid = str(self.cur.fetchone()[0])
self.cur.execute("SELECT acronym FROM acronyms WHERE aid=%s", (aid,))
acronym = self.cur.fetchone()[0]
ret.append((acronym, elem[1], elem[2]))
return ret
def clearTrueDefTable(self):
self.cur.execute("DELETE FROM true_definitions")
def clearAcronymTables(self):
self.cur.execute("DELETE FROM definitions")
self.cur.execute("DELETE FROM acronyms")
self.cur.execute("DELETE FROM acronyms_definitions")
def close(self):
self.conn.commit() # make the changes to the database persistent
self.cur.close()
self.conn.close()
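# Usage sketch (acronym/definition values are hypothetical; assumes the
# tables referenced above already exist in the `acronyms` database):
#     db = AcronymDatabase()
#     aid = db.getAcronym("NASA") or db.addAcronym("NASA")
#     if not db.acronymHasDefinition(aid, "National Aeronautics and Space Administration"):
#         db.addDefinition("National Aeronautics and Space Administration",
#                          '{"space": 3}', "http://example.com", aID=aid)
#     db.close()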
| [
"[email protected]"
] | |
1281fb5541740ff35cf2cb197890e7c73fb333e2 | 221cada2354556fbb969f25ddd3079542904ef5d | /AlgoExpert/validate_bst.py | 24bf1e02380e497c9452015016be874e08f91ce8 | [] | no_license | syzdemonhunter/Coding_Exercises | 4b09e1a7dad7d1e3d4d4ae27e6e006732ffdcb1d | ca71572677d2b2a2aed94bb60d6ec88cc486a7f3 | refs/heads/master | 2020-05-24T11:19:35.019543 | 2019-11-22T20:08:32 | 2019-11-22T20:08:32 | 187,245,394 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 807 | py | # T: O(n)
# S: O(d)
def validateBst(tree):
return helper(tree, -float('inf'), float('inf'))
def helper(tree, min_val, max_val):
if not tree:
return True
if tree.value < min_val or tree.value >= max_val:
return False
left_valid = helper(tree.left, min_val, tree.value)
right_valid = helper(tree.right, tree.value, max_val)
return left_valid and right_valid
#######################
def validateBst(tree):
if not tree:
return True
return helper(tree, None, None)
def helper(root, lower, upper):
if not root:
return True
if lower and root.value < lower.value:
return False
if upper and root.value >= upper.value:
return False
return helper(root.left, lower, root) \
and helper(root.right, root, upper) | [
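# Minimal smoke test for the second (last-defined) implementation. The BST
# node class below is an assumption; AlgoExpert supplies its own node type
# with .value/.left/.right attributes.
if __name__ == "__main__":
    class BST:
        def __init__(self, value, left=None, right=None):
            self.value = value
            self.left = left
            self.right = right
    valid = BST(10, BST(5, BST(2), BST(5)), BST(15, BST(13), BST(22)))
    invalid = BST(10, BST(5), BST(15, BST(9), BST(22)))
    print(validateBst(valid))    # True
    print(validateBst(invalid))  # False: 9 sits in the right subtree of 10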
"[email protected]"
] | |
6f92343ff23e76b2a0b1a06cdd767ecf0e444f40 | e4ec5b6cf3cfe2568ef0b5654c019e398b4ecc67 | /azure-cli/2.0.18/libexec/lib/python3.6/site-packages/azure/mgmt/sql/models/encryption_protector.py | 28b7ccfc213258e75b4740819a87b7557399fc67 | [] | no_license | EnjoyLifeFund/macHighSierra-cellars | 59051e496ed0e68d14e0d5d91367a2c92c95e1fb | 49a477d42f081e52f4c5bdd39535156a2df52d09 | refs/heads/master | 2022-12-25T19:28:29.992466 | 2017-10-10T13:00:08 | 2017-10-10T13:00:08 | 96,081,471 | 3 | 1 | null | 2022-12-17T02:26:21 | 2017-07-03T07:17:34 | null | UTF-8 | Python | false | false | 2,969 | py | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from .proxy_resource import ProxyResource
class EncryptionProtector(ProxyResource):
"""The server encryption protector.
Variables are only populated by the server, and will be ignored when
sending a request.
:ivar id: Resource ID.
:vartype id: str
:ivar name: Resource name.
:vartype name: str
:ivar type: Resource type.
:vartype type: str
:param kind: Kind of encryption protector. This is metadata used for the
Azure portal experience.
:type kind: str
:ivar location: Resource location.
:vartype location: str
:ivar subregion: Subregion of the encryption protector.
:vartype subregion: str
:param server_key_name: The name of the server key.
:type server_key_name: str
:param server_key_type: The encryption protector type like
'ServiceManaged', 'AzureKeyVault'. Possible values include:
'ServiceManaged', 'AzureKeyVault'
:type server_key_type: str or :class:`ServerKeyType
<azure.mgmt.sql.models.ServerKeyType>`
:ivar uri: The URI of the server key.
:vartype uri: str
:ivar thumbprint: Thumbprint of the server key.
:vartype thumbprint: str
"""
_validation = {
'id': {'readonly': True},
'name': {'readonly': True},
'type': {'readonly': True},
'location': {'readonly': True},
'subregion': {'readonly': True},
'server_key_type': {'required': True},
'uri': {'readonly': True},
'thumbprint': {'readonly': True},
}
_attribute_map = {
'id': {'key': 'id', 'type': 'str'},
'name': {'key': 'name', 'type': 'str'},
'type': {'key': 'type', 'type': 'str'},
'kind': {'key': 'kind', 'type': 'str'},
'location': {'key': 'location', 'type': 'str'},
'subregion': {'key': 'properties.subregion', 'type': 'str'},
'server_key_name': {'key': 'properties.serverKeyName', 'type': 'str'},
'server_key_type': {'key': 'properties.serverKeyType', 'type': 'str'},
'uri': {'key': 'properties.uri', 'type': 'str'},
'thumbprint': {'key': 'properties.thumbprint', 'type': 'str'},
}
def __init__(self, server_key_type, kind=None, server_key_name=None):
super(EncryptionProtector, self).__init__()
self.kind = kind
self.location = None
self.subregion = None
self.server_key_name = server_key_name
self.server_key_type = server_key_type
self.uri = None
self.thumbprint = None
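# Construction sketch (the key name and kind are illustrative, not taken
# from the SDK docs):
#     protector = EncryptionProtector(
#         server_key_type='AzureKeyVault',
#         server_key_name='someVault_someKey_01234567890123456789012345678901',
#         kind='azurekeyvault')
# Read-only fields (uri, thumbprint, location) are populated by the service.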
| [
"[email protected]"
] | |
edcd8ce0f727f58aaea7d90968b3b585f9aeab83 | befd78e2bfdeb7aa786e8d78aa30670e72226577 | /concurrency_with_asyncio/ch6/counting_freq_a.py | e4e1287524a416fe2d0e87e52535f912b079964d | [] | no_license | owari-taro/concurrency_in_python | 4ee2664a4e8c6a9a840ffd0878dbd53181818813 | 6f12d84b4a72cd5ddd05c74b1c94902c784e5a18 | refs/heads/master | 2023-03-06T12:41:15.637603 | 2023-02-28T15:02:14 | 2023-02-28T15:02:14 | 301,134,395 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 461 | py | import time
freqs={}
with open("data/googlebooks-eng-all-1gram-20120701-a","r")as reader:
st=time.time()
count=0
for line in reader:
if count%10_000==0:
print(freqs)
count+=1
data=line.split("\t")
word=data[0]
count=int(data[2])
if word in freqs:
freqs[word]=freqs[word]+count
else:
freqs[word]=count
end=time.time()
print(f"{end-st}")
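# Equivalent sketch using collections.Counter (assumes the same TSV layout:
# ngram, year, match_count, ... per line):
#     from collections import Counter
#     freqs = Counter()
#     with open("data/googlebooks-eng-all-1gram-20120701-a") as reader:
#         for line in reader:
#             word, _, count = line.split("\t")[:3]
#             freqs[word] += int(count)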
| [
"[email protected]"
] | |
bca23f903311a368aa82760ee5be04a2b46ddd94 | dcfd89a6a8ebf3b32948dbf93e222063c223bec4 | /v2/api/utils.py | d2c3dd7fc2947f486115eb4c86ea84801e7f9616 | [] | no_license | SirYaro/kdm-manager | a19ff1d7711bc39a2de9e8f740cb03fac28a52b7 | 263d1330764b915f3acb0771e58845020cce06b2 | refs/heads/master | 2021-09-05T05:47:58.511224 | 2018-01-24T15:09:18 | 2018-01-24T15:09:18 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,189 | py | #!/usr/bin/python2.7
# general imports
from bson import json_util
from bson.objectid import ObjectId
from datetime import datetime, timedelta
from dateutil.relativedelta import relativedelta
import email
from email.header import Header as email_Header
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from flask import Response, make_response, request, current_app
from functools import update_wrapper
import json
import logging
import os
from pymongo import MongoClient
import smtplib
import socket
from string import Template
import sys
import time
import traceback
# project-specific imports
import settings
# random, one-off helpers in the main namespace
mdb = MongoClient()[settings.get("api","mdb")]
ymd = "%Y-%m-%d"
hms = "%H:%M:%S"
ymdhms = "%Y-%m-%d %H:%M:%S"
thirty_days_ago = datetime.now() - timedelta(days=30)
# generic http responses
http_200 = Response(response="OK!", status=200)
http_400 = Response(response="Bad Request!", status=400)
http_401 = Response(response="Authorization required", status=401)
http_404 = Response(response="Resource not found", status=404)
http_405 = Response(response="Method not allowed", status=405)
http_422 = Response(response="Missing argument, parameter or value", status=422)
http_500 = Response(response="Server explosion! The server erupts in a shower of gore, killing your request instantly. All other servers are so disturbed that they lose 1 survival.", status=500)
http_501 = Response(response="Not implemented in this release", status=501)
# laziness and DRYness methods
def cli_dump(key, spacer, value):
""" Returns a command line interface pretty string for use in admin scripts
and other things that dump strings to CLI over STDOUT. """
spaces = spacer - len(key)
output = "%s:" % key
output += " " * spaces
output += "%s" % value
print(output)
def decompose_name_string(name):
""" Accepts a name string and returns a list of possible versions of it. """
output = []
name_list = name.split(" ")
for i in range(len(name_list) + 1) :
output.append(" ".join(name_list[:i]))
return output
def html_file_to_template(rel_path):
""" Turns an HTML file into a string.Template object. """
tmp_file = os.path.join(settings.get("api","cwd"), rel_path)
return Template(file(tmp_file, "rb").read())
def seconds_to_hms(seconds):
""" Converts seconds (expressed as int) to a h:m:s string. """
m, s = divmod(seconds, 60)
h, m = divmod(m, 60)
return "%d:%02d:%02d" % (h, m, s)
def get_percentage(part, whole):
""" Input a part, then the whole. Returns percent as a float. """
if whole == 0:
return 0
else:
return 100 * round(float(part)/float(whole), 2)
def get_time_elapsed_since(start_time, units=None, round_seconds=True):
""" Use a datetime object as the first arg and a 'units' kwarg value of
either 'minutes' or 'hours' to find out how long it has been since that
time (in your preferred units).
Use 'age' as the value for 'units' to get a human-readable string
representing the elapsed time.
"""
delta = (datetime.now() - start_time)
days = delta.days
years = relativedelta(datetime.now(), start_time).years
offset = delta.total_seconds()
offset_hours = offset / 3600.0
if round_seconds:
offset = int(offset)
if units == "seconds":
return offset
elif units == "hours":
return int(offset_hours)
elif units == "minutes":
return int(delta.seconds / 60)
elif units == "days":
return delta.days
elif units == "years":
return years
elif units == "years_and_days":
for y in range(years):
days -= 365
year_word = "year"
if years >= 2:
year_word = "years"
return "%s %s and %s days" % (years, year_word, days)
elif units == "age":
if offset == 1:
return 'one second'
elif offset < 60:
return '%s seconds' % offset
elif offset == 60:
return 'one minute'
elif offset < 3600:
return "%s minutes" % get_time_elapsed_since(start_time, "minutes")
elif offset == 3600:
return 'one hour'
elif offset < 86400:
return "%s hours" % get_time_elapsed_since(start_time, "hours")
elif offset < 172800:
return "one day"
elif delta.days < 365:
return "%s days" % get_time_elapsed_since(start_time, "days")
elif delta.days == 365:
return "one year"
elif delta.days > 365:
return get_time_elapsed_since(start_time, 'years_and_days')
return delta
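# Illustrative calls (the input times are hypothetical):
#   get_time_elapsed_since(datetime(2020, 1, 1), units='age')
#       -> e.g. "3 years and 42 days"
#   get_time_elapsed_since(datetime.now() - timedelta(minutes=5), 'minutes')
#       -> 5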
def list_to_pretty_string(l, quote_char=False):
""" Takes a list of strings and makes it into a single, pretty string
with commas, the word 'and' and that type of shit. """
l = list(l)
if len(l) == 0:
return None
elif len(l) == 1:
if quote_char:
return "%s%s%s" % (quote_char, l[0], quote_char)
else:
return l[0]
if quote_char:
l = [str("%s%s%s" % (quote_char,i,quote_char)) for i in l]
else:
try:
l = [i.encode('ascii','ignore') for i in l]
except:
l = [str(i) for i in l]
return " and ".join([", ".join(l[:-1]), l[-1]])
def sort_timeline(timeline):
""" Returns a sorted timeline. """
sorting_hat = {}
for ly in timeline:
sorting_hat[int(ly["year"])] = ly
return [sorting_hat[ly_key] for ly_key in sorted(sorting_hat.keys())]
def get_timeline_index_and_object(timeline,lantern_year):
""" Input a timeline and a target ly; get the index of that year and the
year (as a JSON object i.e. list of dicts) back.
If it can't find a year, it adds it. """
def get_ly():
for t in timeline:
if t["year"] == int(lantern_year):
return timeline.index(t), t
if get_ly() is None:
timeline.append({"year": lantern_year})
timeline = sort_timeline(timeline)
return get_ly()
else:
return get_ly()
def action_keyword(kw):
""" converts an action keyword 'kw' to a past-tense version of that
keyword and its preposition. """
if kw in ["add"]: # add
output = ("added", "to")
elif kw in ["rm"]: # remove
output = ("removed", "from")
elif kw in ["set", "update"]: # set
output = ("set", "to")
elif kw in ["inherited", "inherit"]:
output = ("inherited", "from")
elif kw in ["birth", "born"]:
output = ("born", "to")
elif kw in ["enforce"]: # automate
output = ("automatically set", "to")
else:
output = (kw, "to")
return output
# API response/request helpers
class noUser:
def __init__(self):
self.login="[email protected]"
self._id = "666"
#
# performance monitoring
#
def record_response_time(r):
""" Accepts a request object, uses it to log the request and its response
time to mdb. Prunes old lines. """
duration = request.stop_time - r.start_time
url_list = request.url.split(request.url_root)[1].split("/")
for i in url_list:
try:
ObjectId(i)
url_list.remove(i)
except:
pass
url = "/".join(url_list)
mdb.api_response_times.insert({
"created_on": datetime.now(),
"url": url,
"method": request.method,
"time": duration.total_seconds()
})
if request.metering:
request.logger = get_logger()
request.logger.debug('[%s] %s response in %s ' % (request.method, request.url, duration))
old_record_query = {"created_on": {"$lt": (datetime.now() - timedelta(days=7))}}
removed_records = mdb.api_response_times.remove(old_record_query)
#
# stub dictionary for creating the meta element of API returns
#
api_meta = {
"meta": {
"webapp": {
"release": settings.get('application','webapp_release'),
"age": get_time_elapsed_since(datetime.strptime('2015-11-29', '%Y-%m-%d'), units='age'),
},
"api": {
"version": settings.get("api","version"),
"hostname": socket.gethostname(),
"mdb_name": settings.get("api","mdb"),
},
"admins": list(mdb.users.find({"admin": {"$exists": True}}).sort('login')),
"object": {},
},
}
# mail session object
class mailSession:
""" Initialize one of these to authenticate via SMTP and send emails. This
is a port from the legacy app."""
def __init__(self):
self.logger = get_logger()
p_settings = settings.Settings('private')
self.smtp_host = p_settings.get("smtp","host")
self.smtp_user = p_settings.get("smtp","name")
self.smtp_pass = p_settings.get("smtp","pass")
self.sender_name = p_settings.get("smtp","name_pretty")
self.no_reply = p_settings.get("smtp","no-reply")
self.connect()
def connect(self):
self.server = smtplib.SMTP(self.smtp_host, 587)
self.server.starttls()
self.server.login(self.smtp_user, self.smtp_pass)
self.logger.debug("SMTP Authentication successful for %s (%s)." % (self.smtp_user, self.smtp_host))
time.sleep(0.75)
def send(self, reply_to=None, recipients=["[email protected]"], html_msg='This is a <b>test</b> message!', subject="KDM-Manager!"):
""" Generic Emailer. Accepts a list of 'recipients', a 'msg' string and a
sender name (leave undefinied to use [email protected]). """
author = email.utils.formataddr((str(email_Header(self.sender_name, 'utf-8')), self.no_reply))
msg = MIMEMultipart('alternative')
msg['From'] = author
msg['Subject'] = subject
msg['To'] = recipients[0]
if reply_to is not None:
msg.add_header('reply-to', reply_to)
msg.attach(MIMEText(html_msg.encode('ascii','ignore'),'html'))
self.server.sendmail(self.no_reply, recipients, msg.as_string())
self.server.quit()
self.logger.debug("Email sent successfully!")
# general usage methods
def basic_logging():
""" Executes logging.basicConfig() to catch logging from imported modules,
i.e. to make sure we don't lose any logging. """
logging.basicConfig(
filename = os.path.join(settings.get("application","log_root_dir"), "server.log"),
format = '[%(asctime)s] %(levelname)s:\t%(message)s',
level = settings.get("server","log_level"),
)
def get_logger(log_level=None, log_name=None):
""" Initialize a logger, specifying a new file if necessary. """
logger = logging.getLogger(__name__)
if len(logger.handlers): # if we're initializing a log, kill whatever other
logger.handlers = [] # handlers are open, so that the latest init wins
if not len(logger.handlers): # if it's not already open, open it
# set the file name or default to the script asking for the logger
log_file_name = log_name
if log_name is None:
log_file_name = os.path.splitext(os.path.basename(sys.argv[0]))[0]
log_handler_level = log_level
# do the same for log level, defaulting to the server's 'log_level'
if log_handler_level is None:
logger.setLevel(settings.get("server","log_level"))
elif type(log_handler_level) == str:
exec 'log_level_obj = logging.%s' % log_level
logger.setLevel(log_level_obj)
else:
logger.setLevel(log_handler_level)
# now check the logging root, just as a precaution
log_root_dir = settings.get("application","log_root_dir")
if not os.path.isdir(log_root_dir):
e = Exception("Logging root dir '%s' does not exist!" % log_root_dir)
raise e
# create the path and add it to the handler
log_path = os.path.join(log_root_dir, log_file_name + ".log")
logger_fh = logging.FileHandler(log_path)
# set the formatter and add it via addHandler()
formatter = logging.Formatter('[%(asctime)s] %(levelname)s:\t%(message)s', ymdhms)
logger_fh.setFormatter(formatter)
logger.addHandler(logger_fh)
return logger
def get_local_ip():
""" Uses the 8.8.8.8 trick to get the localhost IP address. """
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
ip = s.getsockname()[0]
s.close()
return ip
def get_application_url(strip_http=False):
""" Determines the URL to use for API operations based on some socket
operations and settings from the settings.cfg. Defaults to using localhost
on the default API port defined in settings.cfg. """
fqdn = socket.getfqdn()
if fqdn == settings.get("api","prod_fqdn"):
output = 'http://kdm-manager.com'
else:
output = "http://%s" % (get_local_ip())
if strip_http:
return output[7:]
else:
return output
class AssetDict(dict):
""" Creates a dictionary with additional attributes. """
def __init__(self, d={}, attribs={}):
""" Initialize with any dictionary and then use the 'attribs' kwarg as a
        dictionary of arbitrary key/value pairs that will become attribs of the
dictionary. """
self.meta_attribs = attribs
for k, v in self.meta_attribs.iteritems():
if type(v) == str:
exec """self.%s = "%s" """ % (k,v)
else:
exec "self.%s = %s" % (k,v)
self.update(d)
self.add_vars_to_dict()
def add_vars_to_dict(self):
""" Adds the arbitrary attribs of the dictionary to each asset in the
dict. """
for d in self.keys():
self[d]["handle"] = d
for k, v in self.meta_attribs.iteritems():
key = "%s" % k
self[d][key] = v
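# Usage sketch (keys and attribs below are illustrative):
#   weapons = AssetDict({"sword": {}}, attribs={"type": "gear"})
#   weapons.type                 -> "gear"
#   weapons["sword"]["handle"]   -> "sword"
#   weapons["sword"]["type"]     -> "gear"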
# decorators for API
def crossdomain(origin=None, methods=None, headers=None, max_age=21600, attach_to_all=True, automatic_options=True):
if methods is not None:
methods = ', '.join(sorted(x.upper() for x in methods))
if headers is not None and not isinstance(headers, basestring):
headers = ', '.join(x.upper() for x in headers)
if not isinstance(origin, basestring):
origin = ', '.join(origin)
if isinstance(max_age, timedelta):
max_age = max_age.total_seconds()
def get_methods():
if methods is not None:
return methods
options_resp = current_app.make_default_options_response()
return options_resp.headers['allow']
def decorator(f):
def wrapped_function(*args, **kwargs):
if automatic_options and request.method == 'OPTIONS':
resp = current_app.make_default_options_response()
else:
resp = make_response(f(*args, **kwargs))
if not attach_to_all and request.method != 'OPTIONS':
return resp
h = resp.headers
h['Access-Control-Allow-Origin'] = origin
h['Access-Control-Allow-Methods'] = get_methods()
h['Access-Control-Max-Age'] = str(max_age)
if headers is not None:
h['Access-Control-Allow-Headers'] = headers
return resp
f.provide_automatic_options = False
return update_wrapper(wrapped_function, f)
return decorator
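# Typical use with Flask (the route name is illustrative):
#   @app.route('/settings.json', methods=['GET', 'OPTIONS'])
#   @crossdomain(origin='*')
#   def settings_json():
#       ...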
#
# exception auto-mailer
#
def email_exception(exception):
""" This is called by the main Flask errorhandler() decorator in api.py
when we have an unhandled exception at any point of responding to a request.
This prevents user-facing (or Khoa-facing) failures from being silently
swallowed. """
# first, log it
logger = get_logger(log_level='DEBUG', log_name='error')
logger.exception(exception)
if not hasattr(request, 'User'):
request.User = noUser()
# finally, prepare the message template and the traceback for emailing
# tmp_file = os.path.join(settings.get("api","cwd"), "html/exception_alert.html")
# msg = Template(file(tmp_file, "rb").read())
msg = html_file_to_template("html/exception_alert.html")
tb = traceback.format_exc().replace(" "," ").replace("\n","<br/>")
# do it
s = msg.safe_substitute(traceback=tb, user_login=request.User.login, user_oid=request.User._id, datetime=datetime.now(), r_method=request.method, r_url=request.url, r_json=request.json)
e = mailSession()
e.send(subject="API Error! [%s]" % socket.getfqdn(), recipients=['[email protected]'], html_msg=s)
#
# special exception classes
#
class WorldQueryError(Exception):
""" Handler for asset-based errors. """
def __init__(self, query=None, message="World query produced zero results!"):
self.logger = get_logger()
self.logger.exception(message)
self.logger.error("Query was: %s" % query)
Exception.__init__(self, message)
class InvalidUsage(Exception):
""" Raise this type of exception at any point to return an HTTP response to
the requester. Syntax goes like this:
raise utils.InvalidUsage("Message", status_code=400)
Do this whenever the requester's param keys/values are not up to snuff, etc.
"""
status_code = 400
def __init__(self, message, status_code=400, payload=None):
Exception.__init__(self)
self.logger = get_logger(log_name='error')
self.message = message
if status_code is not None:
self.status_code = status_code
self.payload = payload
self.alert()
def alert(self):
""" Records the error and alerts the admins. TBD. """
self.logger.error("%s - %s" % (self.status_code, self.message))
# e = utils.mailSession()
# e.send(recipients=[user_login]i, html_msg=self.message)
def to_dict(self):
rv = dict(self.payload or ())
rv['message'] = self.message
return Response(rv['message'], self.status_code)
| [
"[email protected]"
] | |
c5597281e91a1f2336cb6e046d5aa458cb432bf9 | e4414bd8152e52855db7ab9065ae12b7329143e0 | /python/src/maxdiffarr.py | c59fcce6e0af15f92b72aaecbe371e748058fd86 | [] | no_license | catalinc/programmingpraxis-solutions | 39cb847877ec46d2fb85740791c24889ab5654a8 | c0b13906aa76ffac705bf108db138fb9a38bc16a | refs/heads/master | 2021-03-27T16:46:47.781839 | 2017-09-09T15:17:38 | 2017-09-09T15:17:38 | 53,532,233 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 678 | py | #!/usr/bin/env python
import sys
import unittest
# See http://programmingpraxis.com/2011/04/01/maximum-difference-in-an-array/
def maxdiff(a):
min_i = i = j = 0
d = -sys.maxint
for k in xrange(1, len(a)):
if a[k] < a[min_i]:
min_i = k
elif a[k] - a[min_i] > d:
d = a[k] - a[min_i]
i = min_i
j = k
return i, j, d
class Test(unittest.TestCase):
def test_1(self):
self.assertEqual((3, 4, 7), maxdiff([4, 3, 9, 1, 8, 2, 6, 7, 5]))
def test_2(self):
self.assertEqual((1, 2, 7), maxdiff([4, 2, 9, 1, 8, 3, 6, 7, 5]))
def test_3(self):
self.assertEqual((3, 7, 7), maxdiff([4, 3, 9, 1, 2, 6, 7, 8, 5]))
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
71a2d7cbbd8c0b66f75a771056748b3e7f026054 | aadea82d00400b71de86b1906ed347d10416e69b | /p350.py | b495e8cb02bea100bc4e51baabf5fe09685b80ad | [] | no_license | abishekravi/guvipython | fc0f56912691cd5a41ab20f0c36b2027ebccfb00 | 4fbb83f0a131775cd9eb3f810c2d1c9ad22d710a | refs/heads/master | 2021-08-16T10:22:00.052735 | 2020-06-25T04:35:42 | 2020-06-25T04:35:42 | 196,218,458 | 2 | 27 | null | null | null | null | UTF-8 | Python | false | false | 360 | py | #a
import sys,string, math,itertools
def minChCnt(s) :
dic1 = {}
for c in s :
if not c.isspace() :
dic1[c] = dic1.get(c,0) + 1
min1 = sys.maxsize
L = []
min1 = min(dic1.values())
for k,v in dic1.items() :
if v == min1 :
L.append(k)
return L
s = input()
n = len(s)
L = minChCnt(s)
print(*L)
| [
"[email protected]"
] | |
41be5f261a4866f3eca403bdcbdb7c9fee1a2e7d | 7ff12fccd5da300c6f844e01ae15399804d28107 | /41/lambda.py | b827614414b2610ff8e1e0f176b3ba5efbd2b92e | [] | no_license | weilaidb/python_dd | cc49d21d4ad8b6e56b80ea068b95255502eb9ea5 | 7458af7fb028850999cdbf089ac6c61a55096c25 | refs/heads/master | 2021-01-10T01:36:35.539121 | 2016-02-24T14:25:23 | 2016-02-24T14:25:23 | 47,059,506 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 144 | py | #!/usr/bin/python
#Filename:lambda.py
def make_repeater(n):
return lambda s:s*n
twice = make_repeater(2)
print twice('word')
print twice(15)
| [
"[email protected]"
] | |
817266a98c195d0592a873a56976b22261b0798b | d8fb7147e9b471fafb9b1e7ae208e20e1eab9cc1 | /lmtk/html/meta.py | 6e1b1fee1fa05847fa7a141fc3493ed38d3f63da | [
"MIT"
] | permissive | ericchen/lmtk | e61e5c1f0790dfba32f2ceb406df16045332b15f | ce2604851af66801f459b2e9bc5aeaf0c94bc15d | refs/heads/master | 2021-01-21T04:08:08.336582 | 2014-09-23T14:02:01 | 2014-09-23T14:02:01 | 33,095,938 | 1 | 0 | null | 2015-03-30T00:53:51 | 2015-03-30T00:53:50 | null | UTF-8 | Python | false | false | 5,134 | py | # -*- coding: utf-8 -*-
"""
lmtk.html.meta
~~~~~~~~~~~~~~
Tools for extracting metadata from HTML.
:copyright: Copyright 2014 by Matt Swain.
:license: MIT, see LICENSE file for more details.
"""
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
import re
from bs4 import BeautifulSoup
from lmtk.text import u
from lmtk.bib import PersonName
def extract_metadata(html):
"""Parse a HTML page to extract embedded metadata.
TODO: Is this obsolete due to lmtk.scrape package?
:param html: The HTML to parse. Either as a string or BeautifulSoup object.
"""
def resolve_meta(names):
"""Given a list of meta names, return the content of the first that is found."""
for name in names:
try:
return u(html.find('meta', attrs={'name': name, 'content': True})['content'].strip())
except TypeError:
continue
if isinstance(html, basestring):
html = BeautifulSoup(html, 'lxml')
meta = {
u'title': resolve_meta(['citation_title', 'dc.title', 'DC.title', 'title', 'citation_dissertation_name']),
u'journal': resolve_meta(['citation_journal_title', 'prism.publicationName', 'dc.source', 'DC.source']),
u'volume': resolve_meta(['citation_volume', 'prism.volume']),
u'issue': resolve_meta(['citation_issue', 'prism.number', 'citation_technical_report_number']),
u'page': resolve_meta(['citation_firstpage', 'prism.startingPage']),
u'abstract': resolve_meta(['description', 'dc.description', 'DC.description']),
u'publisher': resolve_meta(['citation_publisher', 'dc.publisher', 'DC.publisher']),
u'conference': resolve_meta(['citation_conference_title', 'citation_conference']),
u'institution': resolve_meta(['citation_dissertation_institution', 'citation_technical_report_institution']),
u'doi': resolve_meta(['citation_doi', 'dc.identifier', 'DC.identifier']),
u'issn': resolve_meta(['citation_issn', 'prism.issn']),
u'isbn': resolve_meta(['citation_isbn']),
u'pmid': resolve_meta(['citation_pmid']),
u'language': resolve_meta(['citation_language', 'dc.language', 'DC.language']),
u'copyright': resolve_meta(['dc.copyright', 'DC.copyright', 'prism.copyright']),
u'rights_agent': resolve_meta(['dc.rightsAgent', 'DC.rightsAgent', 'prism.rightsAgent']),
u'patent_number': resolve_meta(['citation_patent_number']),
u'patent_country': resolve_meta(['citation_patent_country']),
u'abstract_url': resolve_meta(['citation_abstract_html_url']),
u'html_url': resolve_meta(['citation_fulltext_html_url']),
u'pdf_url': resolve_meta(['citation_pdf_url']),
u'date': resolve_meta(['citation_publication_date', 'prism.publicationDate', 'citation_date', 'dc.date',
'DC.date', 'citation_online_date'])
}
# authors
persons = []
for metaname in ['citation_author', 'dc.creator', 'DC.creator']:
for el in html.find_all('meta', attrs={'name': metaname, 'content': True}):
person = PersonName(el['content'])
if person and not any(person.could_be(other) for other in persons):
persons.append(person)
persons = [dict(p) for p in persons]
affiliations = [el.get('content', None) for el in html.find_all('meta', attrs={'name': 'citation_author_institution'})]
if len(affiliations) == len(persons):
for i, aff in enumerate(affiliations):
persons[i][u'affiliation'] = [u(aff)]
meta[u'authors'] = persons
# keywords
keywords = set()
for metaname in ['dc.type', 'prism.section', 'citation_keywords']:
for el in html.find_all('meta', attrs={'name': metaname, 'content': True}):
kcomps = [u(k.strip()) for k in el['content'].split(',')]
keywords.update(kcomps)
meta[u'keywords'] = list(keywords)
# last page
last = html.find('meta', attrs={'name': 'citation_lastpage', 'content': True})
if last and 'content' in last.attrs and 'page' in meta:
meta[u'page'] = '%s-%s' % (meta[u'page'], u(last['content']))
# XML URL
xml = html.find('link', attrs={'rel': 'alternate', 'href': True,
'type': re.compile(r'^(text/xml|application/rdf\+xml)$')})
if xml:
meta[u'xml_url'] = u(xml['href'])
# PDF URL backup
pdf = html.find('link', attrs={'rel': 'alternate', 'type': 'application/pdf', 'href': True})
if not 'pdf_url' in meta and pdf:
meta[u'pdf_url'] = u(pdf['href'])
# title backup
if not 'title' in meta and html.title:
meta[u'title'] = html.title.string.strip()
for el in html.find_all(attrs={'rel': 'license', 'href': True}):
meta[u'license'] = u(el['href'])
if not 'license' in meta:
lic = html.find('meta', attrs={'name': re.compile(r'^dc\.rights$', re.I), 'content': True})
if lic:
meta[u'license'] = u(lic['content'])
meta = dict([(k, v) for k, v in meta.items() if v])
return meta
| [
"[email protected]"
] | |
f5a2e5a623c38797822aa43672dd567a377d6369 | 379e932003bac6869260a6b362bce589951c96a1 | /backend/test_25201/settings.py | ed9bc57d4ac097d5d02a1401e46104c4c50ebc7a | [] | no_license | crowdbotics-apps/test-25201 | 31e0020b73f87fd28b15a57f3702750cde7eba4e | 9cd38854979c7d6680a2b0f8ac32c71b5e3e13c1 | refs/heads/master | 2023-03-27T10:57:58.090951 | 2021-03-23T02:08:52 | 2021-03-23T02:08:52 | 350,550,517 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,214 | py | """
Django settings for test_25201 project.
Generated by 'django-admin startproject' using Django 2.2.2.
For more information on this file, see
https://docs.djangoproject.com/en/2.2/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.2/ref/settings/
"""
import os
import environ
import logging
env = environ.Env()
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = env.bool("DEBUG", default=False)
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = env.str("SECRET_KEY")
ALLOWED_HOSTS = env.list("HOST", default=["*"])
SITE_ID = 1
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = env.bool("SECURE_REDIRECT", default=False)
# Application definition
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"django.contrib.sites",
"task",
"task_profile",
"tasker_business",
"location",
"wallet",
"task_category",
]
LOCAL_APPS = [
"home",
"modules",
"users.apps.UsersConfig",
]
THIRD_PARTY_APPS = [
"rest_framework",
"rest_framework.authtoken",
"rest_auth",
"rest_auth.registration",
"bootstrap4",
"allauth",
"allauth.account",
"allauth.socialaccount",
"allauth.socialaccount.providers.google",
"django_extensions",
"drf_yasg",
"storages",
# start fcm_django push notifications
"fcm_django",
# end fcm_django push notifications
]
INSTALLED_APPS += LOCAL_APPS + THIRD_PARTY_APPS
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "test_25201.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [os.path.join(BASE_DIR, "web_build")],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = "test_25201.wsgi.application"
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": os.path.join(BASE_DIR, "db.sqlite3"),
}
}
if env.str("DATABASE_URL", default=None):
DATABASES = {"default": env.db()}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = "/static/"
MIDDLEWARE += ["whitenoise.middleware.WhiteNoiseMiddleware"]
AUTHENTICATION_BACKENDS = (
"django.contrib.auth.backends.ModelBackend",
"allauth.account.auth_backends.AuthenticationBackend",
)
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
os.path.join(BASE_DIR, "web_build/static"),
]
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
# allauth / users
ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_AUTHENTICATION_METHOD = "email"
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_EMAIL_VERIFICATION = "optional"
ACCOUNT_CONFIRM_EMAIL_ON_GET = True
ACCOUNT_LOGIN_ON_EMAIL_CONFIRMATION = True
ACCOUNT_UNIQUE_EMAIL = True
LOGIN_REDIRECT_URL = "users:redirect"
ACCOUNT_ADAPTER = "users.adapters.AccountAdapter"
SOCIALACCOUNT_ADAPTER = "users.adapters.SocialAccountAdapter"
ACCOUNT_ALLOW_REGISTRATION = env.bool("ACCOUNT_ALLOW_REGISTRATION", True)
SOCIALACCOUNT_ALLOW_REGISTRATION = env.bool("SOCIALACCOUNT_ALLOW_REGISTRATION", True)
REST_AUTH_SERIALIZERS = {
# Replace password reset serializer to fix 500 error
"PASSWORD_RESET_SERIALIZER": "home.api.v1.serializers.PasswordSerializer",
}
REST_AUTH_REGISTER_SERIALIZERS = {
# Use custom serializer that has no username and matches web signup
"REGISTER_SERIALIZER": "home.api.v1.serializers.SignupSerializer",
}
# Custom user model
AUTH_USER_MODEL = "users.User"
EMAIL_HOST = env.str("EMAIL_HOST", "smtp.sendgrid.net")
EMAIL_HOST_USER = env.str("SENDGRID_USERNAME", "")
EMAIL_HOST_PASSWORD = env.str("SENDGRID_PASSWORD", "")
EMAIL_PORT = 587
EMAIL_USE_TLS = True
# AWS S3 config
AWS_ACCESS_KEY_ID = env.str("AWS_ACCESS_KEY_ID", "")
AWS_SECRET_ACCESS_KEY = env.str("AWS_SECRET_ACCESS_KEY", "")
AWS_STORAGE_BUCKET_NAME = env.str("AWS_STORAGE_BUCKET_NAME", "")
AWS_STORAGE_REGION = env.str("AWS_STORAGE_REGION", "")
USE_S3 = (
AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY
and AWS_STORAGE_BUCKET_NAME
and AWS_STORAGE_REGION
)
if USE_S3:
AWS_S3_CUSTOM_DOMAIN = env.str("AWS_S3_CUSTOM_DOMAIN", "")
AWS_S3_OBJECT_PARAMETERS = {"CacheControl": "max-age=86400"}
AWS_DEFAULT_ACL = env.str("AWS_DEFAULT_ACL", "public-read")
AWS_MEDIA_LOCATION = env.str("AWS_MEDIA_LOCATION", "media")
AWS_AUTO_CREATE_BUCKET = env.bool("AWS_AUTO_CREATE_BUCKET", True)
DEFAULT_FILE_STORAGE = env.str(
"DEFAULT_FILE_STORAGE", "home.storage_backends.MediaStorage"
)
MEDIA_URL = "/mediafiles/"
MEDIA_ROOT = os.path.join(BASE_DIR, "mediafiles")
# start fcm_django push notifications
FCM_DJANGO_SETTINGS = {"FCM_SERVER_KEY": env.str("FCM_SERVER_KEY", "")}
# end fcm_django push notifications
# Swagger settings for api docs
SWAGGER_SETTINGS = {
"DEFAULT_INFO": f"{ROOT_URLCONF}.api_info",
}
if DEBUG or not (EMAIL_HOST_USER and EMAIL_HOST_PASSWORD):
# output email to console instead of sending
if not DEBUG:
logging.warning(
"You should setup `SENDGRID_USERNAME` and `SENDGRID_PASSWORD` env vars to send emails."
)
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
| [
"[email protected]"
] | |
acf5f3f8c41ffe2c340d87e431ce7c7df30ad01d | 8e24e8bba2dd476f9fe612226d24891ef81429b7 | /geeksforgeeks/python/python_all/70_1.py | ec22d692bf592f3d3a12f7546ec72edd733f712b | [] | no_license | qmnguyenw/python_py4e | fb56c6dc91c49149031a11ca52c9037dc80d5dcf | 84f37412bd43a3b357a17df9ff8811eba16bba6e | refs/heads/master | 2023-06-01T07:58:13.996965 | 2021-06-15T08:39:26 | 2021-06-15T08:39:26 | 349,059,725 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,054 | py | Python – Remove duplicate values across Dictionary Values
Sometimes, while working with Python dictionaries, we can have a problem in
which we need to remove all the duplicate values across all the dictionary
value lists. This problem can have application in data domains and web
development domains. Let’s discuss certain ways in which this task can be
performed.
> **Input** : test_dict = {‘Manjeet’: [1], ‘Akash’: [1, 8, 9]}
> **Output** : {‘Manjeet’: [], ‘Akash’: [8, 9]}
>
> **Input** : test_dict = {‘Manjeet’: [1, 1, 1], ‘Akash’: [1, 1, 1]}
> **Output** : {‘Manjeet’: [], ‘Akash’: []}
**Method #1 : Using Counter() + list comprehension**

The combination of the above functions solves this problem: Counter() tallies
how often each value occurs across every list, and a list comprehension then
keeps only the values that occur exactly once in each value list.
# Python3 code to demonstrate working of
# Remove duplicate values across Dictionary Values
# Using Counter() + list comprehension
from collections import Counter
# initializing dictionary
test_dict = {'Manjeet' : [1, 4, 5, 6],
'Akash' : [1, 8, 9],
'Nikhil': [10, 22, 4],
'Akshat': [5, 11, 22]}
# printing original dictionary
print("The original dictionary : " + str(test_dict))
# Remove duplicate values across Dictionary Values
# Using Counter() + list comprehension
cnt = Counter()
for idx in test_dict.values():
cnt.update(idx)
res = {idx: [key for key in j if cnt[key] == 1]
for idx, j in test_dict.items()}
# printing result
print("Uncommon elements records : " + str(res))
**Output :**
> The original dictionary : {‘Akshat’: [5, 11, 22], ‘Nikhil’: [10, 22, 4],
> ‘Manjeet’: [1, 4, 5, 6], ‘Akash’: [1, 8, 9]}
> Uncommon elements records : {‘Akshat’: [11], ‘Nikhil’: [10], ‘Manjeet’: [6],
> ‘Akash’: [8, 9]}
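
A compact variant of the same Counter-based idea (this snippet is an
illustration added here, not part of the original write-up): chaining the
value lists lets Counter see every element at once.

    from collections import Counter
    from itertools import chain

    cnt = Counter(chain.from_iterable(test_dict.values()))
    res = {key: [ele for ele in val if cnt[ele] == 1]
           for key, val in test_dict.items()}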
| [
"[email protected]"
] | |
485b3c5be8e95fe1bbf16aff7303d04fa4db5320 | 4ddf4fa6a4a499d64b23fb99d70a7bb3802fd1b0 | /utils.py | 3cbae7d209d317c10c8b8a9c0dd055c5e057feec | [] | no_license | biterbilen/MVML | 2b318b3883c00ed1908ef75924077e3aab639094 | 76a79ded26d09452234b7ae2b4809e47aa93df70 | refs/heads/master | 2023-01-13T10:04:10.269589 | 2020-11-16T18:55:19 | 2020-11-16T18:55:19 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 678 | py | import subprocess
import pandas as pd
class HexTransformer:
@staticmethod
def hex2rgb(hex):
hex = hex.replace("#", "")
return int(hex[0:2], 16), int(hex[2:4], 16), int(hex[4:6], 16)
def fit(self, X, y=None):
return self
def transform(self, X):
return X.apply(self.hex2rgb).apply(pd.Series)
def fit_transform(self, X, y=None):
self.fit(X)
return self.transform(X)
def clean_up():
subprocess.run(
"rm -rf .mummify .venv __pycache__ .ipynb_checkpoints mummify.log Procfile requirements.txt runtime.txt pipe.pkl Dockerfile",
shell=True,
)
if __name__ == "__main__":
clean_up()
| [
"[email protected]"
] | |
60567d1fa526b9f96876073dc18939f56fe55711 | 7be4f595d555614a28f708c1ba7edda321f0cf30 | /practice/algorithms/greedy/jim_and_the_orders/jim_and_the_orders.py | c2536053bea15992893c3f2b2b86f41f2c9beaa8 | [] | no_license | orel1108/hackerrank | de31a2d31aaf8aeb58477d1f2738744bfe492555 | 55da1f3a94e8c28ed0f0dea3103e51774f0047de | refs/heads/master | 2021-04-09T17:38:25.112356 | 2017-01-22T11:21:19 | 2017-01-22T11:21:19 | 50,198,159 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 210 | py | #!/usr/bin/env python
n = int(raw_input().strip())
t = []
for CNT in range(n):
ti, di = map(int, raw_input().strip().split())
t.append((ti + di, CNT + 1))
for VAL in sorted(t):
print VAL[1],
| [
"[email protected]"
] | |
8e9d522a4bee0bcbaef63387c784c3826c71dc90 | 09e57dd1374713f06b70d7b37a580130d9bbab0d | /data/p3BR/R1/benchmark/startQiskit_Class337.py | 4a5caf7314c5b9433c1c0f9694093bc128700ae8 | [
"BSD-3-Clause"
] | permissive | UCLA-SEAL/QDiff | ad53650034897abb5941e74539e3aee8edb600ab | d968cbc47fe926b7f88b4adf10490f1edd6f8819 | refs/heads/main | 2023-08-05T04:52:24.961998 | 2021-09-19T02:56:16 | 2021-09-19T02:56:16 | 405,159,939 | 2 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,360 | py | # qubit number=3
# total number=60
import numpy as np
from qiskit import QuantumCircuit, execute, Aer, QuantumRegister, ClassicalRegister, transpile, BasicAer, IBMQ
from qiskit.visualization import plot_histogram
from typing import *
from pprint import pprint
from math import log2
from collections import Counter
from qiskit.test.mock import FakeVigo, FakeYorktown
kernel = 'circuit/bernstein'
def bitwise_xor(s: str, t: str) -> str:
length = len(s)
res = []
for i in range(length):
res.append(str(int(s[i]) ^ int(t[i])))
return ''.join(res[::-1])
def bitwise_dot(s: str, t: str) -> str:
length = len(s)
res = 0
for i in range(length):
res += int(s[i]) * int(t[i])
return str(res % 2)
def build_oracle(n: int, f: Callable[[str], str]) -> QuantumCircuit:
# implement the oracle O_f
# NOTE: use multi_control_toffoli_gate ('noancilla' mode)
# https://qiskit.org/documentation/_modules/qiskit/aqua/circuits/gates/multi_control_toffoli_gate.html
# https://quantumcomputing.stackexchange.com/questions/3943/how-do-you-implement-the-toffoli-gate-using-only-single-qubit-and-cnot-gates
# https://quantumcomputing.stackexchange.com/questions/2177/how-can-i-implement-an-n-bit-toffoli-gate
controls = QuantumRegister(n, "ofc")
target = QuantumRegister(1, "oft")
oracle = QuantumCircuit(controls, target, name="Of")
for i in range(2 ** n):
rep = np.binary_repr(i, n)
if f(rep) == "1":
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
oracle.mct(controls, target[0], None, mode='noancilla')
for j in range(n):
if rep[j] == "0":
oracle.x(controls[j])
# oracle.barrier()
# oracle.draw('mpl', filename=(kernel + '-oracle.png'))
return oracle
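# Added note (sketch, not part of the generated benchmark): build_oracle
# returns a standard bit-flip oracle O_f|x>|y> = |x>|y ^ f(x)>. Applied to the
# ancilla that build_circuit prepares in |-> (X followed by H on the last
# qubit), it acts as a phase oracle, which is what the Bernstein-Vazirani
# measurement relies on.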
def build_circuit(n: int, f: Callable[[str], str]) -> QuantumCircuit:
# implement the Bernstein-Vazirani circuit
zero = np.binary_repr(0, n)
b = f(zero)
# initial n + 1 bits
input_qubit = QuantumRegister(n+1, "qc")
classicals = ClassicalRegister(n, "qm")
prog = QuantumCircuit(input_qubit, classicals)
# inverse last one (can be omitted if using O_f^\pm)
prog.x(input_qubit[n])
# circuit begin
prog.h(input_qubit[1]) # number=1
prog.rx(-0.09738937226128368,input_qubit[2]) # number=2
prog.h(input_qubit[1]) # number=33
prog.y(input_qubit[2]) # number=56
prog.cz(input_qubit[2],input_qubit[1]) # number=34
prog.h(input_qubit[1]) # number=35
prog.h(input_qubit[1]) # number=3
# apply H to get superposition
for i in range(n):
prog.h(input_qubit[i])
prog.h(input_qubit[n])
prog.barrier()
# apply oracle O_f
oracle = build_oracle(n, f)
prog.append(
oracle.to_gate(),
[input_qubit[i] for i in range(n)] + [input_qubit[n]])
# apply H back (QFT on Z_2^n)
for i in range(n):
prog.h(input_qubit[i])
prog.barrier()
# measure
return prog
def get_statevector(prog: QuantumCircuit) -> Any:
state_backend = Aer.get_backend('statevector_simulator')
statevec = execute(prog, state_backend).result()
quantum_state = statevec.get_statevector()
qubits = round(log2(len(quantum_state)))
quantum_state = {
"|" + np.binary_repr(i, qubits) + ">": quantum_state[i]
for i in range(2 ** qubits)
}
return quantum_state
def evaluate(backend_str: str, prog: QuantumCircuit, shots: int, b: str) -> Any:
# Q: which backend should we use?
# get state vector
quantum_state = get_statevector(prog)
# get simulate results
# provider = IBMQ.load_account()
# backend = provider.get_backend(backend_str)
# qobj = compile(prog, backend, shots)
# job = backend.run(qobj)
# job.result()
backend = Aer.get_backend(backend_str)
# transpile/schedule -> assemble -> backend.run
results = execute(prog, backend, shots=shots).result()
counts = results.get_counts()
a = Counter(counts).most_common(1)[0][0][::-1]
return {
"measurements": counts,
# "state": statevec,
"quantum_state": quantum_state,
"a": a,
"b": b
}
def bernstein_test_1(rep: str):
"""011 . x + 1"""
a = "011"
b = "1"
return bitwise_xor(bitwise_dot(a, rep), b)
def bernstein_test_2(rep: str):
"""000 . x + 0"""
a = "000"
b = "0"
return bitwise_xor(bitwise_dot(a, rep), b)
def bernstein_test_3(rep: str):
"""111 . x + 1"""
a = "111"
b = "1"
return bitwise_xor(bitwise_dot(a, rep), b)
if __name__ == "__main__":
n = 2
a = "11"
b = "1"
f = lambda rep: \
bitwise_xor(bitwise_dot(a, rep), b)
prog = build_circuit(n, f)
    sample_shot = 4000
writefile = open("../data/startQiskit_Class337.csv", "w")
# prog.draw('mpl', filename=(kernel + '.png'))
backend = BasicAer.get_backend('statevector_simulator')
circuit1 = transpile(prog, FakeYorktown())
circuit1.h(qubit=2)
circuit1.x(qubit=3)
info = execute(circuit1,backend=backend, shots=sample_shot).result().get_counts()
print(info, file=writefile)
print("results end", file=writefile)
print(circuit1.depth(), file=writefile)
print(circuit1, file=writefile)
writefile.close()
| [
"[email protected]"
] | |
ef6e31cf1ee34575fe9a8110458a4b71ff5f2aef | 6994917b9d22e9e15e578a0e5c75dcf4ce3cb022 | /formularios/migrations/0002_auto_20200629_1106.py | ec722847b976eba1bbb468b1210f95fa3a236b6e | [] | no_license | linikerunk/rh-ticket | 59ad6411a3d08c90c2704b37ba9bba67ea7f7754 | bd8edd3eb1ea6cfe04fee03a4f41049a84c1e14a | refs/heads/master | 2023-01-06T21:25:06.851369 | 2020-10-29T20:32:53 | 2020-10-29T20:32:53 | 250,346,547 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 568 | py | # Generated by Django 2.2.9 on 2020-06-29 14:06
from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('formularios', '0001_initial'),
    ]

    operations = [
        migrations.AlterField(
            model_name='justificativaausencia',
            name='ticket',
            field=models.OneToOneField(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name='justificativa_ausencia', to='chamados.Ticket', verbose_name='Ticket'),
        ),
    ]
| [
"[email protected]"
] | |
0c279e45341ed15a542b30b7c9517f1c6b235dc4 | e6dab5aa1754ff13755a1f74a28a201681ab7e1c | /.parts/lib/PyAMF-0.6.1/pyamf/remoting/amf0.py | d3e90344ab10b0c668e00ae113df1b052165e4a5 | [] | no_license | ronkagan/Euler_1 | 67679203a9510147320f7c6513eefd391630703e | 022633cc298475c4f3fd0c6e2bde4f4728713995 | refs/heads/master | 2021-01-06T20:45:52.901025 | 2014-09-06T22:34:16 | 2014-09-06T22:34:16 | 23,744,842 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 89 | py | /home/action/.parts/packages/googleappengine/1.9.4/lib/PyAMF-0.6.1/pyamf/remoting/amf0.py | [
"[email protected]"
] | |
0e2eae513dca547791fbedea008e887c86f7689a | 3d9c7d046674c0fca73df7dd0d0281be7600d6c3 | /bases/renderers/rect_renderer.py | c179c4649b1253a742b051642c1cfb8baf202a22 | [] | no_license | qhuydtvt/micro-war | 9a5fef988ca229278e0b93319fd2be10a0b758b0 | 7151905277089b12c7312368ff85aba040e6fe5d | refs/heads/master | 2020-03-09T20:36:43.280626 | 2018-06-18T22:55:12 | 2018-06-18T22:55:12 | 128,989,368 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 433 | py | from bases.renderers.colors import RED
import pygame
class RectRenderer:
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.color = RED
        self.line_width = 1

    def draw(self, screen, position):
        rect = (position.x - self.width / 2, position.y - self.height / 2, self.width, self.height)
        pygame.draw.rect(screen, self.color, rect, self.line_width)
| [
"[email protected]"
] | |
3be1cc1cbe3da3822f1507033315a0b55fd4c5ac | 04b1803adb6653ecb7cb827c4f4aa616afacf629 | /third_party/blink/web_tests/external/wpt/fetch/content-type/resources/content-type.py | 0b5e93b937c293250b33ae2cb2e5cbe43e381a86 | [
"LGPL-2.0-or-later",
"LicenseRef-scancode-warranty-disclaimer",
"LGPL-2.1-only",
"GPL-1.0-or-later",
"GPL-2.0-only",
"LGPL-2.0-only",
"BSD-2-Clause",
"LicenseRef-scancode-other-copyleft",
"MIT",
"Apache-2.0",
"BSD-3-Clause"
] | permissive | Samsung/Castanets | 240d9338e097b75b3f669604315b06f7cf129d64 | 4896f732fc747dfdcfcbac3d442f2d2d42df264a | refs/heads/castanets_76_dev | 2023-08-31T09:01:04.744346 | 2021-07-30T04:56:25 | 2021-08-11T05:45:21 | 125,484,161 | 58 | 49 | BSD-3-Clause | 2022-10-16T19:31:26 | 2018-03-16T08:07:37 | null | UTF-8 | Python | false | false | 591 | py | def main(request, response):
    values = request.GET.get_list("value")
    content = request.GET.first("content", "<b>hi</b>\n")
    output = "HTTP/1.1 200 OK\r\n"
    output += "X-Content-Type-Options: nosniff\r\n"
    if "single_header" in request.GET:
        output += "Content-Type: " + ",".join(values) + "\r\n"
    else:
        for value in values:
            output += "Content-Type: " + value + "\r\n"
    output += "Content-Length: " + str(len(content)) + "\r\n"
    output += "\r\n"
    output += content
    response.writer.write(output)
    response.close_connection = True
| [
"[email protected]"
] | |
8cfce97e1d57c3a59f1770fd204e848d6bae7536 | 93092ee9f65e872ccb2826291cfdcaf3c3ae72c9 | /store/views.py | 3f7dde607142fabbe221fa82813c2e2d4497af20 | [] | no_license | SergioRodas/Django-ecommerce | 5893d791a6d8d9c953ed4d3389ce277c23b14001 | 258710dde069ddc9058d9f75d8218e4e14e9899d | refs/heads/main | 2023-08-15T18:09:36.431537 | 2021-10-24T23:52:33 | 2021-10-24T23:52:33 | 403,732,841 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,807 | py | from django.shortcuts import render
from django.http import JsonResponse
import json
import datetime
from .models import *
from .utils import cookieCart, cartData, guestOrder
def store(request):
    data = cartData(request)
    cartItems = data['cartItems']
    order = data['order']
    items = data['items']

    products = Product.objects.all()
    featured = products.all()[0:3]
    women = products.all()[3:7]
    news = products.all()[7:11]
    context = {'products': products, 'featured': featured, 'women': women, 'news': news, 'cartItems': cartItems}
    return render(request, 'store/store.html', context)


def shop(request):
    data = cartData(request)
    cartItems = data['cartItems']
    order = data['order']
    items = data['items']

    products = Product.objects.all()
    context = {'products': products, 'cartItems': cartItems}
    return render(request, 'store/shop.html', context)


def cart(request):
    data = cartData(request)
    cartItems = data['cartItems']
    order = data['order']
    items = data['items']

    context = {'items': items, 'order': order, 'cartItems': cartItems}
    return render(request, 'store/cart.html', context)


def checkout(request):
    data = cartData(request)
    cartItems = data['cartItems']
    order = data['order']
    items = data['items']

    context = {'items': items, 'order': order, 'cartItems': cartItems}
    return render(request, 'store/checkout.html', context)
def updateItem(request):
    data = json.loads(request.body)
    productId = data['productId']
    action = data['action']
    print('Action:', action)
    print('Product:', productId)

    customer = request.user.customer
    product = Product.objects.get(id=productId)
    order, created = Order.objects.get_or_create(customer=customer, complete=False)
    orderItem, created = OrderItem.objects.get_or_create(order=order, product=product)

    if action == 'add':
        orderItem.quantity = (orderItem.quantity + 1)
    elif action == 'remove':
        orderItem.quantity = (orderItem.quantity - 1)

    orderItem.save()

    if orderItem.quantity <= 0:
        orderItem.delete()

    return JsonResponse('Item was added', safe=False)
def processOrder(request):
    transaction_id = datetime.datetime.now().timestamp()
    data = json.loads(request.body)

    if request.user.is_authenticated:
        customer = request.user.customer
        order, created = Order.objects.get_or_create(customer=customer, complete=False)
    else:
        customer, order = guestOrder(request, data)

    total = float(data['form']['total'])
    order.transaction_id = transaction_id

    # only mark the order complete if the client-side total matches the cart total
    if total == order.get_cart_total:
        order.complete = True
    order.save()

    if order.shipping == True:
        ShippingAddress.objects.create(
            customer=customer,
            order=order,
            address=data['shipping']['address'],
            city=data['shipping']['city'],
            state=data['shipping']['state'],
            zipcode=data['shipping']['zipcode'],
        )

    return JsonResponse('Payment submitted..', safe=False) | [
"[email protected]"
] | |
7f7b017d2b6d812018f24c2aa3d1a7c4f61a0e3d | 7d2daf2e5cb120bf1bda2d2b86bbb8e6df7651b4 | /docs/0.7/_static/notebooks/first_steps.py | 2f1dd5a8b25a78e76f18263a5ce8ff7575987954 | [
"MIT"
] | permissive | contrera/gammapy-docs | d99a6c3fcd6946a58e9e0dfc976fbf0b924444f6 | 1bab4b3be807462449631363571c59d457a6b27f | refs/heads/master | 2020-03-31T03:16:01.956605 | 2018-10-05T07:59:08 | 2018-10-05T07:59:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 18,639 | py |
# coding: utf-8
# # Getting started with Gammapy
#
# ## Introduction
#
# This is a getting started tutorial for [Gammapy](http://docs.gammapy.org/).
#
# In this tutorial we will use the [Second Fermi-LAT Catalog of High-Energy Sources (2FHL) catalog](http://fermi.gsfc.nasa.gov/ssc/data/access/lat/2FHL/), corresponding event list and images to learn how to work with some of the central Gammapy data structures.
#
# We will cover the following topics:
#
# * **Sky images**
# * We will learn how to handle image based data with gammapy using a Fermi-LAT 2FHL example image. We will work with the following classes:
# - [gammapy.image.SkyImage](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html)
# - [astropy.coordinates.SkyCoord](http://astropy.readthedocs.io/en/latest/coordinates/index.html)
# - [numpy.ndarray](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html)
#
# * **Event lists**
# * We will learn how to handle event lists with Gammapy. Important for this are the following classes:
# - [gammapy.data.EventList](http://docs.gammapy.org/dev/api/gammapy.data.EventList.html)
# - [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table)
#
# * **Source catalogs**
# * We will show how to load source catalogs with Gammapy and explore the data using the following classes:
# - [gammapy.catalog.SourceCatalog](http://docs.gammapy.org/dev/api/gammapy.catalog.SourceCatalog.html), specifically [gammapy.catalog.SourceCatalog2FHL](http://docs.gammapy.org/dev/api/gammapy.catalog.SourceCatalog2FHL.html)
# - [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table)
#
# * **Spectral models and flux points**
# * We will pick an example source and show how to plot its spectral model and flux points. For this we will use the following classes:
# - [gammapy.spectrum.SpectralModel](http://docs.gammapy.org/dev/api/gammapy.spectrum.models.SpectralModel.html), specifically the [PowerLaw2](http://docs.gammapy.org/dev/api/gammapy.spectrum.models.PowerLaw2.html) model.
# - [gammapy.spectrum.FluxPoints](http://docs.gammapy.org/dev/api/gammapy.spectrum.FluxPoints.html#gammapy.spectrum.FluxPoints)
# - [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table)
#
# If you're not yet familiar with the listed Astropy classes, maybe check out the [Astropy introduction for Gammapy users](astropy_introduction.ipynb) first.
# ## Setup
#
# **Important**: to run this tutorial the environment variable `GAMMAPY_EXTRA` must be defined and point to the directory where the gammapy-extra repository is located on your machine. To check whether your setup is correct you can execute the following cell:
#
#
# In[1]:
import os
path = os.path.expandvars('$GAMMAPY_EXTRA/datasets')
if not os.path.exists(path):
raise Exception('gammapy-extra repository not found!')
else:
print('Great your setup is correct!')
# In case you encounter an error, you can un-comment and execute the following cell to continue. But we recommend to set up your environment correctly as described [here](http://docs.gammapy.org/dev/datasets/index.html#gammapy-extra) after you are done with this notebook.
# In[2]:
#os.environ['GAMMAPY_EXTRA'] = os.path.join(os.getcwd(), '..')
# Now we can continue with the usual IPython notebooks and Python imports:
# In[3]:
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
# In[4]:
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.visualization import simple_norm
# ## Sky images
#
# The central data structure to work with image based data in Gammapy is the [SkyImage](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html) class. It combines the raw data with world coordinate (WCS) information, FITS I/O functionality and convenience methods, that allow easy handling, processing and plotting of image based data.
#
# In this section we will learn how to:
#
# * Read sky images from FITS files
# * Smooth images
# * Plot images
# * Cutout parts from images
# * Reproject images to different WCS
#
# The `SkyImage` class is part of the [gammapy.image](http://docs.gammapy.org/dev/image/index.html) submodule. So we will start by importing it from there:
# In[5]:
from gammapy.image import SkyImage
# As a first example, we will read a FITS file from a prepared Fermi-LAT 2FHL dataset:
# In[6]:
vela_2fhl = SkyImage.read("$GAMMAPY_EXTRA/datasets/fermi_2fhl/fermi_2fhl_vela.fits.gz", hdu='COUNTS')
# As the FITS file `fermi_2fhl_vela.fits.gz` contains multiple image extensions and a `SkyImage` can only represent a single image, we explicitly specify to read the extension called 'Counts'. Let's print the image to get some basic information about it:
# In[7]:
print(vela_2fhl)
# The shape of the image is 320x180 pixel, the data unit is counts ('ct') and it is defined using a cartesian projection in galactic coordinates.
#
# Let's take a closer look at the `.data` attribute:
# In[8]:
vela_2fhl.data
# That looks familiar! It is just an *ordinary* 2-dimensional numpy array, which means you can apply any known numpy method to it:
# In[9]:
print('Total number of counts in the image: {:.0f}'.format(vela_2fhl.data.sum()))
# To show the image on the screen we can use the [SkyImage.show()](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html#gammapy.image.SkyImage.show) method. It basically calls [plt.imshow](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.imshow), passing the `vela_2fhl.data` attribute but in addition handles axis with world coordinates using [wcsaxes](https://wcsaxes.readthedocs.io/en/latest/) and defines some defaults for nicer plots (e.g. the colormap 'afmhot'):
# In[10]:
vela_2fhl.show()
# To make the structures in the image more visible we will smooth the data using a Gaussian kernel with a radius of 0.5 deg. Again `SkyImage.smooth()` is a wrapper around existing functionality from the scientific Python libraries. In this case it is Scipy's [gaussian_filter](https://docs.scipy.org/doc/scipy-0.16.1/reference/generated/scipy.ndimage.filters.gaussian_filter.html) method. For convenience the kernel shape can be specified as a string and the smoothing radius with a quantity. It returns again a `SkyImage` object that we can plot directly the same way we did above:
# In[11]:
# smooth counts image with gaussian kernel of 0.5 deg
vela_2fhl_smoothed = vela_2fhl.smooth(kernel='gauss', radius=0.5 * u.deg)
vela_2fhl_smoothed.show()
# The smoothed plot already looks much nicer, but still the image is rather large. As we are mostly interested in the inner part of the image, we will cut out a quadratic region of the size 9 deg x 9 deg around Vela. Therefore we use [SkyImage.cutout()](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html#gammapy.image.SkyImage.cutout), which wraps Astropy's [Cutout2D](http://docs.astropy.org/en/stable/api/astropy.nddata.Cutout2D.html). Again it returns a `SkyImage` object:
# In[12]:
# define center and size of the cutout region
center = SkyCoord(265.0, -2.0, unit='deg', frame='galactic')
size = 9.0 * u.deg
vela_2fhl_cutout = vela_2fhl_smoothed.cutout(center, size)
vela_2fhl_cutout.show()
# To make this exercise a bit more scientifically useful, we will load a second image containing WMAP data from the same region:
# In[13]:
vela_wmap = SkyImage.read("$GAMMAPY_EXTRA/datasets/images/Vela_region_WMAP_K.fits")
# define a norm to stretch the data, so it is better visible
norm = simple_norm(vela_wmap.data, stretch='sqrt', max_percent=99.9)
vela_wmap.show(cmap='viridis', norm=norm)
# In order to make the structures in the data better visible we used the [simple_norm()](http://docs.astropy.org/en/stable/api/astropy.visualization.mpl_normalize.simple_norm.html#astropy.visualization.mpl_normalize.simple_norm) method, which allows stretching the data for plotting. This is very similar to the methods that e.g. `ds9` provides. In addition we used a different colormap called 'viridis'. An overview of different colormaps can be found [here](http://matplotlib.org/examples/color/colormaps_reference.html).
#
# Now let's check the details of the WMAP image:
# In[14]:
print(vela_wmap)
# As you can see it is defined using a tangential projection and ICRS coordinates, which is different from the projection used for the `vela_2fhl` image. To compare both images we have to reproject one image to the WCS of the other. This can be achieved with the [SkyImage.reproject()](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html#gammapy.image.SkyImage.reproject) method:
# In[15]:
# reproject WMAP image
vela_wmap_reprojected = vela_wmap.reproject(vela_2fhl)
# cutout part we're interested in
vela_wmap_reprojected_cutout = vela_wmap_reprojected.cutout(center, size)
vela_wmap_reprojected_cutout.show(cmap='viridis', norm=norm)
# Finally we will combine both images in single plot, by plotting WMAP contours on top of the smoothed Fermi-LAT 2FHL image:
#
# In[16]:
fig, ax, _ = vela_2fhl_cutout.plot()
ax.contour(vela_wmap_reprojected_cutout.data, cmap='Blues')
# ### Exercises
#
# * Add a marker and circle at the Vela pulsar position (you can find examples in the WCSAxes [documentation](https://wcsaxes.readthedocs.io/en/latest/overlays.html)).
# * Find the maximum brightness location in the WMAP image. The methods [np.argmax()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html) and [SkyImage.wcs_pixel_to_skycoord()](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html#gammapy.image.SkyImage.wcs_pixel_to_skycoord) might be helpful. Try to identify the source.
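# A minimal sketch for the first exercise (added here; the pulsar position
# lookup and the marker style are assumptions, not part of the tutorial):
# In[ ]:
# vela_pulsar = SkyCoord.from_name('Vela pulsar')  # needs internet access
# fig, ax, _ = vela_2fhl_cutout.plot()
# ax.scatter(vela_pulsar.ra.deg, vela_pulsar.dec.deg,
#            transform=ax.get_transform('icrs'), marker='+', color='white')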
#
#
#
# ## Event lists
#
# Almost any high-level gamma-ray data analysis starts with the raw measured counts data, which is stored in event lists. In Gammapy event lists are represented by the [gammapy.data.EventList](http://docs.gammapy.org/dev/api/gammapy.data.EventList.html) class.
#
# In this section we will learn how to:
#
# * Read event lists from FITS files
# * Access and work with the `EventList` attributes such as `.table` and `.energy`
# * Filter events lists using convenience methods
#
# Let's start with the import from the [gammapy.data](http://docs.gammapy.org/dev/data/index.html) submodule:
# In[17]:
from gammapy.data import EventList
# Very similar to the `SkyImage` class an event list can be created, by passing a filename to the `.read()` method:
# In[18]:
events_2fhl = EventList.read('$GAMMAPY_EXTRA/datasets/fermi_2fhl/2fhl_events.fits.gz')
# This time the actual data is stored as an [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table) object. It can be accessed with `.table` attribute:
# In[19]:
events_2fhl.table
# Let's try to find the total number of events contained in the list. This doesn't work:
#
# In[20]:
print('Total number of events: {}'.format(len(events_2fhl)))  # sketch of the intended failing call: EventList itself does not define len()
# Because Gammapy objects don't redefine properties that are accessible from the underlying Astropy or Numpy data object.
#
# So in this case of course we can directly use the `.table` attribute to find the total number of events:
# In[21]:
print('Total number of events: {}'.format(len(events_2fhl.table)))
# And we can access any other attribute of the `Table` object as well:
# In[22]:
events_2fhl.table.colnames
# For convenience we can access the most important event parameters as properties on the `EventList` objects. The attributes will return corresponding Astropy objects to represent the data, such as [astropy.units.Quantity](http://docs.astropy.org/en/stable/api/astropy.units.Quantity.html#astropy.units.Quantity), [astropy.coordinates.SkyCoord](http://docs.astropy.org/en/stable/api/astropy.coordinates.SkyCoord.html) or [astropy.time.Time](http://docs.astropy.org/en/stable/api/astropy.time.Time.html#astropy.time.Time) objects:
# In[23]:
events_2fhl.energy.to('GeV')
# In[24]:
events_2fhl.galactic
#events_2fhl.radec
# In[25]:
events_2fhl.time
# In addition `EventList` provides convenience methods to filter the event lists. One possible use case is to find the highest energy event within a radius of 0.5 deg around the vela position:
# In[26]:
# select all events within a radius of 0.5 deg around center
events_vela_2fhl = events_2fhl.select_sky_cone(center=center, radius=0.5 * u.deg)
# sort events by energy
events_vela_2fhl.table.sort('ENERGY')
# and show highest energy photon
events_vela_2fhl.energy[-1].to('GeV')
# ### Exercises
#
# * Make a counts energy spectrum for the galactic center region, within a radius of 10 deg.
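# One possible sketch for this exercise (added; the binning choice is an assumption):
# In[ ]:
# gc_events = events_2fhl.select_sky_cone(
#     center=SkyCoord(0, 0, unit='deg', frame='galactic'), radius=10 * u.deg)
# plt.hist(gc_events.energy.to('GeV').value, bins=np.logspace(1.7, 3.3, 20))
# plt.loglog()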
# ## Source catalogs
#
# Gammapy provides a convenient interface to access and work with catalog based data.
#
# In this section we will learn how to:
#
# * Load builtins catalogs from [gammapy.catalog](http://docs.gammapy.org/dev/catalog/index.html)
# * Sort and index the underlying Astropy tables
# * Access data from individual sources
#
# Let's start with importing the 2FHL catalog object from the [gammapy.catalog](http://docs.gammapy.org/dev/catalog/index.html) submodule:
# In[27]:
from gammapy.catalog import SourceCatalog2FHL
# First we initialize the Fermi-LAT 2FHL catalog and directly take a look at the `.table` attribute:
# In[28]:
fermi_2fhl = SourceCatalog2FHL('$GAMMAPY_EXTRA/datasets/catalogs/fermi/gll_psch_v08.fit.gz')
fermi_2fhl.table
# This looks very familiar again. The data is just stored as an [astropy.table.Table](http://docs.astropy.org/en/stable/api/astropy.table.Table.html#astropy.table.Table) object. We have all the methods and attributes of the `Table` object available. E.g. we can sort the underlying table by `TS` to find the top 5 most significant sources:
#
#
# In[29]:
# sort table by TS
fermi_2fhl.table.sort('TS')
# invert the order to find the highest values and take the top 5
top_five_TS_2fhl = fermi_2fhl.table[::-1][:5]
# print the top five significant sources with association and source class
top_five_TS_2fhl[['Source_Name', 'ASSOC', 'CLASS']]
# If you are interested in the data of an individual source you can access the information from catalog using the name of the source or any alias source name that is defined in the catalog:
# In[30]:
mkn_421_2fhl = fermi_2fhl['2FHL J1104.4+3812']
# or use any alias source name that is defined in the catalog
mkn_421_2fhl = fermi_2fhl['Mkn 421']
print(mkn_421_2fhl.data['TS'])
# ### Exercises
#
# * Try to load the Fermi-LAT 3FHL catalog and check the total number of sources it contains.
# * Select all the sources from the 2FHL catalog which are contained in the Vela region. Add markers for all these sources and try to add labels with the source names. The methods [SkyImage.contains()](http://docs.gammapy.org/dev/api/gammapy.image.SkyImage.html#gammapy.image.SkyImage.contains) and [ax.text()](http://matplotlib.org/api/_as_gen/matplotlib.axes.Axes.text.html#matplotlib.axes.Axes.text) might be helpful.
# * Try to find the source class of the object at position ra=68.6803, dec=9.3331
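# A sketch for the last exercise (added; assumes the 2FHL table provides
# RAJ2000 / DEJ2000 columns):
# In[ ]:
# target = SkyCoord(ra=68.6803, dec=9.3331, unit='deg')
# positions = SkyCoord(fermi_2fhl.table['RAJ2000'], fermi_2fhl.table['DEJ2000'], unit='deg')
# idx = positions.separation(target).argmin()
# print(fermi_2fhl.table['CLASS'][idx])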
#
# ## Spectral models and flux points
#
# In the previous section we learned how access basic data from individual sources in the catalog. Now we will go one step further and explore the full spectral information of sources. We will learn how to:
#
# * Plot spectral models
# * Compute integral and energy fluxes
# * Read and plot flux points
#
# As a first example we will start with the Crab Nebula:
# In[31]:
crab_2fhl = fermi_2fhl['Crab']
print(crab_2fhl.spectral_model)
# The `crab_2fhl.spectral_model` is an instance of the [gammapy.spectrum.models.PowerLaw2](http://docs.gammapy.org/dev/api/gammapy.spectrum.models.PowerLaw2.html#gammapy.spectrum.models.PowerLaw2) model, with the parameter values and errors taken from the 2FHL catalog.
#
# Let's plot the spectral model in the energy range between 50 GeV and 2000 GeV:
# In[32]:
ax_crab_2fhl = crab_2fhl.spectral_model.plot(
energy_range=[50, 2000] * u.GeV, energy_power=0)
# We assign the return axes object to variable called `ax_crab_2fhl`, because we will re-use it later to plot the flux points on top.
#
# To compute the differential flux at 100 GeV we can simply call the model like normal Python function and convert to the desired units:
# In[33]:
crab_2fhl.spectral_model(100 * u.GeV).to('cm-2 s-1 GeV-1')
# Next we can compute the integral flux of the Crab between 50 GeV and 2000 GeV:
# In[34]:
crab_2fhl.spectral_model.integral(
emin=50 * u.GeV, emax=2000 * u.GeV,
).to('cm-2 s-1')
# We can easily convince ourself, that it corresponds to the value given in the Fermi-LAT 2FHL catalog:
# In[35]:
crab_2fhl.data['Flux50']
# In addition we can compute the energy flux between 50 GeV and 2000 GeV:
# In[36]:
crab_2fhl.spectral_model.energy_flux(
emin=50 * u.GeV, emax=2000 * u.GeV,
).to('erg cm-2 s-1')
# Next we will access the flux points data of the Crab:
# In[37]:
print(crab_2fhl.flux_points)
# If you want to learn more about the different flux point formats you can read the specification [here](http://gamma-astro-data-formats.readthedocs.io/en/latest/results/flux_points/index.html#flux-points).
#
# No we can check again the underlying astropy data structure by accessing the `.table` attribute:
# In[38]:
crab_2fhl.flux_points.table
# Finally let's combine spectral model and flux points in a single plot and scale with `energy_power=2` to obtain the spectral energy distribution:
# In[39]:
ax = crab_2fhl.spectral_model.plot(
energy_range=[50, 2000] * u.GeV, energy_power=2,
)
crab_2fhl.flux_points.plot(ax=ax, energy_power=2)
# ### Exercises
#
# * Plot the spectral model and flux points for PKS 2155-304 for the 3FGL and 2FHL catalogs. Try to plot the error of the model (aka "Butterfly") as well. Note this requires the [uncertainties package](https://pythonhosted.org/uncertainties/) to be installed on your machine.
#
# ## What next?
#
# This was a quick introduction to some of the high-level classes in Astropy and Gammapy.
#
# * To learn more about those classes, go to the API docs (links are in the introduction at the top).
# * To learn more about other parts of Gammapy (e.g. Fermi-LAT and TeV data analysis), check out the other tutorial notebooks.
# * To see what's available in Gammapy, browse the [Gammapy docs](http://docs.gammapy.org/) or use the full-text search.
# * If you have any questions, ask on the mailing list.
| [
"[email protected]"
] | |
bedda5201e8709d0bee28aa1fadd7bd39ea7b648 | c9ddbdb5678ba6e1c5c7e64adf2802ca16df778c | /cases/synthetic/sieve-big-4869.py | e198d69d1cf2a1db2fc927fc579a2edee3123d55 | [] | no_license | Virtlink/ccbench-chocopy | c3f7f6af6349aff6503196f727ef89f210a1eac8 | c7efae43bf32696ee2b2ee781bdfe4f7730dec3f | refs/heads/main | 2023-04-07T15:07:12.464038 | 2022-02-03T15:42:39 | 2022-02-03T15:42:39 | 451,969,776 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 31,741 | py | # A resizable list of integers
class Vector(object):
items: [int] = None
size: int = 0
def __init__(self:"Vector"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector", idx: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector") -> int:
return self.size
# A resizable list of integers
class Vector2(object):
items: [int] = None
items2: [int] = None
size: int = 0
size2: int = 0
def __init__(self:"Vector2"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector2") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector2") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector2") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector2") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector2", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector2", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector2", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector2", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector2", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector2", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector2", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector2", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector2") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector2") -> int:
return self.size
# A resizable list of integers
class Vector3(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
def __init__(self:"Vector3"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector3") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector3") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector3") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector3", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector3", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector3", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector3", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector3", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector3", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector3", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector3", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector3", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector3", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector3", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector3", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector3") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector3") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector3") -> int:
return self.size
# A resizable list of integers
class Vector4(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
items4: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
size4: int = 0
def __init__(self:"Vector4"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity4(self:"Vector4") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity4(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector4", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector4", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector4", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append4(self:"Vector4", item: int, item2: int, item3: int, item4: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector4", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector4", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector4", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all4(self:"Vector4", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector4", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector4", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector4", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at4(self:"Vector4", idx: int, idx2: int, idx3: int, idx4: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector4", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector4", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector4", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get4(self:"Vector4", idx: int, idx2: int, idx3: int, idx4: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length4(self:"Vector4") -> int:
return self.size
# A resizable list of integers
class Vector5(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
items4: [int] = None
items5: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
size4: int = 0
size5: int = 0
def __init__(self:"Vector5"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity4(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity5(self:"Vector5") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity4(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity5(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector5", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector5", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
        self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector5", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append4(self:"Vector5", item: int, item2: int, item3: int, item4: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append5(self:"Vector5", item: int, item2: int, item3: int, item4: int, item5: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector5", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector5", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all4(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all5(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int], new_items5: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
item5:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector5", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector5", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector5", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at4(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at5(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int, idx5: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector5", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector5", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector5", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get4(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get5(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int, idx5: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length4(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length5(self:"Vector5") -> int:
return self.size
# A faster (but more memory-consuming) implementation of vector
class DoublingVector(Vector):
doubling_limit:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector2(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector2") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector2") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector3(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector4(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
doubling_limit4:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity4(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector5(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
doubling_limit4:int = 1000
doubling_limit5:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity4(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity5(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Makes a vector in the range [i, j)
def vrange(i:int, j:int) -> Vector:
v:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange2(i:int, j:int, i2:int, j2:int) -> Vector:
v:Vector = None
v2:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange3(i:int, j:int, i2:int, j2:int, i3:int, j3:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange4(i:int, j:int, i2:int, j2:int, i3:int, j3:int, i4:int, j4:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange5(i:int, j:int, i2:int, j2:int, i3:int, j3:int, i4:int, j4:int, i5:int, j5:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v5:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
# Sieve of Eratosthenes (not really)
def sieve(v:Vector) -> object:
i:int = 0
j:int = 0
k:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve2(v:Vector, v2:Vector) -> object:
i:int = 0
i2:int = 0
j:int = 0
j2:int = 0
k:int = 0
k2:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve3(v:Vector, v2:Vector, v3:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
j:int = 0
j2:int = 0
j3:int = 0
k:int = 0
k2:int = 0
k3:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve4(v:Vector, v2:Vector, v3:Vector, v4:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
j:int = 0
j2:int = 0
j3:int = 0
j4:int = 0
k:int = 0
k2:int = 0
k3:int = 0
k4:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve5(v:Vector, v2:Vector, v3:Vector, v4:Vector, v5:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
i5:int = 0
j:int = 0
j2:int = 0
j3:int = 0
j4:int = 0
j5:int = 0
k:int = 0
k2:int = 0
k3:int = 0
k4:int = 0
k5:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
# Input parameter
n:int = 50
n2:int = 50
n3:int = 50
n4:int = 50
n5:int = 50
# Data
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v5:Vector = None
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
i5:int = 0
# Crunch
v = vrange(2, n)
v2 = vrange(2, n)
v3 = vrange(2, n)
v4 = vrange(2, n)
v5 = vrange(2, n)
sieve(v)
# Print
while i < v.length():
print(v.get(i))
i = i + 1
| [
"[email protected]"
] | |
32473ce729fccc6e0b67e307e604207b6a731331 | b27913c7372eabc5d736bcec58fd162056c575aa | /venv/Scripts/pip-script.py | e80ef915ca35c04d775fbf5a7fb3365ead6dfc31 | [] | no_license | bazhenov4job/soft_test | b39847e3483d45d34b99bf9302bcd2a360a48eb5 | 556a7b0e8dd9d4068a3227b83952c748d7e444e7 | refs/heads/master | 2020-12-22T21:30:17.133067 | 2020-02-02T16:56:44 | 2020-02-02T16:56:44 | 236,934,414 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 395 | py | #!E:\SoftLogic\soft_test\venv\Scripts\python.exe
# EASY-INSTALL-ENTRY-SCRIPT: 'pip==10.0.1','console_scripts','pip'
__requires__ = 'pip==10.0.1'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pip==10.0.1', 'console_scripts', 'pip')()
)
| [
"[email protected]"
] | |
aadbdd113e9ac3d93dad203d5549f454104cb73f | 3bb662400dcd168c66f0ccf7b76c086508721408 | /Nuisances/LeptonScale/samples.py | ae70a5614295d55d982f987dd2d34af9ece27c7d | [] | no_license | amassiro/PlotsConfigurationsCMSDAS2019Beijing | 4dea76368e2af18e40b9342f2d9863518993be3a | 04733d5429c93f073294a942828811891984a2de | refs/heads/master | 2020-09-16T12:40:39.425764 | 2019-12-09T03:13:21 | 2019-12-09T03:13:21 | 223,772,812 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,858 | py | import os
from LatinoAnalysis.Tools.commonTools import getSampleFiles, getBaseW, addSampleWeight
#
# helper function to simplify sample-file lookups later ...
#
def nanoGetSampleFiles(inputDir, sample):
return getSampleFiles(inputDir, sample, True, 'nanoLatino_')
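# Illustrative call (sample name taken from this file's own usage below):
#   nanoGetSampleFiles(mcDirectory, 'TTTo2L2Nu')
# i.e. getSampleFiles(...) restricted to the 'nanoLatino_' filename prefix;
# the exact list of tree files returned depends on getSampleFiles' conventions.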
# samples
try:
len(samples)
except NameError:
import collections
samples = collections.OrderedDict()
#######################
### Skims and trees ###
#######################
treeBaseDir = '/eos/cms/store/group/phys_higgs/cmshww/amassiro/HWWNano'
mcReco = 'Fall2017_102X_nAODv4_Full2017v5'
mcSteps = 'MCl1loose2017v5__MCCorr2017v5__l2loose__l2tightOR2017v5'
mcDirectory = os.path.join(treeBaseDir, mcReco, mcSteps) # --> treeBaseDir/mcReco/mcSteps
################################################################
### Definition of common cuts (on lepton id/iso) and weights ###
################################################################
Nlep='2'
eleWP='mvaFall17V1Iso_WP90'
muWP='cut_Tight_HWWW'
LepWPCut = 'LepCut'+Nlep+'l__ele_'+eleWP+'__mu_'+muWP
LepWPweight = 'LepSF'+Nlep+'l__ele_'+eleWP+'__mu_'+muWP
fakeW = 'fakeW2l_ele_'+eleWP+'_mu_'+muWP
################################################
############ BASIC MC WEIGHTS ##################
################################################
XSWeight = 'XSWeight'
SFweight = 'SFweight'+Nlep+'l*'+LepWPweight+'*'+LepWPCut+'*PrefireWeight'
GenLepMatch = '(Alt$(Lepton_promptgenmatched[0]*Lepton_promptgenmatched[1], 0))'
#########################################
############ MC COMMON ##################
#########################################
#
# SFweight does not include btag weights
#
# -> genmatching is not required for Vg sample
#
mcCommonWeightNoMatch = 'XSWeight*' + SFweight + '*METFilter_MC'
mcCommonWeight = 'XSWeight*' + SFweight + '*' + GenLepMatch +'*METFilter_MC'
###########################################
############# BACKGROUNDS ###############
###########################################
###### DY #######
ptllDYW_NLO = '(((0.623108 + 0.0722934*gen_ptll - 0.00364918*gen_ptll*gen_ptll + 6.97227e-05*gen_ptll*gen_ptll*gen_ptll - 4.52903e-07*gen_ptll*gen_ptll*gen_ptll*gen_ptll)*(gen_ptll<45)*(gen_ptll>0) + 1*(gen_ptll>=45))*(abs(gen_mll-90)<3) + (abs(gen_mll-90)>3))'
ptllDYW_LO = '((0.632927+0.0456956*gen_ptll-0.00154485*gen_ptll*gen_ptll+2.64397e-05*gen_ptll*gen_ptll*gen_ptll-2.19374e-07*gen_ptll*gen_ptll*gen_ptll*gen_ptll+6.99751e-10*gen_ptll*gen_ptll*gen_ptll*gen_ptll*gen_ptll)*(gen_ptll>0)*(gen_ptll<100)+(1.41713-0.00165342*gen_ptll)*(gen_ptll>=100)*(gen_ptll<300)+1*(gen_ptll>=300))'
files = nanoGetSampleFiles(mcDirectory, 'DYJetsToLL_M-50') + \
nanoGetSampleFiles(mcDirectory, 'DYJetsToLL_M-10to50-LO')
samples['DY'] = {
'name': files,
'weight': mcCommonWeight,
'FilesPerJob': 8,
}
# SampleDictionary Sample weight
addSampleWeight(samples,'DY', 'DYJetsToLL_M-50', ptllDYW_NLO)
addSampleWeight(samples,'DY', 'DYJetsToLL_M-10to50-LO', ptllDYW_LO)
###### Top #######
files = nanoGetSampleFiles(mcDirectory, 'TTTo2L2Nu') + \
nanoGetSampleFiles(mcDirectory, 'ST_s-channel') + \
nanoGetSampleFiles(mcDirectory, 'ST_t-channel_antitop') + \
nanoGetSampleFiles(mcDirectory, 'ST_t-channel_top') + \
nanoGetSampleFiles(mcDirectory, 'ST_tW_antitop') + \
nanoGetSampleFiles(mcDirectory, 'ST_tW_top')
samples['top'] = {
'name': files,
'weight': mcCommonWeight,
'FilesPerJob': 1,
}
###### WW ########
samples['WW'] = {
'name': nanoGetSampleFiles(mcDirectory, 'WWTo2L2Nu'),
'weight': mcCommonWeight + '*nllW',
'FilesPerJob': 1
}
samples['WWewk'] = {
'name': nanoGetSampleFiles(mcDirectory, 'WpWmJJ_EWK_noTop'),
'weight': mcCommonWeight + '*(Sum$(abs(GenPart_pdgId)==6 || GenPart_pdgId==25)==0)', #filter tops and Higgs
'FilesPerJob': 2
}
files = nanoGetSampleFiles(mcDirectory, 'GluGluToWWToENEN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToENMN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToENTN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToMNEN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToMNMN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToMNTN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToTNEN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToTNMN') + \
nanoGetSampleFiles(mcDirectory, 'GluGluToWWToTNTN')
samples['ggWW'] = {
'name': files,
'weight': mcCommonWeight + '*1.53/1.4', # updating k-factor
'FilesPerJob': 10
}
######## Vg ########
files = nanoGetSampleFiles(mcDirectory, 'Wg_MADGRAPHMLM') + \
nanoGetSampleFiles(mcDirectory, 'ZGToLLG')
samples['Vg'] = {
'name': files,
'weight': mcCommonWeightNoMatch + '*!(Gen_ZGstar_mass > 0)',
'FilesPerJob': 10
}
addSampleWeight(samples, 'Vg', 'ZGToLLG', '(Sum$(GenPart_pdgId == 22 && TMath::Odd(GenPart_statusFlags) && GenPart_pt < 20.) == 0)')
######## VgS ########
files = nanoGetSampleFiles(mcDirectory, 'Wg_MADGRAPHMLM') + \
nanoGetSampleFiles(mcDirectory, 'ZGToLLG') + \
nanoGetSampleFiles(mcDirectory, 'WZTo3LNu_mllmin01')
samples['VgS'] = {
'name': files,
#'weight': mcCommonWeight + ' * (gstarLow * 0.94 + gstarHigh * 1.14)',
'weight': mcCommonWeight + ' * ((Gen_ZGstar_mass >0 && Gen_ZGstar_mass < 4) * 0.94 + (Gen_ZGstar_mass <0 || Gen_ZGstar_mass > 4) * 1.14)',
'FilesPerJob': 15,
'subsamples': {
'L': 'Gen_ZGstar_mass >0 && Gen_ZGstar_mass < 4',
'H': 'Gen_ZGstar_mass <0 || Gen_ZGstar_mass > 4'
}
}
addSampleWeight(samples, 'VgS', 'Wg_MADGRAPHMLM', '(Gen_ZGstar_mass > 0 && Gen_ZGstar_mass < 0.1)')
addSampleWeight(samples, 'VgS', 'ZGToLLG', '(Gen_ZGstar_mass > 0 && Gen_ZGstar_MomId == 22)*(Sum$(GenPart_pdgId == 22 && TMath::Odd(GenPart_statusFlags) && GenPart_pt < 20.) == 0)')
addSampleWeight(samples, 'VgS', 'WZTo3LNu_mllmin01', '(Gen_ZGstar_mass > 0.1)')
############ VZ ############
files = nanoGetSampleFiles(mcDirectory, 'ZZTo2L2Nu') + \
nanoGetSampleFiles(mcDirectory, 'ZZTo2L2Q') + \
nanoGetSampleFiles(mcDirectory, 'ZZTo4L') + \
nanoGetSampleFiles(mcDirectory, 'WZTo2L2Q')
samples['VZ'] = {
'name': files,
'weight': mcCommonWeight + '*1.11',
'FilesPerJob': 2
}
########## VVV #########
files = nanoGetSampleFiles(mcDirectory, 'ZZZ') + \
nanoGetSampleFiles(mcDirectory, 'WZZ') + \
nanoGetSampleFiles(mcDirectory, 'WWZ') + \
nanoGetSampleFiles(mcDirectory, 'WWW')
samples['VVV'] = {
'name': files,
'weight': mcCommonWeight
}
###########################################
############# SIGNALS ##################
###########################################
#### ggH -> WW
samples['ggH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'GluGluHToWWTo2L2NuPowheg_M125'),
'weight': mcCommonWeight,
#'weight': [mcCommonWeight, {'class': 'Weight2MINLO', 'args': '%s/src/LatinoAnalysis/Gardener/python/data/powheg2minlo/NNLOPS_reweight.root' % os.getenv('CMSSW_BASE')}],
'FilesPerJob': 4,
#'linesToAdd': ['.L %s/Differential/weight2MINLO.cc+' % configurations]
}
############ VBF H->WW ############
samples['qqH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'VBFHToWWTo2L2NuPowheg_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 3
}
############# ZH H->WW ############
samples['ZH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'HZJ_HToWWTo2L2Nu_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 1
}
samples['ggZH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'GluGluZH_HToWWTo2L2Nu_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 2
}
############ WH H->WW ############
samples['WH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'HWplusJ_HToWW_M125') + nanoGetSampleFiles(mcDirectory, 'HWminusJ_HToWW_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 2
}
############ ttH ############
samples['ttH_hww'] = {
'name': nanoGetSampleFiles(mcDirectory, 'ttHToNonbb_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 1
}
############ H->TauTau ############
samples['ggH_htt'] = {
'name': nanoGetSampleFiles(mcDirectory, 'GluGluHToTauTau_M125_ext1'),
'weight': mcCommonWeight,
'FilesPerJob': 1
}
samples['qqH_htt'] = {
'name': nanoGetSampleFiles(mcDirectory, 'VBFHToTauTau_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 2
}
samples['ZH_htt'] = {
'name': nanoGetSampleFiles(mcDirectory, 'HZJ_HToTauTau_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 2
}
samples['WH_htt'] = {
'name': nanoGetSampleFiles(mcDirectory, 'HWplusJ_HToTauTau_M125') + nanoGetSampleFiles(mcDirectory, 'HWminusJ_HToTauTau_M125'),
'weight': mcCommonWeight,
'FilesPerJob': 2
}
###########################################
| [
"[email protected]"
] | |
13f5b7dd1c8917cdf7aade9b6ea9ee975608d31b | 15978aacf0e44a890e36ff94c305aca5a056e5e8 | /10day--周六补/04-getpid和getppid.py | ec53cc565bba6bd0c33838ef3262a68ca235a100 | [] | no_license | ittoyou/1805_python_2 | ffbe613d893208b2454ef4f25cc2b8a9951ff047 | 1d6331a83598863042912bb26205d34417abed73 | refs/heads/master | 2020-03-24T13:58:12.276827 | 2018-07-27T07:58:57 | 2018-07-27T07:58:57 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 287 | py | import os
pid = os.fork()
if pid == 0:
print("老王")
print("我是子进程进程号是%d 我的父亲进程是%d"%(os.getpid(),os.getppid()))
else:
print("我是父进程我的进程号是%d 我的父亲的进程号是%d"%(os.getpid(),os.getppid()))
print("老宋")
| [
"[email protected]"
] | |
d17035de6b5911c40b4cb3b07529449ea464fef3 | ab5dfb7e6d14996b8340c5202ad49024d1f1ef02 | /grpclib/__init__.py | 0a3588e68466cf743000d59ab82de19a532ce2ce | [
"BSD-3-Clause"
] | permissive | SupperSpiderMan/grpclib | f4352fbbfcfc5a286e2174e964f51d0e7428d2ae | 89fbfd514f1f377a16d64c5a9732cf71090e0a7a | refs/heads/master | 2021-04-23T15:32:45.354275 | 2020-03-20T12:41:14 | 2020-03-20T13:05:19 | 249,936,393 | 1 | 0 | BSD-3-Clause | 2020-03-25T09:27:41 | 2020-03-25T09:27:40 | null | UTF-8 | Python | false | false | 132 | py | from .const import Status
from .exceptions import GRPCError
__version__ = '0.3.2rc1'
__all__ = (
'Status',
'GRPCError',
)
| [
"[email protected]"
] | |
8ac3a44f6b6bec67bac5320465fb81c6316968c2 | 63768dc92cde5515a96d774a32facb461a3bf6e9 | /jacket/api/compute/openstack/compute/schemas/migrate_server.py | b71561ae9ff7c244124debe5f8a8438ca30924d7 | [
"Apache-2.0"
] | permissive | ljZM33nd/jacket | 6fe9156f6f5789e5c24425afa7ce9237c302673d | d7ad3147fcb43131098c2a5210847634ff5fb325 | refs/heads/master | 2023-04-16T11:02:01.153751 | 2016-11-15T02:48:12 | 2016-11-15T02:48:12 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,752 | py | # Copyright 2014 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
from jacket.api.compute.validation import parameter_types
host = copy.deepcopy(parameter_types.hostname)
host['type'] = ['string', 'null']
migrate_live = {
'type': 'object',
'properties': {
'os-migrateLive': {
'type': 'object',
'properties': {
'block_migration': parameter_types.boolean,
'disk_over_commit': parameter_types.boolean,
'host': host
},
'required': ['block_migration', 'disk_over_commit', 'host'],
'additionalProperties': False,
},
},
'required': ['os-migrateLive'],
'additionalProperties': False,
}
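# For illustration, a request body that validates against migrate_live
# (the host value below is a made-up example):
#   {"os-migrateLive": {"block_migration": false,
#                       "disk_over_commit": false,
#                       "host": "target-compute-node"}}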
block_migration = copy.deepcopy(parameter_types.boolean)
block_migration['enum'].append('auto')
migrate_live_v2_25 = copy.deepcopy(migrate_live)
del migrate_live_v2_25['properties']['os-migrateLive']['properties'][
'disk_over_commit']
migrate_live_v2_25['properties']['os-migrateLive']['properties'][
'block_migration'] = block_migration
migrate_live_v2_25['properties']['os-migrateLive']['required'] = (
['block_migration', 'host'])
| [
"[email protected]"
] | |
fb5b8741a7f3ab1b0292653c1e456d52c6fbf297 | 27acd9eeb0d2b9b6326cc0477e7dbb84341e265c | /test/vraag4/src/warmste-week/44.py | d719315393ffa6bef0b3edfc9fb3c08bf9786cf3 | [] | no_license | VerstraeteBert/algos-ds | e0fe35bc3c5b7d8276c07250f56d3719ecc617de | d9215f11cdfa1a12a3b19ade3b95fa73848a636c | refs/heads/master | 2021-07-15T13:46:58.790446 | 2021-02-28T23:28:36 | 2021-02-28T23:28:36 | 240,883,220 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 249 | py | def gift_inschrijven(tup,woordenboek):
    klas, bedrag = tup
if klas in woordenboek:
        woordenboek[klas] += bedrag
else:
woordenboek[klas]=bedrag
return woordenboek | [
"[email protected]"
] | |
e5787ad9acf58585924c86ca9994a3c142f6c94b | 060ce17de7b5cdbd5f7064d1fceb4ded17a23649 | /fn_cisco_umbrella_inv/fn_cisco_umbrella_inv/components/umbrella_dns_rr_hist.py | e69d2201b06fc738b42f948cd0c24e135de5bb51 | [
"MIT"
] | permissive | ibmresilient/resilient-community-apps | 74bbd770062a22801cef585d4415c29cbb4d34e2 | 6878c78b94eeca407998a41ce8db2cc00f2b6758 | refs/heads/main | 2023-06-26T20:47:15.059297 | 2023-06-23T16:33:58 | 2023-06-23T16:33:58 | 101,410,006 | 81 | 107 | MIT | 2023-03-29T20:40:31 | 2017-08-25T14:07:33 | Python | UTF-8 | Python | false | false | 7,764 | py | # -*- coding: utf-8 -*-
# pragma pylint: disable=unused-argument, no-self-use
# (c) Copyright IBM Corp. 2010, 2018. All Rights Reserved.
""" Resilient functions component to run an Umbrella investigate Query - DNS RR History for an IP Address or domain
against a Cisco Umbrella server """
# Set up:
# Destination: a Queue named "umbrella_investigate".
# Manual Action: Execute a REST query against a Cisco Umbrella server.
import json
import logging
from datetime import datetime
from resilient_circuits import ResilientComponent, function, handler, StatusMessage, FunctionResult, FunctionError
from fn_cisco_umbrella_inv.util.resilient_inv import ResilientInv
from fn_cisco_umbrella_inv.util.helpers import validate_opts, validate_params, process_params, is_none, \
create_attachment, get_proxies
class FunctionComponent(ResilientComponent):
"""Component that implements Resilient function 'umbrella_dns_rr_hist' of
package fn_cisco_umbrella_inv.
The Function does a Cisco Umbrella Investigate query lookup takes the following parameters:
resource , dns_type
An example of a set of query parameter might look like the following:
umbinv_resource = "1.2.3.4" or resource = "example.com"
umbinv_dns_type = "A"
The Investigate Query will executes a REST call against the Cisco Umbrella Investigate server and returns a result
in JSON format similar to the following.
{'resource_name': 'cosmos.furnipict.com',
'query_execution_time': '2018-05-02 16:03:15',
"dns_rr_history": { "rrs": [ {
"rr": "www.example.com.",
"ttl": 86400,
"class": "IN",
"type": "A",
"name": "93.184.216.119"
}
...
...
],
"features": {
"rr_count": 19,
"ld2_count": 10,
"ld3_count": 14,
"ld2_1_count": 7,
"ld2_2_count": 11,
"div_ld2": 0.5263157894736842,
"div_ld3": 0.7368421052631579,
"div_ld2_1": 0.3684210526315789,
"div_ld2_2": 0.5789473684210527
}
}
}
"""
def __init__(self, opts):
"""constructor provides access to the configuration options"""
super(FunctionComponent, self).__init__(opts)
self.options = opts.get("fn_cisco_umbrella_inv", {})
validate_opts(self)
self.proxies = get_proxies(opts, self.options)
@handler("reload")
def _reload(self, event, opts):
"""Configuration options have changed, save new values"""
self.options = opts.get("fn_cisco_umbrella_inv", {})
self.proxies = get_proxies(opts, self.options)
@function("umbrella_dns_rr_hist")
def _umbrella_dns_rr_hist_function(self, event, *args, **kwargs):
"""Function: Resilient Function : Cisco Umbrella Investigate for DNS RR History for a IP, Type and Domain Name"""
        log = logging.getLogger(__name__)
        try:
            # Get the function parameters:
            umbinv_resource = kwargs.get("umbinv_resource")  # text
            umbinv_dns_type = self.get_select_param(kwargs.get("umbinv_dns_type"))  # select, values: "A", "NS", "MX", "TXT", "CNAME"
            incident_id = kwargs.get("incident_id")  # number
            artifact_type = kwargs.get("artifact_type")  # text
log.info("umbinv_resource: %s", umbinv_resource)
log.info("umbinv_dns_type: %s", umbinv_dns_type)
log.info("incident_id: %s", incident_id)
log.info("artifact_type: %s", artifact_type)
if is_none(umbinv_resource):
raise ValueError("Required parameter 'umbinv_resource' not set")
if is_none(umbinv_dns_type):
raise ValueError("Required parameter 'umbinv_dns_type' not set")
if is_none(incident_id):
raise ValueError("Required parameter 'incident_id' not set")
if is_none(artifact_type):
raise ValueError("Required parameter 'artifact_type' not set")
yield StatusMessage("Starting...")
res = None
res_type = None
process_result = {}
func_name = event.name
params = {"resource": umbinv_resource.strip(), "dns_type": umbinv_dns_type, "incident_id": incident_id,
"artifact_type": artifact_type}
validate_params(params)
process_params(params, process_result)
if "_res" not in process_result or "_res_type" not in process_result:
raise ValueError("Parameter 'umbinv_resource' was not processed correctly")
else:
res = process_result.pop("_res")
res_type = process_result.pop("_res_type")
if res_type != "domain_name" and res_type != "ip_address":
raise ValueError("Parameter 'umbinv_resource' was an incorrect type '{}', should be a 'domain name', "
"or an 'ip address'.".format(res_type))
api_token = self.options.get("api_token")
base_url = self.options.get("base_url")
rinv = ResilientInv(api_token, base_url, proxies=self.proxies)
yield StatusMessage("Running Cisco Investigate query...")
rtn = rinv.rr_history(res, query_type=umbinv_dns_type)
query_execution_time = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
if ("rrs" in rtn and len(rtn["rrs"]) == 0) \
or ("rrs_tf" in rtn and len(rtn["rrs_tf"]) == 0):
log.debug(json.dumps(rtn))
yield StatusMessage("No Results returned for resource '{}' with query type '{}'."
.format(res, umbinv_dns_type))
results = {}
elif ("rrs" in rtn and len(rtn["rrs"]) > int(self.options.get("results_limit", "200"))) \
or ("rrs_tf" in rtn and len(rtn["rrs_tf"]) > int(self.options.get("results_limit", "200"))):
att_report = create_attachment(self, func_name, res, params, rtn, query_execution_time)
# Add in "query_execution_time" and "ip_address" to result to facilitate post-processing.
results = {"over_limit": True, "resource_name": res, "att_name": att_report["name"],
"query_execution_time": query_execution_time}
yield StatusMessage("Returning 'dns_rr_history' results for resource '{0}' as attachment: {1} ."
.format(res,att_report["name"]))
else:
# Add in "query_execution_time" and "ip_address" to result to facilitate post-processing.
results = {"dns_rr_history": json.loads(json.dumps(rtn)), "resource_name": res,
"query_execution_time": query_execution_time}
yield StatusMessage("Returning 'dns_rr_history' results for resource '{}'.".format(res))
yield StatusMessage("Done...")
log.debug(json.dumps(results))
# Produce a FunctionResult with the results
yield FunctionResult(results)
except Exception:
log.exception("Exception in Resilient Function.")
yield FunctionError() | [
"[email protected]"
] | |
489bcb93be6187df718facdbb8397383188b4e5d | 024b434652c3e329bc7740c7b0c1776c7d6da54f | /cli/connect.py | 0c1a53603414dd086019f90c68d2b9f84431d230 | [
"MIT"
] | permissive | Barski-lab/cwl-airflow-cli | 3bffa85b519bc7ad645a798327af19fafb570639 | ede2928fb1cb161ae985d11a365ae2e6da4afa97 | refs/heads/master | 2021-01-20T22:05:42.415130 | 2017-09-16T19:38:25 | 2017-09-16T19:38:25 | 101,799,674 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 816 | py | #! /usr/bin/env python
import MySQLdb
import urlparse
class DbConnect:
"""Class to get access to DB"""
def __init__(self, conf):
self.sql_conn_url=urlparse.urlparse(conf.get('core', 'sql_alchemy_conn'))
self.connect()
def connect(self):
        self.conn = MySQLdb.connect(host=self.sql_conn_url.hostname,
                                    user=self.sql_conn_url.username,
                                    passwd=self.sql_conn_url.password,
                                    db=self.sql_conn_url.path.strip('/'),
                                    port=self.sql_conn_url.port)
self.conn.set_character_set('utf8')
self.cursor = self.conn.cursor()
def close(self):
try:
self.conn.close()
        except Exception:
pass
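# Minimal usage sketch (assumes an Airflow-style config object exposing
# get('core', 'sql_alchemy_conn'); names here are illustrative):
#   from airflow import configuration as conf
#   db = DbConnect(conf)
#   db.cursor.execute("SELECT 1")
#   db.close()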
| [
"[email protected]"
] | |
0f5cb35930f5208d39ec2592d907071525142e1b | 08d163710a17497d81e2bc4b53c77b2c787f2baa | /src/dataset/gtsrb.py | a649dab3498c19c716ce3d6906fe22a5fc8ad9e1 | [
"MIT"
] | permissive | ngonthier/dti-sprites | 7a9b80601b26bd760635d371500e34b15f409932 | 7c41b8bf15916f2ac14d6c0de795cd32e4689672 | refs/heads/main | 2023-04-30T21:51:01.183767 | 2021-05-14T11:56:14 | 2021-05-14T11:56:14 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,846 | py | from abc import ABCMeta
from functools import lru_cache
from PIL import Image
import numpy as np
from torch.utils.data.dataset import Dataset as TorchDataset
from torchvision.transforms import Compose, Resize, ToTensor
from utils import coerce_to_path_and_check_exist, get_files_from_dir, use_seed
from utils.image import IMG_EXTENSIONS
from utils.path import DATASETS_PATH
class GTSRB8Dataset(TorchDataset):
root = DATASETS_PATH
name = 'gtsrb8'
n_channels = 3
n_classes = 8
img_size = (28, 28)
def __init__(self, split, **kwargs):
self.data_path = coerce_to_path_and_check_exist(self.root / 'GTSRB')
self.split = split
input_files = get_files_from_dir(self.data_path / 'train', IMG_EXTENSIONS, sort=True, recursive=True)
labels = [int(f.parent.name) for f in input_files]
self.input_files = np.asarray(input_files)
self.labels = np.asarray(labels)
# We filter the dataset to keep 8 classes
good_labels = {k: i for i, k in enumerate([3, 7, 9, 11, 17, 18, 25, 35])}
mask = np.isin(self.labels, list(good_labels.keys()))
self.input_files = self.input_files[mask]
self.labels = np.asarray([good_labels[l] for l in self.labels[mask]])
N = len(self.input_files)
if split == 'val':
with use_seed(46):
indices = np.random.choice(range(N), 100, replace=False)
self.input_files = self.input_files[indices]
self.labels = self.labels[indices]
def __len__(self):
return len(self.input_files)
def __getitem__(self, idx):
inp = self.transform(Image.open(self.input_files[idx]).convert('RGB'))
return inp, self.labels[idx]
@property
@lru_cache()
def transform(self):
return Compose([Resize(self.img_size), ToTensor()])
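# Illustrative usage (assumes the expected files exist under
# DATASETS_PATH/GTSRB/train, as checked in __init__):
#   ds = GTSRB8Dataset(split='val')
#   img, label = ds[0]   # img: 3x28x28 tensor, label: int in [0, 8)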
| [
"[email protected]"
] | |
c70ba4a760d383cbe9f83c83caa406e0a58dbc48 | cc15af2ccc401b2d1d8fcfd219295121c1cece5d | /ROAR/utilities_module/module.py | 1ca90b65b5f5f492aaf34919218b5ed8d633b876 | [
"Apache-2.0"
] | permissive | moonwonlee/ROAR | 755277c5f79df67a78896e2739764eac6b7e0e7e | 1e189d895ac34197b8c8fc3017970cb706feb3e6 | refs/heads/main | 2023-06-15T22:35:29.796573 | 2021-07-02T20:27:39 | 2021-07-02T20:27:39 | 359,383,030 | 1 | 0 | Apache-2.0 | 2021-07-02T20:27:40 | 2021-04-19T08:23:58 | Python | UTF-8 | Python | false | false | 1,297 | py | from abc import ABC, abstractmethod
import time
from pathlib import Path
class Module(ABC):
def __init__(self, threaded=False, update_interval: float = 0.5,
should_save: bool = False, name: str = "module", **kwargs):
self.threaded = threaded
self.update_interval = update_interval
self.should_continue_threaded = True
self.should_save = should_save
self.saving_dir_path: Path = Path(f"data/output/{name}")
if should_save and self.saving_dir_path.exists() is False:
self.saving_dir_path.mkdir(exist_ok=True, parents=True)
@abstractmethod
def run_in_series(self, **kwargs):
"""
        This is the non-threaded function. It runs in series.
Args:
**kwargs:
Returns:
"""
pass
def run_in_threaded(self, **kwargs):
"""
This is the threaded function.
Args:
**kwargs:
Returns:
"""
while self.should_continue_threaded:
self.run_in_series()
if self.should_save:
self.save()
time.sleep(self.update_interval)
def shutdown(self):
self.should_continue_threaded = False
@abstractmethod
def save(self, **kwargs):
pass
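# Minimal subclass sketch (hypothetical module, not part of ROAR itself):
#
#   class HeartbeatModule(Module):
#       def run_in_series(self, **kwargs):
#           print("beat")
#       def save(self, **kwargs):
#           pass
#
# When constructed with threaded=True, the owner would drive the module by
# calling run_in_threaded() from a worker thread and stop it via shutdown().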
| [
"[email protected]"
] | |
b3aa8d40a9d1110d654d7483b38f77c4184c5fe8 | 770e2091d4d571bd4bfeeefbd0755129ef79d2cf | /matplotlib/5.py | e6dc037d574d73aa6187c890e876aff42ff3390f | [] | no_license | youjia4321/data_analysis | 24d4897562b07fa9f7effb2f4f6add9b87fb5807 | 4f2e4a0389e0bbf67b654b9e9fe12088133cddbe | refs/heads/master | 2020-04-27T10:42:58.664049 | 2019-03-08T02:58:18 | 2019-03-08T02:58:18 | 174,266,384 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 523 | py | import numpy as np
import matplotlib.pyplot as plt
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# Histogram of the data
n, bins, patches = plt.hist(x, 50, density=True, facecolor='r', alpha=0.75)
# for i in patches:
# print(i)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100, \sigma=15$')
plt.text(40, .01, r'$\alpha_i > \beta_i$')
plt.text(70, .01, r'$\sum_{i=0}^\infty x_i$')
plt.axis([0, 160, 0, 0.03])
plt.grid(True)  # show the grid
plt.show() | [
"[email protected]"
] | |
fad413cdf0ebf545f27260b7ed0593f679b3b9cf | e6bc1f55371786dad70313eb468a3ccf6000edaf | /Datasets/the-minion-game/Correct/076.py | fa39bfcd0f041cb2eb3e2c8fd566a8a06327f68b | [] | no_license | prateksha/Source-Code-Similarity-Measurement | 9da92e3b22c372ed6ea54d8b6ab2c5921e8c41c0 | fb371b837917794d260a219a1ca09c46a5b15962 | refs/heads/master | 2023-01-04T07:49:25.138827 | 2020-10-25T14:43:57 | 2020-10-25T14:43:57 | 285,744,963 | 3 | 0 | null | null | null | null | UTF-8 | Python | false | false | 342 | py | def minion_game(string):
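    # A substring is determined by its start index: index i contributes
    # len(string) - i substrings. Kevin scores substrings starting on a
    # vowel, Stuart those starting on a consonant.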
vowel =['A','E','I','O','U']
S=0
K=0
for i in range(len(string)):
if string[i] in vowel:
K+= len(string)-i
else:
S+=len(string)-i
if S>K:
print("Stuart"+" "+ "%d" % S)
elif K>S:
print("Kevin"+" "+'%d' % K)
else:
print("Draw") | [
"[email protected]"
] | |
93492b704f6fe05dc224243661ed0c2a8066ed38 | 67e817ca139ca039bd9eee5b1b789e5510119e83 | /Tree/Kth Smallest Element in a BST.py | 78fa4551d211d33dd4023977b36259e7db2cd425 | [] | no_license | dstch/my_leetcode | 0dc41e7a2526c2d85b6b9b6602ac53f7a6ba9273 | 48a8c77e81cd49a75278551048028c492ec62994 | refs/heads/master | 2021-07-25T21:30:41.705258 | 2021-06-06T08:58:29 | 2021-06-06T08:58:29 | 164,360,878 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,634 | py | #!/usr/bin/env python
# encoding: utf-8
"""
@author: dstch
@license: (C) Copyright 2013-2019, Regulus Tech.
@contact: [email protected]
@file: Kth Smallest Element in a BST.py
@time: 2019/9/11 22:08
@desc: Given a binary search tree, write a function kthSmallest to find the kth smallest element in it.
Note:
You may assume k is always valid, 1 ≤ k ≤ BST's total elements.
Example 1:
Input: root = [3,1,4,null,2], k = 1
3
/ \
1 4
\
2
Output: 1
Example 2:
Input: root = [5,3,6,2,4,null,null,1], k = 3
5
/ \
3 6
/ \
2 4
/
1
Output: 3
"""
# Definition for a binary tree node.
class TreeNode:
def __init__(self, x):
self.val = x
self.left = None
self.right = None
# In-order traversal of a BST yields a sorted array
class Solution:
def kthSmallest(self, root: TreeNode, k: int) -> int:
        return self.get_list(root, [])[k - 1]  # k is 1-indexed
def get_list(self, root, res):
if root.left is not None:
res = self.get_list(root.left, res)
if root is not None:
res.append(root.val)
if root.right is not None:
res = self.get_list(root.right, res)
return res
# Divide-and-conquer approach
class Solution1:
def kthSmallest(self, root: TreeNode, k: int) -> int:
cnt = self.count(root.left)
if k <= cnt:
return self.kthSmallest(root.left, k)
elif k > cnt + 1:
return self.kthSmallest(root.right, k - cnt - 1)
return root.val
def count(self, root):
if root is None:
return 0
return 1 + self.count(root.left) + self.count(root.right)
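# Quick self-check against Example 1 from the docstring (illustrative):
if __name__ == '__main__':
    root = TreeNode(3)
    root.left = TreeNode(1)
    root.right = TreeNode(4)
    root.left.right = TreeNode(2)
    print(Solution().kthSmallest(root, 1))   # expected: 1
    print(Solution1().kthSmallest(root, 1))  # expected: 1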
| [
"[email protected]"
] | |
7835d7fea00e4dcad46bf9117f2c76e5d0e341a4 | 65b4522c04c2be071c2d42095956fe950fe1cebe | /inversions/inversion11/best_model/predict/analysis/pred_disp_large_scale/plots/post_disp/gen_post_disp.py | 2d7cabac2102b3f35527307245505ab7abce8bca | [] | no_license | geodesy/viscojapan | ac0cd93f7a2134cd2651623b94879dcc21c0c46a | 03e70265b56eb5994e73bcb6066f0be338e42f27 | refs/heads/master | 2021-03-03T18:19:07.779601 | 2015-07-16T03:50:49 | 2015-07-16T03:50:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 593 | py | import numpy as np
import viscojapan as vj
res_file='/home/zy/workspace/viscojapan/inversions/inversion10/iter2/run7/outs/nrough_05_naslip_11.h5'
reader = vj.inv.ResultFileReader(res_file)
disp = reader.get_pred_disp()
post = disp.get_post_at_nth_epoch(-1)
sites = disp.get_sites
tp = np.loadtxt('stations_large_scale.in','4a, f, f')
pos_dic = {ii[0].decode():(ii[1], ii[2]) for ii in tp}
with open('post_disp2','wt') as fid:
for site, y in zip(sites, post):
lon, lat = pos_dic[site.id]
mag = np.sqrt(y[0]**2+y[1]**2)
fid.write('%f %f %f\n'%(lon, lat, mag))
| [
"[email protected]"
] | |
729d0bb1c126d3c229c8e3e17db982202c6a8746 | 6c57b1694817d1710335429c12c2d9774ff446e3 | /2017-08-26/BWM-Model_case09_11/generated_files/LEMS_c302_C2_BWM_Model_nrn.py | 3f97e63b84048d26ca68ce228f2cd5b992e9849f | [] | no_license | lungd/openworm-experiments | cd3875e8071c35eacb919c318344bac56d0fe379 | 065f481fbb445ef12b8ab2110f501686d26c213c | refs/heads/master | 2021-01-01T04:41:38.397726 | 2017-09-12T13:55:40 | 2017-09-12T13:55:40 | 97,220,679 | 1 | 1 | null | 2017-09-01T17:10:28 | 2017-07-14T10:07:56 | Python | UTF-8 | Python | false | false | 14,670 | py | '''
Neuron simulator export for:
Components:
Leak (Type: ionChannelPassive: conductance=1.0E-11 (SI conductance))
k_fast (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
k_slow (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
ca_boyle (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
ca_simple (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
k_muscle (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
ca_muscle (Type: ionChannelHH: conductance=1.0E-11 (SI conductance))
null (Type: notes)
CaPool (Type: fixedFactorConcentrationModel: restingConc=0.0 (SI concentration) decayConstant=0.43081187094550927 (SI time) rho=9.19E-7 (SI rho_factor))
silent (Type: silentSynapse)
DB2_to_MDL12_exc_syn (Type: gradedSynapse2: conductance=1.0000000000000002E-10 (SI conductance) ar=0.5 (SI per_time) ad=50.0 (SI per_time) beta=125.0 (SI per_voltage) vth=-0.025 (SI voltage) erev=0.0 (SI voltage))
GenericMuscleCell (Type: cell)
GenericNeuronCell (Type: cell)
offset_current (Type: pulseGenerator: delay=0.0 (SI time) duration=2.0 (SI time) amplitude=0.0 (SI current))
c302_C2_BWM_Model (Type: network)
sim_c302_C2_BWM_Model (Type: Simulation: length=1.0 (SI time) step=5.0E-5 (SI time))
This NEURON file has been generated by org.neuroml.export (see https://github.com/NeuroML/org.neuroml.export)
org.neuroml.export v1.5.3
org.neuroml.model v1.5.3
jLEMS v0.9.9.0
'''
import neuron
import time
import hashlib
h = neuron.h
h.load_file("stdlib.hoc")
h.load_file("stdgui.hoc")
h("objref p")
h("p = new PythonObject()")
class NeuronSimulation():
def __init__(self, tstop, dt, seed=123456789):
print("\n Starting simulation in NEURON generated from NeuroML2 model...\n")
self.seed = seed
self.randoms = []
self.next_global_id = 0 # Used in Random123 classes for elements using random(), etc.
self.next_spiking_input_id = 0 # Used in Random123 classes for elements using random(), etc.
'''
Adding simulation Component(id=sim_c302_C2_BWM_Model type=Simulation) of network/component: c302_C2_BWM_Model (Type: network)
'''
# ###################### Population: DB2
print("Population DB2 contains 1 instance(s) of component: GenericNeuronCell of type: cell")
print("Setting the default initial concentrations for ca (used in GenericNeuronCell) to 0.0 mM (internal), 2.0 mM (external)")
h("cai0_ca_ion = 0.0")
h("cao0_ca_ion = 2.0")
h.load_file("GenericNeuronCell.hoc")
a_DB2 = []
h("{ n_DB2 = 1 }")
h("objectvar a_DB2[n_DB2]")
for i in range(int(h.n_DB2)):
h("a_DB2[%i] = new GenericNeuronCell()"%i)
h("access a_DB2[%i].soma"%i)
self.next_global_id+=1
h("{ a_DB2[0].position(-0.2, -244.5, 15.787000000000001) }")
h("proc initialiseV_DB2() { for i = 0, n_DB2-1 { a_DB2[i].set_initial_v() } }")
h("objref fih_DB2")
h('{fih_DB2 = new FInitializeHandler(0, "initialiseV_DB2()")}')
h("proc initialiseIons_DB2() { for i = 0, n_DB2-1 { a_DB2[i].set_initial_ion_properties() } }")
h("objref fih_ion_DB2")
h('{fih_ion_DB2 = new FInitializeHandler(1, "initialiseIons_DB2()")}')
# ###################### Population: DD2
print("Population DD2 contains 1 instance(s) of component: GenericNeuronCell of type: cell")
print("Setting the default initial concentrations for ca (used in GenericNeuronCell) to 0.0 mM (internal), 2.0 mM (external)")
h("cai0_ca_ion = 0.0")
h("cao0_ca_ion = 2.0")
h.load_file("GenericNeuronCell.hoc")
a_DD2 = []
h("{ n_DD2 = 1 }")
h("objectvar a_DD2[n_DD2]")
for i in range(int(h.n_DD2)):
h("a_DD2[%i] = new GenericNeuronCell()"%i)
h("access a_DD2[%i].soma"%i)
self.next_global_id+=1
h("{ a_DD2[0].position(-1.85, -156.474989999999991, -42.850000000000001) }")
h("proc initialiseV_DD2() { for i = 0, n_DD2-1 { a_DD2[i].set_initial_v() } }")
h("objref fih_DD2")
h('{fih_DD2 = new FInitializeHandler(0, "initialiseV_DD2()")}')
h("proc initialiseIons_DD2() { for i = 0, n_DD2-1 { a_DD2[i].set_initial_ion_properties() } }")
h("objref fih_ion_DD2")
h('{fih_ion_DD2 = new FInitializeHandler(1, "initialiseIons_DD2()")}')
# ###################### Population: MDL12
print("Population MDL12 contains 1 instance(s) of component: GenericMuscleCell of type: cell")
print("Setting the default initial concentrations for ca (used in GenericMuscleCell) to 0.0 mM (internal), 2.0 mM (external)")
h("cai0_ca_ion = 0.0")
h("cao0_ca_ion = 2.0")
h.load_file("GenericMuscleCell.hoc")
a_MDL12 = []
h("{ n_MDL12 = 1 }")
h("objectvar a_MDL12[n_MDL12]")
for i in range(int(h.n_MDL12)):
h("a_MDL12[%i] = new GenericMuscleCell()"%i)
h("access a_MDL12[%i].soma"%i)
self.next_global_id+=1
h("{ a_MDL12[0].position(80., 60., -80.) }")
h("proc initialiseV_MDL12() { for i = 0, n_MDL12-1 { a_MDL12[i].set_initial_v() } }")
h("objref fih_MDL12")
h('{fih_MDL12 = new FInitializeHandler(0, "initialiseV_MDL12()")}')
h("proc initialiseIons_MDL12() { for i = 0, n_MDL12-1 { a_MDL12[i].set_initial_ion_properties() } }")
h("objref fih_ion_MDL12")
h('{fih_ion_MDL12 = new FInitializeHandler(1, "initialiseIons_MDL12()")}')
# ###################### Continuous Projection: NC_DB2_MDL12_Acetylcholine
print("Adding continuous projection: NC_DB2_MDL12_Acetylcholine from DB2 to MDL12, with 1 connection(s)")
h("objectvar syn_NC_DB2_MDL12_Acetylcholine_silent_pre[1]")
h("objectvar syn_NC_DB2_MDL12_Acetylcholine_DB2_to_MDL12_exc_syn_post[1]")
# Continuous Connection 0: cell 0, seg 0 (0.5) [0.5 on a_DB2[0].soma] -> cell 0, seg 0 (0.5) [0.5 on a_MDL12[0].soma], weight: 1.0
h("a_DB2[0].soma { syn_NC_DB2_MDL12_Acetylcholine_silent_pre[0] = new silent(0.500000) }")
h("a_MDL12[0].soma { syn_NC_DB2_MDL12_Acetylcholine_DB2_to_MDL12_exc_syn_post[0] = new DB2_to_MDL12_exc_syn(0.500000) }")
h("setpointer syn_NC_DB2_MDL12_Acetylcholine_silent_pre[0].vpeer, a_MDL12[0].soma.v(0.500000)")
h("setpointer syn_NC_DB2_MDL12_Acetylcholine_DB2_to_MDL12_exc_syn_post[0].vpeer, a_DB2[0].soma.v(0.500000)")
trec = h.Vector()
trec.record(h._ref_t)
h.tstop = tstop
h.dt = dt
h.steps_per_ms = 1/h.dt
# ###################### File to save: c302_C2_BWM_Model.activity.dat (neurons_activity)
# Column: DB2/0/GenericNeuronCell/caConc
h(' objectvar v_DB2_v_neurons_activity ')
h(' { v_DB2_v_neurons_activity = new Vector() } ')
h(' { v_DB2_v_neurons_activity.record(&a_DB2[0].soma.cai(0.5)) } ')
h.v_DB2_v_neurons_activity.resize((h.tstop * h.steps_per_ms) + 1)
# Column: DD2/0/GenericNeuronCell/caConc
h(' objectvar v_DD2_v_neurons_activity ')
h(' { v_DD2_v_neurons_activity = new Vector() } ')
h(' { v_DD2_v_neurons_activity.record(&a_DD2[0].soma.cai(0.5)) } ')
h.v_DD2_v_neurons_activity.resize((h.tstop * h.steps_per_ms) + 1)
# ###################### File to save: c302_C2_BWM_Model.muscles.dat (muscles_v)
# Column: MDL12/0/GenericMuscleCell/v
h(' objectvar v_MDL12_v_muscles_v ')
h(' { v_MDL12_v_muscles_v = new Vector() } ')
h(' { v_MDL12_v_muscles_v.record(&a_MDL12[0].soma.v(0.5)) } ')
h.v_MDL12_v_muscles_v.resize((h.tstop * h.steps_per_ms) + 1)
# ###################### File to save: c302_C2_BWM_Model.muscles.activity.dat (muscles_activity)
# Column: MDL12/0/GenericMuscleCell/caConc
h(' objectvar v_MDL12_v_muscles_activity ')
h(' { v_MDL12_v_muscles_activity = new Vector() } ')
h(' { v_MDL12_v_muscles_activity.record(&a_MDL12[0].soma.cai(0.5)) } ')
h.v_MDL12_v_muscles_activity.resize((h.tstop * h.steps_per_ms) + 1)
# ###################### File to save: c302_C2_BWM_Model.dat (neurons_v)
# Column: DB2/0/GenericNeuronCell/v
h(' objectvar v_DB2_v_neurons_v ')
h(' { v_DB2_v_neurons_v = new Vector() } ')
h(' { v_DB2_v_neurons_v.record(&a_DB2[0].soma.v(0.5)) } ')
h.v_DB2_v_neurons_v.resize((h.tstop * h.steps_per_ms) + 1)
# Column: DD2/0/GenericNeuronCell/v
h(' objectvar v_DD2_v_neurons_v ')
h(' { v_DD2_v_neurons_v = new Vector() } ')
h(' { v_DD2_v_neurons_v.record(&a_DD2[0].soma.v(0.5)) } ')
h.v_DD2_v_neurons_v.resize((h.tstop * h.steps_per_ms) + 1)
# ###################### File to save: time.dat (time)
# Column: time
h(' objectvar v_time ')
h(' { v_time = new Vector() } ')
h(' { v_time.record(&t) } ')
h.v_time.resize((h.tstop * h.steps_per_ms) + 1)
self.initialized = False
self.sim_end = -1 # will be overwritten
def run(self):
self.initialized = True
sim_start = time.time()
print("Running a simulation of %sms (dt = %sms; seed=%s)" % (h.tstop, h.dt, self.seed))
h.run()
self.sim_end = time.time()
sim_time = self.sim_end - sim_start
print("Finished NEURON simulation in %f seconds (%f mins)..."%(sim_time, sim_time/60.0))
self.save_results()
def advance(self):
if not self.initialized:
h.finitialize()
self.initialized = True
h.fadvance()
###############################################################################
# Hash function to use in generation of random value
# This is copied from NetPyNE: https://github.com/Neurosim-lab/netpyne/blob/master/netpyne/simFuncs.py
###############################################################################
def _id32 (self,obj):
return int(hashlib.md5(obj).hexdigest()[0:8],16) # convert 8 first chars of md5 hash in base 16 to int
###############################################################################
# Initialize the stim randomizer
# This is copied from NetPyNE: https://github.com/Neurosim-lab/netpyne/blob/master/netpyne/simFuncs.py
###############################################################################
def _init_stim_randomizer(self,rand, stimType, gid, seed):
rand.Random123(self._id32(stimType), gid, seed)
def save_results(self):
print("Saving results at t=%s..."%h.t)
if self.sim_end < 0: self.sim_end = time.time()
# ###################### File to save: time.dat (time)
py_v_time = [ t/1000 for t in h.v_time.to_python() ] # Convert to Python list for speed...
f_time_f2 = open('time.dat', 'w')
num_points = len(py_v_time) # Simulation may have been stopped before tstop...
for i in range(num_points):
            f_time_f2.write('%f' % py_v_time[i] + '\n')  # Save in SI units...
f_time_f2.close()
print("Saved data to: time.dat")
# ###################### File to save: c302_C2_BWM_Model.activity.dat (neurons_activity)
py_v_DB2_v_neurons_activity = [ float(x ) for x in h.v_DB2_v_neurons_activity.to_python() ] # Convert to Python list for speed, variable has dim: concentration
py_v_DD2_v_neurons_activity = [ float(x ) for x in h.v_DD2_v_neurons_activity.to_python() ] # Convert to Python list for speed, variable has dim: concentration
f_neurons_activity_f2 = open('c302_C2_BWM_Model.activity.dat', 'w')
num_points = len(py_v_time) # Simulation may have been stopped before tstop...
for i in range(num_points):
f_neurons_activity_f2.write('%e\t'% py_v_time[i] + '%e\t'%(py_v_DB2_v_neurons_activity[i]) + '%e\t'%(py_v_DD2_v_neurons_activity[i]) + '\n')
f_neurons_activity_f2.close()
print("Saved data to: c302_C2_BWM_Model.activity.dat")
# ###################### File to save: c302_C2_BWM_Model.muscles.dat (muscles_v)
py_v_MDL12_v_muscles_v = [ float(x / 1000.0) for x in h.v_MDL12_v_muscles_v.to_python() ] # Convert to Python list for speed, variable has dim: voltage
f_muscles_v_f2 = open('c302_C2_BWM_Model.muscles.dat', 'w')
num_points = len(py_v_time) # Simulation may have been stopped before tstop...
for i in range(num_points):
f_muscles_v_f2.write('%e\t'% py_v_time[i] + '%e\t'%(py_v_MDL12_v_muscles_v[i]) + '\n')
f_muscles_v_f2.close()
print("Saved data to: c302_C2_BWM_Model.muscles.dat")
# ###################### File to save: c302_C2_BWM_Model.muscles.activity.dat (muscles_activity)
py_v_MDL12_v_muscles_activity = [ float(x ) for x in h.v_MDL12_v_muscles_activity.to_python() ] # Convert to Python list for speed, variable has dim: concentration
f_muscles_activity_f2 = open('c302_C2_BWM_Model.muscles.activity.dat', 'w')
num_points = len(py_v_time) # Simulation may have been stopped before tstop...
for i in range(num_points):
f_muscles_activity_f2.write('%e\t'% py_v_time[i] + '%e\t'%(py_v_MDL12_v_muscles_activity[i]) + '\n')
f_muscles_activity_f2.close()
print("Saved data to: c302_C2_BWM_Model.muscles.activity.dat")
# ###################### File to save: c302_C2_BWM_Model.dat (neurons_v)
py_v_DB2_v_neurons_v = [ float(x / 1000.0) for x in h.v_DB2_v_neurons_v.to_python() ] # Convert to Python list for speed, variable has dim: voltage
py_v_DD2_v_neurons_v = [ float(x / 1000.0) for x in h.v_DD2_v_neurons_v.to_python() ] # Convert to Python list for speed, variable has dim: voltage
f_neurons_v_f2 = open('c302_C2_BWM_Model.dat', 'w')
num_points = len(py_v_time) # Simulation may have been stopped before tstop...
for i in range(num_points):
f_neurons_v_f2.write('%e\t'% py_v_time[i] + '%e\t'%(py_v_DB2_v_neurons_v[i]) + '%e\t'%(py_v_DD2_v_neurons_v[i]) + '\n')
f_neurons_v_f2.close()
print("Saved data to: c302_C2_BWM_Model.dat")
save_end = time.time()
save_time = save_end - self.sim_end
print("Finished saving results in %f seconds"%(save_time))
print("Done")
quit()
if __name__ == '__main__':
ns = NeuronSimulation(tstop=1000, dt=0.05, seed=123456789)
ns.run()
| [
"[email protected]"
] | |
31c1779398b3ce2aad134e0fb09067334f0f0976 | a9e27f69b8db430252cd29f334f182b9962e22ae | /src/collective/deformwidgets/dynatree.py | 2dcec4d7bb40a7798e2a521b94322c3fd90574f1 | [] | no_license | collective/collective.deformwidgets | d4b09da0ae935f16437f56f77351e52f3ccace81 | fef9e81e5e448de30fcd654a68f1cfd98e7aeb9d | refs/heads/master | 2023-03-22T11:55:37.251077 | 2012-10-11T21:20:53 | 2012-10-12T06:10:33 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 7,384 | py | #!/usr/bin/python
# -*- coding: utf-8 -*-
from imsvdex.vdex import VDEXManager
from Products.CMFCore.utils import getToolByName
from zope.app.component.hooks import getSite
import colander
import deform
import json
def vdex_to_dynatree(vdex=None):
'''
Convert a vdex manager object
to something understandable by a dynatree widget
'''
retval = []
def convert(key, value):
''' converter '''
retval = {}
retval['title'] = value[0]
retval['key'] = key
if value[1]:
children_keys = value[1].keys()
children_keys.sort()
retval['children'] = [convert(x, value[1][x]) for x in
children_keys]
return retval
vdex_dict = vdex.getVocabularyDict()
keys = vdex_dict.keys()
keys.sort()
for key in keys:
retval.append(convert(key, vdex_dict[key]))
return retval
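# vdex_to_dynatree returns a list of nested dicts in dynatree's expected
# shape, e.g. (illustrative keys/titles):
#   [{'key': 'k1', 'title': 'Term 1',
#     'children': [{'key': 'k1a', 'title': 'Sub-term'}]}]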
class DynatreeWidgetContentBrowser(deform.widget.SelectWidget):
'''
Renders a ``dynatree`` widget to select contents of a site
**Attributes/Arguments**
vocabulary
An imsvdex.vdex.VDEXManager object
Can also be provided by the field
null_value
The value which represents the null value. When the null
value is encountered during serialization, the
:attr:`colander.null` sentinel is returned to the caller.
Default: ``\'\'`` (the empty string).
template
The template name used to render the widget. Default:
``dynatree``.
readonly_template
The template name used to render the widget in read-only mode.
Default: ``readonly/dynatree``.
'''
template = 'dynatree_content'
readonly_template = 'readonly/dynatree_content'
null_value = ''
vocabulary = None
requirements = (('jquery.dynatree', None), )
selectMode = '2'
@staticmethod
def convert_cstruct(cstruct):
''' return cstruct jsonified, wrapped in a list if necessary '''
if cstruct in (colander.null, None):
return json.dumps([])
else:
return json.dumps([cstruct])
def get_preselected_values(self, cstruct, readonly):
'''
for the preselected keys, get the values if necessary.
        Needed because the values aren't used in the write case.
        Since the computation is expensive, we return an empty
        list when readonly is False.
'''
if readonly:
retval = []
for key in cstruct or []:
term = self.vocabulary.getTermById(key)
retval.append(self.vocabulary.getTermCaption(term))
return retval
else:
return []
@staticmethod
def get_item_child_name(dummy):
''' Return the name of the item child '''
return 'null'
def titles(self, values):
site = getSite()
catalog = getToolByName(site, 'portal_catalog')
for brain in catalog.searchResults(UID=values):
yield brain.Title
@property
def tree(self):
''' return the tree datastructure as needed by dynatree, jsonified '''
return json.dumps(vdex_to_dynatree(vdex=self.vocabulary))
def serialize(
self,
field,
cstruct,
readonly=False,
):
template = readonly and self.readonly_template or self.template
return field.renderer(
template,
site_url=getSite().absolute_url(),
field=field,
object_provides_filter=getattr(field.schema,
'object_provides_filter', ''),
values=cstruct,
titles=self.titles(cstruct),
dynatree_parameters='',
fieldName=field.name,
)
class SingleSelectDynatreeWidget(deform.widget.SelectWidget):
'''
Renders a ``dynatree`` widget based on a predefined set of values.
**Attributes/Arguments**
vocabulary
An imsvdex.vdex.VDEXManager object
Can also be provided by the field
null_value
The value which represents the null value. When the null
value is encountered during serialization, the
:attr:`colander.null` sentinel is returned to the caller.
Default: ``\'\'`` (the empty string).
template
The template name used to render the widget. Default:
``dynatree``.
readonly_template
The template name used to render the widget in read-only mode.
Default: ``readonly/dynatree``.
'''
template = 'dynatree'
readonly_template = 'readonly/dynatree'
null_value = ''
vocabulary = None
requirements = (('jquery.dynatree', None), )
selectMode = '1'
@staticmethod
def convert_cstruct(cstruct):
''' return cstruct jsonified, wrapped in a list if necessary '''
if cstruct in (colander.null, None):
return json.dumps([])
else:
return json.dumps([cstruct])
def get_preselected_values(self, cstruct, readonly):
'''
for the preselected keys, get the values if necessary.
        Needed because the values aren't used in the write case.
        Since the computation is expensive, we return an empty
        list when readonly is False.
'''
if readonly:
retval = []
for key in cstruct or []:
term = self.vocabulary.getTermById(key)
retval.append(self.vocabulary.getTermCaption(term))
return retval
else:
return []
@staticmethod
def get_item_child_name(dummy):
''' Return the name of the item child '''
return 'null'
@property
def tree(self):
''' return the tree datastructure as needed by dynatree, jsonified '''
return json.dumps(vdex_to_dynatree(vdex=self.vocabulary))
def serialize(
self,
field,
cstruct,
readonly=False,
):
if not self.vocabulary:
self.vocabulary = getattr(field.schema, 'vocabulary',
self.vocabulary)
assert self.vocabulary, 'You must give me a vocabulary'
template = readonly and self.readonly_template or self.template
return field.renderer(
template,
field=field,
preselected=self.convert_cstruct(cstruct),
preselected_values=self.get_preselected_values(cstruct,
readonly),
tree=self.tree,
select_mode=self.selectMode,
item_name=field.name,
item_child_name=self.get_item_child_name(field),
)
class MultiSelectDynatreeWidget(SingleSelectDynatreeWidget):
''' Dynatree widget for sequence fields '''
selectMode = '2'
@staticmethod
def convert_cstruct(cstruct):
''' return cstruct jsonified, wrapped in a list if necessary '''
if cstruct in (colander.null, None):
return json.dumps([])
else:
return json.dumps(cstruct)
@staticmethod
def get_item_child_name(field):
''' Return the name of the item child '''
return field.children[0].name
class MultiSelectMode3DynatreeWidget(MultiSelectDynatreeWidget):
    ''' Dynatree widget for sequence fields, select mode 3 '''
selectMode = 3
| [
"[email protected]"
] | |
0002d483929a2213dc78b3654a3d16d500e71d6f | c80ec1805a7e6cb1bd3f4b3e383ef4f4cf164765 | /gen/filters/rules/family/_hastwins.py | c2e37d04b70c8905222c5247ba01261e08eced18 | [] | no_license | balrok/gramps_addon | 57c8e976c47ea3c1d1298d3fd4406c13909ac933 | 0c79561bed7ff42c88714edbc85197fa9235e188 | refs/heads/master | 2020-04-16T03:58:27.818732 | 2015-02-01T14:17:44 | 2015-02-01T14:17:44 | 30,111,898 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,365 | py | #
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2013 Nick Hall
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
#-------------------------------------------------------------------------
#
# Standard Python modules
#
#-------------------------------------------------------------------------
#-------------------------------------------------------------------------
#
# GRAMPS modules
#
#-------------------------------------------------------------------------
from .. import Rule
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
from ....lib.childreftype import ChildRefType
#-------------------------------------------------------------------------
#
# HasTwins
#
#-------------------------------------------------------------------------
class HasTwins(Rule):
"""Rule that checks for a family with twins"""
name = _('Families with twins')
description = _("Matches families with twins")
category = _('Child filters')
def apply(self, db, family):
date_list = []
for childref in family.get_child_ref_list():
if int(childref.get_mother_relation()) == ChildRefType.BIRTH:
child = db.get_person_from_handle(childref.ref)
birthref = child.get_birth_ref()
if birthref:
birth = db.get_event_from_handle(birthref.ref)
sortval = birth.get_date_object().get_sort_value()
if sortval != 0:
if sortval in date_list:
return True
else:
date_list.append(sortval)
return False
| [
"[email protected]"
] | |
7b8b1e3bf65134928bbed9628517cb616c9a5cb4 | 9d58c796906c6687241125ee6602cc612cb28735 | /FileRstproj/FileRstproj/wsgi.py | 18849c0de1d7746d732056d98c0d2fce323e9eb0 | [] | no_license | uday99/RestFrameworkUploadfile | 29de0929ad7c2af2d4e4a856801467b05ce53829 | 6b92f0d55893e435f4383a4691a79642be28923f | refs/heads/main | 2023-07-10T18:40:27.225900 | 2021-08-19T15:21:09 | 2021-08-19T15:21:09 | 397,986,544 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 399 | py | """
WSGI config for FileRstproj project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/3.1/howto/deployment/wsgi/
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'FileRstproj.settings')
application = get_wsgi_application()
| [
"[email protected]"
] | |
8d666bd97f5421c1264ea7eceaaada6c70d79d20 | d4dab65a18429c72dda12f1271ec0b4452cb7aad | /tensorflow/python/distribute/values.py | a2c834f893e2d6376714229bd585502bebe2edf0 | [
"Apache-2.0"
] | permissive | chiragmak10/tensorflow | 6d486210bebb68f3328853731cdb37fbe6aa9b1a | 0e1f4418b84170533b3d388ac29042fe486403ee | refs/heads/master | 2020-04-12T08:30:11.229595 | 2018-12-19T04:56:29 | 2018-12-19T05:00:24 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 82,713 | py | # Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Various classes representing distributed values."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import contextlib
import operator
import weakref
import six
from tensorflow.python.data.experimental.ops import batching
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.data.ops import multi_device_iterator_ops
from tensorflow.python.distribute import device_util
from tensorflow.python.distribute import distribute_lib
from tensorflow.python.distribute import distribution_strategy_context
from tensorflow.python.distribute import input_ops
from tensorflow.python.distribute import reduce_util
from tensorflow.python.eager import context
from tensorflow.python.eager import tape
from tensorflow.python.framework import device as tf_device
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import gen_resource_variable_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.training import saver
from tensorflow.python.training.checkpointable import base as checkpointable
from tensorflow.python.util import nest
def _devices_match(d1, d2):
return device_util.canonicalize(d1) == device_util.canonicalize(d2)
class DeviceMap(object):
"""A mapping of replicas & logical device ids to devices."""
@property
def all_devices(self):
"""Returns a tuple of strings with all devices in this DeviceMap."""
raise NotImplementedError("Required for DeviceMap implementations.")
@property
def devices_by_replica(self):
"""Returns a tuple `t` where `t[replica]` is the devices for `replica`."""
raise NotImplementedError("Required for DeviceMap implementations.")
@property
def num_logical_devices(self):
"""Count of the number of devices each replica may be defined across."""
raise NotImplementedError("Required for DeviceMap implementations.")
@property
def num_replicas_in_graph(self):
"""Number of replicas defined in this graph."""
raise NotImplementedError("Required for DeviceMap implementations.")
def logical_device_from_values(self, values):
"""Returns the logical device index `values` is on."""
raise NotImplementedError("Required for DeviceMap implementations.")
def logical_to_actual_devices(self, logical_device_id):
"""Returns sequence of `num_replicas_in_graph` devices."""
raise NotImplementedError("Required for DeviceMap implementations.")
def select_for_current_replica(self, values, replica_context):
"""Select the element of `values` for the current replica."""
raise NotImplementedError("Required for DeviceMap implementations.")
def replica_for_device(self, device):
"""Return the replica id containing `device`."""
raise NotImplementedError("Required for DeviceMap implementations.")
def select_for_device(self, values, device):
"""Select the element of `values` to access from `device`."""
raise NotImplementedError("Required for DeviceMap implementations.")
def is_device_in_replica(self, device, replica_id):
"""Returns whether `device` is a member of replica `replica_id`."""
raise NotImplementedError("Required for DeviceMap implementations.")
class SingleDeviceMap(DeviceMap):
"""A device map for 1 non-computation device.
Use `SingleDeviceMap` when the device does not correspond to some replica of
the computation. For computation devices, use `ReplicaDeviceMap` below (even
if there is only a single device in the map).
"""
def __init__(self, device):
"""Initialize a `SingleDeviceMap`.
Args:
device: A string device.
"""
assert isinstance(device, six.string_types)
self._device = device_util.canonicalize(device)
self._devices = (self._device,)
@property
def all_devices(self):
return self._devices
@property
def devices_by_replica(self):
raise ValueError("SingleDeviceMap not indexed by replicas")
@property
def num_logical_devices(self):
return 1
@property
def num_replicas_in_graph(self):
return 1
def logical_device_from_values(self, values):
del values
return 0
def logical_to_actual_devices(self, logical_device_id):
assert logical_device_id == 0
return self._devices
def select_for_current_replica(self, values, replica_context):
assert len(values) == 1
del replica_context
return values[0]
def replica_for_device(self, device):
raise ValueError("SingleDeviceMap not indexed by replicas")
def select_for_device(self, values, device):
assert len(values) == 1
if self._device != device:
raise ValueError("Device %s not found in %s (current device %s)" %
(device, self._devices, device_util.current()))
return values[0]
def is_device_in_replica(self, device, replica_id):
raise ValueError("SingleDeviceMap not indexed by replicas")
def __repr__(self):
return "%s(%r)" % (self.__class__.__name__, self._device)
class ReplicaDeviceMap(DeviceMap):
"""A device map for 1 device per replica."""
def __init__(self, devices):
"""Initialize a `ReplicaDeviceMap`.
Args:
devices: `devices[i]` is the string device for replica `i`.
"""
self._devices = tuple(device_util.canonicalize(d) for d in devices)
if len(set(self._devices)) != len(self._devices):
raise ValueError("Duplicate devices in %s, after canonicalization: %s" %
(devices, self._devices))
self._device_to_replica = {d: r for r, d in enumerate(self._devices)}
@property
def all_devices(self):
return self._devices
@property
def devices_by_replica(self):
    return tuple((d,) for d in self._devices)
@property
def num_logical_devices(self):
return 1
@property
def num_replicas_in_graph(self):
return len(self._devices)
def logical_device_from_values(self, values):
del values
return 0
def logical_to_actual_devices(self, logical_device_id):
assert logical_device_id == 0
return self._devices
def select_for_current_replica(self, values, replica_context):
assert len(values) == len(self._devices)
replica_id = replica_context.replica_id_in_sync_group
if not isinstance(replica_id, int):
replica_id = tensor_util.constant_value(replica_id)
return values[replica_id]
def replica_for_device(self, device):
return self._device_to_replica.get(device)
def select_for_device(self, values, device):
assert len(values) == len(self._devices)
replica_id = self._device_to_replica.get(device)
if replica_id is None:
raise ValueError("Device %s not found in %s (current device %s)" %
(device, self._devices, device_util.current()))
return values[replica_id]
def is_device_in_replica(self, device, replica_id):
return _devices_match(device, self._devices[replica_id])
def __str__(self):
return "[%s]" % (", ".join(self._devices))
def __repr__(self):
return "%s([%s])" % (self.__class__.__name__,
", ".join(repr(d) for d in self._devices))
LogicalDeviceSpec = collections.namedtuple(
"LogicalDeviceSpec", ("device_map", "logical_device"))
class DistributedValues(object):
"""Holds a map from device to values. Either PerReplica or Mirrored."""
def __init__(self, device_map, values, logical_device=None):
assert isinstance(device_map, DeviceMap)
self._device_map = device_map
self._values = tuple(values)
if logical_device is None:
logical_device = device_map.logical_device_from_values(self._values)
self._logical_device = logical_device
# TODO(josh11b): Split this into two functions, one with device, one without.
def get(self, device=None):
"""Returns the value for the current device or raises a ValueError."""
if device is None:
replica_context = distribution_strategy_context.get_replica_context()
if replica_context:
return self._device_map.select_for_current_replica(
self._values, replica_context)
else:
device = distribute_lib.get_update_device()
if device is None:
return self._get_cross_replica()
device = device_util.canonicalize(device)
return self._device_map.select_for_device(self._values, device)
@property
def primary(self):
"""Returns a representative component."""
return self._values[0]
@property
def devices(self):
return self._device_map.logical_to_actual_devices(self._logical_device)
@property
def logical_device(self):
return self._logical_device
@property
def device_map(self):
return self._device_map
# TODO(josh11b): Replace unwrap with this?
@property
def values(self):
return self._values
@property
def is_tensor_like(self):
for v in self._values:
if not tensor_util.is_tensor(v):
return False
return True
def __str__(self):
devices = self.devices
assert len(self._values) == len(devices)
debug_str = ",\n".join(" %d %s: %s" % (i, devices[i], self._values[i])
for i in range(len(devices)))
return "%s:{\n%s\n}" % (self.__class__.__name__, debug_str)
def __repr__(self):
devices = self.devices
assert len(self._values) == len(devices)
debug_repr = ",\n".join(" %d %s: %r" % (i, devices[i], self._values[i])
for i in range(len(devices)))
return "%s:{\n%s\n}" % (self.__class__.__name__, debug_repr)
# NOTE(josh11b,apassos): It would be great if we could inspect the values this was
# initialized with and use that to generate the overloaded operators here.
# Unfortunately, Python's rules for special methods don't allow this, see
# https://docs.python.org/3/reference/datamodel.html#special-method-names
# "if a class defines a method named __getitem__(), and x is an instance of
# this class, then x[i] is roughly equivalent to type(x).__getitem__(x, i)."
# In particular, these special methods don't go through __getattr__, and
# it will only use those methods if they are defined in the class, not the
# object.
class DistributedDelegate(DistributedValues):
"""A map from device to values; acts as the same type as the values."""
def __getattr__(self, name):
# TODO(priyag): This needs to be made robust against pitfalls from mix use
# __getattr__ and @property. See b/120402273.
return getattr(self.get(), name)
# pylint: disable=multiple-statements
def __add__(self, o): return self.get() + o
def __radd__(self, o): return o + self.get()
def __sub__(self, o): return self.get() - o
def __rsub__(self, o): return o - self.get()
def __mul__(self, o): return self.get() * o
def __rmul__(self, o): return o * self.get()
def __truediv__(self, o): return self.get() / o
def __rtruediv__(self, o): return o / self.get()
def __floordiv__(self, o): return self.get() // o
def __rfloordiv__(self, o): return o // self.get()
def __mod__(self, o): return self.get() % o
def __rmod__(self, o): return o % self.get()
def __lt__(self, o): return self.get() < o
def __le__(self, o): return self.get() <= o
def __gt__(self, o): return self.get() > o
def __ge__(self, o): return self.get() >= o
def __and__(self, o): return self.get() & o
def __rand__(self, o): return o & self.get()
def __or__(self, o): return self.get() | o
def __ror__(self, o): return o | self.get()
def __xor__(self, o): return self.get() ^ o
def __rxor__(self, o): return o ^ self.get()
def __getitem__(self, o): return self.get()[o]
def __pow__(self, o, modulo=None): return pow(self.get(), o, modulo)
def __rpow__(self, o): return pow(o, self.get())
def __invert__(self): return ~self.get()
def __neg__(self): return -self.get()
def __abs__(self): return abs(self.get())
def __div__(self, o):
try:
return self.get().__div__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rdiv__(self, o):
try:
return self.get().__rdiv__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __matmul__(self, o):
try:
return self.get().__matmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rmatmul__(self, o):
try:
return self.get().__rmatmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
# TODO(josh11b): Even more operator overloads.
class PerReplica(DistributedValues):
"""Holds a map from device to unsynchronized values."""
pass
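# Illustrative sketch (device strings are assumptions): a PerReplica value
# simply pairs a device map with one unsynchronized value per replica.
#
#   device_map = ReplicaDeviceMap(("/device:GPU:0", "/device:GPU:1"))
#   per_replica = PerReplica(device_map, (1.0, 2.0))
#   per_replica.values   # -> (1.0, 2.0)
#   per_replica.devices  # -> the two canonicalized GPU device strings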
# Note that unlike PerReplica, Mirrored values inherit from
# DistributedDelegate and so can be used directly in cross-replica mode.
class Mirrored(DistributedDelegate):
"""Holds a map from device to values which are kept in sync."""
def _get_cross_replica(self):
device = device_util.canonicalize(device_util.current())
replica_id = self._device_map.replica_for_device(device)
if replica_id is None:
return self.primary
return self._values[replica_id]
def _as_graph_element(self):
obj = self.get()
conv_fn = getattr(obj, "_as_graph_element", None)
if conv_fn and callable(conv_fn):
return conv_fn()
return obj
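# Illustrative sketch (plain Python floats stand in for tensors here): since
# Mirrored is a DistributedDelegate, arithmetic delegates to the value for the
# current device, so `m + 1.0` behaves like `m.get() + 1.0`.
#
#   device_map = ReplicaDeviceMap(("/device:CPU:0",))
#   m = Mirrored(device_map, (3.0,))
#   m + 1.0  # -> 4.0, computed on the component selected by m.get()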
def _assign_on_device(device, variable, tensor):
with ops.device(device):
return variable.assign(array_ops.identity(tensor))
DistributedVarOp = collections.namedtuple(
"DistributedVarOp", ["name", "graph", "type"])
class DistributedVariable(DistributedDelegate):
"""Holds a map from device to variables."""
# TODO(josh11b): Support changing the set of variables if e.g. if new
# devices are joining or a device is to leave.
def __init__(self, device_map, values, logical_device=None):
super(DistributedVariable, self).__init__(
device_map, values, logical_device=logical_device)
self._common_name = self.primary.name.split(":")[0]
# Use a weakref to make it easy to map from the contained values
# to the container without introducing a reference cycle.
for v in values:
v._distributed_container = weakref.ref(self) # pylint: disable=protected-access
# tf.keras keeps track of variables initialized using this attribute. When
# tf.keras gets the default session, it initializes all uninitialized vars.
# We need to make _keras_initialized a member of DistributedVariable because
# without this it will use `__getattr__` which will delegate to a component
# variable.
self._keras_initialized = False
# Typically, a `DistributedVariable`'s initializer is composed of the
    # initializers of the component variables. However, in some cases, such as
# when restoring from a checkpoint, we may set the _initializer_op
# property on the entire `DistributedVariable`.
self._initializer_op = None
def is_initialized(self, name=None):
"""Identifies if all the component variables are initialized.
Args:
name: Name of the final `logical_and` op.
Returns:
The op that evaluates to True or False depending on if all the
component variables are initialized.
"""
result = self.primary.is_initialized()
# We iterate through the list of values except the last one to allow us to
# name the final `logical_and` op the same name that is passed by the user
# to the `is_initialized` op. For distributed variables, the
# `is_initialized` op is a `logical_and` op.
for v in self._values[1:-1]:
result = math_ops.logical_and(result, v.is_initialized())
result = math_ops.logical_and(result, self._values[-1].is_initialized(),
name=name)
return result
@property
def initializer(self):
if self._initializer_op:
init_op = self._initializer_op
else:
# return grouped ops of all the var initializations of component values of
# the mirrored variable
init_op = control_flow_ops.group(tuple(
v.initializer for v in self._values))
return init_op
def _get_closest(self):
"""Return member in the same replica if possible, else the primary."""
replica_context = distribution_strategy_context.get_replica_context()
if replica_context:
return self._device_map.select_for_current_replica(
self._values, replica_context)
device = distribute_lib.get_update_device()
if device is None:
device = device_util.canonicalize(device_util.current())
replica_id = self._device_map.replica_for_device(device)
if replica_id is None:
return self.primary
return self._values[replica_id]
def initialized_value(self):
return self._get_closest().initialized_value()
@property
def initial_value(self):
return self._get_closest().initial_value
@property
def graph(self):
return self.primary.graph
@property
def _shared_name(self):
return self._common_name
@property
def _unique_id(self):
return self.primary._unique_id # pylint: disable=protected-access
@property
def name(self):
return self.primary.name
@property
def dtype(self):
return self.primary.dtype
@property
def shape(self):
return self.primary.shape
def get_shape(self):
return self.primary.get_shape()
def to_proto(self, export_scope=None):
return self.primary.to_proto(export_scope=export_scope)
@property
def op(self):
# We want cross-replica code that does some var.op.X calls
# to work (even if the current device isn't in self.devices), but
# other uses of var.op in a cross-replica context to fail.
if distribution_strategy_context.get_cross_replica_context():
return DistributedVarOp(self.primary.op.name,
self.primary.op.graph,
self.primary.op.type)
return self.get().op
@property
def _in_graph_mode(self):
return self.primary._in_graph_mode # pylint: disable=protected-access
def read_value(self):
strategy = distribution_strategy_context.get_distribution_strategy()
return strategy.extended.read_var(self)
def _should_act_as_resource_variable(self):
"""Pass resource_variable_ops.is_resource_variable check."""
pass
ops.register_dense_tensor_like_type(DistributedVariable)
def _apply_aggregation(strategy, value, aggregation, destinations):
if aggregation == vs.VariableAggregation.ONLY_FIRST_REPLICA:
return strategy.broadcast(strategy.unwrap(value)[0],
destinations=destinations)
reduce_op = reduce_util.ReduceOp.from_variable_aggregation(aggregation)
return strategy.extended.reduce_to(reduce_op, value, destinations)
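# Illustrative sketch (variable names are assumptions): MEAN aggregation
# lowers to a reduce_to with ReduceOp.MEAN, while ONLY_FIRST_REPLICA
# broadcasts the first replica's value instead of reducing.
#
#   _apply_aggregation(strategy, per_replica_value,
#                      vs.VariableAggregation.MEAN, destinations=mirrored_var)
#   # is equivalent to:
#   # strategy.extended.reduce_to(reduce_util.ReduceOp.MEAN,
#   #                             per_replica_value, mirrored_var)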
class _MirroredSaveable(saver.BaseSaverBuilder.ResourceVariableSaveable):
"""Class for defining how to restore a MirroredVariable."""
def __init__(self, mirrored_variable, primary_variable, name):
self._mirrored_variable = mirrored_variable
super(_MirroredSaveable, self).__init__(primary_variable, "", name)
def restore(self, restored_tensors, restored_shapes):
"""Restore the same value into all variables."""
tensor, = restored_tensors
return control_flow_ops.group(tuple(
_assign_on_device(v.device, v, tensor)
for v in self._mirrored_variable.values))
class MirroredVariable(DistributedVariable, Mirrored,
checkpointable.CheckpointableBase):
"""Holds a map from device to variables whose values are kept in sync."""
def __init__(self, device_map, values, aggregation, logical_device=None):
super(MirroredVariable, self).__init__(
device_map, values, logical_device=logical_device)
self._aggregation = aggregation
# The arguments to update() are automatically unwrapped so the update()
# function would normally see regular variables, not MirroredVariables.
# However, the update function can still operate on wrapped MirroredVariables
# through object members, captured arguments, etc. This is more likely in an
# update_non_slot() function (like OptimizerV2._finish), which can
# update several non-slot variables in one call.
def _assign_func(self, *args, **kwargs):
f = kwargs.pop("f")
if distribution_strategy_context.get_cross_replica_context():
update_device = distribute_lib.get_update_device()
if update_device is not None:
# We are calling an assign function on the mirrored variable in an
# update context.
v = self.get(device=update_device)
return f(v, *args, **kwargs)
# We are calling assign on the mirrored variable in cross replica context,
# use `strategy.update()` to update the variable.
strategy = distribution_strategy_context.get_distribution_strategy()
return strategy.update(self, f, *args, **kwargs)
else:
_assert_replica_context()
# We are calling an assign function on the mirrored variable in replica
# context.
# We reduce the value we want to assign/add/sub. More details about how we
# handle the different use cases can be found in the _reduce method.
# We call the function on each of the mirrored variables with the reduced
# value.
if self._aggregation == vs.VariableAggregation.NONE:
raise ValueError("You must specify an aggregation method to update a "
"MirroredVariable in Replica Context.")
def merge_fn(strategy, value, *other_args, **other_kwargs):
v = _apply_aggregation(strategy, value, self._aggregation, self)
return strategy.update(self, f, v, *other_args, **other_kwargs)
return distribution_strategy_context.get_replica_context().merge_call(
merge_fn, args=args, kwargs=kwargs)
def assign_sub(self, *args, **kwargs):
assign_sub_fn = lambda var, *a, **kw: var.assign_sub(*a, **kw)
return self._assign_func(f=assign_sub_fn, *args, **kwargs)
def assign_add(self, *args, **kwargs):
assign_add_fn = lambda var, *a, **kw: var.assign_add(*a, **kw)
return self._assign_func(f=assign_add_fn, *args, **kwargs)
def assign(self, *args, **kwargs):
assign_fn = lambda var, *a, **kw: var.assign(*a, **kw)
return self._assign_func(f=assign_fn, *args, **kwargs)
@property
def aggregation(self):
return self._aggregation
def _get_cross_replica(self):
device = device_util.canonicalize(device_util.current())
replica_id = self._device_map.replica_for_device(device)
if replica_id is None:
return array_ops.identity(self.primary)
return array_ops.identity(self._values[replica_id])
def _as_graph_element(self):
# pylint: disable=protected-access
if distribution_strategy_context.get_cross_replica_context():
return self.primary._as_graph_element()
return self.get()._as_graph_element()
def _gather_saveables_for_checkpoint(self):
"""Overrides CheckpointableBase method.
This allows both name-based and object-based save and restore of
MirroredVariables.
Returns:
A dictionary mapping attribute names to `SaveableObject` factories.
"""
def _saveable_factory(name=self._common_name):
return _MirroredSaveable(self, self.primary, name)
return {checkpointable.VARIABLE_VALUE_KEY: _saveable_factory}
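# Illustrative sketch (the variable and delta names are assumptions):
# assigning to a MirroredVariable from replica context funnels through
# merge_call, which aggregates the per-replica values and then updates every
# component.
#
#   # inside a replicated step function:
#   mirrored_var.assign_add(delta)
#   # -> merge_fn aggregates `delta` across replicas per `aggregation`,
#   #    then strategy.update applies the same value on each device.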
# Register a conversion function which reads the value of the variable,
# allowing instances of the class to be used as tensors.
def _tensor_conversion_mirrored(var, dtype=None, name=None, as_ref=False):
# Try to avoid assignments to and other mutations of MirroredVariable
# state except through a DistributionStrategy.update() call.
assert not as_ref
return ops.internal_convert_to_tensor(
var.get(), dtype=dtype, name=name, as_ref=as_ref)
ops.register_tensor_conversion_function(MirroredVariable,
_tensor_conversion_mirrored)
def _enclosing_tpu_context():
# pylint: disable=protected-access
tpu_context = ops.get_default_graph()._get_control_flow_context()
# pylint: enable=protected-access
while tpu_context is not None and not isinstance(
tpu_context, control_flow_ops.XLAControlFlowContext):
tpu_context = tpu_context.outer_context
return tpu_context
# TODO(jhseu): Deduplicate code. We copy code because we don't want to
# inherit from DistributedDelegate. DistributedDelegate will not work in a
# tpu.replicate() because it assumes that you're in a device context where you
# can operate on a single version of the variable, but a tpu.replicate()
# operates on all variables and is replicated during a rewrite pass.
class TPUMirroredVariable(checkpointable.CheckpointableBase):
"""Holds a map from device to TPU variables whose values are kept in sync."""
def __init__(self, device_map, values, aggregation, logical_device=None):
assert isinstance(device_map, DeviceMap)
self._device_map = device_map
self._values = tuple(values)
if logical_device is None:
logical_device = device_map.logical_device_from_values(self._values)
self._logical_device = logical_device
# Use a weakref to make it easy to map from the contained values
# to the container without introducing a reference cycle.
for v in self._values:
v._mirrored_container = weakref.ref(self) # pylint: disable=protected-access
self._common_name = self.primary.name.split(":")[0]
self._aggregation = aggregation
# Needed for GradientTape
self._trainable = self.primary.trainable
# Typically like `DistributedVariable`, a `TPUMirroredVariable`'s
    # initializer is composed of the initializers of the component variables.
# However, in some cases, such as when restoring from a checkpoint, we may
# set the _initializer_op property on the entire `TPUMirroredVariable`.
self._initializer_op = None
def _get(self, device=None):
"""Returns the value for the current device or raises a ValueError."""
if device is None:
replica_context = distribution_strategy_context.get_replica_context()
if replica_context:
return self._device_map.select_for_current_replica(
self._values, replica_context)
else:
device = distribute_lib.get_update_device()
if device is None:
return self._get_cross_replica()
device = device_util.canonicalize(device)
return self._device_map.select_for_device(self._values, device)
@property
def primary(self):
"""Returns a representative component."""
return self._values[0]
@property
def devices(self):
return self._device_map.logical_to_actual_devices(self._logical_device)
@property
def logical_device(self):
return self._logical_device
@property
def device_map(self):
return self._device_map
# TODO(josh11b): Replace unwrap with this?
@property
def values(self):
return self._values
# pylint: disable=multiple-statements
def __add__(self, o): return self.read_value() + o
def __radd__(self, o): return o + self.read_value()
def __sub__(self, o): return self.read_value() - o
def __rsub__(self, o): return o - self.read_value()
def __mul__(self, o): return self.read_value() * o
def __rmul__(self, o): return o * self.read_value()
def __truediv__(self, o): return self.read_value() / o
def __rtruediv__(self, o): return o / self.read_value()
def __floordiv__(self, o): return self.read_value() // o
def __rfloordiv__(self, o): return o // self.read_value()
def __mod__(self, o): return self.read_value() % o
def __rmod__(self, o): return o % self.read_value()
def __lt__(self, o): return self.read_value() < o
def __le__(self, o): return self.read_value() <= o
def __gt__(self, o): return self.read_value() > o
def __ge__(self, o): return self.read_value() >= o
def __and__(self, o): return self.read_value() & o
def __rand__(self, o): return o & self.read_value()
def __or__(self, o): return self.read_value() | o
def __ror__(self, o): return o | self.read_value()
def __xor__(self, o): return self.read_value() ^ o
def __rxor__(self, o): return o ^ self.read_value()
def __getitem__(self, o): return self.read_value()[o]
def __pow__(self, o, modulo=None): return pow(self.read_value(), o, modulo)
def __rpow__(self, o): return pow(o, self.read_value())
def __invert__(self): return ~self.read_value()
def __neg__(self): return -self.read_value()
def __abs__(self): return abs(self.read_value())
def __div__(self, o):
try:
return self.read_value().__div__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rdiv__(self, o):
try:
return self.read_value().__rdiv__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __matmul__(self, o):
try:
return self.read_value().__matmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rmatmul__(self, o):
try:
return self.read_value().__rmatmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __str__(self):
devices = self.devices
debug_str = ",\n".join(" %d %s: %s" % (i, devices[i], self._values[i])
for i in range(len(devices)))
return "%s:{\n%s\n}" % (self.__class__.__name__, debug_str)
def __repr__(self):
devices = self.devices
debug_repr = ",\n".join(" %d %s: %r" % (i, devices[i], self._values[i])
for i in range(len(devices)))
return "%s:{\n%s\n}" % (self.__class__.__name__, debug_repr)
@property
def handle(self):
# If we're in a tpu.rewrite(), return the replicated handle.
tpu_context = _enclosing_tpu_context()
if tpu_context is not None:
return tpu_context.get_replicated_var_handle(
self._common_name, self._values)
device = distribute_lib.get_update_device()
if device is None:
return self.primary.handle
return self._get(device=device).handle
@property
def device(self):
return self._get().device
# The arguments to update() are automatically unwrapped so the update()
# function would normally see regular variables, not MirroredVariables.
# However, the update function can still operate on wrapped MirroredVariables
# through object members, captured arguments, etc. This is more likely in an
# update_non_slot() function (like OptimizerV2._finish), which can
# update several non-slot variables in one call.
def _assign_func(self, *args, **kwargs):
strategy = distribution_strategy_context.get_distribution_strategy()
if strategy.__class__.__name__ != "TPUStrategy":
raise ValueError("You may only assign to a TPUMirroredVariable within a "
"TPUStrategy.")
f = kwargs.pop("f")
if distribution_strategy_context.get_cross_replica_context():
if _enclosing_tpu_context() is not None:
return distribution_strategy_context.get_distribution_strategy().update(
self, f, *args, **kwargs)
update_device = distribute_lib.get_update_device()
# We are calling update on the mirrored variable in cross replica context.
if update_device is not None:
# We are calling an assign function on the mirrored variable in cross
# replica context.
v = self._get(device=update_device)
return f(v, *args, **kwargs)
return distribution_strategy_context.get_distribution_strategy().update(
self, f, *args, **kwargs)
else:
_assert_replica_context()
# We are calling an assign function on the mirrored variable in replica
# context.
# We reduce the value we want to assign/add/sub. More details about how we
# handle the different use cases can be found in the _reduce method.
# We call the function on each of the mirrored variables with the reduced
# value.
if self._aggregation == vs.VariableAggregation.NONE:
raise ValueError("You must specify an aggregation method to update a "
"TPUMirroredVariable in Replica Context.")
def merge_fn(strategy, value, *other_args, **other_kwargs):
v = _apply_aggregation(strategy, value, self._aggregation, self)
return strategy.update(self, f, v, *other_args, **other_kwargs)
return distribution_strategy_context.get_replica_context().merge_call(
merge_fn, args=args, kwargs=kwargs)
@contextlib.contextmanager
def _handle_graph(self, handle):
# Note: might have an eager tensor but not be executing eagerly when
# building functions.
if (context.executing_eagerly() or isinstance(handle, ops.EagerTensor)
or ops.has_default_graph()):
yield
else:
with handle.graph.as_default():
yield
@property
def trainable(self):
return self._trainable
def _read_variable_op(self, parent_op=None):
if self.trainable:
tape.variable_accessed(self)
if parent_op is not None:
with ops.control_dependencies([parent_op]):
return gen_resource_variable_ops.read_variable_op(
self.handle, self.dtype)
return gen_resource_variable_ops.read_variable_op(
self.handle, self.dtype)
def read_value(self):
return self._read_variable_op()
def assign_sub(self, *args, **kwargs):
def assign_sub_fn(var, delta, **kw):
name = kw.pop("name", None)
read_value = kw.pop("read_value", True)
with self._handle_graph(var.handle):
op = gen_resource_variable_ops.assign_sub_variable_op(
var.handle, ops.convert_to_tensor(delta, dtype=self.dtype),
name=name)
if read_value:
return self._read_variable_op(parent_op=op)
return op
return self._assign_func(f=assign_sub_fn, *args, **kwargs)
def assign_add(self, *args, **kwargs):
def assign_add_fn(var, delta, **kw):
name = kw.pop("name", None)
read_value = kw.pop("read_value", True)
with self._handle_graph(var.handle):
op = gen_resource_variable_ops.assign_add_variable_op(
var.handle, ops.convert_to_tensor(delta, dtype=self.dtype),
name=name)
if read_value:
return self._read_variable_op(parent_op=op)
return op
return self._assign_func(f=assign_add_fn, *args, **kwargs)
def assign(self, *args, **kwargs):
def assign_fn(var, value, **kw):
name = kw.pop("name", None)
read_value = kw.pop("read_value", True)
with self._handle_graph(var.handle):
op = gen_resource_variable_ops.assign_variable_op(
var.handle, ops.convert_to_tensor(value, dtype=self.dtype),
name=name)
if read_value:
return self._read_variable_op(parent_op=op)
return op
return self._assign_func(f=assign_fn, *args, **kwargs)
@property
def aggregation(self):
return self._aggregation
@property
def constraint(self):
return None
@property
def initializer(self):
if self._initializer_op:
init_op = self._initializer_op
else:
init_op = control_flow_ops.group(tuple(
v.initializer for v in self._values))
return init_op
@property
def graph(self):
return self.primary.graph
@property
def _shared_name(self):
return self._common_name
@property
def _unique_id(self):
return self.primary._unique_id # pylint: disable=protected-access
@property
def name(self):
return self.primary.name
@property
def dtype(self):
return self.primary.dtype
@property
def shape(self):
return self.primary.shape
def get_shape(self):
return self.primary.get_shape()
def to_proto(self, export_scope=None):
return self.primary.to_proto(export_scope=export_scope)
def _get_cross_replica(self):
device = device_util.canonicalize(device_util.current())
replica = self._device_map.replica_for_device(device)
if replica is None:
return self.primary
return self._values[replica]
def _as_graph_element(self):
# pylint: disable=protected-access
if distribution_strategy_context.get_cross_replica_context():
return self.primary._as_graph_element()
return self._read_variable_op()
def _gather_saveables_for_checkpoint(self):
"""Overrides CheckpointableBase method.
This allows both name-based and object-based save and restore of
MirroredVariables.
Returns:
A dictionary mapping attribute names to `SaveableObject` factories.
"""
def _saveable_factory(name=self._common_name):
return _MirroredSaveable(self, self.primary, name)
return {checkpointable.VARIABLE_VALUE_KEY: _saveable_factory}
def _should_act_as_resource_variable(self):
"""Pass resource_variable_ops.is_resource_variable check."""
pass
# Needed to pass ResourceVariable checks.
@property
def op(self):
return self.primary.op
# pylint: disable=protected-access
@property
def _save_slice_info(self):
return self.primary._save_slice_info
def _get_save_slice_info(self):
return self.primary._get_save_slice_info()
def _set_save_slice_info(self, save_slice_info):
return self.primary._set_save_slice_info(save_slice_info)
# pylint: enable=protected-access
@property
def _in_graph_mode(self):
return self.primary._in_graph_mode # pylint: disable=protected-access
def _dense_var_to_tensor(self, dtype=None, name=None, as_ref=False):
"""Converts a variable to a tensor."""
# pylint: disable=protected-access
if _enclosing_tpu_context() is None:
return self._get()._dense_var_to_tensor(dtype, name, as_ref)
# pylint: enable=protected-access
if dtype is not None and dtype != self.dtype:
return math_ops.cast(self.read_value(), dtype)
if as_ref:
return self.handle
else:
return self.read_value()
def is_initialized(self, name=None):
"""Identifies if all the component variables are initialized.
Args:
name: Name of the final `logical_and` op.
Returns:
The op that evaluates to True or False depending on if all the
component variables are initialized.
"""
# TODO(jhseu): Do we need TPU context implementation?
result = self.primary.is_initialized()
# We iterate through the list of values except the last one to allow us to
# name the final `logical_and` op the same name that is passed by the user
# to the `is_initialized` op. For distributed variables, the
# `is_initialized` op is a `logical_and` op.
for v in self._values[1:-1]:
result = math_ops.logical_and(result, v.is_initialized())
result = math_ops.logical_and(result, self._values[-1].is_initialized(),
name=name)
return result
# Register a conversion function which reads the value of the variable,
# allowing instances of the class to be used as tensors.
def _tensor_conversion_tpu_mirrored(var, dtype=None, name=None, as_ref=False):
return var._dense_var_to_tensor(dtype=dtype, name=name, as_ref=as_ref) # pylint: disable=protected-access
ops.register_tensor_conversion_function(TPUMirroredVariable,
_tensor_conversion_tpu_mirrored)
ops.register_dense_tensor_like_type(TPUMirroredVariable)
class _ReplicaLocalSaveable(saver.BaseSaverBuilder.SaveableObject):
"""Class for defining how to restore a ReplicaLocalVariable."""
def __init__(self, replica_local_variable, name):
self._replica_local_variable = replica_local_variable
# We use a callable so that we don't have to evaluate this expression
# in the case where we are trying to restore instead of save.
def tensor():
strategy = distribution_strategy_context.get_distribution_strategy()
return strategy.extended.read_var(replica_local_variable)
spec = saver.BaseSaverBuilder.SaveSpec(
tensor=tensor,
slice_spec="",
name=name,
dtype=replica_local_variable.dtype)
super(_ReplicaLocalSaveable, self).__init__(tensor, [spec], name)
def restore(self, restored_tensors, restored_shapes):
"""Restore the same value into all variables."""
tensor, = restored_tensors
return self._replica_local_variable.assign(tensor)
def _assert_replica_context():
if not distribution_strategy_context.get_replica_context():
raise RuntimeError(
"Replica-local variables may only be assigned in a replica context.")
class ReplicaLocalVariable(DistributedVariable, PerReplica,
checkpointable.CheckpointableBase):
"""Holds a map from device to variables whose values are reduced on save."""
def __init__(self, device_map, values, aggregation, logical_device=None):
self._aggregation = aggregation
super(ReplicaLocalVariable, self).__init__(
device_map, values, logical_device=logical_device)
def assign_sub(self, *args, **kwargs):
_assert_replica_context()
return self.get().assign_sub(*args, **kwargs)
def assign_add(self, *args, **kwargs):
_assert_replica_context()
return self.get().assign_add(*args, **kwargs)
def assign(self, *args, **kwargs):
if distribution_strategy_context.get_cross_replica_context():
# To preserve the sum across save and restore, we have to divide the
# total across all devices when restoring a variable that was summed
# when saving.
tensor = args[0]
if self._aggregation == vs.VariableAggregation.SUM:
tensor *= 1. / len(self.devices)
return control_flow_ops.group(tuple(
_assign_on_device(v.device, v, tensor) for v in self._values))
else:
_assert_replica_context()
return self.get().assign(*args, **kwargs)
@property
def aggregation(self):
return self._aggregation
def _get_cross_replica(self):
if self._aggregation == vs.VariableAggregation.ONLY_FIRST_REPLICA:
return self.primary
# TODO(josh11b): Use a strategy-specific method.
total = math_ops.add_n(self._values)
if self._aggregation == vs.VariableAggregation.MEAN:
      return total * (1. / len(self._values))
return total
def _as_graph_element(self):
# pylint: disable=protected-access
if distribution_strategy_context.get_cross_replica_context():
return self._get_cross_replica()
return self.get()._as_graph_element()
def _gather_saveables_for_checkpoint(self):
"""Overrides CheckpointableBase method.
This allows both name-based and object-based save and restore of
ReplicaLocalVariables.
Returns:
A dictionary mapping attribute names to `SaveableObject` factories.
"""
def _saveable_factory(name=self._common_name):
return _ReplicaLocalSaveable(self, name)
return {checkpointable.VARIABLE_VALUE_KEY: _saveable_factory}
# Register a conversion function for ReplicaLocalVariable which allows as_ref to
# be true.
def _tensor_conversion_replica_local(var, dtype=None, name=None, as_ref=False):
return ops.internal_convert_to_tensor(
var.get(), dtype=dtype, name=name, as_ref=as_ref)
ops.register_tensor_conversion_function(ReplicaLocalVariable,
_tensor_conversion_replica_local)
def regroup(device_map, values, wrap_class=PerReplica):
"""Makes a nest per-replica into a nest of PerReplica/Mirrored values."""
assert isinstance(device_map, DeviceMap)
assert len(values) == device_map.num_replicas_in_graph
v0 = values[0]
if isinstance(v0, list):
for v in values[1:]:
assert isinstance(v, list)
assert len(v) == len(v0), ("len(v) == %d, len(v0) == %d, v: %s, v0: %s" %
(len(v), len(v0), v, v0))
return [regroup(device_map, tuple(v[i] for v in values), wrap_class)
for i in range(len(v0))]
if isinstance(v0, tuple):
for v in values[1:]:
assert isinstance(v, tuple)
assert len(v) == len(v0)
regrouped_tuple = tuple(
regroup(device_map, tuple(v[i] for v in values), wrap_class)
for i in range(len(v0)))
if hasattr(v0, "_fields"):
# This tuple is in fact a namedtuple! Create a new namedtuple instance
# and initialize it with the regrouped values:
assert hasattr(type(v0), "_make")
return type(v0)._make(regrouped_tuple)
else:
return regrouped_tuple
if isinstance(v0, dict):
v0keys = set(v0.keys())
for v in values[1:]:
assert isinstance(v, dict), ("v[0]: %r v[i]: %r" % (v0, v))
assert set(v.keys()) == v0keys, ("v[0].keys: %s v[i].keys: %s" %
(v0keys, set(v.keys())))
return {key: regroup(device_map, tuple(v[key] for v in values), wrap_class)
for key in v0keys}
# If exactly the same object across all devices, return it unwrapped.
same_id = True
for v in values[1:]:
if v is not v0:
same_id = False
break
# Consider three cases where same_id is true:
# * If v0 is a DistributedVariable (a MirroredVariable or
# ReplicaLocalVariable, and same_id means it is the same across all
# devices), we want to return it. We check DistributedVariable
# specifically since it can look like it has a
# _distributed_container member since its members do.
# * If v0 is a member of a distributed variable, in which case
# hasattr(v0, "_distributed_container") is true, we want to
# return the DistributedVariable that contains it using the
# _distributed_container logic below. This case can trigger
# same_id when there is only one device.
# * In any other situation, same_id means we return v0.
if same_id and (isinstance(v0, DistributedVariable) or
not hasattr(v0, "_distributed_container")):
return v0
# Detect the case where each device has a parallel component of the
# same MirroredVariable (or ReplicaLocalVariable). In this case we
# want to return the containing MirroredVariable, after a bunch of
# sanity checking. In particular, each component should have the
# same container, and the devices of the variables should match the
# keys of the per-replica dictionary.
if hasattr(v0, "_distributed_container"):
# pylint: disable=protected-access
assert not isinstance(v0, MirroredVariable), (
"ids = %s, values = %s" % ([id(v) for v in values], values))
assert device_map.is_device_in_replica(v0.device, 0), (
"v0.device = %s, device_map = %s" % (v0.device, device_map))
distributed_container = v0._distributed_container()
assert distributed_container is not None
for r, v in enumerate(values[1:]):
assert device_map.is_device_in_replica(v.device, r + 1), (
"v.device = %s, r = %d, device_map = %s" %
(v.device, r + 1, device_map))
assert distributed_container is v._distributed_container()
return distributed_container
# pylint: enable=protected-access
return wrap_class(device_map, values)
def select_replica(replica_id, structured):
"""Specialize a nest of regular & per-replica values for one replica."""
def _get(x):
return x.values[replica_id] if isinstance(x, DistributedValues) else x
return nest.map_structure(_get, structured)
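# Illustrative round trip (device strings are assumptions): regroup wraps
# parallel per-replica structures, and select_replica specializes them back.
#
#   device_map = ReplicaDeviceMap(("/device:GPU:0", "/device:GPU:1"))
#   wrapped = regroup(device_map, ({"loss": 1.0}, {"loss": 2.0}))
#   # wrapped == {"loss": PerReplica holding (1.0, 2.0)}
#   select_replica(1, wrapped)  # -> {"loss": 2.0}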
def select_device_mirrored(device, structured):
"""Specialize a nest of regular & mirrored values for one device."""
def _get_mirrored(x):
if isinstance(x, DistributedValues):
if not isinstance(x, Mirrored):
raise TypeError(
"Expected value to be mirrored across replicas: %s in %s." %
(x, structured))
return x.get(device)
else:
return x
return nest.map_structure(_get_mirrored, structured)
def update_regroup(extended, device_map, updates, group):
"""Regroup for an update, with dependencies to ensure all updates execute."""
# TODO(josh11b): Replace "Mirrored" here with a function that does the following
# so we can avoid all these nest operations.
regrouped = regroup(device_map, updates, Mirrored)
if not group:
return nest.map_structure(extended._unwrap, regrouped) # pylint: disable=protected-access
grouped_flat = []
for u in nest.flatten(regrouped):
if isinstance(u, DistributedValues):
g = extended._group(u) # pylint: disable=protected-access
if u.is_tensor_like:
# Make sure we run all updates. Without this, something like
# session.run(extended.update(...)) may only update one replica.
values = []
for d in u.devices:
with ops.device(d), ops.control_dependencies([g]):
values.append(array_ops.identity(u.get(d)))
g = Mirrored(u.device_map, values)
else:
g = u
grouped_flat.append(g)
return nest.pack_sequence_as(regrouped, grouped_flat)
class InputWorkers(object):
"""A 1-to-many mapping from input worker devices to compute devices."""
def __init__(self, device_map, worker_device_pairs=None, logical_device=0):
"""Initialize an `InputWorkers` object.
Args:
device_map: A `DeviceMap` with the computation devices fed by the
input workers.
worker_device_pairs: A sequence of pairs:
`(input device, a tuple of compute devices fed by that input device)`.
logical_device: The logical device of `device_map` to feed.
"""
self._device_map = device_map
self._logical_device = logical_device
if worker_device_pairs is None:
worker_device_pairs = ((
device_util.canonicalize("/device:CPU:0"),
device_map.logical_to_actual_devices(logical_device)),)
self._input_worker_devices = tuple(d for d, _ in worker_device_pairs)
self._fed_devices = tuple(tuple(device_util.canonicalize(d) for d in f)
for _, f in worker_device_pairs)
flattened = tuple(d for l in self._fed_devices for d in l)
assert (flattened ==
device_map.logical_to_actual_devices(logical_device)), (
"flattened: %s logical device %d: %s" %
(flattened, logical_device,
device_map.logical_to_actual_devices(logical_device)))
@property
def device_map(self):
return self._device_map
@property
def logical_device(self):
return self._logical_device
@property
def num_workers(self):
return len(self._input_worker_devices)
@property
def worker_devices(self):
return self._input_worker_devices
def compute_devices_for_worker(self, worker_index):
return self._fed_devices[worker_index]
def __repr__(self):
devices = self.worker_devices
debug_repr = ",\n".join(" %d %s: %s" %
(i, devices[i], self._fed_devices[i])
for i in range(len(devices)))
return "%s:{\n%s\n device_map: %s}" % (
self.__class__.__name__, debug_repr, self._device_map)
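# Illustrative sketch (device strings are assumptions): a single CPU input
# worker feeding two GPU compute devices.
#
#   device_map = ReplicaDeviceMap(("/device:GPU:0", "/device:GPU:1"))
#   input_workers = InputWorkers(
#       device_map,
#       worker_device_pairs=(("/device:CPU:0",
#                             ("/device:GPU:0", "/device:GPU:1")),))
#   input_workers.num_workers                    # -> 1
#   input_workers.compute_devices_for_worker(0)  # -> the two GPU devices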
class PerReplicaDataIterator(object):
"""An iterator (like `tf.data.Iterator`) into a `PerReplicaDataset`."""
def __init__(self, iterator, input_workers, worker_index, prefetch_on_device):
assert isinstance(input_workers, InputWorkers)
self._iterator = iterator
self._input_workers = input_workers
self._worker_index = worker_index
self._prefetch_on_device = prefetch_on_device
@property
def initializer(self):
return self._iterator.initializer
def get_next_as_list(self, name=None):
"""Scatter the input across devices."""
if self._prefetch_on_device:
data_list = self._iterator.get_next()
else:
batch = self._iterator.get_next(name=name)
data_list = []
def get_ith(i):
return lambda x: x[i]
devices = self._input_workers.compute_devices_for_worker(
self._worker_index)
for i, d in enumerate(devices):
v = nest.map_structure(get_ith(i), batch)
if context.executing_eagerly():
with ops.device(d):
v = nest.map_structure(array_ops.identity, v)
data_list.append(v)
return data_list
def get_next(self, name=None):
assert self._input_workers.num_workers == 1
data_list = self.get_next_as_list(name)
return regroup(self._input_workers.device_map, data_list)
@property
def output_classes(self):
return self._iterator.output_classes
@property
def output_shapes(self):
return self._iterator.output_shapes
@property
def output_types(self):
return self._iterator.output_types
class PerReplicaDataset(object):
"""Like `tf.data.Dataset` split devices, producing `PerReplica` data."""
def __init__(self, dataset, input_workers, worker_index,
prefetch_on_device=None):
assert isinstance(input_workers, InputWorkers)
assert worker_index is not None
assert worker_index is not True
assert worker_index is not False
self._input_workers = input_workers
self._worker_index = worker_index
# Default to using prefetching in graph mode, unless specified.
# TODO(rohanj): Enable prefetching in eager mode.
self._prefetch_on_device = prefetch_on_device
if self._prefetch_on_device is None:
self._prefetch_on_device = not context.executing_eagerly()
assert not (self._prefetch_on_device and context.executing_eagerly()), (
"Prefetching is only supported in graph mode currently")
self._dataset = dataset
if not self._prefetch_on_device:
# TODO(priyag): If dropping remainder is not appropriate, find another
# approach to distributing the dataset when not possible to divide evenly.
# Possibly not an issue when we start using PartitionedDataset.
num_replicas = len(input_workers.compute_devices_for_worker(worker_index))
self._dataset = dataset.batch(num_replicas, drop_remainder=True)
def make_one_shot_iterator(self):
"""Get a one time use iterator for the distributed PerReplicaDataset."""
# Graph mode with one shot iterator is disabled.
if not context.executing_eagerly():
raise ValueError("Cannot create a one shot iterator. Please use "
"`make_initializable_iterator()` instead.")
# Eager mode prefetching would error out in constructor. Only remaining
# case is non-prefetching in eager mode. We delegate to
# PerReplicaDataIterator to handle that case.
dataset_iterator = dataset_ops.make_one_shot_iterator(self._dataset)
return PerReplicaDataIterator(
dataset_iterator, self._input_workers, self._worker_index,
prefetch_on_device=False)
def make_initializable_iterator(self):
"""Get an initializable iterator for the distributed PerReplicaDataset."""
# Eager mode generates already initialized iterators. Hence we cannot create
# an initializable iterator.
if context.executing_eagerly():
raise ValueError("Cannot create initializable iterator in Eager mode. "
"Please use `make_one_shot_iterator` instead.")
if self._prefetch_on_device:
replica_devices = self._input_workers.compute_devices_for_worker(
self._worker_index)
dataset_iterator = multi_device_iterator_ops.MultiDeviceIterator(
self._dataset, replica_devices)
else:
dataset_iterator = dataset_ops.make_initializable_iterator(self._dataset)
return PerReplicaDataIterator(
dataset_iterator, self._input_workers, self._worker_index,
prefetch_on_device=self._prefetch_on_device)
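# Illustrative sketch (graph mode; the dataset is an assumption): iterating a
# PerReplicaDataset through an initializable iterator.
#
#   per_replica_ds = PerReplicaDataset(dataset, input_workers, worker_index=0)
#   it = per_replica_ds.make_initializable_iterator()
#   # sess.run(it.initializer)
#   # sess.run(it.get_next())  # a PerReplica batch, one slice per device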
class MultiWorkerDataIterator(object):
"""An iterator (like `tf.data.Iterator`) into a `MultiWorkerDataset`."""
def __init__(self, iterators, input_workers):
"""Initialize the `MultiWorkerDataIterator` object.
Args:
      iterators: a list of (worker, iterator) pairs.
input_workers: an `InputWorkers` object.
Raises:
ValueError: if iterators and input_workers are not compatible.
"""
assert isinstance(input_workers, InputWorkers)
workers = tuple(d for d, _ in iterators)
if workers != input_workers.worker_devices:
raise ValueError("iterators and input_workers are not compatible. "
"iterator workers: %r input_workers devices: %r" %
(workers, input_workers.worker_devices))
self._iterators = tuple(i for _, i in iterators)
self._input_workers = input_workers
@property
def initializer(self):
return control_flow_ops.group(
tuple(iterator.initializer for iterator in self._iterators))
def get_iterator(self, worker):
for i, w in enumerate(self._input_workers.worker_devices):
if worker == w:
return self._iterators[i]
return None
@property
def output_shapes(self):
return self._iterators[0].output_shapes
@property
def output_types(self):
return self._iterators[0].output_types
def get_next(self, name=None):
"""Scatter the input across hosts and devices."""
replicas = []
for worker, iterator in zip(self._input_workers.worker_devices,
self._iterators):
if name is not None:
d = tf_device.DeviceSpec.from_string(worker)
new_name = "%s_%s_%d" % (name, d.job, d.task)
else:
new_name = None
with ops.device(worker):
data_per_worker = iterator.get_next_as_list(name=new_name)
# Append to replicas to get a flat list of values indexed by replica.
replicas.extend(data_per_worker)
return regroup(self._input_workers.device_map, replicas)
class MultiWorkerDataset(object):
"""Like a `tf.data.Dataset` that distributes data to different workers.
Each worker gets one shard of the input dataset. This currently does not work
in eager mode.
"""
def __init__(self, dataset_fn, input_workers, prefetch_on_device=None,
auto_shard=False):
"""Initialize the MultiWorkerDataset object.
Args:
      dataset_fn: a function, or a list of functions (one per worker), each
        returning a `tf.data.Dataset`.
input_workers: an `InputWorkers` object.
prefetch_on_device: whether to prefetch to devices.
auto_shard: whether to auto-shard the dataset.
"""
assert isinstance(input_workers, InputWorkers)
if isinstance(dataset_fn, (list, tuple)):
if len(dataset_fn) != input_workers.num_workers:
raise ValueError("If `dataset_fn` is a list, it must have one entry "
"per worker")
if auto_shard:
raise ValueError(
"If `dataset_fn` is a list, `auto_shard` is not supported.")
self._input_workers = input_workers
self._datasets = []
# TODO(yuefengz, priyag): support different set of jobs for input
# processing.
for i, worker in enumerate(input_workers.worker_devices):
with ops.device(worker):
if isinstance(dataset_fn, (list, tuple)):
worker_input = dataset_fn[i]()
else:
worker_input = dataset_fn()
if auto_shard:
worker_input = input_ops.auto_shard_dataset(
worker_input, input_workers.num_workers, i)
dataset = PerReplicaDataset(worker_input, input_workers, i,
prefetch_on_device=prefetch_on_device)
self._datasets.append((worker, dataset))
def make_one_shot_iterator(self):
iterators = []
for worker, dataset in self._datasets:
with ops.device(worker):
iterators.append((worker, dataset_ops.make_one_shot_iterator(dataset)))
return MultiWorkerDataIterator(iterators, self._input_workers)
def make_initializable_iterator(self):
iterators = []
for worker, dataset in self._datasets:
with ops.device(worker):
iterators.append(
(worker, dataset_ops.make_initializable_iterator(dataset)))
return MultiWorkerDataIterator(iterators, self._input_workers)
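# Illustrative sketch (graph mode; the dataset_fn body is an assumption): each
# worker runs its own copy of dataset_fn and gets its own per-replica pipeline.
#
#   def dataset_fn():
#     return dataset_ops.Dataset.range(8).batch(2)
#   mwd = MultiWorkerDataset(dataset_fn, input_workers)
#   it = mwd.make_initializable_iterator()
#   # sess.run(it.initializer); sess.run(it.get_next())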
class InputIterator(object):
"""An input iterator, intended to be passed to `DistributionStrategy.run`."""
def get_next(self):
"""Returns the next inputs for all replicas."""
raise NotImplementedError("must be implemented in descendants")
def initialize(self):
"""Initialize the underlying input dataset, when applicable.
In eager mode, this will create a new iterator and return it.
In graph mode, this will initialize the same underlying iterator(s).
Users are required to call this if
- This iterator was returned from a call to `make_input_fn_iterator` with an
input function that returns a dataset.
- Or this iterator was returned from a call to `make_dataset_iterator`.
Returns:
A list of initialization ops to be executed.
"""
raise NotImplementedError("must be implemented in descendants")
class InputIteratorImpl(InputIterator):
"""Common implementation for all input iterators."""
def __init__(self, input_workers, iterators):
assert isinstance(input_workers, InputWorkers)
if not input_workers.worker_devices:
raise ValueError("Should have at least one worker for input iterator.")
self._iterators = iterators
self._input_workers = input_workers
self._is_eager = context.executing_eagerly()
def get_next(self, name=None):
"""Returns the next input from the iterator for all replicas."""
assert self._is_eager == context.executing_eagerly(), (
"Iterator should be created and used in same execution mode.")
replicas = []
for i, worker in enumerate(self._input_workers.worker_devices):
if name is not None:
d = tf_device.DeviceSpec.from_string(worker)
new_name = "%s_%s_%d" % (name, d.job, d.task)
else:
new_name = None
with ops.device(worker):
# Make `replicas` a flat list of values across all replicas.
replicas.extend(self._iterators[i].get_next_as_list(new_name))
return regroup(self._input_workers.device_map, replicas)
def initialize(self):
"""Initialze underlying iterators.
Returns:
A list of any initializer ops that should be run.
"""
assert self._is_eager == context.executing_eagerly(), (
"Iterator should be created and used in same execution mode.")
init_ops = []
for it in self._iterators:
init_ops.extend(it.initialize())
return init_ops
# TODO(priyag): Remove when we switch to using `MultiDeviceIterator` for TPUs.
@property
def output_classes(self):
return self._iterators[0].output_classes
# TODO(priyag): Remove when we switch to using `MultiDeviceIterator` for TPUs.
@property
def output_shapes(self):
return self._iterators[0].output_shapes
# TODO(priyag): Remove when we switch to using `MultiDeviceIterator` for TPUs.
@property
def output_types(self):
return self._iterators[0].output_types
# TODO(priyag): Remove when we switch to using `MultiDeviceIterator` for TPUs.
def get_iterator(self, worker):
for i, w in enumerate(self._input_workers.worker_devices):
if worker == w:
return self._iterators[i]
return None
class InputFunctionIterator(InputIteratorImpl):
"""Iterator created from input function."""
def __init__(self, input_fn, input_workers, input_contexts):
"""Make an iterator for input provided via an input function.
Currently implements PER_WORKER mode, in which the `input_fn` is called
once on each worker.
TODO(priyag): Add other replication modes.
TODO(priyag): Allow taking input function that returns a callable that
returns nest of tensors.
Args:
input_fn: Input function that returns a `tf.data.Dataset` object.
input_workers: an `InputWorkers` object.
input_contexts: A list of `InputContext` instances to be passed to call(s)
to `input_fn`. Length and order should match worker order in
`worker_device_pairs`.
"""
assert isinstance(input_workers, InputWorkers)
if input_workers.num_workers != len(input_contexts):
raise ValueError(
"Number of input workers (%d) is not same as number of "
"input_contexts (%d)" %
(input_workers.num_workers, len(input_contexts)))
iterators = []
for i, ctx in enumerate(input_contexts):
worker = input_workers.worker_devices[i]
with ops.device(worker):
result = input_fn(ctx)
if not isinstance(result, dataset_ops.DatasetV2):
raise ValueError("input_fn must return a tf.data.Dataset.")
devices = input_workers.compute_devices_for_worker(i)
iterator = _SingleWorkerDatasetIterator(result, worker, devices)
iterators.append(iterator)
super(InputFunctionIterator, self).__init__(input_workers, iterators)
class DatasetIterator(InputIteratorImpl):
"""Iterator created from input dataset."""
def __init__(self, dataset, input_workers, split_batch_by=None):
"""Make an iterator for the dataset on given devices.
If `split_batch_by` is not None, we "split" each batch of the
    dataset by the `split_batch_by` value. To achieve this, we first unbatch the
input dataset and then rebatch it with the per replica batch size that is
calculated using `global_batch_size // split_batch_by`.
The currently supported datasets are as follows:
`dataset.batch()` is the last operation on the dataset OR
`dataset.apply(map_and_batch)` is the last operation on the dataset OR
`dataset.batch().prefetch()` are the last 2 operations on the dataset OR
`dataset.apply(map_and_batch).prefetch()` are the last 2 operations.
TODO(priyag): Support multi worker / host cases properly by cloning
and sharding the dataset on each worker. Current setup will only work in
some cases, such as in-graph multi worker GPU case. If the input pipeline
has random shuffling (with a different seed on each worker), each worker
will see random input from the same overall dataset in each step. Otherwise,
each worker will see the same input in each step.
Args:
dataset: `tf.data.Dataset` that will be used as the input source.
input_workers: an `InputWorkers` object.
split_batch_by: Optional integer. If present, we "split" each batch of the
dataset by `split_batch_by` value.
"""
assert isinstance(input_workers, InputWorkers)
if split_batch_by:
dataset = _split_dataset_batch(dataset, split_batch_by)
iterators = []
for i, worker in enumerate(input_workers.worker_devices):
with ops.device(worker):
worker_devices = input_workers.compute_devices_for_worker(i)
iterator = _SingleWorkerDatasetIterator(dataset, worker, worker_devices)
iterators.append(iterator)
super(DatasetIterator, self).__init__(input_workers, iterators)
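# Illustrative construction of the class above (a sketch, not from this file;
# `device_map`, `worker_device_pairs` and `num_replicas` are assumptions):
#
#   workers = InputWorkers(device_map, worker_device_pairs)
#   it = DatasetIterator(dataset, workers, split_batch_by=num_replicas)
#   init_ops = it.initialize()        # graph mode: run these before get_next()
#   per_replica_batch = it.get_next()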
class _SingleWorkerDatasetIterator(object):
"""Iterator for a single `tf.data.Dataset`."""
def __init__(self, dataset, worker, devices):
"""Create iterator for the `dataset` to fetch data to worker's `devices` .
`MultiDeviceIterator` is used to prefetch input to the devices on the
given worker. `MultiDeviceIterator` doesn't work in eager mode yet.
Args:
dataset: A `tf.data.Dataset` instance.
worker: Worker on which ops should be created.
devices: Distribute data from `dataset` to these devices.
"""
self._dataset = dataset
self._worker = worker
self._devices = devices
self._is_eager = context.executing_eagerly()
self._make_iterator()
def _make_iterator(self):
"""Make appropriate iterator on the dataset."""
with ops.device(self._worker):
if self._is_eager:
# TODO(rohanj): Enable prefetching in eager mode.
# TODO(priyag): Measure the performance of this approach vs calling
# get_next on the original dataset N times.
dataset = self._dataset.batch(len(self._devices), drop_remainder=True)
iterator = dataset_ops.make_one_shot_iterator(dataset)
else:
iterator = multi_device_iterator_ops.MultiDeviceIterator(
self._dataset, self._devices)
self._iterator = iterator
def get_next_as_list(self, name=None):
"""Get next element from the underlying iterator."""
with ops.device(self._worker):
if self._is_eager:
# Batched dataset case.
batch = self._iterator.get_next(name=name)
data_list = []
for i, d in enumerate(self._devices):
v = nest.map_structure(operator.itemgetter(i), batch)
with ops.device(d):
v = nest.map_structure(array_ops.identity, v)
data_list.append(v)
else:
# MultiDeviceIterator case.
data_list = self._iterator.get_next()
return data_list
def initialize(self):
"""Initialze underlying iterator.
In eager execution, this simply recreates the underlying iterator.
In graph execution, it returns the initializer ops for the underlying
iterator.
Returns:
A list of any initializer ops that should be run.
"""
if self._is_eager:
self._make_iterator()
return []
else:
return [self._iterator.initializer]
@property
def output_classes(self):
return self._iterator.output_classes
@property
def output_shapes(self):
return self._iterator.output_shapes
@property
def output_types(self):
return self._iterator.output_types
def _split_dataset_batch(dataset, split_batch_by):
"""Divide a batch-ed dataset's batches into smaller batches."""
# TODO(sourabhbajaj): Remove this in lieu of distributed datasets
# pylint: disable=protected-access
def _get_batch_dataset(d):
"""Get the underlying batch dataset from the dataset object."""
if isinstance(d, dataset_ops.DatasetV1Adapter):
d = d._dataset
if isinstance(d, (dataset_ops.BatchDataset, batching._MapAndBatchDataset)):
return d
elif isinstance(d, dataset_ops.PrefetchDataset):
return _get_batch_dataset(d._input_dataset)
raise ValueError(
"Unable to get batched dataset from the input dataset. `batch` "
"`map_and_batch` need to be the last operations on the dataset. "
"The batch operations can be followed by a prefetch.")
batched_dataset = _get_batch_dataset(dataset)
if isinstance(batched_dataset, dataset_ops.BatchDataset):
batch_size = batched_dataset._batch_size
drop_remainder = batched_dataset._drop_remainder
elif isinstance(batched_dataset, batching._MapAndBatchDataset):
batch_size = batched_dataset._batch_size_t
drop_remainder = batched_dataset._drop_remainder_t
# pylint: enable=protected-access
if tensor_util.is_tensor(batch_size):
batch_size = tensor_util.constant_value(batch_size)
if tensor_util.is_tensor(drop_remainder):
drop_remainder = tensor_util.constant_value(drop_remainder)
if batch_size % split_batch_by:
raise ValueError(
"Batch size %s cannot be sharded evenly across replicas %s" % (
batch_size, split_batch_by))
new_batch_size = batch_size // split_batch_by
dataset = dataset.apply(batching.unbatch())
return dataset.batch(new_batch_size, drop_remainder=drop_remainder)
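# Worked example for the helper above (an assumed configuration): a pipeline
# built as `ds.batch(64).prefetch(1)` with split_batch_by=4 is unbatched and
# re-batched to 64 // 4 = 16 elements per replica, while split_batch_by=5
# would raise the ValueError above because 64 % 5 != 0.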
class MultiStepContext(object):
"""A context object that can be used to capture things when running steps.
This context object is useful when running multiple steps at a time using the
`experimental_run_steps_on_iterator` API. For e.g. it allows the user's step
function to specify which outputs to emit at what frequency. Currently it
supports capturing output from the last step, as well as capturing non tensor
outputs. In the future it will be augmented to support other use cases such
as output each N steps.
"""
def __init__(self):
"""Initialize an output context.
Returns:
A context object.
"""
self._last_step_outputs = {}
self._last_step_outputs_reduce_ops = {}
self._non_tensor_outputs = {}
@property
def last_step_outputs(self):
"""A dictionary consisting of outputs to be captured on last step.
Keys in the dictionary are names of tensors to be captured, as specified
when `set_last_step_output` is called.
Values in the dictionary are the tensors themselves. If
`set_last_step_output` was called with a `reduce_op` for this output,
then the value is the reduced value.
Returns:
A dictionary with last step outputs.
"""
return self._last_step_outputs
def _set_last_step_outputs(self, outputs):
"""Replace the entire dictionary of last step outputs."""
if not isinstance(outputs, dict):
raise ValueError("Need a dictionary to set last_step_outputs.")
self._last_step_outputs = outputs
def set_last_step_output(self, name, output, reduce_op=None):
"""Set `output` with `name` to be outputted from the last step.
Args:
name: String, name to identify the output. Doesn't need to match tensor
name.
output: The tensors that should be outputted with `name`. See below for
actual types supported.
reduce_op: Reduction method to use to reduce outputs from multiple
replicas. Required if `set_last_step_output` is called in a replica
context. Optional in cross_replica_context.
When present, the outputs from all the replicas are reduced using the
current distribution strategy's `reduce` method. Hence, the type of
`output` must be what's supported by the corresponding `reduce` method.
For e.g. if using MirroredStrategy and reduction is set, output
must be a `PerReplica` value.
The reduce method is also recorded in a dictionary
`_last_step_outputs_reduce_ops` for later interpreting of the
outputs as already reduced or not.
"""
if distribution_strategy_context.get_cross_replica_context():
self._last_step_outputs_reduce_ops[name] = reduce_op
if reduce_op is None:
self._last_step_outputs[name] = output
else:
distribution = distribution_strategy_context.get_distribution_strategy()
self._last_step_outputs[name] = distribution.reduce(reduce_op, output)
else:
assert reduce_op is not None
def merge_fn(distribution, value):
self._last_step_outputs[name] = distribution.reduce(reduce_op, value)
# Setting this inside the `merge_fn` because all replicas share the same
# context object, so it's more robust to set it only once (even if all
# the replicas are trying to set the same value).
self._last_step_outputs_reduce_ops[name] = reduce_op
distribution_strategy_context.get_replica_context().merge_call(
merge_fn, args=(output,))
@property
def non_tensor_outputs(self):
"""A dictionary consisting of any non tensor outputs to be captured."""
return self._non_tensor_outputs
def set_non_tensor_output(self, name, output):
"""Set `output` with `name` to be captured as a non tensor output."""
if distribution_strategy_context.get_cross_replica_context():
self._non_tensor_outputs[name] = output
else:
def merge_fn(distribution, value):
# NOTE(priyag): For non tensor outputs, we simply return all the values
# in a list as reduction doesn't make sense on non tensors.
self._non_tensor_outputs[name] = distribution.unwrap(value)
distribution_strategy_context.get_replica_context().merge_call(
merge_fn, args=(output,))
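# Illustrative sketch of how a step function might use this context inside
# `experimental_run_steps_on_iterator`; `step_fn`, `compute_loss`, `train_op`
# and the MEAN reduce op are assumptions, not part of this file:
#
#   def step_fn(ctx, inputs):
#     loss = compute_loss(inputs)                  # hypothetical helper
#     ctx.set_last_step_output("loss", loss,
#                              reduce_op=reduce_util.ReduceOp.MEAN)
#     return train_op
#
# After the steps run, `ctx.last_step_outputs["loss"]` holds the reduced loss
# from the final step.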
def value_container(val):
"""Returns the container that this per-replica `value` belongs to.
Args:
val: A value returned by `call_for_each_replica()` or a variable
created in `scope()`.
Returns:
A container that `value` belongs to.
If value does not belong to any container (including the case of
container having been destroyed), returns the value itself.
"""
if (hasattr(val, "_distributed_container") and
# DistributedVariable has _distributed_container defined
# but we don't want to return it.
not isinstance(val, DistributedVariable)):
container = val._distributed_container() # pylint: disable=protected-access
if container is not None:
return container
return val
# TODO(josh11b): Descend from Variable.
class AggregatingVariable(checkpointable.CheckpointableBase):
"""A wrapper around a variable that aggregates updates across replicas."""
def __init__(self, v, aggregation):
self._v = v
# NOTE: We don't use "_distributed_container" here because we don't want
# to trigger that code path in regroup().
v._aggregating_container = weakref.ref(self) # pylint: disable=protected-access
self._aggregation = aggregation
def get(self):
return self._v
def __getattr__(self, name):
return getattr(self._v, name)
def _assign_func(self, *args, **kwargs):
f = kwargs.pop("f")
if distribution_strategy_context.get_cross_replica_context():
update_device = distribute_lib.get_update_device()
if update_device is not None:
# We are calling an assign function in an update context.
return f(self._v, *args, **kwargs)
# We are calling an assign function in cross replica context, wrap it in
# an update call.
return distribution_strategy_context.get_distribution_strategy().update(
self, f, *args, **kwargs)
else:
assert distribution_strategy_context.get_replica_context()
# We are calling an assign function in replica context.
# We reduce the value we want to assign/add/sub. More details about how we
# handle the different use cases can be found in the _reduce method.
# We call the function with the reduced value.
if self._aggregation == vs.VariableAggregation.NONE:
raise ValueError("You must specify an aggregation method to update a "
"a variable in Replica Context.")
def merge_fn(strategy, value, *other_args, **other_kwargs):
v = _apply_aggregation(strategy, value, self._aggregation, self)
return strategy.update(self, f, v, *other_args, **other_kwargs)
return distribution_strategy_context.get_replica_context().merge_call(
merge_fn, args=args, kwargs=kwargs)
def assign_sub(self, *args, **kwargs):
assign_sub_fn = lambda var, *a, **kw: var.assign_sub(*a, **kw)
return self._assign_func(f=assign_sub_fn, *args, **kwargs)
def assign_add(self, *args, **kwargs):
assign_add_fn = lambda var, *a, **kw: var.assign_add(*a, **kw)
return self._assign_func(f=assign_add_fn, *args, **kwargs)
def assign(self, *args, **kwargs):
assign_fn = lambda var, *a, **kw: var.assign(*a, **kw)
return self._assign_func(f=assign_fn, *args, **kwargs)
@property
def aggregation(self):
return self._aggregation
@property
def name(self):
return self._v.name
@property
def dtype(self):
return self._v.dtype
# TODO(josh11b): Test saving & restoring.
def _gather_saveables_for_checkpoint(self):
return {checkpointable.VARIABLE_VALUE_KEY: self._v}
# pylint: disable=multiple-statements
def __add__(self, o): return self._v + o
def __radd__(self, o): return o + self._v
def __sub__(self, o): return self._v - o
def __rsub__(self, o): return o - self._v
def __mul__(self, o): return self._v * o
def __rmul__(self, o): return o * self._v
def __truediv__(self, o): return self._v / o
def __rtruediv__(self, o): return o / self._v
def __floordiv__(self, o): return self._v // o
def __rfloordiv__(self, o): return o // self._v
def __mod__(self, o): return self._v % o
def __rmod__(self, o): return o % self._v
def __lt__(self, o): return self._v < o
def __le__(self, o): return self._v <= o
def __gt__(self, o): return self._v > o
def __ge__(self, o): return self._v >= o
def __and__(self, o): return self._v & o
def __rand__(self, o): return o & self._v
def __or__(self, o): return self._v | o
def __ror__(self, o): return o | self._v
def __xor__(self, o): return self._v ^ o
def __rxor__(self, o): return o ^ self._v
def __getitem__(self, o): return self._v[o]
def __pow__(self, o, modulo=None): return pow(self._v, o, modulo)
def __rpow__(self, o): return pow(o, self._v)
def __invert__(self): return ~self._v
def __neg__(self): return -self._v
def __abs__(self): return abs(self._v)
def __div__(self, o):
try:
return self._v.__div__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rdiv__(self, o):
try:
return self._v.__rdiv__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __matmul__(self, o):
try:
return self._v.__matmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __rmatmul__(self, o):
try:
return self._v.__rmatmul__(o)
except AttributeError:
# See https://docs.python.org/3/library/constants.html#NotImplemented
return NotImplemented
def __str__(self):
return str(self._v)
def __repr__(self):
return repr(self._v)
def _should_act_as_resource_variable(self):
"""Pass resource_variable_ops.is_resource_variable check."""
pass
# Register a conversion function which reads the value of the variable,
# allowing instances of the class to be used as tensors.
def _tensor_conversion_aggregate(var, dtype=None, name=None, as_ref=False):
return ops.internal_convert_to_tensor(
var.get(), dtype=dtype, name=name, as_ref=as_ref)
ops.register_tensor_conversion_function(
AggregatingVariable, _tensor_conversion_aggregate)
ops.register_dense_tensor_like_type(AggregatingVariable)
| [
"[email protected]"
] | |
f193b93fe7ed2fb7532ba5e080503c1e75e790b1 | abad82a1f487c5ff2fb6a84059a665aa178275cb | /Codewars/8kyu/count-of-positives-slash-sum-of-negatives/Python/test.py | 61c68d640b924eca1c079ae4f9c87625028e4368 | [
"MIT"
] | permissive | RevansChen/online-judge | 8ae55f136739a54f9c9640a967ec931425379507 | ad1b07fee7bd3c49418becccda904e17505f3018 | refs/heads/master | 2021-01-19T23:02:58.273081 | 2019-07-05T09:42:40 | 2019-07-05T09:42:40 | 88,911,035 | 9 | 0 | null | null | null | null | UTF-8 | Python | false | false | 533 | py | # Python - 3.4.3
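# A minimal sketch of the solution under test (an addition for clarity; the
# Codewars runner supplies the `Test` framework used below):
def count_positives_sum_negatives(arr):
    # An empty input yields an empty list; zeros count as neither sign.
    if not arr:
        return []
    # [count of strictly positive numbers, sum of strictly negative numbers]
    return [len([x for x in arr if x > 0]), sum(x for x in arr if x < 0)]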
Test.describe("Basic tests")
Test.assert_equals(count_positives_sum_negatives([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14, -15]),[10,-65])
Test.assert_equals(count_positives_sum_negatives([0, 2, 3, 0, 5, 6, 7, 8, 9, 10, -11, -12, -13, -14]),[8,-50])
Test.assert_equals(count_positives_sum_negatives([1]),[1,0])
Test.assert_equals(count_positives_sum_negatives([-1]),[0,-1])
Test.assert_equals(count_positives_sum_negatives([0,0,0,0,0,0,0,0,0]),[0,0])
Test.assert_equals(count_positives_sum_negatives([]),[])
| [
"[email protected]"
] | |
3580008934feddc279f08ed0aa8247a1544c764e | a519c248ccac7cfb4c934b5ad1159f4937117fba | /More_String_Manipulation/correct_password.py | c97737688bac4ff84491704a13c359e3ebae10d0 | [] | no_license | kaci65/Nichola_Lacey_Python_By_Example_BOOK | 1ae5654b82c01e320eaf8d1e41fb03804c0786fc | 2cc18d2d8351a990cee31e253c4bcb298ba4c266 | refs/heads/main | 2023-04-12T18:16:03.663399 | 2021-05-02T14:34:00 | 2021-05-02T14:34:00 | 346,003,770 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 333 | py | #!/usr/bin/python3
"""check if user input is correct the secondtime round"""
passwrd = input("Please enter password: ")
passwrd2 = input("Please confirm password: ")
if passwrd == passwrd2:
print("Thank you")
elif passwrd.islower() != passwrd2.islower():
print("They must be in the same case")
else:
print("Incorrect")
| [
"[email protected]"
] | |
cc1c4654fe4f9af3d5dcd665eaad982c2c01d813 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02900/s932480273.py | 879d6d669a5cb836b2df581b153b4b2ff7c9129f | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 367 | py | a,b=map(int,input().split())
def factorization(n):
a=[]
while n%2==0:
a.append(2)
n//=2
f=3
while f*f<=n:
if n%f==0:
a.append(f)
n//=f
else:
f+=2
if n!=1:
a.append(n)
return a
s_a=set(factorization(a))
s_b=set(factorization(b))
ans_arr=s_a&s_b
print(len(ans_arr)+1) | [
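# Worked example: for a=12 (prime factors {2, 3}) and b=18 (prime factors
# {2, 3}) the shared set is {2, 3}, so the answer is 2 + 1 = 3 (the "+1"
# accounts for the divisor 1).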
"[email protected]"
] | |
b363e8058c2887db3c204ca9ca827bc385baa3b7 | f0d713996eb095bcdc701f3fab0a8110b8541cbb | /pPyAgyeNEvQsBytaR_19.py | 4aff1636b0a9b914bbbca2ca40fe25b9d7d42fe7 | [] | no_license | daniel-reich/turbo-robot | feda6c0523bb83ab8954b6d06302bfec5b16ebdf | a7a25c63097674c0a81675eed7e6b763785f1c41 | refs/heads/main | 2023-03-26T01:55:14.210264 | 2021-03-23T16:08:01 | 2021-03-23T16:08:01 | 350,773,815 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 448 | py | """
Write a function that calculates the **factorial** of a number
**recursively**.
### Examples
factorial(5) ➞ 120
factorial(3) ➞ 6
factorial(1) ➞ 1
factorial(0) ➞ 1
### Notes
N/A
"""
def factorial(n):
    # Base case: 0! == 1; otherwise recurse on n - 1,
    # matching the challenge's request for a recursive solution.
    if n == 0:
        return 1
    return n * factorial(n - 1)
| [
"[email protected]"
] | |
af549c5f28809aa0ff4c12ecd2d66aad2f41e951 | 2b167e29ba07e9f577c20c54cb943861d0ccfa69 | /numerical_analysis_backup/large-scale-multiobj2/core-arch5-guard0-beta0-hebbe/pareto25.py | 24ae69bfb9cc96c1eae6bea2eb789a9b5259fd40 | [] | no_license | LiYan1988/kthOld_OFC | 17aeeed21e195d1a9a3262ec2e67d6b1d3f9ff0f | b1237577ea68ad735a65981bf29584ebd889132b | refs/heads/master | 2021-01-11T17:27:25.574431 | 2017-01-23T05:32:35 | 2017-01-23T05:32:35 | 79,773,237 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,486 | py | # -*- coding: utf-8 -*-
"""
Created on Thu Aug 4 15:15:10 2016
@author: li
optimize both throughput and connections
"""
#import sys
#sys.path.insert(0, '/home/li/Dropbox/KTH/numerical_analysis/ILPs')
import csv
from gurobipy import *
import numpy as np
from arch5_decomposition_new import Arch5_decompose
np.random.seed(2010)
num_cores=10
num_slots=320
i = 5
filename = 'traffic_matrix_pod250_load50_'+str(i)+'.csv'
# print filename
tm = []
with open(filename) as f:
reader = csv.reader(f)
for idx, row in enumerate(reader):
row = [float(u) for u in row]
tm.append(row)
tm = np.array(tm)
#%% arch2
corev = [6, 8]
connection_ub = []
throughput_ub = []
obj_ub = []
connection_lb = []
throughput_lb = []
obj_lb = []
connection_he = []
throughput_he = []
obj_he = []
for c in corev:
m = Arch5_decompose(tm, num_slots=num_slots, num_cores=c,
alpha=1,beta=0)
m.create_model_routing(mipfocus=1,timelimit=36000,mipgap=0.01, method=3,
threads=20)
connection_ub.append(m.connection_ub_)
throughput_ub.append(m.throughput_ub_)
obj_ub.append(m.obj_ub_)
np.save('core_usagex_i%d_c%d.npy'%(i,c), m.core_usagex)
# m.create_model_sa(mipfocus=1,timelimit=26000,mipgap=0.01, method=2,
# SubMIPNodes=2000, heuristics=0.8, threads=4, presolve=2)
# connection_lb.append(m.connection_lb_)
# throughput_lb.append(m.throughput_lb_)
# obj_lb.append(m.obj_lb_)
# m.write_result_csv('cnklist_lb_%d_%d.csv'%(i,c), m.cnklist_lb)
connection_lb.append(0)
throughput_lb.append(0)
obj_lb.append(0)
# m.heuristic()
# connection_he.append(m.obj_heuristic_connection_)
# throughput_he.append(m.obj_heuristic_throughput_)
# obj_he.append(m.obj_heuristic_)
# m.write_result_csv('cnklist_heuristic_%d_%d.csv'%(i,c),
# m.cnklist_heuristic_)
connection_he.append(0)
throughput_he.append(0)
obj_he.append(0)
result = np.array([corev,
connection_ub,throughput_ub,obj_ub,
connection_lb,throughput_lb,obj_lb,
connection_he,throughput_he,obj_he]).T
file_name = "result_pareto_arch5_old_2_{}.csv".format(i)
with open(file_name, 'w') as f:
writer = csv.writer(f, delimiter=',')
writer.writerow(['beta', 'connection_ub', 'throughput_ub',
'obj_ub', 'connection_lb', 'throughput_lb', 'obj_lb',
'connection_he', 'throughput_he', 'obj_he'])
writer.writerows(result)
| [
"[email protected]"
] | |
38043e53324b35022507d8c661ba877f03024257 | a92db55b2a21ac3e4191e22800c66f8155c39f18 | /backendrooms/Chat/urls.py | c850fd056c1be7ea8dd3ff01e4d9ad9ab67f554c | [] | no_license | thaopanda/EasyAccomd | 046fc4520d0234e66ab6b49c2da2c20db0d38d1c | f9e68833ac1a57ca1cdd3e5db37fd15ecdef3ef1 | refs/heads/master | 2023-03-26T07:48:16.038758 | 2021-03-25T05:03:45 | 2021-03-25T05:03:45 | 351,314,303 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 311 | py | from django.contrib import admin
from django.urls import path, include
from Chat import views
urlpatterns = [
path('thread/', views.GetThread.as_view()),
path('chat/<int:pk>/<int:begin>/<int:end>/', views.GetChat.as_view()),
path('threadAdmin/<str:username>/', views.GetThreadAdmin.as_view()),
]
| [
"[email protected]"
] | |
89e123beb4ec1f95633bb5373c04ca769c7422d5 | 904b71e153361110dad46a9bf204011adfeab429 | /realtime_accl_graph.py | c4a718928dd84407bff9e3aa0c663816caf5d302 | [] | no_license | MiyabiTane/BDM | dbe4c4de0910fa7519fdc2d4967c996e01462a04 | 841bed04bbaaae80b91dfd73259a74677f25820c | refs/heads/master | 2020-09-06T07:39:50.374633 | 2019-12-20T14:50:53 | 2019-12-20T14:50:53 | 220,366,358 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,396 | py | # -*- coding: utf-8 -*-
"""
matplotlibでリアルタイムプロットする例
無限にsin関数をplotし続ける
"""
from __future__ import unicode_literals, print_function
import numpy as np
import matplotlib.pyplot as plt
import smbus
import time
def pause_plot():
plt.figure(figsize=(10,8))
plt.subplots_adjust(hspace=1)
ax1 = plt.subplot(311)
ax2 = plt.subplot(312)
ax3 = plt.subplot(313)
x = [-0.9,-0.8,-0.7,-0.6,-0.5,-0.4,-0.3,-0.2,-0.1,0] #np.arange(-1, 1, 0.1)
y_x = [0,0,0,0,0,0,0,0,0,0] #np.sin(x)
y_y = [0,0,0,0,0,0,0,0,0,0]
y_z = [0,0,0,0,0,0,0,0,0,0]
ax1.set_xlim(-1, 0)
ax1.set_ylim(-1, 1)
ax2.set_xlim(-1, 0)
ax2.set_ylim(-1, 1)
ax3.set_xlim(-1, 0)
ax2.set_ylim(-1, 1)
    # We must call plot() once up front to initialize the figure,
    # and keep the objects returned by that first call.
    # Note that plot() returns a list, hence the `lines1, = ...` unpacking.
lines1, = ax1.plot(x, y_x, color="red")
ax1.set_title("acceleration x-axis")
lines2, = ax2.plot(x, y_y, color="blue")
ax2.set_title("acceleration y-axis")
lines3, = ax3.plot(x, y_z, color="green")
ax3.set_title("acceleration z-axis")
I2C_ADDR=0x1d
# Get I2C bus
bus = smbus.SMBus(1)
# Select Control register, 0x2A(42)
# 0x00(00) StandBy mode
bus.write_byte_data(I2C_ADDR, 0x2A, 0x00)
# Select Control register, 0x2A(42)
# 0x01(01) Active mode
bus.write_byte_data(I2C_ADDR, 0x2A, 0x01)
# Select Configuration register, 0x0E(14)
# 0x00(00) Set range to +/- 2g
bus.write_byte_data(I2C_ADDR, 0x0E, 0x00)
time.sleep(0.5)
    # From here on, keep plotting indefinitely
while True:
        # Update the plot data
data = bus.read_i2c_block_data(I2C_ADDR, 0x00, 7)
xAccl = (data[1] * 256 + data[2]) / 16
if xAccl > 2047 :
xAccl -= 4096
yAccl = (data[3] * 256 + data[4]) / 16
if yAccl > 2047 :
yAccl -= 4096
zAccl = (data[5] * 256 + data[6]) / 16
if zAccl > 2047 :
zAccl -= 4096
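        # The registers hold 12-bit two's-complement samples packed into two
        # bytes (the addresses suggest an MMA845x-series accelerometer, which
        # is an assumption): (hi*256 + lo)/16 keeps the top 12 bits, and values
        # above 2047 wrap to the negative half, e.g. hi=0xFF, lo=0xF0 gives
        # (255*256 + 240)/16 = 4095 -> 4095 - 4096 = -1 count.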
x = map(lambda p: p+0.1, x)
y_x.pop(0)
y_x.append(xAccl)
lines1.set_data(x, y_x)
ax1.set_xlim(min(x), max(x))
ax1.set_ylim(min(y_x)-10, max(y_x)+10)
y_y.pop(0)
y_y.append(yAccl)
lines2.set_data(x, y_y)
ax2.set_xlim(min(x), max(x))
ax2.set_ylim(min(y_y)-10, max(y_y)+10)
y_z.pop(0)
y_z.append(zAccl)
lines3.set_data(x, y_z)
ax3.set_xlim(min(x), max(x))
ax3.set_ylim(min(y_z)-10, max(y_z)+10)
        # set_data() does not seem to rescale the axes automatically, so the
        # curve would quickly drift out of the plotted range in this example.
        # The x-axis limits therefore have to be adjusted by hand.
        #ax.set_xlim(x-1, x)

        # The key point:
        # - plt.show() blocks, so it cannot draw in real time
        # - plt.ion() + plt.draw() freezes the window and stalls the program
        # ----> plt.pause(interval) is the one to use!!! Its argument is the sleep time
time.sleep(0.1)
plt.pause(.01)
if __name__ == "__main__":
pause_plot()
| [
"[email protected]"
] | |
74d9d462e821b53877b54326c9b3fcdc6186e434 | c4702d1a06640555829b367852138cc93ba4a161 | /dealer_sale_order/report/dealer_sale_order_dp_report.py | c8e2704dea9d21e634595e4ff5b1bf37fe55262f | [] | no_license | Rizalimami/dym | 0ecadf9c049b22ebfebf92e4eab6eaad17dd3e26 | af1bcf7b77a3212bc8a8a0e41e6042a134587ed4 | refs/heads/master | 2020-04-08T10:56:43.605698 | 2018-11-27T06:44:08 | 2018-11-27T06:44:08 | 159,287,876 | 0 | 2 | null | null | null | null | UTF-8 | Python | false | false | 1,945 | py | import time
from datetime import datetime
from openerp.report import report_sxw
from openerp.osv import osv
from openerp import pooler
import fungsi_terbilang
from openerp.tools import DEFAULT_SERVER_DATE_FORMAT, DEFAULT_SERVER_DATETIME_FORMAT, DATETIME_FORMATS_MAP
import pytz
from openerp.tools.translate import _
import base64
class dealer_sale_order(report_sxw.rml_parse):
def __init__(self, cr, uid, name, context=None):
super(dealer_sale_order, self).__init__(cr, uid, name, context=context)
self.localcontext.update({
'time': time,
'no_urut': self.no_urut,
'terbilang': self.terbilang,
'invoice_id': self.invoice_id,
'waktu_local': self.waktu_local,
})
self.no = 0
def no_urut(self):
self.no+=1
return self.no
def terbilang(self,amount):
hasil = fungsi_terbilang.terbilang(amount, "idr", 'id')
return hasil
def invoice_id(self):
invoice = self.pool.get('dealer.sale.order').browse(self.cr, self.uid, self.ids).name
invoice2 = self.pool.get('account.invoice').search(self.cr, self.uid,[ ('origin','ilike',invoice),('tipe','=','customer') ])
no_invoice = self.pool.get('account.invoice').browse(self.cr, self.uid,invoice2).number
return no_invoice
def waktu_local(self):
tanggal = datetime.now().strftime('%y%m%d')
menit = datetime.now()
user = self.pool.get('res.users').browse(self.cr, self.uid, self.uid)
tz = pytz.timezone(user.tz) if user.tz else pytz.utc
start = pytz.utc.localize(menit).astimezone(tz)
start_date = start.strftime("%d-%m-%Y %H:%M")
return start_date
report_sxw.report_sxw('report.rml.dealer.sale.order.dp.po', 'dealer.sale.order', 'addons/dealer_sale_order/report/dealer_sale_order_dp_report.rml', parser = dealer_sale_order, header = False) | [
"[email protected]"
] | |
0c38900605baafbc66468be8b179b55caad9d3e2 | e03e59d67c96c1afa0a1c76e62235a3e3f639976 | /django_test7sangpum_exam/django_test7sangpum_exam/urls.py | 0df11edfa6c35162ba1bb27fad7cc5209cac88cd | [] | no_license | kangmihee/EX_python | 10a63484802e6ff5454f12f7ade7e277dbf3df97 | 0a8dafe667f188cd89ef7f021823f6b4a9033dc0 | refs/heads/master | 2020-07-02T00:23:05.465127 | 2019-09-03T07:49:46 | 2019-09-03T07:49:46 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 968 | py | """django_test7sangpum_exam URL Configuration
The `urlpatterns` list routes URLs to views. For more information please see:
https://docs.djangoproject.com/en/2.2/topics/http/urls/
Examples:
Function views
1. Add an import: from my_app import views
2. Add a URL to urlpatterns: path('', views.home, name='home')
Class-based views
1. Add an import: from other_app.views import Home
2. Add a URL to urlpatterns: path('', Home.as_view(), name='home')
Including another URLconf
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""
from django.contrib import admin
from django.urls import path
from examapp import views
urlpatterns = [
path('admin/', admin.site.urls),
path('', views.Main),
path('list', views.list),
path('list/insert', views.insert),
path('list/insertok', views.insertok),
]
| [
"acorn@acorn-PC"
] | acorn@acorn-PC |
ab388f793983f211a87b38848eb922961c385516 | b6010878b98cc924bcafda43893e8ca1d375f4bb | /parser.py | a855ab1a133713966d57cc09d5575abd3b08e091 | [] | no_license | pepijndevos/bobcat | 7fc06e17dbec63750845bcd6b75465485bd9d07f | d4dea0b62c9fe9d4396be0d17f94d7a5bf289022 | refs/heads/master | 2020-03-26T03:36:02.603236 | 2018-09-01T14:38:27 | 2018-09-01T14:38:27 | 144,462,635 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,296 | py | from ply import lex
from ply.lex import TOKEN
tokens = [
# Identifiers
'ID',
# constants
'INT_CONST_DEC', 'INT_CONST_OCT', 'INT_CONST_HEX', 'INT_CONST_BIN',
'FLOAT_CONST', 'HEX_FLOAT_CONST',
'CHAR_CONST',
    'WCHAR_CONST',  # declared because t_WCHAR_CONST below defines this token
# String literals
'STRING_LITERAL',
    'WSTRING_LITERAL',  # declared because t_WSTRING_LITERAL below defines it
]
literals = "[](),:"
# valid C identifiers (K&R2: A.2.3), plus '$' (supported by some compilers)
identifier = r'[a-zA-Z_$][0-9a-zA-Z_$]*'
hex_prefix = '0[xX]'
hex_digits = '[0-9a-fA-F]+'
bin_prefix = '0[bB]'
bin_digits = '[01]+'
# integer constants (K&R2: A.2.5.1)
integer_suffix_opt = r'(([uU]ll)|([uU]LL)|(ll[uU]?)|(LL[uU]?)|([uU][lL])|([lL][uU]?)|[uU])?'
decimal_constant = '(0'+integer_suffix_opt+')|([1-9][0-9]*'+integer_suffix_opt+')'
octal_constant = '0[0-7]*'+integer_suffix_opt
hex_constant = hex_prefix+hex_digits+integer_suffix_opt
bin_constant = bin_prefix+bin_digits+integer_suffix_opt
bad_octal_constant = '0[0-7]*[89]'
# character constants (K&R2: A.2.5.2)
# Note: a-zA-Z and '.-~^_!=&;,' are allowed as escape chars to support #line
# directives with Windows paths as filenames (..\..\dir\file)
# For the same reason, decimal_escape allows all digit sequences. We want to
# parse all correct code, even if it means to sometimes parse incorrect
# code.
#
simple_escape = r"""([a-zA-Z._~!=&\^\-\\?'"])"""
decimal_escape = r"""(\d+)"""
hex_escape = r"""(x[0-9a-fA-F]+)"""
bad_escape = r"""([\\][^a-zA-Z._~^!=&\^\-\\?'"x0-7])"""
escape_sequence = r"""(\\("""+simple_escape+'|'+decimal_escape+'|'+hex_escape+'))'
cconst_char = r"""([^'\\\n]|"""+escape_sequence+')'
char_const = "'"+cconst_char+"'"
wchar_const = 'L'+char_const
unmatched_quote = "('"+cconst_char+"*\\n)|('"+cconst_char+"*$)"
unmatched_doublequote = "(\""+cconst_char+"*\\n)|(\""+cconst_char+"*$)"
bad_char_const = r"""('"""+cconst_char+"""[^'\n]+')|('')|('"""+bad_escape+r"""[^'\n]*')"""
# string literals (K&R2: A.2.6)
string_char = r"""([^"\\\n]|"""+escape_sequence+')'
string_literal = '"'+string_char+'*"'
wstring_literal = 'L'+string_literal
bad_string_literal = '"'+string_char+'*?'+bad_escape+string_char+'*"'
# floating constants (K&R2: A.2.5.3)
exponent_part = r"""([eE][-+]?[0-9]+)"""
fractional_constant = r"""([0-9]*\.[0-9]+)|([0-9]+\.)"""
floating_constant = '(((('+fractional_constant+')'+exponent_part+'?)|([0-9]+'+exponent_part+'))[FfLl]?)'
binary_exponent_part = r'''([pP][+-]?[0-9]+)'''
hex_fractional_constant = '((('+hex_digits+r""")?\."""+hex_digits+')|('+hex_digits+r"""\.))"""
hex_floating_constant = '('+hex_prefix+'('+hex_digits+'|'+hex_fractional_constant+')'+binary_exponent_part+'[FfLl]?)'
t_ignore = ' \t'
# Newlines
def t_NEWLINE(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
t_STRING_LITERAL = string_literal
# The following floating and integer constants are defined as
# functions to impose a strict order (otherwise, decimal
# is placed before the others because its regex is longer,
# and this is bad)
#
@TOKEN(floating_constant)
def t_FLOAT_CONST(t):
return t
@TOKEN(hex_floating_constant)
def t_HEX_FLOAT_CONST(t):
return t
@TOKEN(hex_constant)
def t_INT_CONST_HEX(t):
return t
@TOKEN(bin_constant)
def t_INT_CONST_BIN(t):
return t
@TOKEN(bad_octal_constant)
def t_BAD_CONST_OCT(t):
print("Invalid octal constant")
t.lexer.skip(1)
@TOKEN(octal_constant)
def t_INT_CONST_OCT(t):
return t
@TOKEN(decimal_constant)
def t_INT_CONST_DEC(t):
return t
# Must come before bad_char_const, to prevent it from
# catching valid char constants as invalid
#
@TOKEN(char_const)
def t_CHAR_CONST(t):
return t
@TOKEN(wchar_const)
def t_WCHAR_CONST(t):
return t
@TOKEN(unmatched_quote)
def t_UNMATCHED_QUOTE(t):
print("Unmatched '")
t.lexer.skip(1)
@TOKEN(bad_char_const)
def t_BAD_CHAR_CONST(t):
print("Invalid char constant %s" % t.value)
t.lexer.skip(1)
@TOKEN(wstring_literal)
def t_WSTRING_LITERAL(t):
return t
@TOKEN(unmatched_doublequote)
def t_UNMATCHED_DOUBLEQUOTE(t):
print("Unmatched \"")
t.lexer.skip(1)
@TOKEN(bad_string_literal)
def t_BAD_STRING_LITERAL(t):
print("Invalid string literal")
t.lexer.skip(1)
@TOKEN(identifier)
def t_ID(t):
return t
def t_COMMENT(t):
r'\#.*'
pass
# Error handling rule
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
lexer = lex.lex()
import ply.yacc as yacc
from collections import namedtuple
class Juxt(list):
def __repr__(self):
return "Juxt"+list.__repr__(self)
Def = namedtuple("Def", ["name", "quotation"])
Lit = namedtuple("Lit", ["type", "value"])
Node = namedtuple("Node", ["type", "quotation"])
def p_expression(p):
"""expression : empty
| expression part"""
if len(p) > 2:
p[1].append(p[2])
p[0] = p[1]
else:
p[0] = []
def p_part(p):
"""part : definition
| word
| juxt
| quotation
| node"""
p[0] = p[1]
def p_word_id(p):
"""word : ID"""
p[0] = p[1]
def p_word_char(p):
"""word : CHAR_CONST"""
p[0] = Lit('char', p[1])
def p_word_float(p):
"""word : FLOAT_CONST
| HEX_FLOAT_CONST"""
p[0] = Lit('float', p[1])
def p_word_int(p):
"""word : INT_CONST_BIN
| INT_CONST_DEC
| INT_CONST_HEX
| INT_CONST_OCT"""
p[0] = Lit('int', p[1])
def p_word_string(p):
"""word : STRING_LITERAL"""
p[0] = Lit('char*', p[1])
def p_juxt(p):
"""juxt : word ',' word
| juxt ',' word"""
if isinstance(p[1], Juxt):
p[1].append(p[3])
p[0] = p[1]
else:
p[0] = Juxt([p[1], p[3]])
def p_quotation(p):
"""quotation : '[' expression ']'"""
p[0] = p[2]
def p_definition(p):
"""definition : ID ':' quotation"""
p[0] = Def(p[1], p[3])
def p_node(p):
"""node : ID '(' expression ')'"""
p[0] = Node(p[1], p[3])
def p_empty(p):
'empty :'
pass
# Error rule for syntax errors
def p_error(p):
print("Syntax error in input!")
# Build the parser
parser = yacc.yacc()
if __name__ == "__main__":
while True:
try:
s = input('> ')
except EOFError:
break
if not s: continue
result = parser.parse(s)
print(result)
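# Example session (illustrative transcript, not from the original file):
#   > square: [dup i_mul]
#   [Def(name='square', quotation=['dup', 'i_mul'])]
#   > plus(1, 2)
#   [Node(type='plus', quotation=[Juxt[Lit(type='int', value='1'), Lit(type='int', value='2')]])]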
| [
"[email protected]"
] | |
27456c5262db06b8bef96281690b636ec91d8907 | 073c2fd73875ce4e7d061623b8403f8d77c45d92 | /cohesity_management_sdk/models/restore_app_task_state_proto.py | 10ff0ddfb0d0caef8f2dea7883811f228b6e1723 | [
"Apache-2.0"
] | permissive | naveena-maplelabs/management-sdk-python | b11441b2edccc5a1262785bd559ad4b3ea984c3b | 06ce4119d955dc08cdbc5109c935afcfcd9d65ab | refs/heads/master | 2021-05-20T10:52:12.776816 | 2020-03-10T03:28:08 | 2020-03-10T03:28:08 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 3,158 | py | # -*- coding: utf-8 -*-
# Copyright 2020 Cohesity Inc.
import cohesity_management_sdk.models.restore_app_params
class RestoreAppTaskStateProto(object):
"""Implementation of the 'RestoreAppTaskStateProto' model.
TODO: type model description here.
Attributes:
app_restore_progress_monitor_subtask_path (string): The Pulse task
path to the application restore task sub tree. If the application
restore has to wait on other tasks (for example, a SQL db restore
may wait for a tail log backup or a VM restore), then this would
represent a sub-tree of 'progress_monitor_task_path' in
PerformRestoreTaskStateProto.
last_finished_log_backup_start_time_usecs (long|int): The start time
of the last finished log backup run. For SQL application, this is
set iff we need to take a tail log backup.
restore_app_params (RestoreAppParams): This message captures all the
necessary arguments specified by the user to restore an
application.
"""
# Create a mapping from Model property names to API property names
_names = {
"app_restore_progress_monitor_subtask_path":'appRestoreProgressMonitorSubtaskPath',
"last_finished_log_backup_start_time_usecs":'lastFinishedLogBackupStartTimeUsecs',
"restore_app_params":'restoreAppParams'
}
def __init__(self,
app_restore_progress_monitor_subtask_path=None,
last_finished_log_backup_start_time_usecs=None,
restore_app_params=None):
"""Constructor for the RestoreAppTaskStateProto class"""
# Initialize members of the class
self.app_restore_progress_monitor_subtask_path = app_restore_progress_monitor_subtask_path
self.last_finished_log_backup_start_time_usecs = last_finished_log_backup_start_time_usecs
self.restore_app_params = restore_app_params
@classmethod
def from_dictionary(cls,
dictionary):
"""Creates an instance of this model from a dictionary
Args:
dictionary (dictionary): A dictionary representation of the object as
obtained from the deserialization of the server's response. The keys
MUST match property names in the API description.
Returns:
object: An instance of this structure class.
"""
if dictionary is None:
return None
# Extract variables from the dictionary
app_restore_progress_monitor_subtask_path = dictionary.get('appRestoreProgressMonitorSubtaskPath')
last_finished_log_backup_start_time_usecs = dictionary.get('lastFinishedLogBackupStartTimeUsecs')
restore_app_params = cohesity_management_sdk.models.restore_app_params.RestoreAppParams.from_dictionary(dictionary.get('restoreAppParams')) if dictionary.get('restoreAppParams') else None
# Return an object of this model
return cls(app_restore_progress_monitor_subtask_path,
last_finished_log_backup_start_time_usecs,
restore_app_params)
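# Illustrative round-trip (the field values below are made up):
#   state = RestoreAppTaskStateProto.from_dictionary(
#       {'appRestoreProgressMonitorSubtaskPath': '/pulse/restore/0',
#        'lastFinishedLogBackupStartTimeUsecs': 1580000000000000})
#   state.restore_app_params   # None whenever 'restoreAppParams' is absent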
| [
"[email protected]"
] | |
cb970200d41ae02ac8af2dcd141947007c3de70d | 0a5e8bd5d1291afdb99bb5682611c9ddabacefe7 | /LSCSM.py | 88c70fd0922c7a58911929514aa7bef344193ab5 | [] | no_license | antolikjan/topographica-contrib | f89374e95d741ca1b7fad708be575bd705be79e0 | 3842e33811821dca58258b00442d6928b6804f7f | refs/heads/master | 2020-12-12T21:01:01.699128 | 2014-05-23T14:57:49 | 2014-05-23T14:57:49 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 43,494 | py | from scipy.optimize import fmin_ncg, anneal, fmin_cg, fmin_bfgs, fmin_tnc, fmin_l_bfgs_b
import __main__
import numpy
import pylab
import sys
sys.path.append('/home/antolikjan/topographica/Theano/')
import theano
theano.config.floatX='float32'
#theano.config.warn.sum_sum_bug=False
from theano import tensor as T
from topo.misc.filepath import normalize_path, application_path
from contrib.JanA.ofestimation import *
from contrib.modelfit import *
import contrib.dd
import contrib.JanA.dataimport
from contrib.JanA.regression import laplaceBias
from pyevolve import *
from contrib.JanA.visualization import printCorrelationAnalysis
from theano.tensor.shared_randomstreams import RandomStreams
from topo import numbergen
import contrib.JanA.SG
#profmode = theano.ProfileMode(optimizer='FAST_RUN', linker=theano.gof.OpWiseCLinker())
class LSCSM(object):
def __init__(self,XX,YY,num_lgn,num_neurons):
(self.num_pres,self.image_size) = numpy.shape(XX)
self.num_lgn = num_lgn
self.num_neurons = num_neurons
self.size = numpy.sqrt(self.image_size)
self.xx = theano.shared(numpy.asarray(numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).T.flatten(),dtype=theano.config.floatX))
self.yy = theano.shared(numpy.asarray(numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).flatten(),dtype=theano.config.floatX))
self.Y = theano.shared(numpy.asarray(YY,dtype=theano.config.floatX))
self.X = theano.shared(numpy.asarray(XX,dtype=theano.config.floatX))
self.of = __main__.__dict__.get('OF','Exp')
self.K = T.fvector('K')
self.index = T.lscalar
        #self.KK = theano.printing.Print(message='My message')(self.K)
self.x = self.K[0:self.num_lgn]
self.y = self.K[self.num_lgn:2*self.num_lgn]
self.s = self.K[2*self.num_lgn:3*self.num_lgn]
self.a = T.reshape(self.K[3*self.num_lgn:3*self.num_lgn+num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
self.n = self.K[3*self.num_lgn+num_neurons*self.num_lgn:3*self.num_lgn+num_neurons*self.num_lgn+self.num_neurons]
lgn_output,updates = theano.scan(lambda i,x,y,s: T.dot(self.X,T.exp(-T.div_proxy(((self.xx - x[i])**2 + (self.yy - y[i])**2),s[i] )).T), sequences= T.arange(self.num_lgn), non_sequences=[self.x,self.y,self.s])
self.output = T.dot(lgn_output.T,self.a)
self.model = self.construct_of(self.output-self.n)
if __main__.__dict__.get('LL',True):
self.loglikelyhood = T.sum(self.model) - T.sum(self.Y * T.log(self.model))
else:
self.loglikelyhood = T.sum(T.sqr(self.model.T - self.Y))
def func(self):
return theano.function(inputs=[self.K], outputs=self.loglikelyhood,mode='FAST_RUN')
def der(self):
g_K = T.grad(self.loglikelyhood, self.K)
return theano.function(inputs=[self.K], outputs=g_K,mode='FAST_RUN')
def response(self,X,kernels):
self.X.value = X
resp = theano.function(inputs=[self.K], outputs=self.model,mode='FAST_RUN')
return resp(kernels)
def construct_of(self,inn):
if self.of == 'Exp':
return T.exp(inn)
elif self.of == 'Sigmoid':
return 1.0 / (1 + T.exp(-inn))
elif self.of == 'SoftSign':
return 1+ (inn / (1 + T.abs_(inn)))
elif self.of == 'Square':
return T.sqr(inn)
elif self.of == 'ExpExp':
return T.exp(T.exp(inn))
elif self.of == 'ExpSquare':
return T.exp(T.sqr(inn))
elif self.of == 'LogisticLoss':
return __main__.__dict__.get('LogLossCoef',1.0)*T.log(1+T.exp(__main__.__dict__.get('LogLossCoef',1.0)*inn))
def returnRFs(self,K):
x = K[0:self.num_lgn]
y = K[self.num_lgn:2*self.num_lgn]
s = K[2*self.num_lgn:3*self.num_lgn]
a = numpy.reshape(K[3*self.num_lgn:3*self.num_lgn+self.num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
n = K[3*self.num_lgn+self.num_neurons*self.num_lgn:3*self.num_lgn+self.num_neurons*self.num_lgn+self.num_neurons]
rfs = numpy.zeros((self.num_neurons,self.image_size))
xx = numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).T.flatten()
yy = numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).flatten()
print x
print y
print s
print a
print n
for j in xrange(self.num_neurons):
for i in xrange(0,self.num_lgn):
rfs[j,:] += a[i,j]*numpy.exp(-((xx - x[i])**2 + (yy - y[i])**2)/s[i])
return rfs
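# Illustrative fitting loop for the class above (a sketch; `X`, `Y`, `K0` and
# `bounds` are assumed inputs -- K0 has length 3*num_lgn + num_lgn*num_neurons
# + num_neurons, matching the parameter layout parsed in __init__):
#
#   lscsm = LSCSM(numpy.mat(X), numpy.mat(Y), num_lgn, num_neurons)
#   f = lscsm.func()    # compiled (negative) log-likelihood
#   g = lscsm.der()     # compiled gradient w.r.t. the flat parameter vector K
#   (K, nfev, rc) = fmin_tnc(f, K0, fprime=g, bounds=bounds, maxfun=1000)
#   rfs = lscsm.returnRFs(K)   # linearized receptive field per neuron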
class LSCSM1(object):
def __init__(self,XX,YY,num_lgn,num_neurons,batch_size=100):
(self.num_pres,self.image_size) = numpy.shape(XX)
self.num_lgn = num_lgn
self.num_neurons = num_neurons
self.size = numpy.sqrt(self.image_size)
self.hls = __main__.__dict__.get('HiddenLayerSize',1.0)
self.divisive = __main__.__dict__.get('Divisive',False)
self.batch_size=batch_size
self.xx = theano.shared(numpy.asarray(numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).T.flatten(),dtype=theano.config.floatX))
self.yy = theano.shared(numpy.asarray(numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).flatten(),dtype=theano.config.floatX))
self.Y = theano.shared(numpy.asarray(YY,dtype=theano.config.floatX))
self.X = theano.shared(numpy.asarray(XX,dtype=theano.config.floatX))
self.v1of = __main__.__dict__.get('V1OF','Exp')
self.lgnof = __main__.__dict__.get('LGNOF','Exp')
self.K = T.fvector('K')
self.index = T.lscalar('I')
#srng = RandomStreams(seed=234)
#self.index = srng.random_integers((1,1),high=self.num_pres-batch_size)[0]
self.x = self.K[0:self.num_lgn]
self.y = self.K[self.num_lgn:2*self.num_lgn]
self.sc = self.K[2*self.num_lgn:3*self.num_lgn]
self.ss = self.K[3*self.num_lgn:4*self.num_lgn]
idx = 4*self.num_lgn
if not __main__.__dict__.get('BalancedLGN',True):
self.rc = self.K[idx:idx+self.num_lgn]
self.rs = self.K[idx+self.num_lgn:idx+2*self.num_lgn]
idx = idx + 2*self.num_lgn
if __main__.__dict__.get('LGNTreshold',False):
self.ln = self.K[idx:idx + self.num_lgn]
idx += self.num_lgn
if __main__.__dict__.get('SecondLayer',False):
self.a = T.reshape(self.K[idx:idx+int(num_neurons*self.hls)*self.num_lgn],(self.num_lgn,int(self.num_neurons*self.hls)))
idx += int(num_neurons*self.hls)*self.num_lgn
self.a1 = T.reshape(self.K[idx:idx+num_neurons*int(self.num_neurons*self.hls)],(int(self.num_neurons*self.hls),self.num_neurons))
idx = idx+num_neurons*int(num_neurons*self.hls)
if self.divisive:
self.d = T.reshape(self.K[idx:idx+int(num_neurons*self.hls)*self.num_lgn],(self.num_lgn,int(self.num_neurons*self.hls)))
idx += int(num_neurons*self.hls)*self.num_lgn
self.d1 = T.reshape(self.K[idx:idx+num_neurons*int(self.num_neurons*self.hls)],(int(self.num_neurons*self.hls),self.num_neurons))
idx = idx+num_neurons*int(num_neurons*self.hls)
else:
self.a = T.reshape(self.K[idx:idx+num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
idx += num_neurons*self.num_lgn
if self.divisive:
self.d = T.reshape(self.K[idx:idx+num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
idx += num_neurons*self.num_lgn
self.n = self.K[idx:idx+self.num_neurons]
idx += num_neurons
if self.divisive:
self.nd = self.K[idx:idx+self.num_neurons]
idx += num_neurons
if __main__.__dict__.get('SecondLayer',False):
self.n1 = self.K[idx:idx+int(self.num_neurons*self.hls)]
idx += int(self.num_neurons*self.hls)
if self.divisive:
self.nd1 = self.K[idx:idx+int(self.num_neurons*self.hls)]
idx += int(self.num_neurons*self.hls)
if __main__.__dict__.get('BalancedLGN',True):
lgn_kernel = lambda i,x,y,sc,ss: T.dot(self.X,(T.exp(-((self.xx - x[i])**2 + (self.yy - y[i])**2)/sc[i]).T/ T.sqrt(sc[i]*numpy.pi)) - (T.exp(-((self.xx - x[i])**2 + (self.yy - y[i])**2)/ss[i]).T/ T.sqrt(ss[i]*numpy.pi)))
lgn_output,updates = theano.scan(lgn_kernel , sequences= T.arange(self.num_lgn), non_sequences=[self.x,self.y,self.sc,self.ss])
else:
lgn_kernel = lambda i,x,y,sc,ss,rc,rs: T.dot(self.X,rc[i]*(T.exp(-((self.xx - x[i])**2 + (self.yy - y[i])**2)/sc[i]).T/ T.sqrt(sc[i]*numpy.pi)) - rs[i]*(T.exp(-((self.xx - x[i])**2 + (self.yy - y[i])**2)/ss[i]).T/ T.sqrt(ss[i]*numpy.pi)))
lgn_output,updates = theano.scan(lgn_kernel,sequences=T.arange(self.num_lgn),non_sequences=[self.x,self.y,self.sc,self.ss,self.rc,self.rs])
#lgn_output = theano.printing.Print(message='lgn output:')(lgn_output)
lgn_output = lgn_output.T
if __main__.__dict__.get('LGNTreshold',False):
lgn_output = lgn_output - self.ln.T
lgn_output = self.construct_of(lgn_output,self.lgnof)
self.output = T.dot(lgn_output,self.a)
#self.output = theano.printing.Print(message='Output1:')(self.output)
#self.n = theano.printing.Print(message='N:')(self.n)
#self.output = theano.printing.Print(message='Output2:')(self.output)
if __main__.__dict__.get('SecondLayer',False):
if self.divisive:
#self.model_output = self.construct_of((self.output-self.n1)/(1.0+T.dot(lgn_output,self.d)-self.nd1),self.v1of)
self.model_output = self.construct_of(self.output-self.n1,self.v1of)
self.model_output = self.construct_of( (T.dot(self.model_output , self.a1) - self.n)/(1.0+T.dot(self.model_output , self.d1) - self.nd),self.v1of)
else:
self.model_output = self.construct_of(self.output-self.n1,self.v1of)
self.model_output = self.construct_of(T.dot(self.model_output , self.a1) - self.n,self.v1of)
else:
if self.divisive:
self.model_output = self.construct_of((self.output-self.n)/(1.0+T.dot(lgn_output,self.d)-self.nd),self.v1of)
else:
self.model_output = self.construct_of(self.output-self.n,self.v1of)
if __main__.__dict__.get('LL',True):
#self.model_output = theano.printing.Print(message='model output:')(self.model_output)
ll = T.sum(self.model_output) - T.sum(self.Y * T.log(self.model_output+0.0000000000000000001))
if __main__.__dict__.get('Sparse',False):
ll += __main__.__dict__.get('FLL1',1.0)*T.sum(abs(self.a)) + __main__.__dict__.get('FLL2',1.0)*T.sum(self.a**2)
if __main__.__dict__.get('SecondLayer',False):
ll += __main__.__dict__.get('SLL1',1.0)*T.sum(abs(self.a1)) + __main__.__dict__.get('SLL2',1.0)**T.sum(self.a1**2)
else:
ll = T.sum(T.sqr(self.model_output - self.Y))
#ll = theano.printing.Print(message='LL:')(ll)
self.loglikelyhood = ll
def func(self):
return theano.function(inputs=[self.K], outputs=self.loglikelyhood,mode='FAST_RUN')
def der(self):
g_K = T.grad(self.loglikelyhood, self.K)
return theano.function(inputs=[self.K], outputs=g_K,mode='FAST_RUN')
def response(self,X,kernels):
self.X.value = X
resp = theano.function(inputs=[self.K], outputs=self.model_output,mode='FAST_RUN')
return resp(kernels)
def construct_of(self,inn,of):
if of == 'Linear':
return inn
if of == 'Exp':
return T.exp(inn)
elif of == 'Sigmoid':
return 5.0 / (1.0 + T.exp(-inn))
elif of == 'SoftSign':
return inn / (1 + T.abs_(inn))
elif of == 'Square':
return T.sqr(inn)
elif of == 'ExpExp':
return T.exp(T.exp(inn))
elif of == 'ExpSquare':
return T.exp(T.sqr(inn))
elif of == 'LogisticLoss':
return __main__.__dict__.get('LogLossCoef',1.0)*T.log(1+T.exp(__main__.__dict__.get('LogLossCoef',1.0)*inn))
def returnRFs(self,K):
x = K[0:self.num_lgn]
y = K[self.num_lgn:2*self.num_lgn]
sc = K[2*self.num_lgn:3*self.num_lgn]
ss = K[3*self.num_lgn:4*self.num_lgn]
idx = 4*self.num_lgn
if not __main__.__dict__.get('BalancedLGN',True):
rc = K[idx:idx+self.num_lgn]
rs = K[idx+self.num_lgn:idx+2*self.num_lgn]
idx = idx + 2*self.num_lgn
if __main__.__dict__.get('LGNTreshold',False):
ln = K[idx:idx + self.num_lgn]
idx += self.num_lgn
if __main__.__dict__.get('SecondLayer',False):
a = numpy.reshape(K[idx:idx+int(self.num_neurons*self.hls)*self.num_lgn],(self.num_lgn,int(self.num_neurons*self.hls)))
idx += int(self.num_neurons*self.hls)*self.num_lgn
a1 = numpy.reshape(K[idx:idx+self.num_neurons*int(self.num_neurons*self.hls)],(int(self.num_neurons*self.hls),self.num_neurons))
idx = idx+self.num_neurons*int(self.num_neurons*self.hls)
if self.divisive:
d = numpy.reshape(K[idx:idx+int(self.num_neurons*self.hls)*self.num_lgn],(self.num_lgn,int(self.num_neurons*self.hls)))
idx += int(self.num_neurons*self.hls)*self.num_lgn
d1 = numpy.reshape(K[idx:idx+self.num_neurons*int(self.num_neurons*self.hls)],(int(self.num_neurons*self.hls),self.num_neurons))
idx = idx+self.num_neurons*int(self.num_neurons*self.hls)
else:
a = numpy.reshape(K[idx:idx+self.num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
idx += self.num_neurons*self.num_lgn
if self.divisive:
d = numpy.reshape(K[idx:idx+self.num_neurons*self.num_lgn],(self.num_lgn,self.num_neurons))
idx += self.num_neurons*self.num_lgn
n = K[idx:idx+self.num_neurons]
if __main__.__dict__.get('SecondLayer',False):
n1 = K[idx+self.num_neurons:idx+self.num_neurons+int(self.num_neurons*self.hls)]
xx = numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).T.flatten()
yy = numpy.repeat([numpy.arange(0,self.size,1)],self.size,axis=0).flatten()
print 'X'
print x
print 'Y'
print y
print 'SS'
print ss
print 'SC'
print sc
print 'A'
print a
print 'N'
print n
if not __main__.__dict__.get('BalancedLGN',True):
print 'RS'
print rs
print 'RC'
print rc
if __main__.__dict__.get('SecondLayer',False):
print 'A1'
print a1
print self.hls
pylab.figure()
pylab.imshow(a1)
if __main__.__dict__.get('LGNTreshold',False):
print 'LN'
print ln
if __main__.__dict__.get('SecondLayer',False):
num_neurons_first_layer = int(self.num_neurons*self.hls)
else:
num_neurons_first_layer = self.num_neurons
rfs = numpy.zeros((num_neurons_first_layer,self.image_size))
for j in xrange(num_neurons_first_layer):
for i in xrange(0,self.num_lgn):
if __main__.__dict__.get('BalancedLGN',True):
rfs[j,:] += a[i,j]*(numpy.exp(-((xx - x[i])**2 + (yy - y[i])**2)/sc[i])/numpy.sqrt((sc[i]*numpy.pi)) - numpy.exp(-((xx - x[i])**2 + (yy - y[i])**2)/ss[i])/numpy.sqrt((ss[i]*numpy.pi)))
else:
rfs[j,:] += a[i,j]*(rc[i]*numpy.exp(-((xx - x[i])**2 + (yy - y[i])**2)/sc[i])/numpy.sqrt((sc[i]*numpy.pi)) - rs[i]*numpy.exp(-((xx - x[i])**2 + (yy - y[i])**2)/ss[i])/numpy.sqrt((ss[i]*numpy.pi)))
return rfs
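# Note on the kernels above: each LGN unit is a difference-of-Gaussians
#   DoG(u,v) = exp(-((u-x)^2+(v-y)^2)/sc)/sqrt(sc*pi)
#            - exp(-((u-x)^2+(v-y)^2)/ss)/sqrt(ss*pi),
# so each row of `rfs` is the receptive field of one unit (a hidden-layer
# unit when SecondLayer is set), obtained by summing these centre-surround
# kernels with the fitted weights `a`.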
bounds = []
class BoundsSafeFunction(object):
    def __init__(self,function,bounds):
        self.fun = function
        # zip(*bounds) transposes [(min,max), ...] into (mins, maxs)
        self.mins = numpy.array(zip(*bounds)[0])
        self.maxs = numpy.array(zip(*bounds)[1])

    def __call__(self,inp):
        above_bounds = inp > self.maxs
        below_bounds = inp < self.mins
        if numpy.sum(above_bounds)+numpy.sum(below_bounds):
            # Penalize by the distance of the original point from the feasible
            # box (measured before clipping), then evaluate at the clipped point.
            penalty = numpy.sqrt(numpy.sum(numpy.power(inp[numpy.nonzero(above_bounds)]-self.maxs[numpy.nonzero(above_bounds)],2)) + numpy.sum(numpy.power(inp[numpy.nonzero(below_bounds)]-self.mins[numpy.nonzero(below_bounds)],2)))
            inp[numpy.nonzero(above_bounds)] = self.maxs[numpy.nonzero(above_bounds)]
            inp[numpy.nonzero(below_bounds)] = self.mins[numpy.nonzero(below_bounds)]
            z = self.fun(inp) + penalty
            print z
            return z
        else:
            z = self.fun(inp)
            print z
            return z
def func(inp):
index = int(contrib.JanA.LSCSM.r()*(contrib.JanA.LSCSM.num_pres-contrib.JanA.LSCSM.batch_size))
contrib.JanA.LSCSM.lscsm.X.value = contrib.JanA.LSCSM.X[index:index+contrib.JanA.LSCSM.batch_size,:]
contrib.JanA.LSCSM.lscsm.Y.value = contrib.JanA.LSCSM.Y[index:index+contrib.JanA.LSCSM.batch_size,:]
return contrib.JanA.LSCSM.fun(inp)
def der(inp):
index = int(contrib.JanA.LSCSM.r()*(contrib.JanA.LSCSM.num_pres-contrib.JanA.LSCSM.batch_size))
contrib.JanA.LSCSM.lscsm.X.value = contrib.JanA.LSCSM.X[index:index+contrib.JanA.LSCSM.batch_size,:]
contrib.JanA.LSCSM.lscsm.Y.value = contrib.JanA.LSCSM.Y[index:index+contrib.JanA.LSCSM.batch_size,:]
return contrib.JanA.LSCSM.de(inp)
class GGEvo(object):
def __init__(self,XX,YY,num_lgn,num_neurons,bounds,batch_size=600):
self.XX = XX
self.YY = YY
self.batch_size = batch_size
self.num_lgn = num_lgn
self.num_neurons = num_neurons
self.num_pres = numpy.shape(XX)[0]
self.lscsm = LSCSM1(numpy.mat(XX),numpy.mat(YY),num_lgn,num_neurons)
self.lscsm_func = self.lscsm.func()
self.lscsm_der = self.lscsm.der()
self.num_eval = __main__.__dict__.get('NumEval',10)
self.bounds = bounds
self.r = numbergen.UniformRandom(seed=513)
if __main__.__dict__.get('EarlyStop',False):
self.valXX = XX[0:__main__.__dict__.get('EarlyStopSize',100),:]
self.valYY = YY[0:__main__.__dict__.get('EarlyStopSize',100),:]
self.XX = XX[__main__.__dict__.get('EarlyStopSize',100):,:]
self.YY = YY[__main__.__dict__.get('EarlyStopSize',100):,:]
contrib.JanA.LSCSM.fun = self.lscsm_func
contrib.JanA.LSCSM.de = self.lscsm_der
contrib.JanA.LSCSM.r= self.r
contrib.JanA.LSCSM.X = self.XX
contrib.JanA.LSCSM.Y = self.YY
contrib.JanA.LSCSM.lscsm= self.lscsm
contrib.JanA.LSCSM.batch_size = self.batch_size
contrib.JanA.LSCSM.num_pres = self.num_pres
def perform_gradient_descent(self,chromosome):
inp = numpy.array([v for v in chromosome])
if self.num_eval != 0:
#(new_K,success,c)=fmin_l_bfgs_b(self.func,inp[:],fprime=self.der,bounds=self.bounds,maxfun = self.num_eval,m=100)
if __main__.__dict__.get('LRAlg','TNC') == 'TNC':
(new_K,success,c)=fmin_tnc(self.lscsm_func,inp[:],fprime=self.lscsm_der,bounds=self.bounds,maxfun = self.num_eval,messages=0,approx_grad=0)
if __main__.__dict__.get('LRAlg','TNC') == 'SG':
new_K = contrib.JanA.SG.SG(inp,self.XX,self.YY,self.lscsm,self.lscsm_der,self.bounds,learning_rate=__main__.__dict__.get('LR',0.000001),num_steps=self.num_eval,batch_size=__main__.__dict__.get('BATCH_SIZE',600))
for i in xrange(0,len(chromosome)):
chromosome[i] = new_K[i]
if __main__.__dict__.get('EarlyStop',False):
self.lscsm.X.value = self.valXX
self.lscsm.Y.value = self.valYY
score = self.lscsm_func(numpy.array(new_K))
self.lscsm.X.value = self.XX
self.lscsm.Y.value = self.YY
else:
self.lscsm.X.value = self.XX
self.lscsm.Y.value = self.YY
score = self.lscsm_func(numpy.array(new_K))
else:
print 'DERIVATION'
print self.der(numpy.array(inp))
print 'FUNC'
print self.func(numpy.array(inp))
#score = self.func(numpy.array(inp))
score=1
#K = fmin_bfgs(self.func,numpy.array(inp),fprime=self.der,maxiter=2,full_output=0)
#score = self.func(K)
#(K,score,t1,t2,t3,t4,t5,t6) = fmin_ncg(self.func,numpy.array(inp),self.der,fhess = self.hess,avextol=0.00001,maxiter=2,full_output=1)
#print z
#(K,score,t1,t2,t3,t4,t5,t6) = fmin_ncg(self.func,numpy.array(inp),self.der,fhess = self.hess,avextol=0.00001,maxiter=2,full_output=1)
return score
def objF(inp):
    a = numpy.array(zip(contrib.JanA.LSCSM.bounds))
    mins = a[:,0,0]
    maxs = a[:,0,1]
    above_bounds = numpy.array(inp) > numpy.array(maxs)
    below_bounds = numpy.array(inp) < numpy.array(mins)
    if numpy.sum(above_bounds)+numpy.sum(below_bounds):
        # Measure the violation before clipping (afterwards it would always be
        # zero), and compare the low side against mins rather than maxs.
        penalty = numpy.sqrt(numpy.sum(numpy.power(inp[numpy.nonzero(above_bounds)]-maxs[numpy.nonzero(above_bounds)],2)) + numpy.sum(numpy.power(inp[numpy.nonzero(below_bounds)]-mins[numpy.nonzero(below_bounds)],2)))
        inp[numpy.nonzero(above_bounds)] = maxs[numpy.nonzero(above_bounds)]
        inp[numpy.nonzero(below_bounds)] = mins[numpy.nonzero(below_bounds)]
        z = contrib.JanA.LSCSM.fun(inp) + 1000000*penalty
        return z
    else:
        z = contrib.JanA.LSCSM.fun(inp)
        return z
class GGEvoPyBrain(object):
def __init__(self,XX,YY,num_lgn,num_neurons,bounds):
self.XX = XX
self.YY = YY
self.num_lgn = num_lgn
self.num_neurons = num_neurons
self.lscsm = LSCSM1(numpy.mat(XX),numpy.mat(YY),num_lgn,num_neurons)
self.func = self.lscsm.func()
self.der = self.lscsm.der()
self.num_eval = __main__.__dict__.get('NumEval',10)
self.bounds = bounds
a = numpy.array(zip(bounds))
self.mins = a[:,0,0]
self.maxs = a[:,0,1]
contrib.JanA.LSCSM.fun = self.func
contrib.JanA.LSCSM.bounds = self.bounds
def perform_gradient_descent(self,chromosome):
from pybrain.optimization import ExactNES, OriginalNES
inp = numpy.array([v for v in chromosome])
if self.num_eval != 0:
#bf = BoundsSafeFunction(self.func,self.bounds)
l = ExactNES(objF, inp[:],rangemins=self.mins,rangemaxs=self.maxs,learningRate=0.01,initCovariances=numpy.eye(len(bounds))*0.1)
l.minimize = True
l.maxEvaluations = self.num_eval
#l.rangemins = self.mins
#l.rangemaxs = self.maxs
(new_K,success) = l.learn()
for i in xrange(0,len(chromosome)):
chromosome[i] = new_K[i]
score = objF(numpy.array(new_K))
return score
def fitLSCSMEvo(X,Y,num_lgn,num_neurons_to_estimate):
num_pres,num_neurons = numpy.shape(Y)
num_pres,kernel_size = numpy.shape(X)
Ks = numpy.ones((num_neurons,num_lgn*4+1))
laplace = laplaceBias(numpy.sqrt(kernel_size),numpy.sqrt(kernel_size))
setOfAlleles = GAllele.GAlleles()
bounds = []
for j in xrange(0,num_lgn):
setOfAlleles.add(GAllele.GAlleleRange(6,(numpy.sqrt(kernel_size)-6),real=True))
bounds.append((6,(numpy.sqrt(kernel_size)-6)))
setOfAlleles.add(GAllele.GAlleleRange(6,(numpy.sqrt(kernel_size)-6),real=True))
bounds.append((6,(numpy.sqrt(kernel_size)-6)))
for j in xrange(0,num_lgn):
setOfAlleles.add(GAllele.GAlleleRange(1.0,25,real=True))
bounds.append((1.0,25))
setOfAlleles.add(GAllele.GAlleleRange(1.0,25,real=True))
bounds.append((1.0,25))
if not __main__.__dict__.get('BalancedLGN',True):
for j in xrange(0,num_lgn):
setOfAlleles.add(GAllele.GAlleleRange(0.0,1.0,real=True))
bounds.append((0.0,1.0))
setOfAlleles.add(GAllele.GAlleleRange(0.0,1.0,real=True))
bounds.append((0.0,1.0))
if __main__.__dict__.get('LGNTreshold',False):
for j in xrange(0,num_lgn):
setOfAlleles.add(GAllele.GAlleleRange(0,20,real=True))
bounds.append((0,20))
if __main__.__dict__.get('NegativeLgn',True):
minw = -__main__.__dict__.get('MaxW',5000)
else:
minw = 0
maxw = __main__.__dict__.get('MaxW',5000)
print __main__.__dict__.get('MaxW',5000)
if __main__.__dict__.get('Divisive',False):
d=2
else:
d=1
if __main__.__dict__.get('SecondLayer',False):
for j in xrange(0,num_lgn):
for k in xrange(0,int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))):
setOfAlleles.add(GAllele.GAlleleRange(minw,maxw,real=True))
bounds.append((minw,maxw))
for j in xrange(0,int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))):
for k in xrange(0,num_neurons_to_estimate):
setOfAlleles.add(GAllele.GAlleleRange(-2,2,real=True))
bounds.append((-2,2))
if __main__.__dict__.get('Divisive',False):
for j in xrange(0,num_lgn):
for k in xrange(0,int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))):
setOfAlleles.add(GAllele.GAlleleRange(minw,maxw,real=True))
bounds.append((0,maxw))
for j in xrange(0,int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))):
for k in xrange(0,num_neurons_to_estimate):
setOfAlleles.add(GAllele.GAlleleRange(-2,2,real=True))
bounds.append((0,2))
else:
for i in xrange(0,d):
for j in xrange(0,num_lgn):
for k in xrange(0,num_neurons_to_estimate):
setOfAlleles.add(GAllele.GAlleleRange(minw,maxw,real=True))
bounds.append((minw,maxw))
for k in xrange(0,num_neurons_to_estimate):
for i in xrange(0,d):
setOfAlleles.add(GAllele.GAlleleRange(0,20,real=True))
bounds.append((0,20))
if __main__.__dict__.get('SecondLayer',False):
for i in xrange(0,d):
for k in xrange(0,int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))):
setOfAlleles.add(GAllele.GAlleleRange(0,20,real=True))
bounds.append((0,20))
if __main__.__dict__.get('PyBrain',False):
ggevo = GGEvoPyBrain(X,Y,num_lgn,num_neurons_to_estimate,bounds)
else:
ggevo = GGEvo(X,Y,num_lgn,num_neurons_to_estimate,bounds)
genome_size = num_lgn*4
if __main__.__dict__.get('SecondLayer',False):
genome_size += int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))*num_lgn*d+num_neurons_to_estimate*d
genome_size += num_neurons_to_estimate*int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))*d + int(num_neurons_to_estimate*__main__.__dict__.get('HiddenLayerSize',1.0))*d
else:
genome_size += num_neurons_to_estimate*num_lgn*d+num_neurons_to_estimate*d
if not __main__.__dict__.get('BalancedLGN',True):
genome_size += num_lgn*2
if __main__.__dict__.get('LGNTreshold',False):
genome_size += num_lgn
print 'Genome size and bounds size'
print genome_size
print len(bounds)
genome = G1DList.G1DList(genome_size)
genome.setParams(allele=setOfAlleles)
genome.evaluator.set(ggevo.perform_gradient_descent)
genome.mutator.set(Mutators.G1DListMutatorAllele)
genome.initializator.set(Initializators.G1DListInitializatorAllele)
genome.crossover.set(Crossovers.G1DListCrossoverUniform)
ga = GSimpleGA.GSimpleGA(genome,__main__.__dict__.get('Seed',1023))
ga.minimax = Consts.minimaxType["minimize"]
#ga.selector.set(Selectors.GRouletteWheel)
ga.setElitism(True)
ga.setGenerations(__main__.__dict__.get('GenerationSize',100))
ga.setPopulationSize(__main__.__dict__.get('PopulationSize',100))
ga.setMutationRate(__main__.__dict__.get('MutationRate',0.05))
ga.setCrossoverRate(__main__.__dict__.get('CrossoverRate',0.9))
pop = ga.getPopulation()
#pop.scaleMethod.set(Scaling.SigmaTruncScaling)
ga.evolve(freq_stats=1)
best = ga.bestIndividual()
#profmode.print_summary()
#print best
inp = [v for v in best]
ggevo.lscsm.X.value = ggevo.XX
ggevo.lscsm.Y.value = ggevo.YY
(new_K,success,c)=fmin_tnc(ggevo.lscsm_func,inp[:],fprime=ggevo.lscsm_der,bounds=ggevo.bounds,maxfun=__main__.__dict__.get('FinalNumEval',10000),messages=0,approx_grad=0)
#inp[:-1] = numpy.reshape(inp[:-1],(num_lgn,4)).T.flatten()
    print 'Final likelihood'
ggevo.lscsm.X.value = ggevo.XX
ggevo.lscsm.Y.value = ggevo.YY
print ggevo.lscsm_func(new_K)
Ks = new_K
#rf= ggevo.lscsm.returnRFs(numpy.array([Ks[i,:]]))
#pylab.figure()
#m = numpy.max(numpy.abs(rf[0,0:kernel_size]))
#pylab.imshow(numpy.reshape(rf[0],(numpy.sqrt(kernel_size),numpy.sqrt(kernel_size))),vmin=-m,vmax=m,cmap=pylab.cm.RdBu,interpolation='nearest')
#pylab.colorbar()
#pylab.show()
rfs = ggevo.lscsm.returnRFs(Ks)
rpi = numpy.linalg.pinv(X.T*X + __main__.__dict__.get('RPILaplaceBias',0.0001)*laplace) * X.T * Y
return [Ks,rpi,ggevo.lscsm,rfs]
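# Hedged aside (not part of the original module): the rpi line above is
# ridge-regularised reverse correlation, w = (X'X + lambda*L)^-1 X'Y, with L
# the Laplacian smoothness prior from laplaceBias. A minimal standalone sketch
# with an identity prior instead (illustrative names):
def _ridge_rf_demo():
    import numpy
    X = numpy.random.randn(200, 16)   # stimuli: presentations x pixels
    w_true = numpy.random.randn(16, 1)
    Y = numpy.dot(X, w_true)          # noiseless responses
    lam = 1e-4
    w = numpy.dot(numpy.linalg.pinv(numpy.dot(X.T, X) + lam * numpy.eye(16)),
                  numpy.dot(X.T, Y))
    return w                          # recovers w_true up to regularisation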
def fitLSCSMEvoSequential(X,Y,num_lgn,num_neurons_to_estimate):
Ks = []
rpis = []
lscsms = []
rfs = []
for i in xrange(0,num_neurons_to_estimate):
[K,rpi,lscsm,rf] = fitLSCSMEvo(X,Y[:,i],num_lgn,1)
Ks.append(K)
rpis.append(rpi)
lscsms.append(lscsm)
rfs.append(rf)
return [numpy.vstack(Ks),numpy.hstack(rpis),lscsms,numpy.vstack(rfs)]
def runLSCSM():
import noiseEstimation
res = contrib.dd.DB(None)
(sizex,sizey,training_inputs,training_set,validation_inputs,validation_set,ff,db_node) = contrib.JanA.dataimport.sortOutLoading(res)
raw_validation_set = db_node.data["raw_validation_set"]
num_pres,num_neurons = numpy.shape(training_set)
num_pres,kernel_size = numpy.shape(training_inputs)
size = numpy.sqrt(kernel_size)
if __main__.__dict__.get('Delay',0):
delay = __main__.__dict__.get('Delay',0)
training_set = training_set[delay:,:]
validation_set = validation_set[delay:,:]
training_inputs = training_inputs[0:-delay,:]
validation_inputs = validation_inputs[0:-delay,:]
for i in xrange(0,len(raw_validation_set)):
raw_validation_set[i] = raw_validation_set[i][delay:,:]
raw_validation_data_set=numpy.rollaxis(numpy.array(raw_validation_set),2)
to_delete = []
#to_delete = [2,3,4,5,6,9,10,11,12,13,14,18,22,26,28,29,30,31,32,34,35,36,37,41,44,50,51,54,55,57,59,60,63,65,67,68,70,71,73,74,76,79,81,82,84,86,87,88,90,91,94,95,97,98,99,100,102]
#training_set = numpy.delete(training_set, to_delete, axis = 1)
#validation_set = numpy.delete(validation_set, to_delete, axis = 1)
#for i in xrange(0,10):
# raw_validation_set[i] = numpy.delete(raw_validation_set[i], to_delete, axis = 1)
    # create history
history_set = training_set[0:-1,:]
history_validation_set = validation_set[0:-1,:]
training_set = training_set[1:,:]
validation_set = validation_set[1:,:]
training_inputs= training_inputs[1:,:]
validation_inputs= validation_inputs[1:,:]
for i in xrange(0,len(raw_validation_set)):
raw_validation_set[i] = raw_validation_set[i][1:,:]
print numpy.shape(training_inputs[0])
params={}
params["LSCSM"]=True
db_node = db_node.get_child(params)
params={}
params["Delay"] = __main__.__dict__.get('Delay',0)
params["LaplacaBias"] = __main__.__dict__.get('LaplaceBias',0.0004)
params["LGN_NUM"] = __main__.__dict__.get('LgnNum',6)
params["num_neurons"] = __main__.__dict__.get('NumNeurons',103)
params["sequential"] = __main__.__dict__.get('Sequential',False)
params["ll"] = __main__.__dict__.get('LL',True)
params["V1OF"] = __main__.__dict__.get('V1OF','Exp')
params["LGNOF"] = __main__.__dict__.get('LGNOF','Exp')
params["BalancedLGN"] = __main__.__dict__.get('BalancedLGN',True)
params["LGNTreshold"] = __main__.__dict__.get('LGNTreshold',False)
params["SecondLayer"] = __main__.__dict__.get('SecondLayer',False)
params["HiddenLayerSize"] = __main__.__dict__.get('HiddenLayerSize',1.0)
params["LogLossCoef"] = __main__.__dict__.get('LogLossCoef',1.0)
params["NegativeLgn"] = __main__.__dict__.get('NegativeLgn',True)
params["MaxW"] = __main__.__dict__.get('MaxW',5000)
params["GenerationSize"] = __main__.__dict__.get('GenerationSize',100)
params["PopulationSize"] = __main__.__dict__.get('PopulationSize',100)
params["MutationRate"] = __main__.__dict__.get('MutationRate',0.05)
params["CrossoverRate"] = __main__.__dict__.get('CrossoverRate',0.9)
params["FromWhichNeuron"] = __main__.__dict__.get('FromWhichNeuron',0)
db_node1 = db_node
db_node = db_node.get_child(params)
num_neurons=params["num_neurons"]
training_set = training_set[:,params["FromWhichNeuron"]:params["FromWhichNeuron"]+num_neurons]
validation_set = validation_set[:,params["FromWhichNeuron"]:params["FromWhichNeuron"]+num_neurons]
for i in xrange(0,len(raw_validation_set)):
raw_validation_set[i] = raw_validation_set[i][:,params["FromWhichNeuron"]:params["FromWhichNeuron"]+num_neurons]
raw_validation_data_set=numpy.rollaxis(numpy.array(raw_validation_set),2)
if __main__.__dict__.get('Sequential',False):
[K,rpi,glm,rfs]= fitLSCSMEvoSequential(numpy.mat(training_inputs),numpy.mat(training_set),params["LGN_NUM"],num_neurons)
else:
[K,rpi,glm,rfs]= fitLSCSMEvo(numpy.mat(training_inputs),numpy.mat(training_set),params["LGN_NUM"],num_neurons)
pylab.figure()
m = numpy.max(numpy.abs(rfs))
print numpy.shape(rfs)
print num_neurons*__main__.__dict__.get('HiddenLayerSize',1.0)
    for i in xrange(0,int(num_neurons*__main__.__dict__.get('HiddenLayerSize',1.0))):
pylab.subplot(11,11,i+1)
pylab.imshow(numpy.reshape(rfs[i,0:kernel_size],(size,size)),vmin=-m,vmax=m,cmap=pylab.cm.RdBu,interpolation='nearest')
pylab.savefig('GLM_rfs.png')
pylab.figure()
m = numpy.max(numpy.abs(rpi))
for i in xrange(0,num_neurons):
pylab.subplot(11,11,i+1)
pylab.imshow(numpy.reshape(rpi[:,i],(size,size)),vmin=-m,vmax=m,cmap=pylab.cm.RdBu,interpolation='nearest')
pylab.savefig('RPI_rfs.png')
rpi_pred_act = training_inputs * rpi
rpi_pred_val_act = validation_inputs * rpi
if __main__.__dict__.get('Sequential',False):
glm_pred_act = numpy.hstack([glm[i].response(training_inputs,K[i]) for i in xrange(0,num_neurons)])
glm_pred_val_act = numpy.hstack([glm[i].response(validation_inputs,K[i]) for i in xrange(0,num_neurons)])
else:
glm_pred_act = glm.response(training_inputs,K)
glm_pred_val_act = glm.response(validation_inputs,K)
runLSCSMAnalysis(rpi_pred_act,rpi_pred_val_act,glm_pred_act,glm_pred_val_act,training_set,validation_set,num_neurons,raw_validation_data_set)
to_delete=[]
if len(raw_validation_set) != 1:
signal_power,noise_power,normalized_noise_power,training_prediction_power,validation_prediction_power,signal_power_variance = signal_power_test(raw_validation_data_set, numpy.array(training_set), numpy.array(validation_set), glm_pred_act, glm_pred_val_act)
to_delete = numpy.array(numpy.nonzero((numpy.array(normalized_noise_power) > 85) * 1.0))[0]
print 'After deleting ' , len(to_delete) , 'most noisy neurons (<15% signal to noise ratio)\n\n\n'
if len(to_delete) != num_neurons:
training_set = numpy.delete(training_set, to_delete, axis = 1)
validation_set = numpy.delete(validation_set, to_delete, axis = 1)
glm_pred_act = numpy.delete(glm_pred_act, to_delete, axis = 1)
glm_pred_val_act = numpy.delete(glm_pred_val_act, to_delete, axis = 1)
rpi_pred_act = numpy.delete(rpi_pred_act, to_delete, axis = 1)
rpi_pred_val_act = numpy.delete(rpi_pred_val_act, to_delete, axis = 1)
for i in xrange(0,len(raw_validation_set)):
raw_validation_set[i] = numpy.delete(raw_validation_set[i], to_delete, axis = 1)
raw_validation_data_set=numpy.rollaxis(numpy.array(raw_validation_set),2)
runLSCSMAnalysis(rpi_pred_act,rpi_pred_val_act,glm_pred_act,glm_pred_val_act,training_set,validation_set,num_neurons-len(to_delete),raw_validation_data_set)
db_node.add_data("Kernels",K,force=True)
db_node.add_data("LSCSM",glm,force=True)
contrib.dd.saveResults(res,normalize_path(__main__.__dict__.get('save_name','res.dat')))
def loadLSCSMandAnalyse():
res = contrib.dd.DB(None)
(sizex,sizey,training_inputs,training_set,validation_inputs,validation_set,ff,db_node) = contrib.JanA.dataimport.sortOutLoading(res)
raw_validation_set = db_node.data["raw_validation_set"]
res = contrib.dd.loadResults("LSCSMbasic.dat")
dataset_node = res.children[0].children[0]
training_set = dataset_node.data["training_set"]
validation_set = dataset_node.data["validation_set"]
training_inputs= dataset_node.data["training_inputs"]
validation_inputs= dataset_node.data["validation_inputs"]
lscsm_new = contrib.JanA.LSCSM.LSCSM1(numpy.mat(training_inputs),numpy.mat(training_set),8,103)
K = res.children[0].children[0].children[0].children[0].data["Kernels"]
lscsm = res.children[0].children[0].children[0].children[0].data["LSCSM"]
rfs = lscsm_new.returnRFs(K)
pred_act = lscsm.response(training_inputs,K)
pred_val_act = lscsm.response(validation_inputs,K)
sizex=numpy.sqrt(numpy.shape(training_inputs)[1])
from contrib.JanA.visualization import compareModelPerformanceWithRPI
compareModelPerformanceWithRPI(numpy.mat(training_set),numpy.mat(validation_set),numpy.mat(training_inputs),numpy.mat(validation_inputs),numpy.mat(pred_act),numpy.mat(pred_val_act),numpy.array(raw_validation_set),sizex,sizex,modelname='Model')
def runLSCSMAnalysis(rpi_pred_act,rpi_pred_val_act,glm_pred_act,glm_pred_val_act,training_set,validation_set,num_neurons,raw_validation_data_set):
ofs = run_nonlinearity_detection(numpy.mat(training_set),numpy.mat(rpi_pred_act),display=True)
rpi_pred_act_t = apply_output_function(numpy.mat(rpi_pred_act),ofs)
rpi_pred_val_act_t = apply_output_function(numpy.mat(rpi_pred_val_act),ofs)
ofs = run_nonlinearity_detection(numpy.mat(training_set),numpy.mat(glm_pred_act),display=True)
glm_pred_act_t = apply_output_function(numpy.mat(glm_pred_act),ofs)
glm_pred_val_act_t = apply_output_function(numpy.mat(glm_pred_val_act),ofs)
pylab.figure()
for i in xrange(0,num_neurons):
pylab.subplot(11,11,i+1)
pylab.plot(rpi_pred_val_act[:,i],validation_set[:,i],'o')
pylab.title('RPI')
pylab.savefig('RPI_val_relationship.png')
#print 'GLM PRED VAL ACT'
#print glm_pred_val_act
pylab.figure()
for i in xrange(0,num_neurons):
pylab.subplot(11,11,i+1)
pylab.plot(glm_pred_val_act[:,i],validation_set[:,i],'o')
pylab.title('GLM')
pylab.savefig('GLM_val_relationship.png')
pylab.figure()
for i in xrange(0,num_neurons):
pylab.subplot(11,11,i+1)
pylab.plot(rpi_pred_val_act_t[:,i],validation_set[:,i],'o')
pylab.title('RPI')
pylab.savefig('RPI_t_val_relationship.png')
pylab.figure()
for i in xrange(0,num_neurons):
pylab.subplot(11,11,i+1)
pylab.plot(glm_pred_val_act_t[:,i],validation_set[:,i],'o')
pylab.title('RPI')
pylab.savefig('GLM_t_val_relationship.png')
print numpy.shape(validation_set)
print numpy.shape(rpi_pred_val_act_t)
print numpy.shape(glm_pred_val_act)
pylab.figure()
pylab.plot(numpy.mean(numpy.power(validation_set - rpi_pred_val_act_t,2),0),numpy.mean(numpy.power(validation_set - glm_pred_val_act,2),0),'o')
pylab.hold(True)
pylab.plot([0.0,1.0],[0.0,1.0])
pylab.xlabel('RPI')
pylab.ylabel('GLM')
pylab.savefig('GLM_vs_RPI_MSE.png')
print 'RPI \n'
#(ranks,correct,pred) = performIdentification(validation_set,rpi_pred_val_act)
#print "Natural:", correct , "Mean rank:", numpy.mean(ranks) , "MSE", numpy.mean(numpy.power(validation_set - rpi_pred_val_act,2))
#(ranks,correct,pred) = performIdentification(validation_set,rpi_pred_val_act_t)
#print "Natural+TF:", correct , "Mean rank:", numpy.mean(ranks) , "MSE", numpy.mean(numpy.power(validation_set - rpi_pred_val_act_t,2))
if numpy.shape(raw_validation_data_set)[1] != 1:
signal_power,noise_power,normalized_noise_power,training_prediction_power,validation_prediction_power,signal_power_variance = signal_power_test(raw_validation_data_set, numpy.array(training_set), numpy.array(validation_set), numpy.array(rpi_pred_act), numpy.array(rpi_pred_val_act))
signal_power,noise_power,normalized_noise_power,training_prediction_power_t,rpi_validation_prediction_power_t,signal_power_variance = signal_power_test(raw_validation_data_set, numpy.array(training_set), numpy.array(validation_set), numpy.array(rpi_pred_act_t), numpy.array(rpi_pred_val_act_t))
print "Prediction power on training set / validation set: ", numpy.mean(training_prediction_power) , " / " , numpy.mean(validation_prediction_power)
print "Prediction power after TF on training set / validation set: ", numpy.mean(training_prediction_power_t) , " / " , numpy.mean(rpi_validation_prediction_power_t)
print 'WithoutTF'
printCorrelationAnalysis(training_set,validation_set,rpi_pred_act,rpi_pred_val_act)
print 'WithTF'
printCorrelationAnalysis(training_set,validation_set,rpi_pred_act_t,rpi_pred_val_act_t)
print '\n \n GLM \n'
#(ranks,correct,pred) = performIdentification(validation_set,glm_pred_val_act)
#print "Natural:", correct , "Mean rank:", numpy.mean(ranks) , "MSE", numpy.mean(numpy.power(validation_set - glm_pred_val_act,2))
#(ranks,correct,pred) = performIdentification(validation_set,glm_pred_val_act_t)
#print "Natural+TF:", correct , "Mean rank:", numpy.mean(ranks) , "MSE", numpy.mean(numpy.power(validation_set - glm_pred_val_act_t,2))
if numpy.shape(raw_validation_data_set)[1] != 1:
glm_signal_power,glm_noise_power,glm_normalized_noise_power,glm_training_prediction_power,glm_validation_prediction_power,glm_signal_power_variance = signal_power_test(raw_validation_data_set, numpy.array(training_set), numpy.array(validation_set), numpy.array(glm_pred_act), numpy.array(glm_pred_val_act))
glm_signal_power_t,glm_noise_power_t,glm_normalized_noise_power_t,glm_training_prediction_power_t,glm_validation_prediction_power_t,glm_signal_power_variances_t = signal_power_test(raw_validation_data_set, numpy.array(training_set), numpy.array(validation_set), numpy.array(glm_pred_act_t), numpy.array(glm_pred_val_act_t))
print "Prediction power on training set / validation set: ", numpy.mean(glm_training_prediction_power) , " / " , numpy.mean(glm_validation_prediction_power)
print "Prediction power after TF on training set / validation set: ", numpy.mean(glm_training_prediction_power_t) , " / " , numpy.mean(glm_validation_prediction_power_t)
print 'WithoutTF'
printCorrelationAnalysis(training_set,validation_set,glm_pred_act,glm_pred_val_act)
print 'WithTF'
printCorrelationAnalysis(training_set,validation_set,glm_pred_act_t,glm_pred_val_act_t)
if numpy.shape(raw_validation_data_set)[1] != 1:
pylab.figure()
pylab.plot(rpi_validation_prediction_power_t[:num_neurons],glm_validation_prediction_power[:num_neurons],'o')
pylab.hold(True)
pylab.plot([0.0,1.0],[0.0,1.0])
pylab.xlabel('RPI_t')
pylab.ylabel('GLM')
pylab.savefig('GLM_vs_RPIt_prediction_power.png')
pylab.figure()
pylab.plot(rpi_validation_prediction_power_t[:num_neurons],glm_validation_prediction_power_t[:num_neurons],'o')
pylab.hold(True)
pylab.plot([0.0,1.0],[0.0,1.0])
pylab.xlabel('RPI_t')
pylab.ylabel('GLM_t')
pylab.savefig('GLMt_vs_RPIt_prediction_power.png')
#db_node.add_data("Kernels",K,force=True)
#db_node.add_data("GLM",glm,force=True)
#db_node.add_data("ReversCorrelationPredictedActivities",glm_pred_act,force=True)
#db_node.add_data("ReversCorrelationPredictedActivities+TF",glm_pred_act_t,force=True)
#db_node.add_data("ReversCorrelationPredictedValidationActivities",glm_pred_val_act,force=True)
#db_node.add_data("ReversCorrelationPredictedValidationActivities+TF",glm_pred_val_act_t,force=True)
#return [K,validation_inputs, validation_set]
| [
"[email protected]"
] | |
d3fd8759143553d27047922048b584d72c658ecd | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /gaussiana/ch3_2019_04_11_02_04_35_965689.py | 1d72b19e863e45f9715f26e5fa15712c131d7587 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 127 | py | import math
def calcula_gaussiana(x,mi,sigma):
    Y = 1/(sigma*(2*math.pi)**0.5)*(math.exp(-0.5*((x-mi)/sigma)**2))
return Y | [
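# Hedged usage sketch (not part of the submission): with the corrected
# normalisation above, the standard normal density at its mean is
# 1/sqrt(2*pi) ~= 0.3989.
if __name__ == '__main__':
    print(calcula_gaussiana(0, 0, 1))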
"[email protected]"
] | |
3e5b0ba83f75b854bad02097f5a41b1eb1fa092c | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/429/usersdata/321/104630/submittedfiles/jogoDaVelha_BIB.py | ef680987742f43f7773f7fd8ccbe646aa5e714c4 | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,288 | py | # -*- coding: utf-8 -*-
# PLACE YOUR LIBRARY FROM THIS POINT ON
import random
tabuleiro =[
[1,1,1],
[1,1,1],
[1,1,1]]
def nome():
    nome = str(input('What is your name? \n'))
return nome
def simbolo():
    s = str(input('Which symbol do you want to use in the game? (X or O) \n'))
    while s != 'X' and s != 'O':
        print('Enter a valid symbol.')
        s = str(input('Which symbol do you want to use in the game? (X or O) '))
    if s == 'X':
        computador = 'O'
    else:
        computador = 'X'
    return s, computador
def sorteio():
#j1 = 'nome'
j2 = 'Computador'
sort = random.randint(0,1)
if sort == 1:
        return print('Winner of the draw to start the game: %s' % nome)
if sort == 0:
        return print('Winner of the draw to start the game: %s' % j2)
def mostrar_tabuleiro(tabuleiro):
    # str() so the board renders whether cells hold ints or symbols
    print(str(tabuleiro[0][0])+'|'+str(tabuleiro[0][1])+'|'+str(tabuleiro[0][2]))
    print('')
    print(str(tabuleiro[1][0])+'|'+str(tabuleiro[1][1])+'|'+str(tabuleiro[1][2]))
    print('')
    print(str(tabuleiro[2][0])+'|'+str(tabuleiro[2][1])+'|'+str(tabuleiro[2][2]))
    return
def jogada(v):
    # v: list of positions still available (it was an undefined global before)
    sorteado = 1
    print(v.index(sorteado))
    v.remove(sorteado)
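# Hedged usage sketch (illustrative, not part of the submission): render a
# sample board with mostrar_tabuleiro; any printable cell values work.
if __name__ == '__main__':
    mostrar_tabuleiro([['X', 'O', 'X'],
                       ['O', 'X', 'O'],
                       ['X', 'O', 'X']])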
| [
"[email protected]"
] | |
e05f017bf8b3f62820b45ff9151749491fee423e | 10d08b2531672de54d924fec6654a978cb082055 | /fos/actor/polygonlines.py | 31b4955bed4b6a56f181ca6606143957e9376dcc | [
"BSD-3-Clause"
] | permissive | arokem/Fos | 05afa04f05ba3b0585e38dfa79ec7ae4332ec8f9 | 5066bbd74954ba7e60eeb06451f3a25ef1b291e2 | refs/heads/master | 2021-01-18T15:59:59.905221 | 2011-08-15T14:16:50 | 2011-08-15T14:16:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,855 | py | import scipy.spatial as sp
import numpy as np
from fos.lib.pyglet.gl import *
from fos import Actor
from fos.core.intersection import ray_aabb_intersection
class PolygonLines(Actor):
def __init__(self,
vertices,
connectivity,
colors = None,
affine = None):
""" A TreeRegion, composed of many trees
vertices : Nx3
Local 3D coordinates x,y,z
connectivity : Mx2
Polygon line topology
colors : Nx4 or 1x4, float [0,1]
Color per vertex
affine : 4x4
Affine transformation of the actor
including translation
"""
super(PolygonLines, self).__init__()
if affine is None:
self.affine = np.eye(4, dtype = np.float32)
else:
self.affine = affine
self._update_glaffine()
self.vertices = vertices
self.connectivity = connectivity.ravel()
# this coloring section is for per/vertex color
if colors is None:
# default colors for each line
self.colors = np.array( [[1.0, 0.0, 0.0, 1.0]], dtype = np.float32).repeat(len(self.vertices), axis=0)
else:
# colors array is half the size of the connectivity array
assert( len(self.vertices) == len(colors) )
self.colors = colors
# create AABB using the vertices
self.make_aabb(margin=2.0)
# create kdtree
self.kdtree = sp.KDTree(self.vertices, 5)
# create pointers
self.vertices_ptr = self.vertices.ctypes.data
self.connectivity_ptr = self.connectivity.ctypes.data
self.connectivity_nr = self.connectivity.size
self.colors_ptr = self.colors.ctypes.data
# VBO related
self.vertex_vbo = GLuint(0)
glGenBuffers(1, self.vertex_vbo)
glBindBuffer(GL_ARRAY_BUFFER_ARB, self.vertex_vbo)
glBufferData(GL_ARRAY_BUFFER_ARB, 4 * self.vertices.size, self.vertices_ptr, GL_STATIC_DRAW)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
# for colors
self.colors_vbo = GLuint(0)
glGenBuffers(1, self.colors_vbo)
glBindBuffer(GL_ARRAY_BUFFER, self.colors_vbo)
glBufferData(GL_ARRAY_BUFFER, 4 * self.colors.size, self.colors_ptr, GL_STATIC_DRAW)
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0)
# for connectivity
self.connectivity_vbo = GLuint(0)
glGenBuffers(1, self.connectivity_vbo)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.connectivity_vbo)
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 4 * self.connectivity_nr, self.connectivity_ptr, GL_STATIC_DRAW)
def process_pickray(self,near,far):
""" Called when picking hit this actor
"""
        # subdivide the pick ray and query the kdtree to find the indices of
        # nearby points, then do pick-ray / line-segment intersection
print "-------------"
print "near", near
print "far", far
        # find intersection points with aabb
# assume that intersection exists
near = np.array(near)
far = np.array(far)
print 'boundingbox', self.aabb.coord[0], self.aabb.coord[1]
ab1, ab2 = self.get_aabb_coords()
re = ray_aabb_intersection(near, far, ab1, ab2)
print "returned intersection points", re
# needs to have at least 2
if len(re) < 2:
return False
ne = np.array(re[0])
fa = np.array(re[1])
print "used near", ne
print "used far", fa
d = (fa-ne) / np.linalg.norm(fa-ne)
# how many subdivisions of the unit vector
nr_subdiv = 20
kdtree_sphere_query_radius = 2.0
        dt = np.linalg.norm(fa-ne) / nr_subdiv  # step so the samples span the whole near-far segment
print "kdtree"
print self.kdtree.mins, self.kdtree.maxes
# create points
for i in range(nr_subdiv+1):
point = ne + (dt*i) * d
# apply inverse of affine transformation to get back
# to the original vertices space
point_inv = np.dot( np.linalg.inv(self.affine), np.array( [point[0], point[1], point[2], 1.0] ) )
point_inv2 = point_inv[:3]
ind = self.kdtree.query_ball_point(point_inv2, kdtree_sphere_query_radius)
if len(ind) > 0:
print 'change colors'
self.colors[ ind, 1 ] = 1.0
self.colors[ ind, 0 ] = 0.0
def draw(self):
self.draw2()
def draw1(self):
glPushMatrix()
glMultMatrixf(self.glaffine)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glBindBuffer(GL_ARRAY_BUFFER_ARB, self.vertex_vbo)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0)
glBindBuffer(GL_ARRAY_BUFFER_ARB, self.colors_vbo)
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0)
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.connectivity_vbo)
glDrawElements(GL_LINES, self.connectivity_nr, GL_UNSIGNED_INT, 0)
if self.show_aabb:
self.draw_aabb()
glPopMatrix()
def draw2(self):
glPushMatrix()
glMultMatrixf(self.glaffine)
glEnable(GL_BLEND)
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
glEnableClientState(GL_VERTEX_ARRAY)
glEnableClientState(GL_COLOR_ARRAY)
glVertexPointer(3, GL_FLOAT, 0, self.vertices_ptr)
glColorPointer(4, GL_FLOAT, 0, self.colors_ptr)
glDrawElements(GL_LINES, self.connectivity_nr, GL_UNSIGNED_INT, self.connectivity_ptr)
glDisableClientState(GL_COLOR_ARRAY)
glDisableClientState(GL_VERTEX_ARRAY)
if self.show_aabb:
self.draw_aabb()
glPopMatrix()
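# Hedged standalone sketch (not part of the original actor): process_pickray
# above samples points along the clipped ray and asks a scipy KDTree for
# vertex indices within a radius of each sample. The same technique in
# isolation, with illustrative names and radius:
def _pickray_kdtree_demo():
    vertices = np.random.rand(100, 3)
    tree = sp.KDTree(vertices)
    near, far = np.zeros(3), np.ones(3)
    direction = (far - near) / np.linalg.norm(far - near)
    nr_subdiv = 20
    dt = np.linalg.norm(far - near) / nr_subdiv
    hit = set()
    for i in range(nr_subdiv + 1):
        point = near + (dt * i) * direction
        hit.update(tree.query_ball_point(point, 0.1))
    return sorted(hit)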
| [
"[email protected]"
] | |
a4a434e1abbb243fe9c141b2cd61b361631bc774 | 55ceefc747e19cdf853e329dba06723a44a42623 | /_CodeTopics/LeetCode/1-200/000012/000012.py | 34382f7be371ec1d8887db2f2cda6c7f0cd89365 | [] | no_license | BIAOXYZ/variousCodes | 6c04f3e257dbf87cbe73c98c72aaa384fc033690 | ee59b82125f100970c842d5e1245287c484d6649 | refs/heads/master | 2023-09-04T10:01:31.998311 | 2023-08-26T19:44:39 | 2023-08-26T19:44:39 | 152,967,312 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 1,035 | py | class Solution(object):
def intToRoman(self, num):
"""
:type num: int
:rtype: str
"""
res = ""
nums = [1000, 500, 100, 50, 10, 5, 1]
letters = ["M", "D", "C", "L", "X", "V", "I"]
i = 0
flag = 0
while num > 0:
if num >= nums[i]:
res += letters[i]
num -= nums[i]
flag += 1
if flag == 4:
res = res[:-4] + letters[i] + letters[i-1]
if len(res) > 2 and res[-3] == res[-1]:
res = res[:-3] + letters[i] + letters[i-2]
else:
i += 1
flag = 0
return res
"""
https://leetcode-cn.com/submissions/detail/177308458/
3999 / 3999 个通过测试用例
状态:通过
执行用时: 56 ms
内存消耗: 13.1 MB
执行用时:56 ms, 在所有 Python 提交中击败了70.77%的用户
内存消耗:13.1 MB, 在所有 Python 提交中击败了19.03%的用户
"""
| [
"[email protected]"
] | |
c477c1338273b6757adf50fb02167e6cc81c5192 | 8015f1c62a2cb4efd21aa8938336913bf8117868 | /bamap/ba3638.pngMap.py | 2a662be28a2e2003287a7c148ccf7bb84271f63c | [] | no_license | GamerNoTitle/Beepers-and-OLED | 675b5e3c179df0f0e27b42bf594c43860d03b9af | afe1340e5394ae96bda5f9022a8a66824368091e | refs/heads/master | 2020-04-20T00:09:47.122471 | 2019-04-29T04:59:35 | 2019-04-29T04:59:35 | 168,515,579 | 4 | 2 | null | null | null | null | UTF-8 | Python | false | false | 8,468 | py | ba3638.pngMap = [
'11111111111111111111111111111111111111111111111110001111111111111111111111111111111111111111110100111111111111111111111111111111',
'11111111111111111111111111111111111111111111111110001111111111111111111111111111111111111111110000001111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111001111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111100001111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111110011111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111111000000011111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111100000000011111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111110001111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111101000000000000001011111111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111000000000000000000000000001110000111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111100000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111110000000000000000000000000000000000111111111111111111111111111111111111110101111111111111111111111111111111111111',
'11111111111111100000000000000000000000000000000000111111111111111111111111111111111111010011111111111111111111111111111111111111',
'11111111111100000000000000000000000000000000000000001111111111111111111111111111111111100011111111111111111111111111111111111111',
'11111111111000000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111110000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111110000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111000000000000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111110000000000000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111',
'11111110000000000000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111000000000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111',
'11110000000000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000000000101111111111111111111111111111111111111111111111111111111111111111111111111111',
'11110000000000000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11110000000000000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000000011111100000111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000001111111111001111111111111111111111111111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111110000000000000000000000000000000000000001111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111101000000000000000000000000000000000000111111111111111111111111111100011111111111111111111111000011111111111111111111111111',
'11111100000000000000000000000000000000000000111111111111111111111111111111111111111111111111111111000011111111111111111111111111',
'11111111101000000000000000000000000000000001111111111111111111111111111111111111111011111111111110000111111111111111111111111111',
'11111111110000000000000000000000000000000011111111111111111111111111111111111111100001111111111110001111111111111111111111111111',
'11111111111111100000000000000000000000011111111111111111111111111111111111111101111111111111111111011111111111111111111111111111',
'11111111111111000000000000000000000011111111111111111111111111111111111111111100111111111111111111111111111111111111111111111111',
'11111111111111111111000000000000000000111111111111111111111111111111111111111101111111111111111111111111111111111111111111111111',
'11111111111111111111110000000000000001111111111111111111111111111111111111111100111011111111111111111111111111111111111111111111',
'11111111001100111111110000000000000000111111111111111111111111111111111111111111110011111111000111111111111111111111111111111111',
'11111110000100111111100000000000000000011111111111111111111111111111111111111111111101111100000011111111111111111111111111111111',
'11111111011111111101000000000000000000111111111111111111111111111111111111111111111111111110100011111111111111111111111111111111',
'11111111111111111100000000000000000000111111111111111111111111111111111111111111111111111111000011111111111111111111111111111111',
'11111111111111110000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111111110001111111',
'11111111111110000000000000000000000000011111111111111111111111111111111111111101111111111111111111111111111111111111110000111111',
'11111111100000000000000000000000000000011111111111111111111111111111111111111000001111111111111111111111111111111111111111111111',
'00111110000000000000000000000000000000011111111111111111111111111111111111111100001111111111111111111111111111111111111111111111',
'00010100000000000000000000000000000000011111111111111111111111111111111111111101011111111111111111111111111111111111111111111111',
'01000000000000000000000000000000000000011111111111111111111111111111111010011111111111111111111111111111111111111111111111111111',
'01000000000000000000000000000000000000011111111111111111111111111111111000111111111111111111111111111111111111111111111111111111',
'10000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111111011111111111',
'00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111100000111110001',
'00000000000000000000000000000000000000011111111111111111111111111111111111111111111111111111111111111111111111111100001111110000',
'11000000000000000000000000000000000000011000111111111111111111111111111111111111111111111111111111111111111111111111111111111000',
'11110000000000000000000000000000000000010000111111111111111111111111111111111101000111111111111111111111111111111111111111111000',
'11110000000000000000000000000000000000000000111111111111111111111111111111111110000111111111111111111111111111111111111111111110',
'11110000000000000000000000000000000000000000111111111111111111110001111111111111001111111111111111111111111111111111111111111111',
'11111100000000000000000000000000000000010001111111111111111111000111111111111111111111111111111111111111111111111111111111111111',
'11111111111110101111000000000000000001110000000111111111111111000011111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111100000000000000111110011111111111111111111111111111111111111111111111111111111111111111111111111111111111',
'11111111111111111111111110000011100000111111111111111111111111111111111111111111111111111111111111111111111111111111111000011111',
]
| [
"[email protected]"
] | |
bd360494b3878fc03ec61fa166235858fdda2145 | 8072c1cf03f82fd1891536e93b6c28f16df82ea4 | /metamorphosys/META/meta/DesignDataPackage/lib/python/avm/__init__.py | 547231adf5bb6d039dc95b4fd07a3cd812a38db4 | [
"LicenseRef-scancode-other-permissive"
] | permissive | 15831944/metamorphosys-desktop | 90a26c6700c444accd58358c457d675eb2fdd59f | 4adb71c18f7811289fc4ba4dcb465e99e93d8e60 | refs/heads/master | 2021-05-30T13:34:07.037424 | 2015-09-24T22:09:05 | 2015-09-24T22:09:05 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 297,086 | py | # .\_avm.py
# -*- coding: utf-8 -*-
# PyXB bindings for NM:8c3bce54577a879cd94d42789711c9f5d444aa71
# Generated 2015-08-13 10:01:46.170000 by PyXB version 1.2.3
# Namespace avm [xmlns:avm]
import pyxb
import pyxb.binding
import pyxb.binding.saxer
import io
import pyxb.utils.utility
import pyxb.utils.domutils
import sys
# Unique identifier for bindings created at the same time
_GenerationUID = pyxb.utils.utility.UniqueIdentifier('urn:uuid:3717d230-41cc-11e5-ac39-7429af7917c0')
# Version of PyXB used to generate the bindings
_PyXBVersion = '1.2.3'
# Generated bindings are not compatible across PyXB versions
if pyxb.__version__ != _PyXBVersion:
raise pyxb.PyXBVersionError(_PyXBVersion)
# Import bindings for namespaces imported into schema
import pyxb.binding.datatypes
import iFAB as _ImportedBinding__iFAB
# NOTE: All namespace declarations are reserved within the binding
Namespace = pyxb.namespace.NamespaceForURI(u'avm', create_if_missing=True)
Namespace.configureCategories(['typeBinding', 'elementBinding'])
def CreateFromDocument (xml_text, default_namespace=None, location_base=None):
"""Parse the given XML and use the document element to create a
Python instance.
@param xml_text An XML document. This should be data (Python 2
str or Python 3 bytes), or a text (Python 2 unicode or Python 3
str) in the L{pyxb._InputEncoding} encoding.
@keyword default_namespace The L{pyxb.Namespace} instance to use as the
default namespace where there is no default namespace in scope.
If unspecified or C{None}, the namespace of the module containing
this function will be used.
@keyword location_base: An object to be recorded as the base of all
L{pyxb.utils.utility.Location} instances associated with events and
objects handled by the parser. You might pass the URI from which
the document was obtained.
"""
if pyxb.XMLStyle_saxer != pyxb._XMLStyle:
dom = pyxb.utils.domutils.StringToDOM(xml_text)
return CreateFromDOM(dom.documentElement)
if default_namespace is None:
default_namespace = Namespace.fallbackNamespace()
saxer = pyxb.binding.saxer.make_parser(fallback_namespace=default_namespace, location_base=location_base)
handler = saxer.getContentHandler()
xmld = xml_text
if isinstance(xmld, unicode):
xmld = xmld.encode(pyxb._InputEncoding)
saxer.parse(io.BytesIO(xmld))
instance = handler.rootObject()
return instance
def CreateFromDOM (node, default_namespace=None):
"""Create a Python instance from the given DOM node.
The node tag must correspond to an element declaration in this module.
@deprecated: Forcing use of DOM interface is unnecessary; use L{CreateFromDocument}."""
if default_namespace is None:
default_namespace = Namespace.fallbackNamespace()
return pyxb.binding.basis.element.AnyCreateFromDOM(node, default_namespace)
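# Hedged usage note (not part of the generated bindings): PyXB modules like
# this one are normally driven through CreateFromDocument, e.g. (illustrative
# file name; assumes a global element binding for Component is declared later
# in this module):
#
#   import avm
#   component = avm.CreateFromDocument(open('design.acm').read())
#   print component.Name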
# Atomic simple type: {avm}CalculationTypeEnum
class CalculationTypeEnum (pyxb.binding.datatypes.string, pyxb.binding.basis.enumeration_mixin):
"""An atomic simple type."""
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'CalculationTypeEnum')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 224, 2)
_Documentation = None
CalculationTypeEnum._CF_enumeration = pyxb.binding.facets.CF_enumeration(value_datatype=CalculationTypeEnum, enum_prefix=None)
CalculationTypeEnum.Declarative = CalculationTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Declarative', tag=u'Declarative')
CalculationTypeEnum.Python = CalculationTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Python', tag=u'Python')
CalculationTypeEnum._InitializeFacetMap(CalculationTypeEnum._CF_enumeration)
Namespace.addCategoryObject('typeBinding', u'CalculationTypeEnum', CalculationTypeEnum)
# Atomic simple type: {avm}DataTypeEnum
class DataTypeEnum (pyxb.binding.datatypes.string, pyxb.binding.basis.enumeration_mixin):
"""An atomic simple type."""
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DataTypeEnum')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 230, 2)
_Documentation = None
DataTypeEnum._CF_enumeration = pyxb.binding.facets.CF_enumeration(value_datatype=DataTypeEnum, enum_prefix=None)
DataTypeEnum.String = DataTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'String', tag=u'String')
DataTypeEnum.Boolean = DataTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Boolean', tag=u'Boolean')
DataTypeEnum.Integer = DataTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Integer', tag=u'Integer')
DataTypeEnum.Real = DataTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Real', tag=u'Real')
DataTypeEnum._InitializeFacetMap(DataTypeEnum._CF_enumeration)
Namespace.addCategoryObject('typeBinding', u'DataTypeEnum', DataTypeEnum)
# Atomic simple type: {avm}DimensionTypeEnum
class DimensionTypeEnum (pyxb.binding.datatypes.string, pyxb.binding.basis.enumeration_mixin):
"""An atomic simple type."""
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DimensionTypeEnum')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 238, 2)
_Documentation = None
DimensionTypeEnum._CF_enumeration = pyxb.binding.facets.CF_enumeration(value_datatype=DimensionTypeEnum, enum_prefix=None)
DimensionTypeEnum.Matrix = DimensionTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Matrix', tag=u'Matrix')
DimensionTypeEnum.Vector = DimensionTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Vector', tag=u'Vector')
DimensionTypeEnum.Scalar = DimensionTypeEnum._CF_enumeration.addEnumeration(unicode_value=u'Scalar', tag=u'Scalar')
DimensionTypeEnum._InitializeFacetMap(DimensionTypeEnum._CF_enumeration)
Namespace.addCategoryObject('typeBinding', u'DimensionTypeEnum', DimensionTypeEnum)
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 319, 6)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON._InitializeFacetMap()
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON_ (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 408, 6)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON_._InitializeFacetMap()
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON_2 (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 422, 6)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON_2._InitializeFacetMap()
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON_3 (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 428, 6)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON_3._InitializeFacetMap()
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON_4 (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 447, 10)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON_4._InitializeFacetMap()
# Atomic simple type: {avm}SimpleFormulaOperation
class SimpleFormulaOperation (pyxb.binding.datatypes.string, pyxb.binding.basis.enumeration_mixin):
"""An atomic simple type."""
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'SimpleFormulaOperation')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 454, 2)
_Documentation = None
SimpleFormulaOperation._CF_enumeration = pyxb.binding.facets.CF_enumeration(value_datatype=SimpleFormulaOperation, enum_prefix=None)
SimpleFormulaOperation.Addition = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'Addition', tag=u'Addition')
SimpleFormulaOperation.Multiplication = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'Multiplication', tag=u'Multiplication')
SimpleFormulaOperation.ArithmeticMean = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'ArithmeticMean', tag=u'ArithmeticMean')
SimpleFormulaOperation.GeometricMean = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'GeometricMean', tag=u'GeometricMean')
SimpleFormulaOperation.Maximum = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'Maximum', tag=u'Maximum')
SimpleFormulaOperation.Minimum = SimpleFormulaOperation._CF_enumeration.addEnumeration(unicode_value=u'Minimum', tag=u'Minimum')
SimpleFormulaOperation._InitializeFacetMap(SimpleFormulaOperation._CF_enumeration)
Namespace.addCategoryObject('typeBinding', u'SimpleFormulaOperation', SimpleFormulaOperation)
# Atomic simple type: {avm}DoDDistributionStatementEnum
class DoDDistributionStatementEnum (pyxb.binding.datatypes.string, pyxb.binding.basis.enumeration_mixin):
"""An atomic simple type."""
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DoDDistributionStatementEnum')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 488, 2)
_Documentation = None
DoDDistributionStatementEnum._CF_enumeration = pyxb.binding.facets.CF_enumeration(value_datatype=DoDDistributionStatementEnum, enum_prefix=None)
DoDDistributionStatementEnum.StatementA = DoDDistributionStatementEnum._CF_enumeration.addEnumeration(unicode_value=u'StatementA', tag=u'StatementA')
DoDDistributionStatementEnum.StatementB = DoDDistributionStatementEnum._CF_enumeration.addEnumeration(unicode_value=u'StatementB', tag=u'StatementB')
DoDDistributionStatementEnum.StatementC = DoDDistributionStatementEnum._CF_enumeration.addEnumeration(unicode_value=u'StatementC', tag=u'StatementC')
DoDDistributionStatementEnum.StatementD = DoDDistributionStatementEnum._CF_enumeration.addEnumeration(unicode_value=u'StatementD', tag=u'StatementD')
DoDDistributionStatementEnum.StatementE = DoDDistributionStatementEnum._CF_enumeration.addEnumeration(unicode_value=u'StatementE', tag=u'StatementE')
DoDDistributionStatementEnum._InitializeFacetMap(DoDDistributionStatementEnum._CF_enumeration)
Namespace.addCategoryObject('typeBinding', u'DoDDistributionStatementEnum', DoDDistributionStatementEnum)
# List simple type: [anonymous]
# superclasses pyxb.binding.datatypes.anySimpleType
class STD_ANON_5 (pyxb.binding.basis.STD_list):
"""Simple type that is a list of pyxb.binding.datatypes.anyURI."""
_ExpandedName = None
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 580, 10)
_Documentation = None
_ItemType = pyxb.binding.datatypes.anyURI
STD_ANON_5._InitializeFacetMap()
# Complex type {avm}Component with content type ELEMENT_ONLY
class Component_ (pyxb.binding.basis.complexTypeDefinition):
"""Test documentation for Component type. Yep."""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Component')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 67, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element DomainModel uses Python identifier DomainModel
__DomainModel = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DomainModel'), 'DomainModel', '__avm_Component__DomainModel', True, pyxb.utils.utility.Location(u'avm.xsd', 72, 6), )
DomainModel = property(__DomainModel.value, __DomainModel.set, None, None)
# Element Property uses Python identifier Property
__Property = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Property'), 'Property', '__avm_Component__Property', True, pyxb.utils.utility.Location(u'avm.xsd', 73, 6), )
Property = property(__Property.value, __Property.set, None, None)
# Element ResourceDependency uses Python identifier ResourceDependency
__ResourceDependency = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), 'ResourceDependency', '__avm_Component__ResourceDependency', True, pyxb.utils.utility.Location(u'avm.xsd', 74, 6), )
ResourceDependency = property(__ResourceDependency.value, __ResourceDependency.set, None, None)
# Element Connector uses Python identifier Connector
__Connector = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Connector'), 'Connector', '__avm_Component__Connector', True, pyxb.utils.utility.Location(u'avm.xsd', 75, 6), )
Connector = property(__Connector.value, __Connector.set, None, None)
# Element DistributionRestriction uses Python identifier DistributionRestriction
__DistributionRestriction = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DistributionRestriction'), 'DistributionRestriction', '__avm_Component__DistributionRestriction', True, pyxb.utils.utility.Location(u'avm.xsd', 76, 6), )
DistributionRestriction = property(__DistributionRestriction.value, __DistributionRestriction.set, None, None)
# Element Port uses Python identifier Port
__Port = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Port'), 'Port', '__avm_Component__Port', True, pyxb.utils.utility.Location(u'avm.xsd', 77, 6), )
Port = property(__Port.value, __Port.set, None, None)
# Element Classifications uses Python identifier Classifications
__Classifications = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Classifications'), 'Classifications', '__avm_Component__Classifications', True, pyxb.utils.utility.Location(u'avm.xsd', 78, 6), )
Classifications = property(__Classifications.value, __Classifications.set, None, None)
# Element AnalysisConstruct uses Python identifier AnalysisConstruct
__AnalysisConstruct = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'AnalysisConstruct'), 'AnalysisConstruct', '__avm_Component__AnalysisConstruct', True, pyxb.utils.utility.Location(u'avm.xsd', 79, 6), )
AnalysisConstruct = property(__AnalysisConstruct.value, __AnalysisConstruct.set, None, None)
# Element Supercedes uses Python identifier Supercedes
__Supercedes = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Supercedes'), 'Supercedes', '__avm_Component__Supercedes', True, pyxb.utils.utility.Location(u'avm.xsd', 80, 6), )
Supercedes = property(__Supercedes.value, __Supercedes.set, None, None)
# Element Formula uses Python identifier Formula
__Formula = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Formula'), 'Formula', '__avm_Component__Formula', True, pyxb.utils.utility.Location(u'avm.xsd', 81, 6), )
Formula = property(__Formula.value, __Formula.set, None, None)
# Element DomainMapping uses Python identifier DomainMapping
__DomainMapping = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DomainMapping'), 'DomainMapping', '__avm_Component__DomainMapping', True, pyxb.utils.utility.Location(u'avm.xsd', 82, 6), )
DomainMapping = property(__DomainMapping.value, __DomainMapping.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Component__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 84, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 84, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute Version uses Python identifier Version
__Version = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Version'), 'Version', '__avm_Component__Version', pyxb.binding.datatypes.string)
__Version._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 85, 4)
__Version._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 85, 4)
Version = property(__Version.value, __Version.set, None, None)
# Attribute SchemaVersion uses Python identifier SchemaVersion
__SchemaVersion = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'SchemaVersion'), 'SchemaVersion', '__avm_Component__SchemaVersion', pyxb.binding.datatypes.string)
__SchemaVersion._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 86, 4)
__SchemaVersion._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 86, 4)
SchemaVersion = property(__SchemaVersion.value, __SchemaVersion.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_Component__ID', pyxb.binding.datatypes.string)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 87, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 87, 4)
ID = property(__ID.value, __ID.set, None, None)
_ElementMap.update({
__DomainModel.name() : __DomainModel,
__Property.name() : __Property,
__ResourceDependency.name() : __ResourceDependency,
__Connector.name() : __Connector,
__DistributionRestriction.name() : __DistributionRestriction,
__Port.name() : __Port,
__Classifications.name() : __Classifications,
__AnalysisConstruct.name() : __AnalysisConstruct,
__Supercedes.name() : __Supercedes,
__Formula.name() : __Formula,
__DomainMapping.name() : __DomainMapping
})
_AttributeMap.update({
__Name.name() : __Name,
__Version.name() : __Version,
__SchemaVersion.name() : __SchemaVersion,
__ID.name() : __ID
})
Namespace.addCategoryObject('typeBinding', u'Component', Component_)
# Complex type {avm}DomainModel with content type EMPTY
class DomainModel_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DomainModel with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DomainModel')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 89, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute UsesResource uses Python identifier UsesResource
__UsesResource = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'UsesResource'), 'UsesResource', '__avm_DomainModel__UsesResource', pyxb.binding.datatypes.IDREFS)
__UsesResource._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 90, 4)
__UsesResource._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 90, 4)
UsesResource = property(__UsesResource.value, __UsesResource.set, None, None)
# Attribute Author uses Python identifier Author
__Author = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Author'), 'Author', '__avm_DomainModel__Author', pyxb.binding.datatypes.string)
__Author._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 91, 4)
__Author._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 91, 4)
Author = property(__Author.value, __Author.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_DomainModel__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 92, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 92, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_DomainModel__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 93, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 93, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_DomainModel__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 94, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 94, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_DomainModel__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 95, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 95, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_DomainModel__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 96, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 96, 4)
ID = property(__ID.value, __ID.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__UsesResource.name() : __UsesResource,
__Author.name() : __Author,
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition,
__Name.name() : __Name,
__ID.name() : __ID
})
Namespace.addCategoryObject('typeBinding', u'DomainModel', DomainModel_)
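
# Editorial note: DomainModel_ is abstract (_Abstract = True above), so
# instances in a document are always concrete subtypes selected via xsi:type.
# A minimal dispatch sketch over a parsed Component (subtype names are
# schema-specific):
#
#   for dm in comp.DomainModel:        # multi-occurrence elements act as lists
#       print(type(dm).__name__, dm.Name, dm.Author)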

# Complex type {avm}Property with content type EMPTY
class Property_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Property with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Property')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 98, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Property__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 99, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 99, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute OnDataSheet uses Python identifier OnDataSheet
__OnDataSheet = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'OnDataSheet'), 'OnDataSheet', '__avm_Property__OnDataSheet', pyxb.binding.datatypes.boolean)
__OnDataSheet._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 100, 4)
__OnDataSheet._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 100, 4)
OnDataSheet = property(__OnDataSheet.value, __OnDataSheet.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_Property__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 101, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 101, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute Definition uses Python identifier Definition
__Definition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Definition'), 'Definition', '__avm_Property__Definition', pyxb.binding.datatypes.anyURI)
__Definition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 102, 4)
__Definition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 102, 4)
Definition = property(__Definition.value, __Definition.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_Property__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 103, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 103, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Property__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 104, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 104, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Property__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 105, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 105, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Name.name() : __Name,
__OnDataSheet.name() : __OnDataSheet,
__Notes.name() : __Notes,
__Definition.name() : __Definition,
__ID.name() : __ID,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'Property', Property_)

# Complex type {avm}Resource with content type EMPTY
class Resource_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Resource with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Resource')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 148, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Resource__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 149, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 149, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute Path uses Python identifier Path
__Path = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Path'), 'Path', '__avm_Resource__Path', pyxb.binding.datatypes.anyURI)
__Path._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 150, 4)
__Path._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 150, 4)
Path = property(__Path.value, __Path.set, None, None)
# Attribute Hash uses Python identifier Hash
__Hash = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Hash'), 'Hash', '__avm_Resource__Hash', pyxb.binding.datatypes.string)
__Hash._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 151, 4)
__Hash._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 151, 4)
Hash = property(__Hash.value, __Hash.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_Resource__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 152, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 152, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_Resource__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 153, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 153, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Resource__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 154, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 154, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Resource__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 155, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 155, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Name.name() : __Name,
__Path.name() : __Path,
__Hash.name() : __Hash,
__ID.name() : __ID,
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'Resource', Resource_)
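
# Editorial usage sketch (illustrative only): Resource_ is concrete
# (_Abstract = False), so it can be built directly; Path is an xs:anyURI and
# is handled as an ordinary string:
#
#   res = Resource_(Name=u'cad-model', Path=u'CAD/battery.prt', ID=u'res-1')
#   res.validateBinding()              # standard PyXB validation entry point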

# Complex type {avm}DomainModelParameter with content type EMPTY
class DomainModelParameter_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DomainModelParameter with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DomainModelParameter')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 191, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_DomainModelParameter__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 192, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 192, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_DomainModelParameter__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 193, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 193, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_DomainModelParameter__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 194, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 194, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'DomainModelParameter', DomainModelParameter_)

# Complex type {avm}ValueExpressionType with content type EMPTY
class ValueExpressionType_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ValueExpressionType with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ValueExpressionType')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 208, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ValueExpressionType', ValueExpressionType_)

# Complex type {avm}DistributionRestriction with content type EMPTY
class DistributionRestriction_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DistributionRestriction with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DistributionRestriction')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 245, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_DistributionRestriction__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 246, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 246, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Notes.name() : __Notes
})
Namespace.addCategoryObject('typeBinding', u'DistributionRestriction', DistributionRestriction_)

# Complex type {avm}DomainModelMetric with content type ELEMENT_ONLY
class DomainModelMetric_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DomainModelMetric with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DomainModelMetric')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 267, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element Value uses Python identifier Value
__Value = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Value'), 'Value', '__avm_DomainModelMetric__Value', False, pyxb.utils.utility.Location(u'avm.xsd', 269, 6), )
Value = property(__Value.value, __Value.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_DomainModelMetric__ID', pyxb.binding.datatypes.ID, required=True)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 271, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 271, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_DomainModelMetric__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 272, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 272, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_DomainModelMetric__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 273, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 273, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_DomainModelMetric__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 274, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 274, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
__Value.name() : __Value
})
_AttributeMap.update({
__ID.name() : __ID,
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'DomainModelMetric', DomainModelMetric_)
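
# Editorial note: DomainModelMetric_ is abstract and its ID attribute is
# required (required=True above); PyXB defers that check to validation, so a
# concrete subtype built without an ID fails at validateBinding() rather than
# in the constructor. Sketch (hypothetical subtype name):
#
#   m = SomeConcreteMetric()
#   m.validateBinding()                # raises: required attribute ID unset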

# Complex type {avm}AnalysisConstruct with content type EMPTY
class AnalysisConstruct_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}AnalysisConstruct with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'AnalysisConstruct')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 315, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'AnalysisConstruct', AnalysisConstruct_)

# Complex type {avm}Design with content type ELEMENT_ONLY
class Design_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Design with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Design')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 324, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element RootContainer uses Python identifier RootContainer
__RootContainer = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'RootContainer'), 'RootContainer', '__avm_Design__RootContainer', False, pyxb.utils.utility.Location(u'avm.xsd', 326, 6), )
RootContainer = property(__RootContainer.value, __RootContainer.set, None, None)
# Element DomainFeature uses Python identifier DomainFeature
__DomainFeature = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DomainFeature'), 'DomainFeature', '__avm_Design__DomainFeature', True, pyxb.utils.utility.Location(u'avm.xsd', 327, 6), )
DomainFeature = property(__DomainFeature.value, __DomainFeature.set, None, None)
# Element ResourceDependency uses Python identifier ResourceDependency
__ResourceDependency = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), 'ResourceDependency', '__avm_Design__ResourceDependency', True, pyxb.utils.utility.Location(u'avm.xsd', 328, 6), )
ResourceDependency = property(__ResourceDependency.value, __ResourceDependency.set, None, None)
# Attribute SchemaVersion uses Python identifier SchemaVersion
__SchemaVersion = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'SchemaVersion'), 'SchemaVersion', '__avm_Design__SchemaVersion', pyxb.binding.datatypes.string)
__SchemaVersion._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 330, 4)
__SchemaVersion._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 330, 4)
SchemaVersion = property(__SchemaVersion.value, __SchemaVersion.set, None, None)
# Attribute DesignID uses Python identifier DesignID
__DesignID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DesignID'), 'DesignID', '__avm_Design__DesignID', pyxb.binding.datatypes.string)
__DesignID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 331, 4)
__DesignID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 331, 4)
DesignID = property(__DesignID.value, __DesignID.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Design__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 332, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 332, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute DesignSpaceSrcID uses Python identifier DesignSpaceSrcID
__DesignSpaceSrcID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DesignSpaceSrcID'), 'DesignSpaceSrcID', '__avm_Design__DesignSpaceSrcID', pyxb.binding.datatypes.string)
__DesignSpaceSrcID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 333, 4)
__DesignSpaceSrcID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 333, 4)
DesignSpaceSrcID = property(__DesignSpaceSrcID.value, __DesignSpaceSrcID.set, None, None)
_ElementMap.update({
__RootContainer.name() : __RootContainer,
__DomainFeature.name() : __DomainFeature,
__ResourceDependency.name() : __ResourceDependency
})
_AttributeMap.update({
__SchemaVersion.name() : __SchemaVersion,
__DesignID.name() : __DesignID,
__Name.name() : __Name,
__DesignSpaceSrcID.name() : __DesignSpaceSrcID
})
Namespace.addCategoryObject('typeBinding', u'Design', Design_)
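
# Editorial usage sketch (illustrative only): a Design document couples one
# RootContainer with its identifying attributes. Parsing an existing design
# uses the module-level CreateFromDocument helper that PyXB generates near the
# top of this module (file name below is just an example):
#
#   design = CreateFromDocument(open('example.adm').read())
#   print(design.Name, design.DesignID)
#   root = design.RootContainer        # single-occurrence element: an instance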

# Complex type {avm}Container with content type ELEMENT_ONLY
class Container_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Container with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Container')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 335, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element Container uses Python identifier Container
__Container = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Container'), 'Container', '__avm_Container__Container', True, pyxb.utils.utility.Location(u'avm.xsd', 337, 6), )
Container = property(__Container.value, __Container.set, None, None)
# Element Property uses Python identifier Property
__Property = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Property'), 'Property', '__avm_Container__Property', True, pyxb.utils.utility.Location(u'avm.xsd', 338, 6), )
Property = property(__Property.value, __Property.set, None, None)
# Element ComponentInstance uses Python identifier ComponentInstance
__ComponentInstance = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ComponentInstance'), 'ComponentInstance', '__avm_Container__ComponentInstance', True, pyxb.utils.utility.Location(u'avm.xsd', 339, 6), )
ComponentInstance = property(__ComponentInstance.value, __ComponentInstance.set, None, None)
# Element Port uses Python identifier Port
__Port = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Port'), 'Port', '__avm_Container__Port', True, pyxb.utils.utility.Location(u'avm.xsd', 340, 6), )
Port = property(__Port.value, __Port.set, None, None)
# Element Connector uses Python identifier Connector
__Connector = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Connector'), 'Connector', '__avm_Container__Connector', True, pyxb.utils.utility.Location(u'avm.xsd', 341, 6), )
Connector = property(__Connector.value, __Connector.set, None, None)
# Element JoinData uses Python identifier JoinData
__JoinData = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'JoinData'), 'JoinData', '__avm_Container__JoinData', True, pyxb.utils.utility.Location(u'avm.xsd', 342, 6), )
JoinData = property(__JoinData.value, __JoinData.set, None, None)
# Element Formula uses Python identifier Formula
__Formula = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Formula'), 'Formula', '__avm_Container__Formula', True, pyxb.utils.utility.Location(u'avm.xsd', 343, 6), )
Formula = property(__Formula.value, __Formula.set, None, None)
# Element ContainerFeature uses Python identifier ContainerFeature
__ContainerFeature = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ContainerFeature'), 'ContainerFeature', '__avm_Container__ContainerFeature', True, pyxb.utils.utility.Location(u'avm.xsd', 344, 6), )
ContainerFeature = property(__ContainerFeature.value, __ContainerFeature.set, None, None)
# Element ResourceDependency uses Python identifier ResourceDependency
__ResourceDependency = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), 'ResourceDependency', '__avm_Container__ResourceDependency', True, pyxb.utils.utility.Location(u'avm.xsd', 345, 6), )
ResourceDependency = property(__ResourceDependency.value, __ResourceDependency.set, None, None)
# Element DomainModel uses Python identifier DomainModel
__DomainModel = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DomainModel'), 'DomainModel', '__avm_Container__DomainModel', True, pyxb.utils.utility.Location(u'avm.xsd', 346, 6), )
DomainModel = property(__DomainModel.value, __DomainModel.set, None, None)
# Element Resource uses Python identifier Resource
__Resource = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Resource'), 'Resource', '__avm_Container__Resource', True, pyxb.utils.utility.Location(u'avm.xsd', 347, 6), )
Resource = property(__Resource.value, __Resource.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Container__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 349, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 349, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Container__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 350, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 350, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Container__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 351, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 351, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_Container__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 352, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 352, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute Description uses Python identifier Description
__Description = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Description'), 'Description', '__avm_Container__Description', pyxb.binding.datatypes.string)
__Description._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 353, 4)
__Description._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 353, 4)
Description = property(__Description.value, __Description.set, None, None)
_ElementMap.update({
__Container.name() : __Container,
__Property.name() : __Property,
__ComponentInstance.name() : __ComponentInstance,
__Port.name() : __Port,
__Connector.name() : __Connector,
__JoinData.name() : __JoinData,
__Formula.name() : __Formula,
__ContainerFeature.name() : __ContainerFeature,
__ResourceDependency.name() : __ResourceDependency,
__DomainModel.name() : __DomainModel,
__Resource.name() : __Resource
})
_AttributeMap.update({
__XPosition.name() : __XPosition,
__Name.name() : __Name,
__YPosition.name() : __YPosition,
__ID.name() : __ID,
__Description.name() : __Description
})
Namespace.addCategoryObject('typeBinding', u'Container', Container_)
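
# Editorial note: Container_ is abstract and recursive -- its own Container
# child element lets concrete subtypes nest arbitrarily deep. A sketch of
# walking a parsed container tree:
#
#   def walk(container, depth=0):
#       print(u'  ' * depth + (container.Name or u''))
#       for child in container.Container:
#           walk(child, depth + 1)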

# Complex type {avm}ComponentInstance with content type ELEMENT_ONLY
class ComponentInstance_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ComponentInstance with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ComponentInstance')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 374, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element PortInstance uses Python identifier PortInstance
__PortInstance = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'PortInstance'), 'PortInstance', '__avm_ComponentInstance__PortInstance', True, pyxb.utils.utility.Location(u'avm.xsd', 376, 6), )
PortInstance = property(__PortInstance.value, __PortInstance.set, None, None)
# Element PrimitivePropertyInstance uses Python identifier PrimitivePropertyInstance
__PrimitivePropertyInstance = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'PrimitivePropertyInstance'), 'PrimitivePropertyInstance', '__avm_ComponentInstance__PrimitivePropertyInstance', True, pyxb.utils.utility.Location(u'avm.xsd', 377, 6), )
PrimitivePropertyInstance = property(__PrimitivePropertyInstance.value, __PrimitivePropertyInstance.set, None, None)
# Element ConnectorInstance uses Python identifier ConnectorInstance
__ConnectorInstance = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ConnectorInstance'), 'ConnectorInstance', '__avm_ComponentInstance__ConnectorInstance', True, pyxb.utils.utility.Location(u'avm.xsd', 378, 6), )
ConnectorInstance = property(__ConnectorInstance.value, __ConnectorInstance.set, None, None)
# Attribute ComponentID uses Python identifier ComponentID
__ComponentID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ComponentID'), 'ComponentID', '__avm_ComponentInstance__ComponentID', pyxb.binding.datatypes.string)
__ComponentID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 380, 4)
__ComponentID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 380, 4)
ComponentID = property(__ComponentID.value, __ComponentID.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_ComponentInstance__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 381, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 381, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_ComponentInstance__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 382, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 382, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute DesignSpaceSrcComponentID uses Python identifier DesignSpaceSrcComponentID
__DesignSpaceSrcComponentID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DesignSpaceSrcComponentID'), 'DesignSpaceSrcComponentID', '__avm_ComponentInstance__DesignSpaceSrcComponentID', pyxb.binding.datatypes.string)
__DesignSpaceSrcComponentID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 383, 4)
__DesignSpaceSrcComponentID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 383, 4)
DesignSpaceSrcComponentID = property(__DesignSpaceSrcComponentID.value, __DesignSpaceSrcComponentID.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_ComponentInstance__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 384, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 384, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_ComponentInstance__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 385, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 385, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
__PortInstance.name() : __PortInstance,
__PrimitivePropertyInstance.name() : __PrimitivePropertyInstance,
__ConnectorInstance.name() : __ConnectorInstance
})
_AttributeMap.update({
__ComponentID.name() : __ComponentID,
__ID.name() : __ID,
__Name.name() : __Name,
__DesignSpaceSrcComponentID.name() : __DesignSpaceSrcComponentID,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'ComponentInstance', ComponentInstance_)
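
# Editorial note: a ComponentInstance points back at its Component definition
# through the ComponentID string rather than an object reference, so resolving
# it is an explicit lookup by the caller. Sketch (component_library is a
# hypothetical iterable of Component_ instances):
#
#   by_id = dict((c.ID, c) for c in component_library)
#   definition = by_id.get(instance.ComponentID)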

# Complex type {avm}ComponentPrimitivePropertyInstance with content type ELEMENT_ONLY
class ComponentPrimitivePropertyInstance_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ComponentPrimitivePropertyInstance with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ComponentPrimitivePropertyInstance')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 394, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element Value uses Python identifier Value
__Value = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Value'), 'Value', '__avm_ComponentPrimitivePropertyInstance__Value', False, pyxb.utils.utility.Location(u'avm.xsd', 396, 6), )
Value = property(__Value.value, __Value.set, None, None)
# Attribute IDinComponentModel uses Python identifier IDinComponentModel
__IDinComponentModel = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'IDinComponentModel'), 'IDinComponentModel', '__avm_ComponentPrimitivePropertyInstance__IDinComponentModel', pyxb.binding.datatypes.string, required=True)
__IDinComponentModel._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 398, 4)
__IDinComponentModel._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 398, 4)
IDinComponentModel = property(__IDinComponentModel.value, __IDinComponentModel.set, None, None)
_ElementMap.update({
__Value.name() : __Value
})
_AttributeMap.update({
__IDinComponentModel.name() : __IDinComponentModel
})
Namespace.addCategoryObject('typeBinding', u'ComponentPrimitivePropertyInstance', ComponentPrimitivePropertyInstance_)

# Complex type {avm}ValueNode with content type EMPTY
class ValueNode_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ValueNode with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ValueNode')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 464, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_ValueNode__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 465, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 465, 4)
ID = property(__ID.value, __ID.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__ID.name() : __ID
})
Namespace.addCategoryObject('typeBinding', u'ValueNode', ValueNode_)

# Complex type {avm}Operand with content type EMPTY
class Operand_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Operand with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Operand')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 477, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Symbol uses Python identifier Symbol
__Symbol = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Symbol'), 'Symbol', '__avm_Operand__Symbol', pyxb.binding.datatypes.string, required=True)
__Symbol._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 478, 4)
__Symbol._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 478, 4)
Symbol = property(__Symbol.value, __Symbol.set, None, None)
# Attribute ValueSource uses Python identifier ValueSource
__ValueSource = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ValueSource'), 'ValueSource', '__avm_Operand__ValueSource', pyxb.binding.datatypes.anyURI, required=True)
__ValueSource._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 479, 4)
__ValueSource._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 479, 4)
ValueSource = property(__ValueSource.value, __ValueSource.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Symbol.name() : __Symbol,
__ValueSource.name() : __ValueSource
})
Namespace.addCategoryObject('typeBinding', u'Operand', Operand_)
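
# Editorial usage sketch (illustrative only): both Operand attributes are
# required, so a well-formed operand always pairs the Symbol used in the
# owning expression with the ValueSource it binds to:
#
#   op = Operand_(Symbol=u'R1', ValueSource=u'value-42')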

# Complex type {avm}ConnectorFeature with content type EMPTY
class ConnectorFeature_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ConnectorFeature with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ConnectorFeature')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 497, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ConnectorFeature', ConnectorFeature_)

# Complex type {avm}ContainerFeature with content type EMPTY
class ContainerFeature_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ContainerFeature with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ContainerFeature')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 498, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ContainerFeature', ContainerFeature_)

# Complex type {avm}DomainMapping with content type EMPTY
class DomainMapping_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DomainMapping with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DomainMapping')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 499, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'DomainMapping', DomainMapping_)

# Complex type {avm}TestBench with content type ELEMENT_ONLY
class TestBench_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}TestBench with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'TestBench')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 500, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element TopLevelSystemUnderTest uses Python identifier TopLevelSystemUnderTest
__TopLevelSystemUnderTest = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'TopLevelSystemUnderTest'), 'TopLevelSystemUnderTest', '__avm_TestBench__TopLevelSystemUnderTest', False, pyxb.utils.utility.Location(u'avm.xsd', 502, 6), )
TopLevelSystemUnderTest = property(__TopLevelSystemUnderTest.value, __TopLevelSystemUnderTest.set, None, None)
# Element Parameter uses Python identifier Parameter
__Parameter = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Parameter'), 'Parameter', '__avm_TestBench__Parameter', True, pyxb.utils.utility.Location(u'avm.xsd', 503, 6), )
Parameter = property(__Parameter.value, __Parameter.set, None, None)
# Element Metric uses Python identifier Metric
__Metric = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Metric'), 'Metric', '__avm_TestBench__Metric', True, pyxb.utils.utility.Location(u'avm.xsd', 504, 6), )
Metric = property(__Metric.value, __Metric.set, None, None)
# Element TestInjectionPoint uses Python identifier TestInjectionPoint
__TestInjectionPoint = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'TestInjectionPoint'), 'TestInjectionPoint', '__avm_TestBench__TestInjectionPoint', True, pyxb.utils.utility.Location(u'avm.xsd', 505, 6), )
TestInjectionPoint = property(__TestInjectionPoint.value, __TestInjectionPoint.set, None, None)
# Element TestComponent uses Python identifier TestComponent
__TestComponent = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'TestComponent'), 'TestComponent', '__avm_TestBench__TestComponent', True, pyxb.utils.utility.Location(u'avm.xsd', 506, 6), )
TestComponent = property(__TestComponent.value, __TestComponent.set, None, None)
# Element Workflow uses Python identifier Workflow
__Workflow = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Workflow'), 'Workflow', '__avm_TestBench__Workflow', False, pyxb.utils.utility.Location(u'avm.xsd', 507, 6), )
Workflow = property(__Workflow.value, __Workflow.set, None, None)
# Element Settings uses Python identifier Settings
__Settings = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Settings'), 'Settings', '__avm_TestBench__Settings', True, pyxb.utils.utility.Location(u'avm.xsd', 508, 6), )
Settings = property(__Settings.value, __Settings.set, None, None)
# Element TestStructure uses Python identifier TestStructure
__TestStructure = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'TestStructure'), 'TestStructure', '__avm_TestBench__TestStructure', True, pyxb.utils.utility.Location(u'avm.xsd', 509, 6), )
TestStructure = property(__TestStructure.value, __TestStructure.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_TestBench__Name', pyxb.binding.datatypes.string, required=True)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 511, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 511, 4)
Name = property(__Name.value, __Name.set, None, None)
_ElementMap.update({
__TopLevelSystemUnderTest.name() : __TopLevelSystemUnderTest,
__Parameter.name() : __Parameter,
__Metric.name() : __Metric,
__TestInjectionPoint.name() : __TestInjectionPoint,
__TestComponent.name() : __TestComponent,
__Workflow.name() : __Workflow,
__Settings.name() : __Settings,
__TestStructure.name() : __TestStructure
})
_AttributeMap.update({
__Name.name() : __Name
})
Namespace.addCategoryObject('typeBinding', u'TestBench', TestBench_)
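
# Editorial usage sketch (illustrative only): a TestBench carries a single
# TopLevelSystemUnderTest plus repeated Parameter/Metric children; plural
# elements follow PyXB's list semantics, so children can be appended:
#
#   tb = TestBench_(Name=u'MassRollup')
#   tb.Metric.append(metric_binding)   # metric_binding: a hypothetical
#                                      # instance of the Metric element's type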

# Complex type {avm}ContainerInstanceBase with content type EMPTY
class ContainerInstanceBase_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ContainerInstanceBase with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ContainerInstanceBase')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 530, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute IDinSourceModel uses Python identifier IDinSourceModel
__IDinSourceModel = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'IDinSourceModel'), 'IDinSourceModel', '__avm_ContainerInstanceBase__IDinSourceModel', pyxb.binding.datatypes.string, required=True)
__IDinSourceModel._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 531, 4)
__IDinSourceModel._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 531, 4)
IDinSourceModel = property(__IDinSourceModel.value, __IDinSourceModel.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_ContainerInstanceBase__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 532, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 532, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_ContainerInstanceBase__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 533, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 533, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__IDinSourceModel.name() : __IDinSourceModel,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'ContainerInstanceBase', ContainerInstanceBase_)

# Complex type {avm}TestBenchValueBase with content type ELEMENT_ONLY
class TestBenchValueBase_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}TestBenchValueBase with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'TestBenchValueBase')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 535, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element Value uses Python identifier Value
__Value = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Value'), 'Value', '__avm_TestBenchValueBase__Value', False, pyxb.utils.utility.Location(u'avm.xsd', 537, 6), )
Value = property(__Value.value, __Value.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_TestBenchValueBase__ID', pyxb.binding.datatypes.string, required=True)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 539, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 539, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_TestBenchValueBase__Name', pyxb.binding.datatypes.string, required=True)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 540, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 540, 4)
Name = property(__Name.value, __Name.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_TestBenchValueBase__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 541, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 541, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_TestBenchValueBase__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 542, 4)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 542, 4)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_TestBenchValueBase__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 543, 4)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 543, 4)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
_ElementMap.update({
__Value.name() : __Value
})
_AttributeMap.update({
__ID.name() : __ID,
__Name.name() : __Name,
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'TestBenchValueBase', TestBenchValueBase_)

# Complex type {avm}Workflow with content type ELEMENT_ONLY
class Workflow_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Workflow with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Workflow')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 550, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Element Task uses Python identifier Task
__Task = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Task'), 'Task', '__avm_Workflow__Task', True, pyxb.utils.utility.Location(u'avm.xsd', 552, 6), )
Task = property(__Task.value, __Task.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Workflow__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 554, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 554, 4)
Name = property(__Name.value, __Name.set, None, None)
_ElementMap.update({
__Task.name() : __Task
})
_AttributeMap.update({
__Name.name() : __Name
})
Namespace.addCategoryObject('typeBinding', u'Workflow', Workflow_)
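
# Editorial note: a Workflow is an ordered list of Task elements, each an
# instance of a concrete WorkflowTaskBase_ subtype. Sketch (the task subtype
# name is hypothetical):
#
#   wf = Workflow_(Name=u'Analysis')
#   wf.Task.append(SomeTask(Name=u'RunCFD'))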

# Complex type {avm}WorkflowTaskBase with content type EMPTY
class WorkflowTaskBase_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}WorkflowTaskBase with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'WorkflowTaskBase')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 556, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_WorkflowTaskBase__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 557, 4)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 557, 4)
Name = property(__Name.value, __Name.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Name.name() : __Name
})
Namespace.addCategoryObject('typeBinding', u'WorkflowTaskBase', WorkflowTaskBase_)

# Complex type {avm}Settings with content type EMPTY
class Settings_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}Settings with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Settings')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 575, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Settings', Settings_)

# Complex type {avm}DesignDomainFeature with content type EMPTY
class DesignDomainFeature_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DesignDomainFeature with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DesignDomainFeature')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 587, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'DesignDomainFeature', DesignDomainFeature_)

# Complex type {avm}Value with content type ELEMENT_ONLY
class Value_ (ValueNode_):
"""Complex type {avm}Value with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Value')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 107, 2)
_ElementMap = ValueNode_._ElementMap.copy()
_AttributeMap = ValueNode_._AttributeMap.copy()
# Base type is ValueNode_
# Element ValueExpression uses Python identifier ValueExpression
__ValueExpression = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ValueExpression'), 'ValueExpression', '__avm_Value__ValueExpression', False, pyxb.utils.utility.Location(u'avm.xsd', 111, 10), )
ValueExpression = property(__ValueExpression.value, __ValueExpression.set, None, None)
# Element DataSource uses Python identifier DataSource
__DataSource = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DataSource'), 'DataSource', '__avm_Value__DataSource', True, pyxb.utils.utility.Location(u'avm.xsd', 112, 10), )
DataSource = property(__DataSource.value, __DataSource.set, None, None)
# Attribute DimensionType uses Python identifier DimensionType
__DimensionType = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DimensionType'), 'DimensionType', '__avm_Value__DimensionType', DimensionTypeEnum)
__DimensionType._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 114, 8)
__DimensionType._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 114, 8)
DimensionType = property(__DimensionType.value, __DimensionType.set, None, None)
# Attribute DataType uses Python identifier DataType
__DataType = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DataType'), 'DataType', '__avm_Value__DataType', DataTypeEnum)
__DataType._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 115, 8)
__DataType._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 115, 8)
DataType = property(__DataType.value, __DataType.set, None, None)
# Attribute Dimensions uses Python identifier Dimensions
__Dimensions = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Dimensions'), 'Dimensions', '__avm_Value__Dimensions', pyxb.binding.datatypes.string)
__Dimensions._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 116, 8)
__Dimensions._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 116, 8)
Dimensions = property(__Dimensions.value, __Dimensions.set, None, None)
# Attribute Unit uses Python identifier Unit
__Unit = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Unit'), 'Unit', '__avm_Value__Unit', pyxb.binding.datatypes.string)
__Unit._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 117, 8)
__Unit._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 117, 8)
Unit = property(__Unit.value, __Unit.set, None, None)
# Attribute ID inherited from {avm}ValueNode
_ElementMap.update({
__ValueExpression.name() : __ValueExpression,
__DataSource.name() : __DataSource
})
_AttributeMap.update({
__DimensionType.name() : __DimensionType,
__DataType.name() : __DataType,
__Dimensions.name() : __Dimensions,
__Unit.name() : __Unit
})
Namespace.addCategoryObject('typeBinding', u'Value', Value_)
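
# Editorial usage sketch (illustrative only): Value_ pairs a Unit/Dimensions
# annotation with one ValueExpression child, which holds a concrete
# ValueExpressionType_ subtype such as FixedValue_ (defined below). Assuming
# FixedValue's Value child accepts a string literal:
#
#   v = Value_(Unit=u'kg', ValueExpression=FixedValue_(Value=u'12.5'))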

# Complex type {avm}FixedValue with content type ELEMENT_ONLY
class FixedValue_ (ValueExpressionType_):
"""Complex type {avm}FixedValue with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'FixedValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 121, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
# Element Value uses Python identifier Value
__Value = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Value'), 'Value', '__avm_FixedValue__Value', False, pyxb.utils.utility.Location(u'avm.xsd', 125, 10), )
Value = property(__Value.value, __Value.set, None, None)
# Attribute Uncertainty uses Python identifier Uncertainty
__Uncertainty = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Uncertainty'), 'Uncertainty', '__avm_FixedValue__Uncertainty', pyxb.binding.datatypes.float)
__Uncertainty._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 127, 8)
__Uncertainty._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 127, 8)
Uncertainty = property(__Uncertainty.value, __Uncertainty.set, None, None)
_ElementMap.update({
__Value.name() : __Value
})
_AttributeMap.update({
__Uncertainty.name() : __Uncertainty
})
Namespace.addCategoryObject('typeBinding', u'FixedValue', FixedValue_)
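
# Editorial sketch: FixedValue is one concrete ValueExpressionType, so an
# instance can be assigned to Value.ValueExpression.  Hypothetical numbers;
# assumes (as above) an `avm` import and that the FixedValue/Value child
# accepts simple string content:
#
#   v = avm.Value(Unit=u'm')
#   v.ValueExpression = avm.FixedValue_(Value=u'9.81', Uncertainty=0.01)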
# Complex type {avm}CalculatedValue with content type ELEMENT_ONLY
class CalculatedValue_ (ValueExpressionType_):
"""Complex type {avm}CalculatedValue with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'CalculatedValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 131, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
# Element Expression uses Python identifier Expression
__Expression = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Expression'), 'Expression', '__avm_CalculatedValue__Expression', False, pyxb.utils.utility.Location(u'avm.xsd', 135, 10), )
Expression = property(__Expression.value, __Expression.set, None, None)
# Attribute Type uses Python identifier Type
__Type = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Type'), 'Type', '__avm_CalculatedValue__Type', CalculationTypeEnum, required=True)
__Type._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 137, 8)
__Type._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 137, 8)
Type = property(__Type.value, __Type.set, None, None)
_ElementMap.update({
__Expression.name() : __Expression
})
_AttributeMap.update({
__Type.name() : __Type
})
Namespace.addCategoryObject('typeBinding', u'CalculatedValue', CalculatedValue_)
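
# Editorial sketch: CalculatedValue pairs an Expression child with a required
# Type attribute drawn from CalculationTypeEnum.  The enum literal below is
# hypothetical; the valid values come from avm.xsd:
#
#   cv = avm.CalculatedValue_(Expression=u'2 * pi * r', Type=u'Static')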
# Complex type {avm}DerivedValue with content type EMPTY
class DerivedValue_ (ValueExpressionType_):
"""Complex type {avm}DerivedValue with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DerivedValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 141, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
# Attribute ValueSource uses Python identifier ValueSource
__ValueSource = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ValueSource'), 'ValueSource', '__avm_DerivedValue__ValueSource', pyxb.binding.datatypes.IDREF, required=True)
__ValueSource._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 144, 8)
__ValueSource._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 144, 8)
ValueSource = property(__ValueSource.value, __ValueSource.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__ValueSource.name() : __ValueSource
})
Namespace.addCategoryObject('typeBinding', u'DerivedValue', DerivedValue_)
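
# Editorial sketch: DerivedValue has no children; its required ValueSource
# attribute is an IDREF naming another ValueNode's ID:
#
#   dv = avm.DerivedValue_(ValueSource=u'some-value-node-id')  # hypothetical ID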
# Complex type {avm}ParametricValue with content type ELEMENT_ONLY
class ParametricValue_ (ValueExpressionType_):
"""Complex type {avm}ParametricValue with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ParametricValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 196, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
# Element Default uses Python identifier Default
__Default = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Default'), 'Default', '__avm_ParametricValue__Default', False, pyxb.utils.utility.Location(u'avm.xsd', 200, 10), )
Default = property(__Default.value, __Default.set, None, None)
# Element Maximum uses Python identifier Maximum
__Maximum = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Maximum'), 'Maximum', '__avm_ParametricValue__Maximum', False, pyxb.utils.utility.Location(u'avm.xsd', 201, 10), )
Maximum = property(__Maximum.value, __Maximum.set, None, None)
# Element Minimum uses Python identifier Minimum
__Minimum = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Minimum'), 'Minimum', '__avm_ParametricValue__Minimum', False, pyxb.utils.utility.Location(u'avm.xsd', 202, 10), )
Minimum = property(__Minimum.value, __Minimum.set, None, None)
# Element AssignedValue uses Python identifier AssignedValue
__AssignedValue = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'AssignedValue'), 'AssignedValue', '__avm_ParametricValue__AssignedValue', False, pyxb.utils.utility.Location(u'avm.xsd', 203, 10), )
AssignedValue = property(__AssignedValue.value, __AssignedValue.set, None, None)
_ElementMap.update({
__Default.name() : __Default,
__Maximum.name() : __Maximum,
__Minimum.name() : __Minimum,
__AssignedValue.name() : __AssignedValue
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ParametricValue', ParametricValue_)
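
# Editorial sketch: ParametricValue bundles optional Default/Maximum/Minimum
# bounds with an AssignedValue.  The exact child element types are declared at
# avm.xsd lines 200-203; FixedValue children are assumed here for illustration:
#
#   pv = avm.ParametricValue_()
#   pv.Minimum = avm.FixedValue_(Value=u'0')
#   pv.Maximum = avm.FixedValue_(Value=u'100')
#   pv.AssignedValue = avm.FixedValue_(Value=u'42')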
# Complex type {avm}ProbabilisticValue with content type EMPTY
class ProbabilisticValue_ (ValueExpressionType_):
"""Complex type {avm}ProbabilisticValue with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ProbabilisticValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 209, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ProbabilisticValue', ProbabilisticValue_)
# Complex type {avm}SecurityClassification with content type EMPTY
class SecurityClassification_ (DistributionRestriction_):
"""Complex type {avm}SecurityClassification with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'SecurityClassification')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 248, 2)
_ElementMap = DistributionRestriction_._ElementMap.copy()
_AttributeMap = DistributionRestriction_._AttributeMap.copy()
# Base type is DistributionRestriction_
# Attribute Notes inherited from {avm}DistributionRestriction
# Attribute Level uses Python identifier Level
__Level = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Level'), 'Level', '__avm_SecurityClassification__Level', pyxb.binding.datatypes.string)
__Level._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 251, 8)
__Level._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 251, 8)
Level = property(__Level.value, __Level.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Level.name() : __Level
})
Namespace.addCategoryObject('typeBinding', u'SecurityClassification', SecurityClassification_)
# Complex type {avm}Proprietary with content type EMPTY
class Proprietary_ (DistributionRestriction_):
"""Complex type {avm}Proprietary with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Proprietary')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 255, 2)
_ElementMap = DistributionRestriction_._ElementMap.copy()
_AttributeMap = DistributionRestriction_._AttributeMap.copy()
# Base type is DistributionRestriction_
# Attribute Notes inherited from {avm}DistributionRestriction
# Attribute Organization uses Python identifier Organization
__Organization = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Organization'), 'Organization', '__avm_Proprietary__Organization', pyxb.binding.datatypes.string, required=True)
__Organization._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 258, 8)
__Organization._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 258, 8)
Organization = property(__Organization.value, __Organization.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Organization.name() : __Organization
})
Namespace.addCategoryObject('typeBinding', u'Proprietary', Proprietary_)
# Complex type {avm}ITAR with content type EMPTY
class ITAR_ (DistributionRestriction_):
"""Complex type {avm}ITAR with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ITAR')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 262, 2)
_ElementMap = DistributionRestriction_._ElementMap.copy()
_AttributeMap = DistributionRestriction_._AttributeMap.copy()
# Base type is DistributionRestriction_
# Attribute Notes inherited from {avm}DistributionRestriction
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ITAR', ITAR_)
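
# Editorial sketch: SecurityClassification, Proprietary, and ITAR are the
# concrete DistributionRestriction flavors defined above.  Only Proprietary
# requires an attribute (Organization; the name below is hypothetical):
#
#   r1 = avm.SecurityClassification_(Level=u'UNCLASSIFIED')
#   r2 = avm.Proprietary_(Organization=u'Example Corp')
#   r3 = avm.ITAR_()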
# Complex type {avm}PrimitiveProperty with content type ELEMENT_ONLY
class PrimitiveProperty_ (Property_):
"""Complex type {avm}PrimitiveProperty with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'PrimitiveProperty')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 281, 2)
_ElementMap = Property_._ElementMap.copy()
_AttributeMap = Property_._AttributeMap.copy()
# Base type is Property_
# Element Value uses Python identifier Value
__Value = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Value'), 'Value', '__avm_PrimitiveProperty__Value', False, pyxb.utils.utility.Location(u'avm.xsd', 285, 10), )
Value = property(__Value.value, __Value.set, None, None)
# Attribute Name inherited from {avm}Property
# Attribute OnDataSheet inherited from {avm}Property
# Attribute Notes inherited from {avm}Property
# Attribute Definition inherited from {avm}Property
# Attribute ID inherited from {avm}Property
# Attribute XPosition inherited from {avm}Property
# Attribute YPosition inherited from {avm}Property
_ElementMap.update({
__Value.name() : __Value
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'PrimitiveProperty', PrimitiveProperty_)
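
# Editorial sketch: a PrimitiveProperty wraps a single Value child and
# inherits Name and the other bookkeeping attributes from Property.  The
# child's type is assumed to be the Value complex type bound above:
#
#   p = avm.PrimitiveProperty_(Name=u'mass')
#   p.Value = avm.Value_(Unit=u'kg')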
# Complex type {avm}CompoundProperty with content type ELEMENT_ONLY
class CompoundProperty_ (Property_):
"""Complex type {avm}CompoundProperty with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'CompoundProperty')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 290, 2)
_ElementMap = Property_._ElementMap.copy()
_AttributeMap = Property_._AttributeMap.copy()
# Base type is Property_
# Element CompoundProperty uses Python identifier CompoundProperty
__CompoundProperty = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'CompoundProperty'), 'CompoundProperty', '__avm_CompoundProperty__CompoundProperty', True, pyxb.utils.utility.Location(u'avm.xsd', 294, 10), )
CompoundProperty = property(__CompoundProperty.value, __CompoundProperty.set, None, None)
# Element PrimitiveProperty uses Python identifier PrimitiveProperty
__PrimitiveProperty = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'PrimitiveProperty'), 'PrimitiveProperty', '__avm_CompoundProperty__PrimitiveProperty', True, pyxb.utils.utility.Location(u'avm.xsd', 295, 10), )
PrimitiveProperty = property(__PrimitiveProperty.value, __PrimitiveProperty.set, None, None)
# Attribute Name inherited from {avm}Property
# Attribute OnDataSheet inherited from {avm}Property
# Attribute Notes inherited from {avm}Property
# Attribute Definition inherited from {avm}Property
# Attribute ID inherited from {avm}Property
# Attribute XPosition inherited from {avm}Property
# Attribute YPosition inherited from {avm}Property
_ElementMap.update({
__CompoundProperty.name() : __CompoundProperty,
__PrimitiveProperty.name() : __PrimitiveProperty
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'CompoundProperty', CompoundProperty_)
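
# Editorial sketch: CompoundProperty nests further CompoundProperty and
# PrimitiveProperty children; both are declared repeatable, so the generated
# properties behave like lists and support append():
#
#   cp = avm.CompoundProperty_(Name=u'inertia')
#   cp.PrimitiveProperty.append(avm.PrimitiveProperty_(Name=u'Ixx'))
#   cp.PrimitiveProperty.append(avm.PrimitiveProperty_(Name=u'Iyy'))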
# Complex type {avm}ParametricEnumeratedValue with content type ELEMENT_ONLY
class ParametricEnumeratedValue_ (ValueExpressionType_):
"""Complex type {avm}ParametricEnumeratedValue with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ParametricEnumeratedValue')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 300, 2)
_ElementMap = ValueExpressionType_._ElementMap.copy()
_AttributeMap = ValueExpressionType_._AttributeMap.copy()
# Base type is ValueExpressionType_
# Element AssignedValue uses Python identifier AssignedValue
__AssignedValue = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'AssignedValue'), 'AssignedValue', '__avm_ParametricEnumeratedValue__AssignedValue', False, pyxb.utils.utility.Location(u'avm.xsd', 304, 10), )
AssignedValue = property(__AssignedValue.value, __AssignedValue.set, None, None)
# Element Enum uses Python identifier Enum
__Enum = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Enum'), 'Enum', '__avm_ParametricEnumeratedValue__Enum', True, pyxb.utils.utility.Location(u'avm.xsd', 305, 10), )
Enum = property(__Enum.value, __Enum.set, None, None)
_ElementMap.update({
__AssignedValue.name() : __AssignedValue,
__Enum.name() : __Enum
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'ParametricEnumeratedValue', ParametricEnumeratedValue_)
# Complex type {avm}DataSource with content type EMPTY
class DataSource_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}DataSource with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DataSource')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 316, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_DataSource__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 317, 4)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 317, 4)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute FromResource uses Python identifier FromResource
__FromResource = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'FromResource'), 'FromResource', '__avm_DataSource__FromResource', STD_ANON)
__FromResource._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 318, 4)
__FromResource._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 318, 4)
FromResource = property(__FromResource.value, __FromResource.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Notes.name() : __Notes,
__FromResource.name() : __FromResource
})
Namespace.addCategoryObject('typeBinding', u'DataSource', DataSource_)
# Complex type {avm}Compound with content type ELEMENT_ONLY
class Compound_ (Container_):
"""Complex type {avm}Compound with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Compound')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 355, 2)
_ElementMap = Container_._ElementMap.copy()
_AttributeMap = Container_._AttributeMap.copy()
# Base type is Container_
# Element Container (Container) inherited from {avm}Container
# Element Property (Property) inherited from {avm}Container
# Element ComponentInstance (ComponentInstance) inherited from {avm}Container
# Element Port (Port) inherited from {avm}Container
# Element Connector (Connector) inherited from {avm}Container
# Element JoinData (JoinData) inherited from {avm}Container
# Element Formula (Formula) inherited from {avm}Container
# Element ContainerFeature (ContainerFeature) inherited from {avm}Container
# Element ResourceDependency (ResourceDependency) inherited from {avm}Container
# Element DomainModel (DomainModel) inherited from {avm}Container
# Element Resource (Resource) inherited from {avm}Container
# Attribute XPosition inherited from {avm}Container
# Attribute Name inherited from {avm}Container
# Attribute YPosition inherited from {avm}Container
# Attribute ID inherited from {avm}Container
# Attribute Description inherited from {avm}Container
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Compound', Compound_)
# Complex type {avm}DesignSpaceContainer with content type ELEMENT_ONLY
class DesignSpaceContainer_ (Container_):
"""Complex type {avm}DesignSpaceContainer with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DesignSpaceContainer')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 400, 2)
_ElementMap = Container_._ElementMap.copy()
_AttributeMap = Container_._AttributeMap.copy()
# Base type is Container_
# Element Container (Container) inherited from {avm}Container
# Element Property (Property) inherited from {avm}Container
# Element ComponentInstance (ComponentInstance) inherited from {avm}Container
# Element Port (Port) inherited from {avm}Container
# Element Connector (Connector) inherited from {avm}Container
# Element JoinData (JoinData) inherited from {avm}Container
# Element Formula (Formula) inherited from {avm}Container
# Element ContainerFeature (ContainerFeature) inherited from {avm}Container
# Element ResourceDependency (ResourceDependency) inherited from {avm}Container
# Element DomainModel (DomainModel) inherited from {avm}Container
# Element Resource (Resource) inherited from {avm}Container
# Attribute XPosition inherited from {avm}Container
# Attribute Name inherited from {avm}Container
# Attribute YPosition inherited from {avm}Container
# Attribute ID inherited from {avm}Container
# Attribute Description inherited from {avm}Container
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'DesignSpaceContainer', DesignSpaceContainer_)
# Complex type {avm}PortMapTarget with content type EMPTY
class PortMapTarget_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}PortMapTarget with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'PortMapTarget')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 405, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_PortMapTarget__ID', pyxb.binding.datatypes.ID)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 406, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 406, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute PortMap uses Python identifier PortMap
__PortMap = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'PortMap'), 'PortMap', '__avm_PortMapTarget__PortMap', STD_ANON_)
__PortMap._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 407, 4)
__PortMap._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 407, 4)
PortMap = property(__PortMap.value, __PortMap.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__ID.name() : __ID,
__PortMap.name() : __PortMap
})
Namespace.addCategoryObject('typeBinding', u'PortMapTarget', PortMapTarget_)
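
# Editorial sketch: PortMapTarget's PortMap attribute uses an anonymous simple
# type (STD_ANON_), apparently a whitespace-separated list of IDs; if so, it
# accepts a Python sequence.  Target IDs below are hypothetical:
#
#   t = avm.PortMapTarget_(ID=u'port-1')
#   t.PortMap = [u'port-2', u'port-3']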
# Complex type {avm}ConnectorCompositionTarget with content type EMPTY
class ConnectorCompositionTarget_ (pyxb.binding.basis.complexTypeDefinition):
"""Complex type {avm}ConnectorCompositionTarget with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ConnectorCompositionTarget')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 420, 2)
_ElementMap = {}
_AttributeMap = {}
# Base type is pyxb.binding.datatypes.anyType
# Attribute ConnectorComposition uses Python identifier ConnectorComposition
__ConnectorComposition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ConnectorComposition'), 'ConnectorComposition', '__avm_ConnectorCompositionTarget__ConnectorComposition', STD_ANON_2)
__ConnectorComposition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 421, 4)
__ConnectorComposition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 421, 4)
ConnectorComposition = property(__ConnectorComposition.value, __ConnectorComposition.set, None, None)
# Attribute ID uses Python identifier ID
__ID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ID'), 'ID', '__avm_ConnectorCompositionTarget__ID', pyxb.binding.datatypes.string)
__ID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 426, 4)
__ID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 426, 4)
ID = property(__ID.value, __ID.set, None, None)
# Attribute ApplyJoinData uses Python identifier ApplyJoinData
__ApplyJoinData = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'ApplyJoinData'), 'ApplyJoinData', '__avm_ConnectorCompositionTarget__ApplyJoinData', STD_ANON_3)
__ApplyJoinData._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 427, 4)
__ApplyJoinData._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 427, 4)
ApplyJoinData = property(__ApplyJoinData.value, __ApplyJoinData.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__ConnectorComposition.name() : __ConnectorComposition,
__ID.name() : __ID,
__ApplyJoinData.name() : __ApplyJoinData
})
Namespace.addCategoryObject('typeBinding', u'ConnectorCompositionTarget', ConnectorCompositionTarget_)
# Complex type {avm}Formula with content type EMPTY
class Formula_ (ValueNode_):
"""Complex type {avm}Formula with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Formula')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 433, 2)
_ElementMap = ValueNode_._ElementMap.copy()
_AttributeMap = ValueNode_._AttributeMap.copy()
# Base type is ValueNode_
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Formula__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 436, 8)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 436, 8)
Name = property(__Name.value, __Name.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Formula__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 437, 8)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 437, 8)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Formula__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 438, 8)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 438, 8)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
# Attribute ID inherited from {avm}ValueNode
_ElementMap.update({
})
_AttributeMap.update({
__Name.name() : __Name,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'Formula', Formula_)
# Complex type {avm}DoDDistributionStatement with content type EMPTY
class DoDDistributionStatement_ (DistributionRestriction_):
"""Complex type {avm}DoDDistributionStatement with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DoDDistributionStatement')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 481, 2)
_ElementMap = DistributionRestriction_._ElementMap.copy()
_AttributeMap = DistributionRestriction_._AttributeMap.copy()
# Base type is DistributionRestriction_
# Attribute Notes inherited from {avm}DistributionRestriction
# Attribute Type uses Python identifier Type
__Type = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Type'), 'Type', '__avm_DoDDistributionStatement__Type', DoDDistributionStatementEnum, required=True)
__Type._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 484, 8)
__Type._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 484, 8)
Type = property(__Type.value, __Type.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Type.name() : __Type
})
Namespace.addCategoryObject('typeBinding', u'DoDDistributionStatement', DoDDistributionStatement_)
# Complex type {avm}TopLevelSystemUnderTest with content type EMPTY
class TopLevelSystemUnderTest_ (ContainerInstanceBase_):
"""Complex type {avm}TopLevelSystemUnderTest with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'TopLevelSystemUnderTest')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 513, 2)
_ElementMap = ContainerInstanceBase_._ElementMap.copy()
_AttributeMap = ContainerInstanceBase_._AttributeMap.copy()
# Base type is ContainerInstanceBase_
# Attribute DesignID uses Python identifier DesignID
__DesignID = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'DesignID'), 'DesignID', '__avm_TopLevelSystemUnderTest__DesignID', pyxb.binding.datatypes.string, required=True)
__DesignID._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 516, 8)
__DesignID._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 516, 8)
DesignID = property(__DesignID.value, __DesignID.set, None, None)
# Attribute IDinSourceModel inherited from {avm}ContainerInstanceBase
# Attribute XPosition inherited from {avm}ContainerInstanceBase
# Attribute YPosition inherited from {avm}ContainerInstanceBase
_ElementMap.update({
})
_AttributeMap.update({
__DesignID.name() : __DesignID
})
Namespace.addCategoryObject('typeBinding', u'TopLevelSystemUnderTest', TopLevelSystemUnderTest_)
# Complex type {avm}Parameter with content type ELEMENT_ONLY
class Parameter_ (TestBenchValueBase_):
"""Complex type {avm}Parameter with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Parameter')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 520, 2)
_ElementMap = TestBenchValueBase_._ElementMap.copy()
_AttributeMap = TestBenchValueBase_._AttributeMap.copy()
# Base type is TestBenchValueBase_
# Element Value (Value) inherited from {avm}TestBenchValueBase
# Attribute ID inherited from {avm}TestBenchValueBase
# Attribute Name inherited from {avm}TestBenchValueBase
# Attribute Notes inherited from {avm}TestBenchValueBase
# Attribute XPosition inherited from {avm}TestBenchValueBase
# Attribute YPosition inherited from {avm}TestBenchValueBase
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Parameter', Parameter_)
# Complex type {avm}Metric with content type ELEMENT_ONLY
class Metric_ (TestBenchValueBase_):
"""Complex type {avm}Metric with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Metric')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 525, 2)
_ElementMap = TestBenchValueBase_._ElementMap.copy()
_AttributeMap = TestBenchValueBase_._AttributeMap.copy()
# Base type is TestBenchValueBase_
# Element Value (Value) inherited from {avm}TestBenchValueBase
# Attribute ID inherited from {avm}TestBenchValueBase
# Attribute Name inherited from {avm}TestBenchValueBase
# Attribute Notes inherited from {avm}TestBenchValueBase
# Attribute XPosition inherited from {avm}TestBenchValueBase
# Attribute YPosition inherited from {avm}TestBenchValueBase
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Metric', Metric_)
# Complex type {avm}TestInjectionPoint with content type EMPTY
class TestInjectionPoint_ (ContainerInstanceBase_):
"""Complex type {avm}TestInjectionPoint with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'TestInjectionPoint')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 545, 2)
_ElementMap = ContainerInstanceBase_._ElementMap.copy()
_AttributeMap = ContainerInstanceBase_._AttributeMap.copy()
# Base type is ContainerInstanceBase_
# Attribute IDinSourceModel inherited from {avm}ContainerInstanceBase
# Attribute XPosition inherited from {avm}ContainerInstanceBase
# Attribute YPosition inherited from {avm}ContainerInstanceBase
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'TestInjectionPoint', TestInjectionPoint_)
# Complex type {avm}InterpreterTask with content type EMPTY
class InterpreterTask_ (WorkflowTaskBase_):
"""Complex type {avm}InterpreterTask with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'InterpreterTask')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 559, 2)
_ElementMap = WorkflowTaskBase_._ElementMap.copy()
_AttributeMap = WorkflowTaskBase_._AttributeMap.copy()
# Base type is WorkflowTaskBase_
# Attribute Name inherited from {avm}WorkflowTaskBase
# Attribute COMName uses Python identifier COMName
__COMName = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'COMName'), 'COMName', '__avm_InterpreterTask__COMName', pyxb.binding.datatypes.string, required=True)
__COMName._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 562, 8)
__COMName._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 562, 8)
COMName = property(__COMName.value, __COMName.set, None, None)
# Attribute Parameters uses Python identifier Parameters
__Parameters = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Parameters'), 'Parameters', '__avm_InterpreterTask__Parameters', pyxb.binding.datatypes.string)
__Parameters._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 563, 8)
__Parameters._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 563, 8)
Parameters = property(__Parameters.value, __Parameters.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__COMName.name() : __COMName,
__Parameters.name() : __Parameters
})
Namespace.addCategoryObject('typeBinding', u'InterpreterTask', InterpreterTask_)
# Complex type {avm}ExecutionTask with content type EMPTY
class ExecutionTask_ (WorkflowTaskBase_):
"""Complex type {avm}ExecutionTask with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ExecutionTask')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 567, 2)
_ElementMap = WorkflowTaskBase_._ElementMap.copy()
_AttributeMap = WorkflowTaskBase_._AttributeMap.copy()
# Base type is WorkflowTaskBase_
# Attribute Name inherited from {avm}WorkflowTaskBase
# Attribute Description uses Python identifier Description
__Description = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Description'), 'Description', '__avm_ExecutionTask__Description', pyxb.binding.datatypes.string)
__Description._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 570, 8)
__Description._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 570, 8)
Description = property(__Description.value, __Description.set, None, None)
# Attribute Invocation uses Python identifier Invocation
__Invocation = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Invocation'), 'Invocation', '__avm_ExecutionTask__Invocation', pyxb.binding.datatypes.string, required=True)
__Invocation._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 571, 8)
__Invocation._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 571, 8)
Invocation = property(__Invocation.value, __Invocation.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Description.name() : __Description,
__Invocation.name() : __Invocation
})
Namespace.addCategoryObject('typeBinding', u'ExecutionTask', ExecutionTask_)
# Complex type {avm}ValueFlowMux with content type EMPTY
class ValueFlowMux_ (ValueNode_):
"""Complex type {avm}ValueFlowMux with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ValueFlowMux')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 576, 2)
_ElementMap = ValueNode_._ElementMap.copy()
_AttributeMap = ValueNode_._AttributeMap.copy()
# Base type is ValueNode_
# Attribute ID inherited from {avm}ValueNode
# Attribute Source uses Python identifier Source
__Source = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Source'), 'Source', '__avm_ValueFlowMux__Source', STD_ANON_5)
__Source._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 579, 8)
__Source._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 579, 8)
Source = property(__Source.value, __Source.set, None, None)
_ElementMap.update({
})
_AttributeMap.update({
__Source.name() : __Source
})
Namespace.addCategoryObject('typeBinding', u'ValueFlowMux', ValueFlowMux_)
# Complex type {avm}Connector with content type ELEMENT_ONLY
class Connector_ (ConnectorCompositionTarget_):
"""Complex type {avm}Connector with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Connector')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 157, 2)
_ElementMap = ConnectorCompositionTarget_._ElementMap.copy()
_AttributeMap = ConnectorCompositionTarget_._AttributeMap.copy()
# Base type is ConnectorCompositionTarget_
# Element Role uses Python identifier Role
__Role = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Role'), 'Role', '__avm_Connector__Role', True, pyxb.utils.utility.Location(u'avm.xsd', 161, 10), )
Role = property(__Role.value, __Role.set, None, None)
# Element Property uses Python identifier Property
__Property = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Property'), 'Property', '__avm_Connector__Property', True, pyxb.utils.utility.Location(u'avm.xsd', 162, 10), )
Property = property(__Property.value, __Property.set, None, None)
# Element DefaultJoin uses Python identifier DefaultJoin
__DefaultJoin = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'DefaultJoin'), 'DefaultJoin', '__avm_Connector__DefaultJoin', True, pyxb.utils.utility.Location(u'avm.xsd', 163, 10), )
DefaultJoin = property(__DefaultJoin.value, __DefaultJoin.set, None, None)
# Element Connector uses Python identifier Connector
__Connector = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Connector'), 'Connector', '__avm_Connector__Connector', True, pyxb.utils.utility.Location(u'avm.xsd', 164, 10), )
Connector = property(__Connector.value, __Connector.set, None, None)
# Element ConnectorFeature uses Python identifier ConnectorFeature
__ConnectorFeature = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ConnectorFeature'), 'ConnectorFeature', '__avm_Connector__ConnectorFeature', True, pyxb.utils.utility.Location(u'avm.xsd', 165, 10), )
ConnectorFeature = property(__ConnectorFeature.value, __ConnectorFeature.set, None, None)
# Attribute Definition uses Python identifier Definition
__Definition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Definition'), 'Definition', '__avm_Connector__Definition', pyxb.binding.datatypes.anyURI)
__Definition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 167, 8)
__Definition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 167, 8)
Definition = property(__Definition.value, __Definition.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Connector__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 168, 8)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 168, 8)
Name = property(__Name.value, __Name.set, None, None)
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_Connector__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 169, 8)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 169, 8)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Connector__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 170, 8)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 170, 8)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Connector__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 171, 8)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 171, 8)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
# Attribute ConnectorComposition inherited from {avm}ConnectorCompositionTarget
# Attribute ID inherited from {avm}ConnectorCompositionTarget
# Attribute ApplyJoinData inherited from {avm}ConnectorCompositionTarget
_ElementMap.update({
__Role.name() : __Role,
__Property.name() : __Property,
__DefaultJoin.name() : __DefaultJoin,
__Connector.name() : __Connector,
__ConnectorFeature.name() : __ConnectorFeature
})
_AttributeMap.update({
__Definition.name() : __Definition,
__Name.name() : __Name,
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__YPosition.name() : __YPosition
})
Namespace.addCategoryObject('typeBinding', u'Connector', Connector_)
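
# Editorial sketch: Connector aggregates repeatable Role, Property,
# DefaultJoin, nested Connector, and ConnectorFeature children on top of the
# ConnectorCompositionTarget attributes.  Assumes PrimitiveProperty is an
# acceptable Property child here:
#
#   c = avm.Connector_(Name=u'PowerIn', ID=u'conn-1')
#   c.Property.append(avm.PrimitiveProperty_(Name=u'voltage'))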
# Complex type {avm}Port with content type EMPTY
class Port_ (PortMapTarget_):
"""Complex type {avm}Port with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Port')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 175, 2)
_ElementMap = PortMapTarget_._ElementMap.copy()
_AttributeMap = PortMapTarget_._AttributeMap.copy()
# Base type is PortMapTarget_
# Attribute Notes uses Python identifier Notes
__Notes = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Notes'), 'Notes', '__avm_Port__Notes', pyxb.binding.datatypes.string)
__Notes._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 178, 8)
__Notes._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 178, 8)
Notes = property(__Notes.value, __Notes.set, None, None)
# Attribute XPosition uses Python identifier XPosition
__XPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'XPosition'), 'XPosition', '__avm_Port__XPosition', pyxb.binding.datatypes.unsignedInt)
__XPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 179, 8)
__XPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 179, 8)
XPosition = property(__XPosition.value, __XPosition.set, None, None)
# Attribute Definition uses Python identifier Definition
__Definition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Definition'), 'Definition', '__avm_Port__Definition', pyxb.binding.datatypes.anyURI)
__Definition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 180, 8)
__Definition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 180, 8)
Definition = property(__Definition.value, __Definition.set, None, None)
# Attribute YPosition uses Python identifier YPosition
__YPosition = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'YPosition'), 'YPosition', '__avm_Port__YPosition', pyxb.binding.datatypes.unsignedInt)
__YPosition._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 181, 8)
__YPosition._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 181, 8)
YPosition = property(__YPosition.value, __YPosition.set, None, None)
# Attribute Name uses Python identifier Name
__Name = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Name'), 'Name', '__avm_Port__Name', pyxb.binding.datatypes.string)
__Name._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 182, 8)
__Name._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 182, 8)
Name = property(__Name.value, __Name.set, None, None)
# Attribute ID inherited from {avm}PortMapTarget
# Attribute PortMap inherited from {avm}PortMapTarget
_ElementMap.update({
})
_AttributeMap.update({
__Notes.name() : __Notes,
__XPosition.name() : __XPosition,
__Definition.name() : __Definition,
__YPosition.name() : __YPosition,
__Name.name() : __Name
})
Namespace.addCategoryObject('typeBinding', u'Port', Port_)
# Complex type {avm}NormalDistribution with content type ELEMENT_ONLY
class NormalDistribution_ (ProbabilisticValue_):
"""Complex type {avm}NormalDistribution with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'NormalDistribution')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 214, 2)
_ElementMap = ProbabilisticValue_._ElementMap.copy()
_AttributeMap = ProbabilisticValue_._AttributeMap.copy()
# Base type is ProbabilisticValue_
# Element Mean uses Python identifier Mean
__Mean = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Mean'), 'Mean', '__avm_NormalDistribution__Mean', False, pyxb.utils.utility.Location(u'avm.xsd', 218, 10), )
Mean = property(__Mean.value, __Mean.set, None, None)
# Element StandardDeviation uses Python identifier StandardDeviation
__StandardDeviation = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'StandardDeviation'), 'StandardDeviation', '__avm_NormalDistribution__StandardDeviation', False, pyxb.utils.utility.Location(u'avm.xsd', 219, 10), )
StandardDeviation = property(__StandardDeviation.value, __StandardDeviation.set, None, None)
_ElementMap.update({
__Mean.name() : __Mean,
__StandardDeviation.name() : __StandardDeviation
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'NormalDistribution', NormalDistribution_)
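
# Editorial sketch: NormalDistribution is a ProbabilisticValue parameterized
# by Mean and StandardDeviation children.  Their exact types are declared at
# avm.xsd lines 218-219; Value children are assumed here for illustration:
#
#   nd = avm.NormalDistribution_()
#   nd.Mean = avm.Value_()
#   nd.StandardDeviation = avm.Value_()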
# Complex type {avm}UniformDistribution with content type EMPTY
class UniformDistribution_ (ProbabilisticValue_):
"""Complex type {avm}UniformDistribution with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'UniformDistribution')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 276, 2)
_ElementMap = ProbabilisticValue_._ElementMap.copy()
_AttributeMap = ProbabilisticValue_._AttributeMap.copy()
# Base type is ProbabilisticValue_
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'UniformDistribution', UniformDistribution_)
# Complex type {avm}Optional with content type ELEMENT_ONLY
class Optional_ (DesignSpaceContainer_):
"""Complex type {avm}Optional with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Optional')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 360, 2)
_ElementMap = DesignSpaceContainer_._ElementMap.copy()
_AttributeMap = DesignSpaceContainer_._AttributeMap.copy()
# Base type is DesignSpaceContainer_
# Element Container (Container) inherited from {avm}Container
# Element Property (Property) inherited from {avm}Container
# Element ComponentInstance (ComponentInstance) inherited from {avm}Container
# Element Port (Port) inherited from {avm}Container
# Element Connector (Connector) inherited from {avm}Container
# Element JoinData (JoinData) inherited from {avm}Container
# Element Formula (Formula) inherited from {avm}Container
# Element ContainerFeature (ContainerFeature) inherited from {avm}Container
# Element ResourceDependency (ResourceDependency) inherited from {avm}Container
# Element DomainModel (DomainModel) inherited from {avm}Container
# Element Resource (Resource) inherited from {avm}Container
# Attribute XPosition inherited from {avm}Container
# Attribute Name inherited from {avm}Container
# Attribute YPosition inherited from {avm}Container
# Attribute ID inherited from {avm}Container
# Attribute Description inherited from {avm}Container
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Optional', Optional_)
# Complex type {avm}Alternative with content type ELEMENT_ONLY
class Alternative_ (DesignSpaceContainer_):
"""Complex type {avm}Alternative with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'Alternative')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 365, 2)
_ElementMap = DesignSpaceContainer_._ElementMap.copy()
_AttributeMap = DesignSpaceContainer_._AttributeMap.copy()
# Base type is DesignSpaceContainer_
# Element Container (Container) inherited from {avm}Container
# Element Property (Property) inherited from {avm}Container
# Element ComponentInstance (ComponentInstance) inherited from {avm}Container
# Element Port (Port) inherited from {avm}Container
# Element Connector (Connector) inherited from {avm}Container
# Element JoinData (JoinData) inherited from {avm}Container
# Element Formula (Formula) inherited from {avm}Container
# Element ContainerFeature (ContainerFeature) inherited from {avm}Container
# Element ResourceDependency (ResourceDependency) inherited from {avm}Container
# Element DomainModel (DomainModel) inherited from {avm}Container
# Element Resource (Resource) inherited from {avm}Container
# Element ValueFlowMux uses Python identifier ValueFlowMux
__ValueFlowMux = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'ValueFlowMux'), 'ValueFlowMux', '__avm_Alternative__ValueFlowMux', True, pyxb.utils.utility.Location(u'avm.xsd', 369, 10), )
ValueFlowMux = property(__ValueFlowMux.value, __ValueFlowMux.set, None, None)
# Attribute XPosition inherited from {avm}Container
# Attribute Name inherited from {avm}Container
# Attribute YPosition inherited from {avm}Container
# Attribute ID inherited from {avm}Container
# Attribute Description inherited from {avm}Container
_ElementMap.update({
__ValueFlowMux.name() : __ValueFlowMux
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'Alternative', Alternative_)
# Complex type {avm}ComponentPortInstance with content type EMPTY
class ComponentPortInstance_ (PortMapTarget_):
"""Complex type {avm}ComponentPortInstance with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ComponentPortInstance')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 387, 2)
_ElementMap = PortMapTarget_._ElementMap.copy()
_AttributeMap = PortMapTarget_._AttributeMap.copy()
# Base type is PortMapTarget_
# Attribute IDinComponentModel uses Python identifier IDinComponentModel
__IDinComponentModel = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'IDinComponentModel'), 'IDinComponentModel', '__avm_ComponentPortInstance__IDinComponentModel', pyxb.binding.datatypes.string, required=True)
__IDinComponentModel._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 390, 8)
__IDinComponentModel._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 390, 8)
IDinComponentModel = property(__IDinComponentModel.value, __IDinComponentModel.set, None, None)
# Attribute ID inherited from {avm}PortMapTarget
# Attribute PortMap inherited from {avm}PortMapTarget
_ElementMap.update({
})
_AttributeMap.update({
__IDinComponentModel.name() : __IDinComponentModel
})
Namespace.addCategoryObject('typeBinding', u'ComponentPortInstance', ComponentPortInstance_)
# Complex type {avm}ComponentConnectorInstance with content type EMPTY
class ComponentConnectorInstance_ (ConnectorCompositionTarget_):
"""Complex type {avm}ComponentConnectorInstance with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ComponentConnectorInstance')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 413, 2)
_ElementMap = ConnectorCompositionTarget_._ElementMap.copy()
_AttributeMap = ConnectorCompositionTarget_._AttributeMap.copy()
# Base type is ConnectorCompositionTarget_
# Attribute IDinComponentModel uses Python identifier IDinComponentModel
__IDinComponentModel = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'IDinComponentModel'), 'IDinComponentModel', '__avm_ComponentConnectorInstance__IDinComponentModel', pyxb.binding.datatypes.string, required=True)
__IDinComponentModel._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 416, 8)
__IDinComponentModel._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 416, 8)
IDinComponentModel = property(__IDinComponentModel.value, __IDinComponentModel.set, None, None)
# Attribute ConnectorComposition inherited from {avm}ConnectorCompositionTarget
# Attribute ID inherited from {avm}ConnectorCompositionTarget
# Attribute ApplyJoinData inherited from {avm}ConnectorCompositionTarget
_ElementMap.update({
})
_AttributeMap.update({
__IDinComponentModel.name() : __IDinComponentModel
})
Namespace.addCategoryObject('typeBinding', u'ComponentConnectorInstance', ComponentConnectorInstance_)
# Complex type {avm}SimpleFormula with content type EMPTY
class SimpleFormula_ (Formula_):
"""Complex type {avm}SimpleFormula with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'SimpleFormula')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 442, 2)
_ElementMap = Formula_._ElementMap.copy()
_AttributeMap = Formula_._AttributeMap.copy()
# Base type is Formula_
# Attribute Name inherited from {avm}Formula
# Attribute XPosition inherited from {avm}Formula
# Attribute YPosition inherited from {avm}Formula
# Attribute Operation uses Python identifier Operation
__Operation = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Operation'), 'Operation', '__avm_SimpleFormula__Operation', SimpleFormulaOperation)
__Operation._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 445, 8)
__Operation._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 445, 8)
Operation = property(__Operation.value, __Operation.set, None, None)
# Attribute Operand uses Python identifier Operand
__Operand = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Operand'), 'Operand', '__avm_SimpleFormula__Operand', STD_ANON_4, required=True)
__Operand._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 446, 8)
__Operand._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 446, 8)
Operand = property(__Operand.value, __Operand.set, None, None)
# Attribute ID inherited from {avm}ValueNode
_ElementMap.update({
})
_AttributeMap.update({
__Operation.name() : __Operation,
__Operand.name() : __Operand
})
Namespace.addCategoryObject('typeBinding', u'SimpleFormula', SimpleFormula_)
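
# Editorial sketch: SimpleFormula applies an Operation (a
# SimpleFormulaOperation literal) across the ValueNodes named by its required
# Operand attribute, assumed here to be an ID list.  Both the literal and the
# IDs below are hypothetical:
#
#   f = avm.SimpleFormula_(Operation=u'Addition',
#                          Operand=[u'val-1', u'val-2'])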
# Complex type {avm}ComplexFormula with content type ELEMENT_ONLY
class ComplexFormula_ (Formula_):
"""Complex type {avm}ComplexFormula with content type ELEMENT_ONLY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_ELEMENT_ONLY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'ComplexFormula')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 467, 2)
_ElementMap = Formula_._ElementMap.copy()
_AttributeMap = Formula_._AttributeMap.copy()
# Base type is Formula_
# Element Operand uses Python identifier Operand
__Operand = pyxb.binding.content.ElementDeclaration(pyxb.namespace.ExpandedName(None, u'Operand'), 'Operand', '__avm_ComplexFormula__Operand', True, pyxb.utils.utility.Location(u'avm.xsd', 471, 10), )
Operand = property(__Operand.value, __Operand.set, None, None)
# Attribute Name inherited from {avm}Formula
# Attribute XPosition inherited from {avm}Formula
# Attribute YPosition inherited from {avm}Formula
# Attribute ID inherited from {avm}ValueNode
# Attribute Expression uses Python identifier Expression
__Expression = pyxb.binding.content.AttributeUse(pyxb.namespace.ExpandedName(None, u'Expression'), 'Expression', '__avm_ComplexFormula__Expression', pyxb.binding.datatypes.string, required=True)
__Expression._DeclarationLocation = pyxb.utils.utility.Location(u'avm.xsd', 473, 8)
__Expression._UseLocation = pyxb.utils.utility.Location(u'avm.xsd', 473, 8)
Expression = property(__Expression.value, __Expression.set, None, None)
_ElementMap.update({
__Operand.name() : __Operand
})
_AttributeMap.update({
__Expression.name() : __Expression
})
Namespace.addCategoryObject('typeBinding', u'ComplexFormula', ComplexFormula_)
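
# Editorial sketch: ComplexFormula instead carries a required free-form
# Expression string, plus repeatable Operand children that bind the symbols
# used in the expression to ValueNodes:
#
#   f = avm.ComplexFormula_(Expression=u'a + b * c')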
# Complex type {avm}DomainModelPort with content type EMPTY
class DomainModelPort_ (Port_):
"""Complex type {avm}DomainModelPort with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = True
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'DomainModelPort')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 186, 2)
_ElementMap = Port_._ElementMap.copy()
_AttributeMap = Port_._AttributeMap.copy()
# Base type is Port_
# Attribute Notes inherited from {avm}Port
# Attribute XPosition inherited from {avm}Port
# Attribute Definition inherited from {avm}Port
# Attribute YPosition inherited from {avm}Port
# Attribute Name inherited from {avm}Port
# Attribute ID inherited from {avm}PortMapTarget
# Attribute PortMap inherited from {avm}PortMapTarget
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'DomainModelPort', DomainModelPort_)
# Complex type {avm}AbstractPort with content type EMPTY
class AbstractPort_ (Port_):
"""Complex type {avm}AbstractPort with content type EMPTY"""
_TypeDefinition = None
_ContentTypeTag = pyxb.binding.basis.complexTypeDefinition._CT_EMPTY
_Abstract = False
_ExpandedName = pyxb.namespace.ExpandedName(Namespace, u'AbstractPort')
_XSDLocation = pyxb.utils.utility.Location(u'avm.xsd', 310, 2)
_ElementMap = Port_._ElementMap.copy()
_AttributeMap = Port_._AttributeMap.copy()
# Base type is Port_
# Attribute Notes inherited from {avm}Port
# Attribute XPosition inherited from {avm}Port
# Attribute Definition inherited from {avm}Port
# Attribute YPosition inherited from {avm}Port
# Attribute Name inherited from {avm}Port
# Attribute ID inherited from {avm}PortMapTarget
# Attribute PortMap inherited from {avm}PortMapTarget
_ElementMap.update({
})
_AttributeMap.update({
})
Namespace.addCategoryObject('typeBinding', u'AbstractPort', AbstractPort_)
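# Note: despite its name, AbstractPort_ is generated with _Abstract = False,
# whereas DomainModelPort_ above has _Abstract = True; only the latter
# refuses direct instantiation. Both are empty-content subclasses of Port_
# and add no attributes of their own.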
Component = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Component'), Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 4, 2))
Namespace.addCategoryObject('elementBinding', Component.name().localName(), Component)
DomainModel = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DomainModel'), DomainModel_, location=pyxb.utils.utility.Location(u'avm.xsd', 5, 2))
Namespace.addCategoryObject('elementBinding', DomainModel.name().localName(), DomainModel)
Property = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Property'), Property_, location=pyxb.utils.utility.Location(u'avm.xsd', 6, 2))
Namespace.addCategoryObject('elementBinding', Property.name().localName(), Property)
Resource = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Resource'), Resource_, location=pyxb.utils.utility.Location(u'avm.xsd', 11, 2))
Namespace.addCategoryObject('elementBinding', Resource.name().localName(), Resource)
DomainModelParameter = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DomainModelParameter'), DomainModelParameter_, location=pyxb.utils.utility.Location(u'avm.xsd', 15, 2))
Namespace.addCategoryObject('elementBinding', DomainModelParameter.name().localName(), DomainModelParameter)
ValueExpressionType = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ValueExpressionType'), ValueExpressionType_, location=pyxb.utils.utility.Location(u'avm.xsd', 17, 2))
Namespace.addCategoryObject('elementBinding', ValueExpressionType.name().localName(), ValueExpressionType)
DistributionRestriction = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DistributionRestriction'), DistributionRestriction_, location=pyxb.utils.utility.Location(u'avm.xsd', 20, 2))
Namespace.addCategoryObject('elementBinding', DistributionRestriction.name().localName(), DistributionRestriction)
DomainModelMetric = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DomainModelMetric'), DomainModelMetric_, location=pyxb.utils.utility.Location(u'avm.xsd', 24, 2))
Namespace.addCategoryObject('elementBinding', DomainModelMetric.name().localName(), DomainModelMetric)
AnalysisConstruct = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'AnalysisConstruct'), AnalysisConstruct_, location=pyxb.utils.utility.Location(u'avm.xsd', 30, 2))
Namespace.addCategoryObject('elementBinding', AnalysisConstruct.name().localName(), AnalysisConstruct)
Design = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Design'), Design_, location=pyxb.utils.utility.Location(u'avm.xsd', 32, 2))
Namespace.addCategoryObject('elementBinding', Design.name().localName(), Design)
Container = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Container'), Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 33, 2))
Namespace.addCategoryObject('elementBinding', Container.name().localName(), Container)
ComponentInstance = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ComponentInstance'), ComponentInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 37, 2))
Namespace.addCategoryObject('elementBinding', ComponentInstance.name().localName(), ComponentInstance)
ComponentPrimitivePropertyInstance = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ComponentPrimitivePropertyInstance'), ComponentPrimitivePropertyInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 39, 2))
Namespace.addCategoryObject('elementBinding', ComponentPrimitivePropertyInstance.name().localName(), ComponentPrimitivePropertyInstance)
ValueNode = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ValueNode'), ValueNode_, location=pyxb.utils.utility.Location(u'avm.xsd', 46, 2))
Namespace.addCategoryObject('elementBinding', ValueNode.name().localName(), ValueNode)
Operand = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Operand'), Operand_, location=pyxb.utils.utility.Location(u'avm.xsd', 48, 2))
Namespace.addCategoryObject('elementBinding', Operand.name().localName(), Operand)
ConnectorFeature = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ConnectorFeature'), ConnectorFeature_, location=pyxb.utils.utility.Location(u'avm.xsd', 50, 2))
Namespace.addCategoryObject('elementBinding', ConnectorFeature.name().localName(), ConnectorFeature)
ContainerFeature = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ContainerFeature'), ContainerFeature_, location=pyxb.utils.utility.Location(u'avm.xsd', 51, 2))
Namespace.addCategoryObject('elementBinding', ContainerFeature.name().localName(), ContainerFeature)
DomainMapping = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DomainMapping'), DomainMapping_, location=pyxb.utils.utility.Location(u'avm.xsd', 52, 2))
Namespace.addCategoryObject('elementBinding', DomainMapping.name().localName(), DomainMapping)
TestBench = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'TestBench'), TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 53, 2))
Namespace.addCategoryObject('elementBinding', TestBench.name().localName(), TestBench)
ContainerInstanceBase = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ContainerInstanceBase'), ContainerInstanceBase_, location=pyxb.utils.utility.Location(u'avm.xsd', 57, 2))
Namespace.addCategoryObject('elementBinding', ContainerInstanceBase.name().localName(), ContainerInstanceBase)
TestBenchValueBase = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'TestBenchValueBase'), TestBenchValueBase_, location=pyxb.utils.utility.Location(u'avm.xsd', 58, 2))
Namespace.addCategoryObject('elementBinding', TestBenchValueBase.name().localName(), TestBenchValueBase)
Workflow = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Workflow'), Workflow_, location=pyxb.utils.utility.Location(u'avm.xsd', 60, 2))
Namespace.addCategoryObject('elementBinding', Workflow.name().localName(), Workflow)
WorkflowTaskBase = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'WorkflowTaskBase'), WorkflowTaskBase_, location=pyxb.utils.utility.Location(u'avm.xsd', 61, 2))
Namespace.addCategoryObject('elementBinding', WorkflowTaskBase.name().localName(), WorkflowTaskBase)
Settings = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Settings'), Settings_, location=pyxb.utils.utility.Location(u'avm.xsd', 64, 2))
Namespace.addCategoryObject('elementBinding', Settings.name().localName(), Settings)
DesignDomainFeature = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DesignDomainFeature'), DesignDomainFeature_, location=pyxb.utils.utility.Location(u'avm.xsd', 66, 2))
Namespace.addCategoryObject('elementBinding', DesignDomainFeature.name().localName(), DesignDomainFeature)
Value = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Value'), Value_, location=pyxb.utils.utility.Location(u'avm.xsd', 7, 2))
Namespace.addCategoryObject('elementBinding', Value.name().localName(), Value)
FixedValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'FixedValue'), FixedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 8, 2))
Namespace.addCategoryObject('elementBinding', FixedValue.name().localName(), FixedValue)
CalculatedValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'CalculatedValue'), CalculatedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 9, 2))
Namespace.addCategoryObject('elementBinding', CalculatedValue.name().localName(), CalculatedValue)
DerivedValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DerivedValue'), DerivedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 10, 2))
Namespace.addCategoryObject('elementBinding', DerivedValue.name().localName(), DerivedValue)
ParametricValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ParametricValue'), ParametricValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 16, 2))
Namespace.addCategoryObject('elementBinding', ParametricValue.name().localName(), ParametricValue)
ProbabilisticValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ProbabilisticValue'), ProbabilisticValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 18, 2))
Namespace.addCategoryObject('elementBinding', ProbabilisticValue.name().localName(), ProbabilisticValue)
SecurityClassification = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'SecurityClassification'), SecurityClassification_, location=pyxb.utils.utility.Location(u'avm.xsd', 21, 2))
Namespace.addCategoryObject('elementBinding', SecurityClassification.name().localName(), SecurityClassification)
Proprietary = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Proprietary'), Proprietary_, location=pyxb.utils.utility.Location(u'avm.xsd', 22, 2))
Namespace.addCategoryObject('elementBinding', Proprietary.name().localName(), Proprietary)
ITAR = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ITAR'), ITAR_, location=pyxb.utils.utility.Location(u'avm.xsd', 23, 2))
Namespace.addCategoryObject('elementBinding', ITAR.name().localName(), ITAR)
PrimitiveProperty = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'PrimitiveProperty'), PrimitiveProperty_, location=pyxb.utils.utility.Location(u'avm.xsd', 26, 2))
Namespace.addCategoryObject('elementBinding', PrimitiveProperty.name().localName(), PrimitiveProperty)
CompoundProperty = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'CompoundProperty'), CompoundProperty_, location=pyxb.utils.utility.Location(u'avm.xsd', 27, 2))
Namespace.addCategoryObject('elementBinding', CompoundProperty.name().localName(), CompoundProperty)
ParametricEnumeratedValue = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ParametricEnumeratedValue'), ParametricEnumeratedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 28, 2))
Namespace.addCategoryObject('elementBinding', ParametricEnumeratedValue.name().localName(), ParametricEnumeratedValue)
DataSource = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DataSource'), DataSource_, location=pyxb.utils.utility.Location(u'avm.xsd', 31, 2))
Namespace.addCategoryObject('elementBinding', DataSource.name().localName(), DataSource)
Compound = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Compound'), Compound_, location=pyxb.utils.utility.Location(u'avm.xsd', 34, 2))
Namespace.addCategoryObject('elementBinding', Compound.name().localName(), Compound)
DesignSpaceContainer = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DesignSpaceContainer'), DesignSpaceContainer_, location=pyxb.utils.utility.Location(u'avm.xsd', 40, 2))
Namespace.addCategoryObject('elementBinding', DesignSpaceContainer.name().localName(), DesignSpaceContainer)
PortMapTarget = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'PortMapTarget'), PortMapTarget_, location=pyxb.utils.utility.Location(u'avm.xsd', 41, 2))
Namespace.addCategoryObject('elementBinding', PortMapTarget.name().localName(), PortMapTarget)
ConnectorCompositionTarget = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ConnectorCompositionTarget'), ConnectorCompositionTarget_, location=pyxb.utils.utility.Location(u'avm.xsd', 43, 2))
Namespace.addCategoryObject('elementBinding', ConnectorCompositionTarget.name().localName(), ConnectorCompositionTarget)
Formula = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Formula'), Formula_, location=pyxb.utils.utility.Location(u'avm.xsd', 44, 2))
Namespace.addCategoryObject('elementBinding', Formula.name().localName(), Formula)
DoDDistributionStatement = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DoDDistributionStatement'), DoDDistributionStatement_, location=pyxb.utils.utility.Location(u'avm.xsd', 49, 2))
Namespace.addCategoryObject('elementBinding', DoDDistributionStatement.name().localName(), DoDDistributionStatement)
TopLevelSystemUnderTest = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'TopLevelSystemUnderTest'), TopLevelSystemUnderTest_, location=pyxb.utils.utility.Location(u'avm.xsd', 54, 2))
Namespace.addCategoryObject('elementBinding', TopLevelSystemUnderTest.name().localName(), TopLevelSystemUnderTest)
Parameter = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Parameter'), Parameter_, location=pyxb.utils.utility.Location(u'avm.xsd', 55, 2))
Namespace.addCategoryObject('elementBinding', Parameter.name().localName(), Parameter)
Metric = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Metric'), Metric_, location=pyxb.utils.utility.Location(u'avm.xsd', 56, 2))
Namespace.addCategoryObject('elementBinding', Metric.name().localName(), Metric)
TestInjectionPoint = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'TestInjectionPoint'), TestInjectionPoint_, location=pyxb.utils.utility.Location(u'avm.xsd', 59, 2))
Namespace.addCategoryObject('elementBinding', TestInjectionPoint.name().localName(), TestInjectionPoint)
InterpreterTask = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'InterpreterTask'), InterpreterTask_, location=pyxb.utils.utility.Location(u'avm.xsd', 62, 2))
Namespace.addCategoryObject('elementBinding', InterpreterTask.name().localName(), InterpreterTask)
ExecutionTask = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ExecutionTask'), ExecutionTask_, location=pyxb.utils.utility.Location(u'avm.xsd', 63, 2))
Namespace.addCategoryObject('elementBinding', ExecutionTask.name().localName(), ExecutionTask)
ValueFlowMux = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ValueFlowMux'), ValueFlowMux_, location=pyxb.utils.utility.Location(u'avm.xsd', 65, 2))
Namespace.addCategoryObject('elementBinding', ValueFlowMux.name().localName(), ValueFlowMux)
Connector = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Connector'), Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 12, 2))
Namespace.addCategoryObject('elementBinding', Connector.name().localName(), Connector)
Port = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Port'), Port_, location=pyxb.utils.utility.Location(u'avm.xsd', 13, 2))
Namespace.addCategoryObject('elementBinding', Port.name().localName(), Port)
NormalDistribution = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'NormalDistribution'), NormalDistribution_, location=pyxb.utils.utility.Location(u'avm.xsd', 19, 2))
Namespace.addCategoryObject('elementBinding', NormalDistribution.name().localName(), NormalDistribution)
UniformDistribution = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'UniformDistribution'), UniformDistribution_, location=pyxb.utils.utility.Location(u'avm.xsd', 25, 2))
Namespace.addCategoryObject('elementBinding', UniformDistribution.name().localName(), UniformDistribution)
Optional = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Optional'), Optional_, location=pyxb.utils.utility.Location(u'avm.xsd', 35, 2))
Namespace.addCategoryObject('elementBinding', Optional.name().localName(), Optional)
Alternative = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'Alternative'), Alternative_, location=pyxb.utils.utility.Location(u'avm.xsd', 36, 2))
Namespace.addCategoryObject('elementBinding', Alternative.name().localName(), Alternative)
ComponentPortInstance = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ComponentPortInstance'), ComponentPortInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 38, 2))
Namespace.addCategoryObject('elementBinding', ComponentPortInstance.name().localName(), ComponentPortInstance)
ComponentConnectorInstance = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ComponentConnectorInstance'), ComponentConnectorInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 42, 2))
Namespace.addCategoryObject('elementBinding', ComponentConnectorInstance.name().localName(), ComponentConnectorInstance)
SimpleFormula = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'SimpleFormula'), SimpleFormula_, location=pyxb.utils.utility.Location(u'avm.xsd', 45, 2))
Namespace.addCategoryObject('elementBinding', SimpleFormula.name().localName(), SimpleFormula)
ComplexFormula = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'ComplexFormula'), ComplexFormula_, location=pyxb.utils.utility.Location(u'avm.xsd', 47, 2))
Namespace.addCategoryObject('elementBinding', ComplexFormula.name().localName(), ComplexFormula)
DomainModelPort = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'DomainModelPort'), DomainModelPort_, location=pyxb.utils.utility.Location(u'avm.xsd', 14, 2))
Namespace.addCategoryObject('elementBinding', DomainModelPort.name().localName(), DomainModelPort)
AbstractPort = pyxb.binding.basis.element(pyxb.namespace.ExpandedName(Namespace, u'AbstractPort'), AbstractPort_, location=pyxb.utils.utility.Location(u'avm.xsd', 29, 2))
Namespace.addCategoryObject('elementBinding', AbstractPort.name().localName(), AbstractPort)
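# The elementBinding registrations above map each global XSD element to its
# binding class; this is what lets the CreateFromDocument helper that PyXB
# emits near the top of generated modules pick the right class from a
# document's root element. A minimal parsing sketch, assuming this module is
# importable as ``avm``:
#
#   import avm
#   doc = avm.CreateFromDocument(open('design.adm').read())  # hypothetical path
#   isinstance(doc, avm.Design_)    # True when the root element is <Design>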
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DomainModel'), DomainModel_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 72, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Property'), Property_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 73, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), Resource_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 74, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Connector'), Connector_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 75, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DistributionRestriction'), DistributionRestriction_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 76, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Port'), Port_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 77, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Classifications'), pyxb.binding.datatypes.anyURI, nillable=pyxb.binding.datatypes.boolean(1), scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 78, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'AnalysisConstruct'), AnalysisConstruct_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 79, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Supercedes'), pyxb.binding.datatypes.string, nillable=pyxb.binding.datatypes.boolean(1), scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 80, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Formula'), Formula_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 81, 6)))
Component_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DomainMapping'), DomainMapping_, scope=Component_, location=pyxb.utils.utility.Location(u'avm.xsd', 82, 6)))
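# Each _BuildAutomaton* helper below compiles a complex type's content model
# into a finite automaton with counters (pyxb.utils.fac). One CounterCondition
# per child element encodes its occurrence bounds -- min=0L, max=None stands
# for minOccurs="0" maxOccurs="unbounded" -- and the trailing True handed to
# fac.Automaton marks the content model as nullable, i.e. an empty element is
# valid. Self-loop transitions increment a particle's counter
# (UpdateInstruction(cc, True)) while forward transitions reset it (False).
# This reading of the arguments is an interpretation of PyXB internals, not
# generated documentation.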
def _BuildAutomaton ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton
del _BuildAutomaton
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 72, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 73, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 74, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 75, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 76, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 77, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 78, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 79, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 80, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 81, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 82, 6))
counters.add(cc_10)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 72, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 73, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 74, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 75, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'DistributionRestriction')), pyxb.utils.utility.Location(u'avm.xsd', 76, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 77, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Classifications')), pyxb.utils.utility.Location(u'avm.xsd', 78, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'AnalysisConstruct')), pyxb.utils.utility.Location(u'avm.xsd', 79, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Supercedes')), pyxb.utils.utility.Location(u'avm.xsd', 80, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 81, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(Component_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainMapping')), pyxb.utils.utility.Location(u'avm.xsd', 82, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
st_10._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Component_._Automaton = _BuildAutomaton()
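# With the automaton installed, Component_ validates child order and
# occurrence as content is added. A construction sketch (hand-written;
# ``avm`` is an assumed module name and Component_ is assumed concrete):
#
#   c = avm.Component()
#   c.Classifications.append(u'http://example.com/cls')  # unbounded anyURI
#   c.Supercedes.append(u'old-component-id')  # spelling comes from avm.xsd
#   print c.toxml(u'utf-8')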
DomainModelMetric_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Value'), Value_, scope=DomainModelMetric_, location=pyxb.utils.utility.Location(u'avm.xsd', 269, 6)))
def _BuildAutomaton_ ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_
del _BuildAutomaton_
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 269, 6))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(DomainModelMetric_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 269, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
DomainModelMetric_._Automaton = _BuildAutomaton_()
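# DomainModelMetric_'s automaton is the degenerate single-state case: one
# optional Value child (min=0L, max=1, i.e. minOccurs="0" maxOccurs="1").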
Design_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'RootContainer'), Container_, scope=Design_, location=pyxb.utils.utility.Location(u'avm.xsd', 326, 6)))
Design_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DomainFeature'), DesignDomainFeature_, scope=Design_, location=pyxb.utils.utility.Location(u'avm.xsd', 327, 6)))
Design_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), Resource_, scope=Design_, location=pyxb.utils.utility.Location(u'avm.xsd', 328, 6)))
def _BuildAutomaton_2 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_2
del _BuildAutomaton_2
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 326, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 327, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 328, 6))
counters.add(cc_2)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Design_._UseForTag(pyxb.namespace.ExpandedName(None, u'RootContainer')), pyxb.utils.utility.Location(u'avm.xsd', 326, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Design_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainFeature')), pyxb.utils.utility.Location(u'avm.xsd', 327, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Design_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 328, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
st_2._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Design_._Automaton = _BuildAutomaton_2()
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Container'), Container_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 337, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Property'), Property_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 338, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ComponentInstance'), ComponentInstance_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 339, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Port'), Port_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 340, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Connector'), Connector_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 341, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'JoinData'), _ImportedBinding__iFAB.assemblyDetail, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 342, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Formula'), Formula_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 343, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ContainerFeature'), ContainerFeature_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 344, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ResourceDependency'), Resource_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 345, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DomainModel'), DomainModel_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 346, 6)))
Container_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Resource'), Resource_, scope=Container_, location=pyxb.utils.utility.Location(u'avm.xsd', 347, 6)))
def _BuildAutomaton_3 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_3
del _BuildAutomaton_3
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
counters.add(cc_10)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Container')), pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'ComponentInstance')), pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'JoinData')), pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'ContainerFeature')), pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(Container_._UseForTag(pyxb.namespace.ExpandedName(None, u'Resource')), pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
st_10._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Container_._Automaton = _BuildAutomaton_3()
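# Container_ carries the one recursive content model in this group: a
# Container may hold further Container children (st_0 loops on itself via
# cc_0), which is how nested design hierarchies are represented.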
ComponentInstance_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'PortInstance'), ComponentPortInstance_, scope=ComponentInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 376, 6)))
ComponentInstance_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'PrimitivePropertyInstance'), ComponentPrimitivePropertyInstance_, scope=ComponentInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 377, 6)))
ComponentInstance_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ConnectorInstance'), ComponentConnectorInstance_, scope=ComponentInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 378, 6)))
def _BuildAutomaton_4 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_4
del _BuildAutomaton_4
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 376, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 377, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 378, 6))
counters.add(cc_2)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(ComponentInstance_._UseForTag(pyxb.namespace.ExpandedName(None, u'PortInstance')), pyxb.utils.utility.Location(u'avm.xsd', 376, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(ComponentInstance_._UseForTag(pyxb.namespace.ExpandedName(None, u'PrimitivePropertyInstance')), pyxb.utils.utility.Location(u'avm.xsd', 377, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(ComponentInstance_._UseForTag(pyxb.namespace.ExpandedName(None, u'ConnectorInstance')), pyxb.utils.utility.Location(u'avm.xsd', 378, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
st_2._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
ComponentInstance_._Automaton = _BuildAutomaton_4()
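# ComponentInstance_ repeats the ordered-optional pattern: PortInstance*,
# PrimitivePropertyInstance* and ConnectorInstance* children, each optional
# and unbounded.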
ComponentPrimitivePropertyInstance_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Value'), Value_, scope=ComponentPrimitivePropertyInstance_, location=pyxb.utils.utility.Location(u'avm.xsd', 396, 6)))
def _BuildAutomaton_5 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_5
del _BuildAutomaton_5
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 396, 6))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(ComponentPrimitivePropertyInstance_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 396, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
ComponentPrimitivePropertyInstance_._Automaton = _BuildAutomaton_5()
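# Hand-written sketch: a primitive property instance carries at most one
# Value child, so plain assignment (rather than append) is the idiom for
# non-plural elements; attribute requirements of the types involved are
# defined elsewhere in this module.
#
#   ppi = avm.ComponentPrimitivePropertyInstance()   # assumed module name
#   ppi.Value = avm.Value()                          # optional child element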
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'TopLevelSystemUnderTest'), TopLevelSystemUnderTest_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 502, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Parameter'), Parameter_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 503, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Metric'), Metric_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 504, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'TestInjectionPoint'), TestInjectionPoint_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 505, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'TestComponent'), ComponentInstance_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 506, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Workflow'), Workflow_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 507, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Settings'), Settings_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 508, 6)))
TestBench_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'TestStructure'), DomainModel_, scope=TestBench_, location=pyxb.utils.utility.Location(u'avm.xsd', 509, 6)))
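# TestBench_ aggregates a test setup: at most one TopLevelSystemUnderTest and
# at most one Workflow (cc_0 and cc_5 below cap at 1), alongside unbounded
# Parameter, Metric, TestInjectionPoint, TestComponent, Settings and
# TestStructure children.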
def _BuildAutomaton_6 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_6
del _BuildAutomaton_6
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 502, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 503, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 504, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 505, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 506, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 507, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 508, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 509, 6))
counters.add(cc_7)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'TopLevelSystemUnderTest')), pyxb.utils.utility.Location(u'avm.xsd', 502, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'Parameter')), pyxb.utils.utility.Location(u'avm.xsd', 503, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'Metric')), pyxb.utils.utility.Location(u'avm.xsd', 504, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'TestInjectionPoint')), pyxb.utils.utility.Location(u'avm.xsd', 505, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'TestComponent')), pyxb.utils.utility.Location(u'avm.xsd', 506, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'Workflow')), pyxb.utils.utility.Location(u'avm.xsd', 507, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'Settings')), pyxb.utils.utility.Location(u'avm.xsd', 508, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(TestBench_._UseForTag(pyxb.namespace.ExpandedName(None, u'TestStructure')), pyxb.utils.utility.Location(u'avm.xsd', 509, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
st_7._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
TestBench_._Automaton = _BuildAutomaton_6()
TestBenchValueBase_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Value'), Value_, scope=TestBenchValueBase_, location=pyxb.utils.utility.Location(u'avm.xsd', 537, 6)))
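# Content model for TestBenchValueBase: at most one Value child.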
def _BuildAutomaton_7 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_7
del _BuildAutomaton_7
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(TestBenchValueBase_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
TestBenchValueBase_._Automaton = _BuildAutomaton_7()
Workflow_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Task'), WorkflowTaskBase_, scope=Workflow_, location=pyxb.utils.utility.Location(u'avm.xsd', 552, 6)))
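# Content model for Workflow: one or more Task elements (the automaton is not
# nullable, so at least one Task is required).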
def _BuildAutomaton_8 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_8
del _BuildAutomaton_8
import pyxb.utils.fac as fac
counters = set()
states = []
final_update = set()
symbol = pyxb.binding.content.ElementUse(Workflow_._UseForTag(pyxb.namespace.ExpandedName(None, u'Task')), pyxb.utils.utility.Location(u'avm.xsd', 552, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
Workflow_._Automaton = _BuildAutomaton_8()
Value_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ValueExpression'), ValueExpressionType_, scope=Value_, location=pyxb.utils.utility.Location(u'avm.xsd', 111, 10)))
Value_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DataSource'), DataSource_, scope=Value_, location=pyxb.utils.utility.Location(u'avm.xsd', 112, 10)))
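# Content model for Value: at most one ValueExpression, followed by any
# number of DataSource elements; empty content is accepted.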
def _BuildAutomaton_9 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_9
del _BuildAutomaton_9
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 111, 10))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 112, 10))
counters.add(cc_1)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Value_._UseForTag(pyxb.namespace.ExpandedName(None, u'ValueExpression')), pyxb.utils.utility.Location(u'avm.xsd', 111, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Value_._UseForTag(pyxb.namespace.ExpandedName(None, u'DataSource')), pyxb.utils.utility.Location(u'avm.xsd', 112, 10))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
st_1._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Value_._Automaton = _BuildAutomaton_9()
FixedValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Value'), pyxb.binding.datatypes.string, scope=FixedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 125, 10)))
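# Content model for FixedValue: exactly one Value child.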
def _BuildAutomaton_10 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_10
del _BuildAutomaton_10
import pyxb.utils.fac as fac
counters = set()
states = []
final_update = set()
symbol = pyxb.binding.content.ElementUse(FixedValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 125, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
FixedValue_._Automaton = _BuildAutomaton_10()
CalculatedValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Expression'), pyxb.binding.datatypes.string, scope=CalculatedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 135, 10)))
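# Content model for CalculatedValue: exactly one Expression child.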
def _BuildAutomaton_11 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_11
del _BuildAutomaton_11
import pyxb.utils.fac as fac
counters = set()
states = []
final_update = set()
symbol = pyxb.binding.content.ElementUse(CalculatedValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Expression')), pyxb.utils.utility.Location(u'avm.xsd', 135, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
CalculatedValue_._Automaton = _BuildAutomaton_11()
ParametricValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Default'), ValueExpressionType_, scope=ParametricValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 200, 10)))
ParametricValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Maximum'), ValueExpressionType_, scope=ParametricValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 201, 10)))
ParametricValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Minimum'), ValueExpressionType_, scope=ParametricValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 202, 10)))
ParametricValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'AssignedValue'), ValueExpressionType_, scope=ParametricValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 203, 10)))
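# Content model for ParametricValue: a required Default, optionally followed
# by at most one each of Maximum, Minimum and AssignedValue, in that order.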
def _BuildAutomaton_12 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_12
del _BuildAutomaton_12
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 201, 10))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 202, 10))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 203, 10))
counters.add(cc_2)
states = []
final_update = set()
symbol = pyxb.binding.content.ElementUse(ParametricValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Default')), pyxb.utils.utility.Location(u'avm.xsd', 200, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(ParametricValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Maximum')), pyxb.utils.utility.Location(u'avm.xsd', 201, 10))
st_1 = fac.State(symbol, is_initial=False, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(ParametricValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Minimum')), pyxb.utils.utility.Location(u'avm.xsd', 202, 10))
st_2 = fac.State(symbol, is_initial=False, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(ParametricValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'AssignedValue')), pyxb.utils.utility.Location(u'avm.xsd', 203, 10))
st_3 = fac.State(symbol, is_initial=False, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
transitions = []
transitions.append(fac.Transition(st_1, [
]))
transitions.append(fac.Transition(st_2, [
]))
transitions.append(fac.Transition(st_3, [
]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, True) ]))
st_3._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
ParametricValue_._Automaton = _BuildAutomaton_12()
PrimitiveProperty_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Value'), Value_, scope=PrimitiveProperty_, location=pyxb.utils.utility.Location(u'avm.xsd', 285, 10)))
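# Content model for PrimitiveProperty: at most one Value child.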
def _BuildAutomaton_13 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_13
del _BuildAutomaton_13
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 285, 10))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(PrimitiveProperty_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 285, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
PrimitiveProperty_._Automaton = _BuildAutomaton_13()
CompoundProperty_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'CompoundProperty'), CompoundProperty_, scope=CompoundProperty_, location=pyxb.utils.utility.Location(u'avm.xsd', 294, 10)))
CompoundProperty_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'PrimitiveProperty'), PrimitiveProperty_, scope=CompoundProperty_, location=pyxb.utils.utility.Location(u'avm.xsd', 295, 10)))
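# Content model for CompoundProperty: any number of nested CompoundProperty
# children followed by any number of PrimitiveProperty children.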
def _BuildAutomaton_14 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_14
del _BuildAutomaton_14
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 294, 10))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 295, 10))
counters.add(cc_1)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(CompoundProperty_._UseForTag(pyxb.namespace.ExpandedName(None, u'CompoundProperty')), pyxb.utils.utility.Location(u'avm.xsd', 294, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(CompoundProperty_._UseForTag(pyxb.namespace.ExpandedName(None, u'PrimitiveProperty')), pyxb.utils.utility.Location(u'avm.xsd', 295, 10))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
st_1._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
CompoundProperty_._Automaton = _BuildAutomaton_14()
ParametricEnumeratedValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'AssignedValue'), FixedValue_, scope=ParametricEnumeratedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 304, 10)))
ParametricEnumeratedValue_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Enum'), ValueExpressionType_, scope=ParametricEnumeratedValue_, location=pyxb.utils.utility.Location(u'avm.xsd', 305, 10)))
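# Content model for ParametricEnumeratedValue: an optional AssignedValue
# followed by one or more Enum elements (at least one Enum is required).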
def _BuildAutomaton_15 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_15
del _BuildAutomaton_15
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 304, 10))
counters.add(cc_0)
states = []
final_update = None
symbol = pyxb.binding.content.ElementUse(ParametricEnumeratedValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'AssignedValue')), pyxb.utils.utility.Location(u'avm.xsd', 304, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
symbol = pyxb.binding.content.ElementUse(ParametricEnumeratedValue_._UseForTag(pyxb.namespace.ExpandedName(None, u'Enum')), pyxb.utils.utility.Location(u'avm.xsd', 305, 10))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
]))
st_1._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
ParametricEnumeratedValue_._Automaton = _BuildAutomaton_15()
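# Content model for Compound: any number of Container, Property,
# ComponentInstance, Port, Connector, JoinData, Formula, ContainerFeature,
# ResourceDependency, DomainModel and Resource children, in that order.  The
# particles come from the Container declaration (avm.xsd lines 337-347); the
# corresponding _AddElement calls are presumably made on the base type
# earlier in this module.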
def _BuildAutomaton_16 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_16
del _BuildAutomaton_16
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
counters.add(cc_10)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Container')), pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'ComponentInstance')), pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'JoinData')), pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'ContainerFeature')), pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(Compound_._UseForTag(pyxb.namespace.ExpandedName(None, u'Resource')), pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
st_10._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Compound_._Automaton = _BuildAutomaton_16()
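# Content model for DesignSpaceContainer: structurally identical to
# Compound's, built from the same Container particles (avm.xsd lines
# 337-347).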
def _BuildAutomaton_17 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_17
del _BuildAutomaton_17
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
counters.add(cc_10)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Container')), pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'ComponentInstance')), pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'JoinData')), pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'ContainerFeature')), pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(DesignSpaceContainer_._UseForTag(pyxb.namespace.ExpandedName(None, u'Resource')), pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
st_10._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
DesignSpaceContainer_._Automaton = _BuildAutomaton_17()
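# Content model for Parameter: at most one Value child; this reuses the
# particle declared for TestBenchValueBase (avm.xsd line 537).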
def _BuildAutomaton_18 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_18
del _BuildAutomaton_18
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Parameter_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Parameter_._Automaton = _BuildAutomaton_18()
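# Content model for Metric: at most one Value child, again from the
# TestBenchValueBase particle (avm.xsd line 537).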
def _BuildAutomaton_19 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_19
del _BuildAutomaton_19
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=1, metadata=pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
counters.add(cc_0)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Metric_._UseForTag(pyxb.namespace.ExpandedName(None, u'Value')), pyxb.utils.utility.Location(u'avm.xsd', 537, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Metric_._Automaton = _BuildAutomaton_19()
Connector_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Role'), Port_, scope=Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 161, 10)))
Connector_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Property'), Property_, scope=Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 162, 10)))
Connector_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'DefaultJoin'), _ImportedBinding__iFAB.assemblyDetail, scope=Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 163, 10)))
Connector_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Connector'), Connector_, scope=Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 164, 10)))
Connector_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ConnectorFeature'), ConnectorFeature_, scope=Connector_, location=pyxb.utils.utility.Location(u'avm.xsd', 165, 10)))
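# Content model for Connector: any number of Role, Property, DefaultJoin,
# nested Connector and ConnectorFeature children, in that order; empty
# content is accepted.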
def _BuildAutomaton_20 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_20
del _BuildAutomaton_20
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 161, 10))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 162, 10))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 163, 10))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 164, 10))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 165, 10))
counters.add(cc_4)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Connector_._UseForTag(pyxb.namespace.ExpandedName(None, u'Role')), pyxb.utils.utility.Location(u'avm.xsd', 161, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Connector_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 162, 10))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Connector_._UseForTag(pyxb.namespace.ExpandedName(None, u'DefaultJoin')), pyxb.utils.utility.Location(u'avm.xsd', 163, 10))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Connector_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 164, 10))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Connector_._UseForTag(pyxb.namespace.ExpandedName(None, u'ConnectorFeature')), pyxb.utils.utility.Location(u'avm.xsd', 165, 10))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
st_4._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Connector_._Automaton = _BuildAutomaton_20()
NormalDistribution_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Mean'), ValueExpressionType_, scope=NormalDistribution_, location=pyxb.utils.utility.Location(u'avm.xsd', 218, 10)))
NormalDistribution_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'StandardDeviation'), ValueExpressionType_, scope=NormalDistribution_, location=pyxb.utils.utility.Location(u'avm.xsd', 219, 10)))
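# Content model for NormalDistribution: exactly one Mean followed by exactly
# one StandardDeviation.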
def _BuildAutomaton_21 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_21
del _BuildAutomaton_21
import pyxb.utils.fac as fac
counters = set()
states = []
final_update = None
symbol = pyxb.binding.content.ElementUse(NormalDistribution_._UseForTag(pyxb.namespace.ExpandedName(None, u'Mean')), pyxb.utils.utility.Location(u'avm.xsd', 218, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
symbol = pyxb.binding.content.ElementUse(NormalDistribution_._UseForTag(pyxb.namespace.ExpandedName(None, u'StandardDeviation')), pyxb.utils.utility.Location(u'avm.xsd', 219, 10))
st_1 = fac.State(symbol, is_initial=False, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
transitions = []
transitions.append(fac.Transition(st_1, [
]))
st_0._set_transitionSet(transitions)
transitions = []
st_1._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
NormalDistribution_._Automaton = _BuildAutomaton_21()
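# Content model for Optional: the same Container-style content model as
# Compound and DesignSpaceContainer (avm.xsd lines 337-347).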
def _BuildAutomaton_22 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_22
del _BuildAutomaton_22
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
counters.add(cc_10)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Container')), pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'ComponentInstance')), pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'JoinData')), pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'ContainerFeature')), pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(Optional_._UseForTag(pyxb.namespace.ExpandedName(None, u'Resource')), pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
st_10._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Optional_._Automaton = _BuildAutomaton_22()
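# Annotation (added; not PyXB output): each _BuildAutomaton_* helper encodes one
# complex type's content model as a counter automaton. Every cc_N above is a
# CounterCondition with min=0/max=None, i.e. an optional element that may repeat
# indefinitely, and each state's transitions only loop or move forward, so the
# element kinds must appear in their declared schema order.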
Alternative_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'ValueFlowMux'), ValueFlowMux_, scope=Alternative_, location=pyxb.utils.utility.Location(u'avm.xsd', 369, 10)))
def _BuildAutomaton_23 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_23
del _BuildAutomaton_23
import pyxb.utils.fac as fac
counters = set()
cc_0 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
counters.add(cc_0)
cc_1 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
counters.add(cc_1)
cc_2 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
counters.add(cc_2)
cc_3 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
counters.add(cc_3)
cc_4 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
counters.add(cc_4)
cc_5 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
counters.add(cc_5)
cc_6 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
counters.add(cc_6)
cc_7 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
counters.add(cc_7)
cc_8 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
counters.add(cc_8)
cc_9 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
counters.add(cc_9)
cc_10 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
counters.add(cc_10)
cc_11 = fac.CounterCondition(min=0L, max=None, metadata=pyxb.utils.utility.Location(u'avm.xsd', 369, 10))
counters.add(cc_11)
states = []
final_update = set()
final_update.add(fac.UpdateInstruction(cc_0, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Container')), pyxb.utils.utility.Location(u'avm.xsd', 337, 6))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_1, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Property')), pyxb.utils.utility.Location(u'avm.xsd', 338, 6))
st_1 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_1)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_2, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'ComponentInstance')), pyxb.utils.utility.Location(u'avm.xsd', 339, 6))
st_2 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_2)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_3, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Port')), pyxb.utils.utility.Location(u'avm.xsd', 340, 6))
st_3 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_3)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_4, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Connector')), pyxb.utils.utility.Location(u'avm.xsd', 341, 6))
st_4 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_4)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_5, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'JoinData')), pyxb.utils.utility.Location(u'avm.xsd', 342, 6))
st_5 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_5)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_6, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Formula')), pyxb.utils.utility.Location(u'avm.xsd', 343, 6))
st_6 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_6)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_7, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'ContainerFeature')), pyxb.utils.utility.Location(u'avm.xsd', 344, 6))
st_7 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_7)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_8, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'ResourceDependency')), pyxb.utils.utility.Location(u'avm.xsd', 345, 6))
st_8 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_8)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_9, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'DomainModel')), pyxb.utils.utility.Location(u'avm.xsd', 346, 6))
st_9 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_9)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_10, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'Resource')), pyxb.utils.utility.Location(u'avm.xsd', 347, 6))
st_10 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_10)
final_update = set()
final_update.add(fac.UpdateInstruction(cc_11, False))
symbol = pyxb.binding.content.ElementUse(Alternative_._UseForTag(pyxb.namespace.ExpandedName(None, u'ValueFlowMux')), pyxb.utils.utility.Location(u'avm.xsd', 369, 10))
st_11 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_11)
transitions = []
transitions.append(fac.Transition(st_0, [
fac.UpdateInstruction(cc_0, True) ]))
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_0, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_0, False) ]))
st_0._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_1, [
fac.UpdateInstruction(cc_1, True) ]))
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_1, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_1, False) ]))
st_1._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_2, [
fac.UpdateInstruction(cc_2, True) ]))
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_2, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_2, False) ]))
st_2._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_3, [
fac.UpdateInstruction(cc_3, True) ]))
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_3, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_3, False) ]))
st_3._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_4, [
fac.UpdateInstruction(cc_4, True) ]))
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_4, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_4, False) ]))
st_4._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_5, [
fac.UpdateInstruction(cc_5, True) ]))
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_5, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_5, False) ]))
st_5._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_6, [
fac.UpdateInstruction(cc_6, True) ]))
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_6, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_6, False) ]))
st_6._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_7, [
fac.UpdateInstruction(cc_7, True) ]))
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_7, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_7, False) ]))
st_7._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_8, [
fac.UpdateInstruction(cc_8, True) ]))
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_8, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_8, False) ]))
st_8._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_9, [
fac.UpdateInstruction(cc_9, True) ]))
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_9, False) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_9, False) ]))
st_9._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_10, [
fac.UpdateInstruction(cc_10, True) ]))
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_10, False) ]))
st_10._set_transitionSet(transitions)
transitions = []
transitions.append(fac.Transition(st_11, [
fac.UpdateInstruction(cc_11, True) ]))
st_11._set_transitionSet(transitions)
return fac.Automaton(states, counters, True, containing_state=None)
Alternative_._Automaton = _BuildAutomaton_23()
ComplexFormula_._AddElement(pyxb.binding.basis.element(pyxb.namespace.ExpandedName(None, u'Operand'), Operand_, scope=ComplexFormula_, location=pyxb.utils.utility.Location(u'avm.xsd', 471, 10)))
def _BuildAutomaton_24 ():
# Remove this helper function from the namespace after it is invoked
global _BuildAutomaton_24
del _BuildAutomaton_24
import pyxb.utils.fac as fac
counters = set()
states = []
final_update = set()
symbol = pyxb.binding.content.ElementUse(ComplexFormula_._UseForTag(pyxb.namespace.ExpandedName(None, u'Operand')), pyxb.utils.utility.Location(u'avm.xsd', 471, 10))
st_0 = fac.State(symbol, is_initial=True, final_update=final_update, is_unordered_catenation=False)
states.append(st_0)
transitions = []
transitions.append(fac.Transition(st_0, [
]))
st_0._set_transitionSet(transitions)
return fac.Automaton(states, counters, False, containing_state=None)
ComplexFormula_._Automaton = _BuildAutomaton_24()
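# Annotation (added): _BuildAutomaton_24 is the minimal case of the same
# pattern -- one repeated element ('Operand') with no counters; the automaton's
# third argument is False, so at least one Operand is required.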
| [
"[email protected]"
] | |
9ba6a29c9bca6d480329a33582341c51f26bc30b | b1d9e75c07cea7498e4fe8b113a23c55f89292be | /leaflet_storage/base_models.py | 3a0c88e106127c6ad9d83fd7d8442328999c274d | [
"WTFPL"
] | permissive | FranciscoDS/django-leaflet-storage | 9dfdbb5e579431e0af70d0578fa55efe909093e3 | b69f0bb5a1848f743d50b0ec1f76b8ac3216c497 | refs/heads/master | 2020-12-24T13:20:42.757136 | 2013-03-10T21:33:55 | 2013-03-10T21:33:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,693 | py | # -*- coding: utf-8 -*-
from django.contrib.gis.db import models
from django.db.models import get_model as dj_get_model
from django.conf import settings
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from django.utils.translation import ugettext as _
from .fields import DictField
class NamedModel(models.Model):
name = models.CharField(max_length=200, verbose_name=_("name"))
class Meta:
abstract = True
ordering = ('name', )
def __unicode__(self):
return self.name
class Licence(NamedModel):
"""
The licence one map is published on.
"""
details = models.URLField(
verbose_name=_('details'),
help_text=_('Link to a page where the licence is detailed.')
)
@classmethod
def get_default(cls):
"""
Returns a default Licence, creates it if it doesn't exist.
Needed to prevent a licence deletion from deleting all the linked
maps.
"""
return cls.objects.get_or_create(
name=getattr(settings, "LEAFLET_STORAGE_DEFAULT_LICENCE_NAME", _('No licence set'))
)[0]
class TileLayer(NamedModel):
url_template = models.CharField(
max_length=200,
help_text=_("URL template using OSM tile format")
)
minZoom = models.IntegerField(default=0)
maxZoom = models.IntegerField(default=18)
attribution = models.CharField(max_length=300)
@property
def json(self):
return dict((field.name, getattr(self, field.name)) for field in self._meta.fields)
@classmethod
def get_default(cls):
"""
Returns the default tile layer (used for a map when no layer is set).
"""
return cls.objects.order_by('pk')[0] # FIXME, make it administrable
class Map(NamedModel):
"""
A single thematical map.
"""
ANONYMOUS = 1
EDITORS = 2
OWNER = 3
EDIT_STATUS = (
(ANONYMOUS, _('Everyone can edit')),
(EDITORS, _('Only editors can edit')),
(OWNER, _('Only owner can edit')),
)
slug = models.SlugField(db_index=True)
description = models.TextField(blank=True, null=True, verbose_name=_("description"))
center = models.PointField(geography=True, verbose_name=_("center"))
zoom = models.IntegerField(default=7, verbose_name=_("zoom"))
locate = models.BooleanField(default=False, verbose_name=_("locate"), help_text=_("Locate user on load?"))
licence = models.ForeignKey(
Licence,
help_text=_("Choose the map licence."),
verbose_name=_('licence'),
on_delete=models.SET_DEFAULT,
default=Licence.get_default
)
modified_at = models.DateTimeField(auto_now=True)
tilelayers = models.ManyToManyField(TileLayer, through="MapToTileLayer")
owner = models.ForeignKey(User, related_name="owned_maps", verbose_name=_("owner"))
editors = models.ManyToManyField(User, blank=True, verbose_name=_("editors"))
edit_status = models.SmallIntegerField(choices=EDIT_STATUS, default=OWNER, verbose_name=_("edit status"))
settings = DictField(blank=True, null=True, verbose_name=_("settings"))
objects = models.GeoManager()
@property
def tilelayers_data(self):
tilelayers_data = []
for rank, m2t in enumerate(MapToTileLayer.objects.filter(map=self), start=1):
tilelayers_data.append({
"tilelayer": m2t.tilelayer.json,
"rank": rank
})
return tilelayers_data
def get_absolute_url(self):
return reverse("map", kwargs={'slug': self.slug, 'username': self.owner.username})
def can_edit(self, user):
"""
        Return whether an already-authenticated user may edit this instance.
"""
if user == self.owner or self.edit_status == self.ANONYMOUS:
can = True
elif self.edit_status == self.EDITORS and user in self.editors.all():
can = True
else:
can = False
return can
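    # Illustrative usage (not part of this module): a view would typically gate
    # writes with
    #   if not map_instance.can_edit(request.user): raise PermissionDenied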
class MapToTileLayer(models.Model):
tilelayer = models.ForeignKey(TileLayer)
map = models.ForeignKey(Map)
rank = models.IntegerField(null=True, blank=True)
class Meta:
ordering = ['rank', 'tilelayer__name']
class Pictogram(NamedModel):
"""
An image added to an icon of the map.
"""
attribution = models.CharField(max_length=300)
pictogram = models.ImageField(upload_to="pictogram")
@property
def json(self):
return {
"id": self.pk,
"attribution": self.attribution,
"name": self.name,
"src": self.pictogram.url
}
class IconConfigMixin(models.Model):
ICON_CLASS = (
('Default', 'Default'),
('Circle', 'Circle'),
('Drop', 'Drop'),
('Ball', 'Ball'),
)
pictogram = models.ForeignKey(
Pictogram,
null=True,
blank=True,
verbose_name=_("pictogram")
)
icon_class = models.CharField(
null=True,
blank=True,
choices=ICON_CLASS,
max_length=32,
verbose_name=_("icon type"),
help_text=_("Choose the style of the marker.")
)
class Meta:
abstract = True
class Category(NamedModel, IconConfigMixin):
"""
Category of a Feature.
"""
map = models.ForeignKey(Map)
description = models.TextField(
blank=True,
null=True,
verbose_name=_("description")
)
options = DictField(blank=True, null=True, verbose_name=_("options"))
display_on_load = models.BooleanField(
default=False,
verbose_name=_("display on load"),
help_text=_("Display this category on load.")
)
@property
def json(self):
return {
"name": self.name,
"pk": self.pk,
"pictogram_url": self.pictogram.pictogram.url if self.pictogram else None,
"icon_class": self.icon_class,
"display_on_load": self.display_on_load,
"options": self.options,
}
@property
def features(self):
if not hasattr(self, "_features"):
filters = {
"category": self
}
markers = get_model("Marker").objects.filter(**filters)
polylines = get_model("Polyline").objects.filter(**filters)
polygons = get_model("Polygon").objects.filter(**filters)
self._features = list(markers) + list(polylines) + list(polygons)
return self._features
@classmethod
def create_default(cls, map_inst):
return Category.objects.create(
map=map_inst,
name=getattr(settings, "LEAFLET_STORAGE_DEFAULT_CATEGORY_NAME", _("My data")),
display_on_load=True
)
class BaseFeature(NamedModel):
description = models.TextField(
blank=True,
null=True,
verbose_name=_("description")
)
category = models.ForeignKey(Category, verbose_name=_("category"))
options = DictField(blank=True, null=True, verbose_name=_("options"))
objects = models.GeoManager()
@property
def icon(self):
# All the features are processed the same way by vectoformats
# so they need to share all exported properties
return {}
class Meta:
abstract = True
class AbstractMarker(BaseFeature, IconConfigMixin):
"""
Point of interest.
"""
latlng = models.PointField(geography=True)
@property
def icon(self):
# All the features are processed the same way by vectoformats
# so they need to share all exported properties
return {
"class": self.icon_class,
"url": self.pictogram.pictogram.url if self.pictogram else None
}
class Meta:
abstract = True
class AbstractPolyline(BaseFeature):
latlng = models.LineStringField(geography=True)
class Meta:
abstract = True
class AbstractPolygon(BaseFeature):
latlng = models.PolygonField(geography=True)
class Meta:
abstract = True
# ############## #
# Default Models #
# ############## #
class Marker(AbstractMarker):
pass
class Polyline(AbstractPolyline):
pass
class Polygon(AbstractPolygon):
pass
# ###### #
# Getter #
# ###### #
def get_model(name):
"""
Example of settings:
LEAFLET_STORAGE_MODELS = {
"Marker": ('app_name', 'ModelName'),
}
"""
LEAFLET_STORAGE_MODELS = getattr(settings, "LEAFLET_STORAGE_MODELS", {})
    if name not in LEAFLET_STORAGE_MODELS:
model = globals()[name]
else:
model = dj_get_model(*LEAFLET_STORAGE_MODELS[name])
return model
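# Hypothetical override (names illustrative): with
#   LEAFLET_STORAGE_MODELS = {"Marker": ("myapp", "CustomMarker")}
# in settings, get_model("Marker") returns myapp.CustomMarker; without the
# setting it falls back to the default Marker class defined above.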
| [
"[email protected]"
] | |
d0ca2278af93cf0521f1a71d22dd494b20804760 | ea48ef0588c104e49a7ebec5bd8dc359fdeb6674 | /api/snippets/serializers.py | 802503d73cf66c4cf043190921ed0e2be369d0ef | [] | no_license | Jizishuo/django--text | c0d58d739ef643c7f3793fbead19302778670368 | 152a5c99e7a16a75fda2f1f85edcfdce9274c9c2 | refs/heads/master | 2020-04-01T10:39:18.131551 | 2018-11-18T13:31:59 | 2018-11-18T13:31:59 | 153,125,799 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,560 | py | from rest_framework import serializers
from snippets.models import Snippet, LANGUAGE_CHOICES, STYLE_CHOICES
class SnippetSerializer(serializers.Serializer):  # declaring fields here works much like a Django form
id = serializers.IntegerField(read_only=True)
title = serializers.CharField(required=False, allow_blank=True, max_length=100)
    code = serializers.CharField(style={'base_template': 'textarea.html'})  # 'style' here is the equivalent of Django's widget=widgets.Textarea
    linenos = serializers.BooleanField(required=False)  # used for rendering in the browsable API
language = serializers.ChoiceField(choices=LANGUAGE_CHOICES, default='python')
style = serializers.ChoiceField(choices=STYLE_CHOICES, default='friendly')
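    # Hypothetical round trip (illustrative, not part of the tutorial file):
    #   serializer = SnippetSerializer(data={'code': 'print(123)'})
    #   serializer.is_valid()  # -> True; serializer.save() then calls create()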
def create(self, validated_data):
"""
Create and return a new `Snippet` instance, given the validated data.
"""
return Snippet.objects.create(**validated_data)
def update(self, instance, validated_data):
"""
Update and return an existing `Snippet` instance, given the validated data.
"""
instance.title = validated_data.get('title', instance.title)
instance.code = validated_data.get('code', instance.code)
instance.linenos = validated_data.get('linenos', instance.linenos)
instance.language = validated_data.get('language', instance.language)
instance.style = validated_data.get('style', instance.style)
instance.save()
return instance | [
"[email protected]"
] | |
7e77b6dd2286b4dc252221c40b6c29436ce6a3de | 4af3afd2471ec89c3bbe14adba8f87fe400f5598 | /numpy/numpy_demo.py | 4aa8e8b196a3bf685026495a1ce634eeeeeca068 | [] | no_license | guanguanboy/TestPytorch | a912b64d1e7c1df93a66032e4d411e2da82523c6 | 52fe23bae05027a5a725cb1d028954c5b5597e9a | refs/heads/master | 2023-07-16T15:28:14.257183 | 2021-08-29T04:04:11 | 2021-08-29T04:04:11 | 295,422,137 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 79 | py | import numpy as np
a = np.array([1, 2, 3, 4, 5])
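# Boolean-mask assignment: every element greater than 3 is zeroed in place,
# so the print below shows [1 2 3 0 0].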
a[a > 3] = 0
print(a) | [
"[email protected]"
] | |
1fecc0c4d13d877ece282bd2a8ebf7d6f3d6fde6 | 5158d2aa9839dcf80340ef369db7eada19f3ff8b | /test.py | 1fe2fcc807e3f3e645ccd632e01c386ea08a6bad | [] | no_license | andrewsmedina/flh-pvcloud | 31de2e5734a6f91db4c618fa20759300e2930596 | 1799b17039dde004461982466f58cc464e6488b8 | refs/heads/master | 2021-01-11T03:39:57.926994 | 2016-10-13T03:56:46 | 2016-10-13T03:56:46 | 71,403,725 | 0 | 0 | null | 2016-10-19T22:14:25 | 2016-10-19T22:14:24 | null | UTF-8 | Python | false | false | 1,691 | py | #!/usr/bin/env python
import os
import sys
import django
import json
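# Note: pvs.models is imported inside each test function rather than at module
# level, so the import only happens after django.setup() runs in __main__ below.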
def test_pvs_energy_daily():
from pvs.models import Energy
#pvs_serial = '0000000097894c9b'
pvs_serial = '00000000f6392e07'
pvs_en_daily = Energy.get_energy_daily_output(pvs_serial)
result_list = []
for entry in pvs_en_daily[pvs_serial].values():
        result_list.append(list(entry.values()))  # list() so the row is indexable under Python 3
result_list.sort(key=lambda x: x[0])
print('== pvs energy today daily result ==')
for entry in result_list:
print(entry)
def test_pvs_energy_hourly():
from pvs.models import Energy
pvs_list = Energy.get_distinct_serial()
print('distinct pvs serial: %s' % pvs_list)
#pvs_serial = '0000000097894c9b'
pvs_serial = '00000000f6392e07'
pvs_en_by_hour = Energy.get_energy_daily_output_by_hour(pvs_serial)
#print(pvs_en_by_hour)
#return
result_list = []
for entry in pvs_en_by_hour[pvs_serial].values():
        result_list.append(list(entry.values()))  # list() so the row is indexable under Python 3
result_list.sort(key=lambda x: x[0])
print('== pvs energy today hourly result ==')
for entry in result_list:
print(entry)
pvs_en_hourly = Energy.get_calculated_energy_hourly_output(pvs_serial)
result_list = []
for entry in pvs_en_hourly[pvs_serial].values():
        result_list.append(list(entry.values()))  # list() so the row is indexable under Python 3
result_list.sort(key=lambda x: x[0])
print('== pvs calculated energy hourly result ==')
for entry in result_list:
print(entry)
if __name__ == "__main__":
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
django.setup()
test_pvs_energy_hourly()
#test_pvs_energy_daily()
| [
"[email protected]"
] | |
e95147eb86b47413be4c0af28598db219b730732 | f5a53f0f2770e4d7b3fdace83486452ddcc996e1 | /env3/lib/python3.6/site-packages/django_tables2/columns/linkcolumn.py | adf587625eda5d92ecd166bee5d301e54ef9251c | [
"Apache-2.0",
"BSD-3-Clause"
] | permissive | fireman0865/PingBox | 35e8fc9966b51320d571b63967e352a134022128 | 0f00eaf88b88e9441fffd5173a1501e56c13db03 | refs/heads/master | 2023-01-20T07:55:59.433046 | 2020-03-15T13:36:31 | 2020-03-15T13:36:31 | 247,466,832 | 1 | 0 | Apache-2.0 | 2022-12-26T21:30:32 | 2020-03-15T12:59:16 | Python | UTF-8 | Python | false | false | 6,118 | py | from .base import Column, library
class BaseLinkColumn(Column):
"""
The base for other columns that render links.
Arguments:
text (str or callable): If set, this value will be used to render the
text inside link instead of value. The callable gets the record
being rendered as argument.
attrs (dict): In addition to ``attrs`` keys supported by `~.Column`, the
following are available:
- `a` -- ``<a>`` in ``<td>`` elements.
"""
def __init__(self, text=None, *args, **kwargs):
super().__init__(*args, **kwargs)
self.text = text
def text_value(self, record, value):
if self.text is None:
return value
return self.text(record) if callable(self.text) else self.text
def value(self, record, value):
"""
Returns the content for a specific cell similarly to `.render` however
without any html content.
"""
return self.text_value(record, value)
def render(self, record, value):
return self.text_value(record, value)
@library.register
class LinkColumn(BaseLinkColumn):
"""
Renders a normal value as an internal hyperlink to another page.
.. note ::
This column should not be used anymore, the `linkify` keyword argument to
regular columns can be used to achieve the same results.
It's common to have the primary value in a row hyperlinked to the page
dedicated to that record.
The first arguments are identical to that of
`~django.urls.reverse` and allows an internal URL to be
described. If this argument is `None`, then `get_absolute_url`.
(see Django references) will be used.
The last argument *attrs* allows custom HTML attributes to be added to the
rendered ``<a href="...">`` tag.
Arguments:
viewname (str or None): See `~django.urls.reverse`, or use `None`
to use the model's `get_absolute_url`
urlconf (str): See `~django.urls.reverse`.
args (list): See `~django.urls.reverse`. [2]_
kwargs (dict): See `~django.urls.reverse`. [2]_
current_app (str): See `~django.urls.reverse`.
attrs (dict): HTML attributes that are added to the rendered
``<a ...>...</a>`` tag.
text (str or callable): Either static text, or a callable. If set, this
will be used to render the text inside link instead of value (default).
The callable gets the record being rendered as argument.
.. [2] In order to create a link to a URL that relies on information in the
current row, `.Accessor` objects can be used in the *args* or *kwargs*
arguments. The accessor will be resolved using the row's record before
`~django.urls.reverse` is called.
Example:
.. code-block:: python
# models.py
class Person(models.Model):
name = models.CharField(max_length=200)
# urls.py
urlpatterns = patterns('',
url("people/([0-9]+)/", views.people_detail, name="people_detail")
)
# tables.py
from django_tables2.utils import A # alias for Accessor
class PeopleTable(tables.Table):
name = tables.LinkColumn("people_detail", args=[A("pk")])
In order to override the text value (i.e. ``<a ... >text</a>``) consider
the following example:
.. code-block:: python
# tables.py
from django_tables2.utils import A # alias for Accessor
class PeopleTable(tables.Table):
name = tables.LinkColumn("people_detail", text="static text", args=[A("pk")])
age = tables.LinkColumn("people_detail", text=lambda record: record.name, args=[A("pk")])
In the first example, a static text would be rendered (``"static text"``)
In the second example, you can specify a callable which accepts a record object (and thus
can return anything from it)
In addition to *attrs* keys supported by `.Column`, the following are
available:
- `a` -- ``<a>`` elements in ``<td>``.
Adding attributes to the ``<a>``-tag looks like this::
class PeopleTable(tables.Table):
first_name = tables.LinkColumn(attrs={
"a": {"style": "color: red;"}
})
"""
def __init__(
self,
viewname=None,
urlconf=None,
args=None,
kwargs=None,
current_app=None,
attrs=None,
**extra
):
super().__init__(
attrs=attrs,
linkify=dict(
viewname=viewname,
urlconf=urlconf,
args=args,
kwargs=kwargs,
current_app=current_app,
),
**extra
)
@library.register
class RelatedLinkColumn(LinkColumn):
"""
Render a link to a related object using related object's ``get_absolute_url``,
same parameters as ``~.LinkColumn``.
.. note ::
This column should not be used anymore, the `linkify` keyword argument to
        regular columns can be used to achieve the same results.
If the related object does not have a method called ``get_absolute_url``,
or if it is not callable, the link will be rendered as '#'.
Traversing relations is also supported, suppose a Person has a foreign key to
Country which in turn has a foreign key to Continent::
class PersonTable(tables.Table):
name = tables.Column()
country = tables.RelatedLinkColumn()
continent = tables.RelatedLinkColumn(accessor="country.continent")
will render:
- in column 'country', link to ``person.country.get_absolute_url()`` with the output of
``str(person.country)`` as ``<a>`` contents.
- in column 'continent', a link to ``person.country.continent.get_absolute_url()`` with
the output of ``str(person.country.continent)`` as ``<a>`` contents.
Alternative contents of ``<a>`` can be supplied using the ``text`` keyword argument as
documented for `~.columns.LinkColumn`.
"""
| [
"[email protected]"
] | |
29412168a35029446a3f8458bcf3b90a9ca5c7bb | 12a5b72982291ac7c074210afc2c9dfe2c389709 | /online_judges/Codeforces/228/E/code.py | df84b84afc6501c45b644b377d5dc32fca5c6d4c | [] | no_license | krantirk/Algorithms-and-code-for-competitive-programming. | 9b8c214758024daa246a1203e8f863fc76cfe847 | dcf29bf976024a9d1873eadc192ed59d25db968d | refs/heads/master | 2020-09-22T08:35:19.352751 | 2019-05-21T11:56:39 | 2019-05-21T11:56:39 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 525 | py | n = int(raw_input())
ciel = 0
jiro = 0
maxValues = []
for x in xrange(n):
l = map(int,raw_input().split())
if (l[0] % 2 == 0):
for i in xrange(0,(l[0])/2):
ciel += l[1 + i]
jiro += l[l[0]/2 + i + 1]
else:
for i in xrange(0,((l[0])/2)):
ciel += l[1 + i]
jiro += l[(l[0])/2 + i + 2]
maxValues.append(l[l[0]/2 + 1])
maxValues.sort()
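# Annotation (added): the middle cards of the odd-sized piles, collected above,
# are handed out alternately from the largest down; the parity flag k below
# makes ciel take first for both odd and even counts.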
k = 0
if(len(maxValues) % 2 == 0):
k = 1
for i in range(len(maxValues)-1,-1,-1):
if i % 2 == k:
ciel += maxValues[i]
else:
jiro += maxValues[i]
print ciel, jiro
| [
"[email protected]"
] | |
a9c8c88180249969cff26aeb304d575a5691efdc | bc441bb06b8948288f110af63feda4e798f30225 | /app_store_sdk/model/resource_manage/filter_strategy_pb2.pyi | c7feb66ed7c223ea9ba231ff3e42ab314f2b82d4 | [
"Apache-2.0"
] | permissive | easyopsapis/easyops-api-python | 23204f8846a332c30f5f3ff627bf220940137b6b | adf6e3bad33fa6266b5fa0a449dd4ac42f8447d0 | refs/heads/master | 2020-06-26T23:38:27.308803 | 2020-06-16T07:25:41 | 2020-06-16T07:25:41 | 199,773,131 | 5 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,421 | pyi | # @generated by generate_proto_mypy_stubs.py. Do not edit!
import sys
from app_store_sdk.model.console.cmdb_query_strategy_pb2 import (
CmdbQueryStrategy as app_store_sdk___model___console___cmdb_query_strategy_pb2___CmdbQueryStrategy,
)
from app_store_sdk.model.resource_manage.filter_condition_group_pb2 import (
FilterConditionGroup as app_store_sdk___model___resource_manage___filter_condition_group_pb2___FilterConditionGroup,
)
from google.protobuf.descriptor import (
Descriptor as google___protobuf___descriptor___Descriptor,
)
from google.protobuf.internal.containers import (
RepeatedCompositeFieldContainer as google___protobuf___internal___containers___RepeatedCompositeFieldContainer,
RepeatedScalarFieldContainer as google___protobuf___internal___containers___RepeatedScalarFieldContainer,
)
from google.protobuf.message import (
Message as google___protobuf___message___Message,
)
from typing import (
Iterable as typing___Iterable,
Optional as typing___Optional,
Text as typing___Text,
Union as typing___Union,
)
from typing_extensions import (
Literal as typing_extensions___Literal,
)
builtin___bool = bool
builtin___bytes = bytes
builtin___float = float
builtin___int = int
if sys.version_info < (3,):
builtin___buffer = buffer
builtin___unicode = unicode
class FilterStrategy(google___protobuf___message___Message):
DESCRIPTOR: google___protobuf___descriptor___Descriptor = ...
instanceId = ... # type: typing___Text
strategyName = ... # type: typing___Text
strategyObjectId = ... # type: typing___Text
crontab = ... # type: typing___Text
ctime = ... # type: typing___Text
mtime = ... # type: typing___Text
creator = ... # type: typing___Text
modifier = ... # type: typing___Text
nextExecTime = ... # type: typing___Text
enable = ... # type: builtin___bool
updateAuthorizers = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]
readAuthorizers = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]
deleteAuthorizers = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]
notifyUsers = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]
notifyMethods = ... # type: google___protobuf___internal___containers___RepeatedScalarFieldContainer[typing___Text]
org = ... # type: builtin___int
@property
def query(self) -> app_store_sdk___model___console___cmdb_query_strategy_pb2___CmdbQueryStrategy: ...
@property
def filter(self) -> google___protobuf___internal___containers___RepeatedCompositeFieldContainer[app_store_sdk___model___resource_manage___filter_condition_group_pb2___FilterConditionGroup]: ...
def __init__(self,
*,
instanceId : typing___Optional[typing___Text] = None,
strategyName : typing___Optional[typing___Text] = None,
strategyObjectId : typing___Optional[typing___Text] = None,
query : typing___Optional[app_store_sdk___model___console___cmdb_query_strategy_pb2___CmdbQueryStrategy] = None,
filter : typing___Optional[typing___Iterable[app_store_sdk___model___resource_manage___filter_condition_group_pb2___FilterConditionGroup]] = None,
crontab : typing___Optional[typing___Text] = None,
ctime : typing___Optional[typing___Text] = None,
mtime : typing___Optional[typing___Text] = None,
creator : typing___Optional[typing___Text] = None,
modifier : typing___Optional[typing___Text] = None,
nextExecTime : typing___Optional[typing___Text] = None,
enable : typing___Optional[builtin___bool] = None,
updateAuthorizers : typing___Optional[typing___Iterable[typing___Text]] = None,
readAuthorizers : typing___Optional[typing___Iterable[typing___Text]] = None,
deleteAuthorizers : typing___Optional[typing___Iterable[typing___Text]] = None,
notifyUsers : typing___Optional[typing___Iterable[typing___Text]] = None,
notifyMethods : typing___Optional[typing___Iterable[typing___Text]] = None,
org : typing___Optional[builtin___int] = None,
) -> None: ...
if sys.version_info >= (3,):
@classmethod
def FromString(cls, s: builtin___bytes) -> FilterStrategy: ...
else:
@classmethod
def FromString(cls, s: typing___Union[builtin___bytes, builtin___buffer, builtin___unicode]) -> FilterStrategy: ...
def MergeFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
def CopyFrom(self, other_msg: google___protobuf___message___Message) -> None: ...
def HasField(self, field_name: typing_extensions___Literal[u"query",b"query"]) -> builtin___bool: ...
def ClearField(self, field_name: typing_extensions___Literal[u"creator",b"creator",u"crontab",b"crontab",u"ctime",b"ctime",u"deleteAuthorizers",b"deleteAuthorizers",u"enable",b"enable",u"filter",b"filter",u"instanceId",b"instanceId",u"modifier",b"modifier",u"mtime",b"mtime",u"nextExecTime",b"nextExecTime",u"notifyMethods",b"notifyMethods",u"notifyUsers",b"notifyUsers",u"org",b"org",u"query",b"query",u"readAuthorizers",b"readAuthorizers",u"strategyName",b"strategyName",u"strategyObjectId",b"strategyObjectId",u"updateAuthorizers",b"updateAuthorizers"]) -> None: ...
| [
"[email protected]"
] | |
6e33bdb52df70a1d3f34b66cbe70b041167f2189 | 3fbfabfaaada7b9b77e8a1df8fed4de444070d49 | /session_10/Employee.py | ab419235cf5bb43b1a9bf7a2b7ac1bb3742016d0 | [
"MIT"
] | permissive | dravate/spark_python_course | df36a561ab2cf8f763dd02655319cd6bf5b7876c | 519389fdb21d78cd6d19e1ad2f7c782bc1449a83 | refs/heads/main | 2023-07-08T06:53:27.635106 | 2021-08-03T14:44:55 | 2021-08-03T14:44:55 | 385,127,461 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 732 | py | class Employee:
'Common base class for all employees'
empCount = 0
def __init__(self, name, salary):
self.name = name
self.salary = salary
Employee.empCount += 1
def displayCount(self):
print ("Total Employee {}".format( Employee.empCount))
def displayEmployee(self):
print ("Name : ", self.name, ", Salary: ", self.salary)
def __str__(self):
return self.name + ' ' + str(self.salary)
"This would create first object of Employee class"
emp1 = Employee("Zara", 2000)
"This would create second object of Employee class"
emp2 = Employee("Manni", 5000)
emp1.displayEmployee()
#emp2.displayEmployee()
print ("Total Employee {}".format( Employee.empCount))
#print (emp2)
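# Expected output (assuming Python 3):
#   Name :  Zara , Salary:  2000
#   Total Employee 2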
| [
"[email protected]"
] | |
67df998e15aaf63b368447b166ddfc3cd29f7411 | 60d0252aabe5d929af8c94cdddd502605e7bafdd | /crawler_novels/www.ck101.org.py | 4946797433701f255fa0933a31a2593df36791fe | [] | no_license | SlovEnt/Web_Craler_Series | ead253b56f99bcd0bac33c2a66d226673c7f68fe | 9f77858826254d6631486f4770760e9e78baea68 | refs/heads/master | 2020-06-09T23:30:38.984620 | 2019-09-13T13:55:02 | 2019-09-13T13:55:02 | 193,528,032 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 5,238 | py | # -*- coding: utf-8 -*-
__author__ = 'SlovEnt'
__date__ = '2019/6/24 22:07'
import time
import os
from bs4 import BeautifulSoup
from collections import OrderedDict
from chs_tools.get_html_page import get_html_all_content, chrome_get_html_all_content
from chs_tools.print_log import C_PrintLog
from chs_tools.param_info import rtn_parainfo
import traceback
plog = C_PrintLog()
PARAINFO = rtn_parainfo()
DOWN_FLODERS = PARAINFO["NOVEL_DOWN_FLODERS"]
ROOT_URL = "https://www.ck101.org"  # site root URL
GENERAL_PATH = ""  # common intermediate path segment, if any
NOVEL_SUB_ID = "198/198015"  # ID of the novel's table-of-contents page
ENCODING = "GBK"  # text encoding of the site's pages
CHAPTER_POST = 1
# Example TOC URL: https://www.ck101.org/198/198015/
if GENERAL_PATH == "":
FULL_URL = "{0}/{1}/".format(ROOT_URL, NOVEL_SUB_ID)
else:
FULL_URL = "{0}/{1}/{2}/index.html".format(ROOT_URL, GENERAL_PATH, NOVEL_SUB_ID)
plog.debug("Novel download index page: {0}".format(FULL_URL))
def rtn_chapter_list_info(html):
soup = BeautifulSoup(html, 'html.parser')
novelName = soup.find_all(name="div", attrs={"class": "infot"})[0].h1.text
# novelName = novelName.split("《")[1]
# novelName = novelName.split("》")[0]
# novelName = "妾本惊华"
    plog.debug("Start downloading: {0}".format(novelName))
chapterListInfoSoup = soup.find_all(name="div" , attrs={"class": "dccss"})
# print(chapterListInfoSoup)
chapterListInfoArr = []
n = 0
for ddItem in chapterListInfoSoup:
# print(ddItem)
n += 1
# if n <= 12:
# continue
chapterListInfoDict = OrderedDict()
chapterListInfoDict2 = OrderedDict()
if "href" not in str(ddItem):
continue
if n < CHAPTER_POST:
continue
chapterListInfoDict["text"] = ddItem.a.text
chapterListInfoDict["href"] = ddItem.a["href"]
chapterListInfoArr.append(chapterListInfoDict)
# chapterListInfoDict2["text"] = "接"
# nextPageUrl = ddItem.a["href"].split(".")
# nextPageUrl = "{0}_2.{1}".format(nextPageUrl[0], nextPageUrl[1])
# chapterListInfoDict2["href"] = nextPageUrl
# chapterListInfoArr.append(chapterListInfoDict2)
plog.tmpinfo(chapterListInfoDict)
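    # returns ([{'text': chapter_title, 'href': relative_chapter_url}, ...], novel_name)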
return chapterListInfoArr, novelName
def rtn_chapter_txt(chapterHtml):
# print("---------------chapterHtml-----------------\n",chapterHtml,"\n\n\n\n")
chapterHtml = chapterHtml.replace("<br />", "\n")
soup = BeautifulSoup(chapterHtml, 'html.parser')
try:
soupSub = soup.find_all(name="div", attrs={"id": "content"})[0]
# soupSubStr = str(soupSub)
# print("---------------soupSubStr-----------------\n",soupSubStr,"\n\n\n\n")
# soupSubStr = "{0}{1}".format(soupSubStr.split("<div")[0],"</article>")
# soupSub = BeautifulSoup(soupSubStr, 'html.parser')
txtContent = soupSub.text
txtContent = txtContent.replace(" ", "")
txtContent = txtContent.replace(" ", "")
txtContent = txtContent.replace("\n\n", "\n")
txtContent = txtContent.replace("\xa0", "")
txtContent = txtContent.replace("page_top();", "")
txtContent = txtContent.replace("\n电脑天使这边走→", "")
txtContent = txtContent.replace("\nWAP天使戳这边→", "")
txtContent = txtContent.replace('\n")>', "")
txtContent = txtContent.replace("\nAPP天使来这边→", "")
txtContent = txtContent + "\n"
# txtContent = txtContent.split("/c/o/m")[1] + "\n"
print(txtContent)
return txtContent
except:
time.sleep(2)
traceback.print_exc()
print("--------------- chapterHtml error -----------------\n", chapterHtml)
return False
def write_txt_content(txtFileName, chapterName, chapterTxt, encoding):
with open(txtFileName, 'a', encoding=encoding) as f:
chapterName = chapterName.replace("www.ggdown.com", "")
chapterName = chapterName.replace(" :", "")
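        # "接" ("continued") marks a continuation page; its title is not re-written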
if chapterName == "接":
pass
else:
f.write(chapterName + "\n")
# print(chapterTxt)
f.write(chapterTxt)
if __name__ == '__main__':
html = get_html_all_content(FULL_URL, "info_right", ENCODING)
    # retrieve the chapter list info
chapterListInfo, novelName = rtn_chapter_list_info(html)
novelFilePath = r"{0}\{1}.txt".format(DOWN_FLODERS, novelName)
if CHAPTER_POST == 1:
if (os.path.exists(novelFilePath)):
os.remove(novelFilePath)
n = 0
for chapterInfo in chapterListInfo:
n += 1
chapterUrl = "{0}{1}".format(ROOT_URL, chapterInfo["href"])
        plog.debug("{3}/{4} URL: {0}, chapter title: {2}, file path: {1}".format(chapterUrl, novelFilePath, chapterInfo["text"], n, len(chapterListInfo)))
chapterHtml = get_html_all_content(chapterUrl, "content", ENCODING)
chapterTxt = rtn_chapter_txt(chapterHtml)
# print(str(chapterHtml))
if chapterTxt is not False:
write_txt_content(novelFilePath, chapterInfo["text"], chapterTxt, ENCODING)
else:
            plog.error("Failed to fetch chapter content!")
| [
"[email protected]"
] | |
fe9e7ae36613eddbc9b67fab34ddea53b95cc7bc | baf3b9e0f80a545ba1e087d54e4de7a9fe10f279 | /滑动窗口/209_长度最小的子数组.py | 6f00de24476ca4c1e8c21ef53268ba5989b0febb | [] | no_license | llin0420/Leetcode_in_python3 | 5e0755c9b6bb5fe92fd1e0bd7d563e8a3a20f88a | 41684ff99b2f881304b08c3c753b0df48e0a6b40 | refs/heads/master | 2022-12-15T16:23:39.991920 | 2020-09-09T08:05:55 | 2020-09-09T08:05:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,402 | py | # -*- coding: utf-8 -*-
"""
Created on Sun Jul 12 13:17:24 2020
@author: leiya
"""
from typing import List  # needed outside LeetCode, where List is preinjected

'''
2020-07-12
This problem is slightly unusual among sliding-window problems.
As a rule of thumb, the answer update should stay outside the inner loop --
i.e., avoid updating the answer while the window is shrinking -- to guard
against edge cases. Here, however, the update must go inside the inner loop,
because only there does the exact value we need exist: once the inner loop is
left, the window sum is < s again and no longer qualifies for an update.
Compare with problem 904 (Fruit Into Baskets): there, entering the inner loop
means the window has become invalid; here, entering it means the window is
valid, and we keep shrinking to look for an even shorter valid window. Since
`end` always advances by default, each problem's nature dictates where the
update goes, though we should still keep it outside the while loop whenever
possible. In this problem nothing is lost by updating inside -- a solution can
only appear after entering the loop -- and if the inner loop is never entered
at all, the `if min_len == float('inf')` check below handles that case.
2020-07-18: one more point worth making explicit -- this window never needs to
"shrink back for re-comparison" (i.e., when `start` advances, pulling `end`
back to compare that smaller window). `end` only moved forward because
[start:end] already had sum < s; when `start` advances, [start+1:end] is a
subset of that window, and if the larger window's sum was < s, so is any
subset's. Re-shrinking is therefore meaningless here and needs no comparison.
This looks like a trivial coding detail, but it must be settled before writing
the code -- it is the essential reason the solution is written this way.
'''
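# Worked example: s = 7, nums = [2, 3, 1, 2, 4, 3] -> 2, because [4, 3] is the
# shortest contiguous subarray with sum >= 7.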
class Solution:
def minSubArrayLen(self, s: int, nums: List[int]) -> int:
        # The window may shrink while sum_ >= s -- that is exactly the while
        # condition we need. Because the window length is variable, we use a
        # while loop: a classic variable-length sliding window.
start = 0
min_len = float('inf')
sum_ = 0
for end in range(len(nums)):
sum_ += nums[end]
while sum_ >= s:
min_len = min(min_len,end-start+1)
sum_ -= nums[start]
start += 1
if min_len == float('inf'):
return 0
else:
return min_len | [
"[email protected]"
] |