blob_id (string) | directory_id (string) | path (string) | content_id (string) | detected_licenses (sequence) | license_type (2 classes) | repo_name (string) | snapshot_id (string) | revision_id (string) | branch_name (777 classes) | visit_date (timestamp[us]) | revision_date (timestamp[us]) | committer_date (timestamp[us]) | github_id (int64, nullable) | star_events_count (int64) | fork_events_count (int64) | gha_license_id (22 classes, nullable) | gha_event_created_at (timestamp[us], nullable) | gha_created_at (timestamp[us], nullable) | gha_language (149 classes) | src_encoding (26 classes) | language (1 class) | is_vendor (bool) | is_generated (bool) | length_bytes (int64) | extension (188 classes) | content (string) | authors (sequence) | author_id (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
b5bfc185e3c0e76fb33a254d444155ab0931f2c8 | f723b36a64d7c5ccd2a4937d02f05279fc9e907c | /calls/urls.py | 48317b35fa4b2d6bacf2ee72c3c3734774b5c08e | [] | no_license | DmitrySham/grand-django-site | 92259098d209954ee5f5c994989f6c1f7c9826f4 | e65988c441e9fb37fd15126d28301c47643b501d | refs/heads/master | 2023-01-22T08:37:08.921212 | 2023-01-13T15:05:30 | 2023-01-13T15:05:30 | 184,014,992 | 0 | 0 | null | 2022-12-04T20:45:03 | 2019-04-29T06:44:37 | JavaScript | UTF-8 | Python | false | false | 145 | py | from django.urls import path
from calls import views
urlpatterns = [
path('ajax/call/request/', views.call_request, name='calls_request')
]
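
# Illustrative note (not part of the original file): because the route above
# is named, it can be resolved with Django's reverse(); the path shown is
# inferred from the pattern registered here (any project-level prefix would
# be prepended).
#
#   from django.urls import reverse
#   reverse('calls_request')  # -> '/ajax/call/request/'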
| [
"[email protected]"
] | |
86b082d38e2f308f0a9eb3f9b74eb82523828273 | b478d1e63cce432b6fd3692c0aa7a84f411ae9dc | /meta_py3/main.py | b2fcdb9da12e44315b927e032eb6c0442104b5d4 | [] | no_license | yiqing95/py_study | 8d414aa00b4ac31070fe5667a98815980eee46d0 | 6ce6b46ad729a795bc9253d6339169e62ef47766 | refs/heads/master | 2016-09-06T17:45:26.081269 | 2015-01-12T15:22:29 | 2015-01-12T15:22:29 | 20,810,777 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 198 | py | from meta_py3 import example2
__author__ = 'yiqing'
from meta_py3.example import *
from meta_py3.helper import printHr
p = Point(3,4)
print(p.x)
printHr()
obj = example2.MyClass(3)
print(obj.x)
| [
"[email protected]"
] | |
17e37b200e4daabdb7bde731b5f7ece860ff30f5 | 9f440599da392a55d7d5b2b7ce571bc3f2dc881e | /rhea/cores/usbext/fpgalink/__init__.py | 40502351eaf29688fab9e182e67fd1cd214d5167 | [
"MIT"
] | permissive | zignig/rhea | 713559f688f85e1304ab43c2b871553da3bf01ae | e0d04ff4fcbd57dfeb6f84fa8f87d6b03caee590 | refs/heads/master | 2020-04-06T06:53:33.541215 | 2016-03-15T12:45:23 | 2016-03-15T12:45:23 | 53,943,632 | 1 | 0 | null | 2016-03-15T12:42:06 | 2016-03-15T12:42:06 | null | UTF-8 | Python | false | false | 196 | py |
from __future__ import absolute_import
from . import _fpgalink_fx2 as fpgalink
from ._fpgalink_fx2 import get_interfaces
from ._fpgalink_fx2 import fpgalink_fx2
from ._fl_convert import convert
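
# Illustrative usage sketch (not part of the original file): the re-exports
# above let callers pull the FX2 helpers from the package root, e.g.
#
#   from rhea.cores.usbext import fpgalink
#   interfaces = fpgalink.get_interfaces()  # call signature assumed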
| [
"[email protected]"
] | |
64abbd79020cfe186e38c100a66432f254b6f63c | 835e428d1cbe87adf945897ff75f77e93b500d12 | /demonstrations/tutorial_qnn_module_torch.py | b8b5b8ea0148840cf4f468e8203d1730eb4e4f74 | [
"BSD-3-Clause",
"Apache-2.0"
] | permissive | quantshah/qml | 9acb3c932610e30a28369fe72ee49683ac301219 | 45533ef6f6d7b9cfa0384302fe52b5ead772b923 | refs/heads/master | 2022-11-30T08:26:12.972709 | 2022-11-18T19:59:59 | 2022-11-18T19:59:59 | 218,805,085 | 0 | 0 | Apache-2.0 | 2019-10-31T16:02:07 | 2019-10-31T16:02:06 | null | UTF-8 | Python | false | false | 11,188 | py | """
Turning quantum nodes into Torch Layers
=======================================
.. meta::
:property="og:description": Learn how to create hybrid ML models in PennyLane using Torch
:property="og:image": https://pennylane.ai/qml/_images/PyTorch_icon.png
.. related::
tutorial_qnn_module_tf Turning quantum nodes into Keras Layers
*Author: Tom Bromley — Posted: 02 November 2020. Last updated: 28 January 2021.*
Creating neural networks in `PyTorch <https://pytorch.org/>`__ is easy using the
`nn module <https://pytorch.org/docs/stable/nn.html>`__. Models are constructed from elementary
*layers* and can be trained using the PyTorch API. For example, the following code defines a
two-layer network that could be used for binary classification:
"""
import torch
layer_1 = torch.nn.Linear(2, 2)
layer_2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
layers = [layer_1, layer_2, softmax]
model = torch.nn.Sequential(*layers)
###############################################################################
# **What if we want to add a quantum layer to our model?** This is possible in PennyLane:
# :doc:`QNodes <../glossary/hybrid_computation>` can be converted into ``torch.nn`` layers and
# combined with the wide range of built-in classical
# `layers <https://pytorch.org/docs/stable/nn.html>`__ to create truly hybrid
# models. This tutorial will guide you through a simple example to show you how it's done!
#
# .. note::
#
# A similar demo explaining how to
# :doc:`turn quantum nodes into Keras layers <tutorial_qnn_module_tf>`
# is also available.
#
# Fixing the dataset and problem
# ------------------------------
#
# Let us begin by choosing a simple dataset and problem to allow us to focus on how the hybrid
# model is constructed. Our objective is to classify points generated from scikit-learn's
# binary-class
# `make_moons() <https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html>`__ dataset:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_moons
# Set random seeds
torch.manual_seed(42)
np.random.seed(42)
X, y = make_moons(n_samples=200, noise=0.1)
y_ = torch.unsqueeze(torch.tensor(y), 1) # used for one-hot encoded labels
y_hot = torch.scatter(torch.zeros((200, 2)), 1, y_, 1)
c = ["#1f77b4" if y_ == 0 else "#ff7f0e" for y_ in y] # colours for each class
plt.axis("off")
plt.scatter(X[:, 0], X[:, 1], c=c)
plt.show()
###############################################################################
# Defining a QNode
# ----------------
#
# Our next step is to define the QNode that we want to interface with ``torch.nn``. Any
# combination of device, operations and measurements that is valid in PennyLane can be used to
# compose the QNode. However, the QNode arguments must satisfy additional :doc:`conditions
# <code/api/pennylane.qnn.TorchLayer>` including having an argument called ``inputs``. All other
# arguments must be arrays or tensors and are treated as trainable weights in the model. We fix a
# two-qubit QNode using the
# :doc:`default.qubit <code/api/pennylane.devices.default_qubit.DefaultQubit>` simulator and
# operations from the :doc:`templates <introduction/templates>` module.
import pennylane as qml
n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)
@qml.qnode(dev)
def qnode(inputs, weights):
qml.AngleEmbedding(inputs, wires=range(n_qubits))
qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
return [qml.expval(qml.PauliZ(wires=i)) for i in range(n_qubits)]
###############################################################################
# Interfacing with Torch
# ----------------------
#
# With the QNode defined, we are ready to interface with ``torch.nn``. This is achieved using the
# :class:`~pennylane.qnn.TorchLayer` class of the :mod:`~pennylane.qnn` module, which converts the
# QNode to the elementary building block of ``torch.nn``: a *layer*. We shall see in the
# following how the resultant layer can be combined with other well-known neural network layers
# to form a hybrid model.
#
# We must first define the ``weight_shapes`` dictionary. Recall that all of
# the arguments of the QNode (except the one named ``inputs``) are treated as trainable
# weights. For the QNode to be successfully converted to a layer in ``torch.nn``, we need to provide
# the details of the shape of each trainable weight for them to be initialized. The
# ``weight_shapes`` dictionary maps from the argument names of the QNode to corresponding shapes:
n_layers = 6
weight_shapes = {"weights": (n_layers, n_qubits)}
###############################################################################
# In our example, the ``weights`` argument of the QNode is trainable and has shape given by
# ``(n_layers, n_qubits)``, which is passed to
# :func:`~pennylane.templates.layers.BasicEntanglerLayers`.
#
# Now that ``weight_shapes`` is defined, it is easy to then convert the QNode:
qlayer = qml.qnn.TorchLayer(qnode, weight_shapes)
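
###############################################################################
# As an illustrative aside (not part of the original demo), the converted
# layer can be called like any other ``torch.nn`` module; the shapes below
# assume the 2-qubit QNode defined above:
#
# .. code-block:: python
#
#     out = qlayer(torch.rand(5, n_qubits))  # hypothetical batch of 5 inputs
#     print(out.shape)                       # expected: torch.Size([5, 2])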
###############################################################################
# With this done, the QNode can now be treated just like any other ``torch.nn`` layer and we can
# proceed using the familiar Torch workflow.
#
# Creating a hybrid model
# -----------------------
#
# Let's create a basic three-layered hybrid model consisting of:
#
# 1. a 2-neuron fully connected classical layer
# 2. our 2-qubit QNode converted into a layer
# 3. another 2-neuron fully connected classical layer
# 4. a softmax activation to convert to a probability vector
#
# A diagram of the model can be seen in the figure below.
#
# .. figure:: /demonstrations/qnn_module/qnn_torch.png
# :width: 100%
# :align: center
#
# We can construct the model using the
# `Sequential <https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html>`__ API:
clayer_1 = torch.nn.Linear(2, 2)
clayer_2 = torch.nn.Linear(2, 2)
softmax = torch.nn.Softmax(dim=1)
layers = [clayer_1, qlayer, clayer_2, softmax]
model = torch.nn.Sequential(*layers)
###############################################################################
# Training the model
# ------------------
#
# We can now train our hybrid model on the classification dataset using the usual Torch
# approach. We'll use the
# standard `SGD <https://pytorch.org/docs/stable/optim.html#torch.optim.SGD>`__ optimizer
# and the mean absolute error loss function:
opt = torch.optim.SGD(model.parameters(), lr=0.2)
loss = torch.nn.L1Loss()
###############################################################################
# Note that there are more advanced combinations of optimizer and loss function, but here we are
# focusing on the basics.
#
# The model is now ready to be trained!
X = torch.tensor(X, requires_grad=True).float()
y_hot = y_hot.float()
batch_size = 5
batches = 200 // batch_size
data_loader = torch.utils.data.DataLoader(
list(zip(X, y_hot)), batch_size=5, shuffle=True, drop_last=True
)
epochs = 6
for epoch in range(epochs):
running_loss = 0
for xs, ys in data_loader:
opt.zero_grad()
loss_evaluated = loss(model(xs), ys)
loss_evaluated.backward()
opt.step()
running_loss += loss_evaluated
avg_loss = running_loss / batches
print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))
y_pred = model(X)
predictions = torch.argmax(y_pred, axis=1).detach().numpy()
correct = [1 if p == p_true else 0 for p, p_true in zip(predictions, y)]
accuracy = sum(correct) / len(correct)
print(f"Accuracy: {accuracy * 100}%")
###############################################################################
# How did we do? The model looks to have successfully trained and the accuracy is reasonably
# high. In practice, we would aim to push the accuracy higher by thinking carefully about the
# model design and the choice of hyperparameters such as the learning rate.
#
# Creating non-sequential models
# ------------------------------
#
# The model we created above was composed of a sequence of classical and quantum layers. This
# type of model is very common and is suitable in a lot of situations. However, in some cases we
# may want a greater degree of control over how the model is constructed, for example when we
# have multiple inputs and outputs or when we want to distribute the output of one layer into
# multiple subsequent layers.
#
# Suppose we want to make a hybrid model consisting of:
#
# 1. a 4-neuron fully connected classical layer
# 2. a 2-qubit quantum layer connected to the first two neurons of the previous classical layer
# 3. a 2-qubit quantum layer connected to the second two neurons of the previous classical layer
# 4. a 2-neuron fully connected classical layer which takes a 4-dimensional input from the
# combination of the previous quantum layers
# 5. a softmax activation to convert to a probability vector
#
# A diagram of the model can be seen in the figure below.
#
# .. figure:: /demonstrations/qnn_module/qnn2_torch.png
# :width: 100%
# :align: center
#
# This model can also be constructed by creating a new class that inherits from the
# ``torch.nn`` `Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__ and
# overriding the ``forward()`` method:
class HybridModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.clayer_1 = torch.nn.Linear(2, 4)
self.qlayer_1 = qml.qnn.TorchLayer(qnode, weight_shapes)
self.qlayer_2 = qml.qnn.TorchLayer(qnode, weight_shapes)
self.clayer_2 = torch.nn.Linear(4, 2)
self.softmax = torch.nn.Softmax(dim=1)
def forward(self, x):
x = self.clayer_1(x)
x_1, x_2 = torch.split(x, 2, dim=1)
x_1 = self.qlayer_1(x_1)
x_2 = self.qlayer_2(x_2)
x = torch.cat([x_1, x_2], axis=1)
x = self.clayer_2(x)
return self.softmax(x)
model = HybridModel()
###############################################################################
# As a final step, let's train the model to check if it's working:
opt = torch.optim.SGD(model.parameters(), lr=0.2)
epochs = 6
for epoch in range(epochs):
running_loss = 0
for xs, ys in data_loader:
opt.zero_grad()
loss_evaluated = loss(model(xs), ys)
loss_evaluated.backward()
opt.step()
running_loss += loss_evaluated
avg_loss = running_loss / batches
print("Average loss over epoch {}: {:.4f}".format(epoch + 1, avg_loss))
y_pred = model(X)
predictions = torch.argmax(y_pred, axis=1).detach().numpy()
correct = [1 if p == p_true else 0 for p, p_true in zip(predictions, y)]
accuracy = sum(correct) / len(correct)
print(f"Accuracy: {accuracy * 100}%")
###############################################################################
# Great! We've mastered the basics of constructing hybrid classical-quantum models using
# PennyLane and Torch. Can you think of any interesting hybrid models to construct? How do they
# perform on realistic datasets?
##############################################################################
# About the author
# ----------------
# .. include:: ../_static/authors/tom_bromley.txt | [
"[email protected]"
] | |
92103249322b421545629318572a095a6464b746 | 46bd3e3ba590785cbffed5f044e69f1f9bafbce5 | /env/lib/python3.8/site-packages/supervisor/tests/test_dispatchers.py | 3f88376a16df1a07247d1fe031d2147a0cb4d10c | [] | no_license | adamkluk/casper-getstarted | a6a6263f1547354de0e49ba2f1d57049a5fdec2b | 01e846621b33f54ed3ec9b369e9de3872a97780d | refs/heads/master | 2023-08-13T11:04:05.778228 | 2021-09-19T22:56:59 | 2021-09-19T22:56:59 | 408,036,193 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 130 | py | version https://git-lfs.github.com/spec/v1
oid sha256:b2039ef9d32ffde70df065c6a333cb150fa31e79786df3f98287dc41938ad1e1
size 53720
| [
"[email protected]"
] | |
7401b94189214c99484961a6a267429cd5e290fb | 19f27f432b968521c7bee497a96f2b01963da293 | /manage.py | 0ff8346ecebe236c0d31d614ad2ceeab700db026 | [] | no_license | ethanlee6/myw | eae3eb751f4b06e06ce1dd2a21adf9272f1bf72f | 74c60ebea5519c18d7495c2ee8064b4a576b9b89 | refs/heads/master | 2021-01-24T18:39:43.481407 | 2017-03-15T12:15:01 | 2017-03-15T12:15:01 | 84,459,667 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 775 | py | import os
from flask.ext.script import Manager, Server
from flask.ext.script.commands import ShowUrls
from flask.ext.migrate import Migrate, MigrateCommand
from webapp import create_app
from webapp.models import db, User, Post, Tag, Comment
# default to dev config
env = os.environ.get('WEBAPP_ENV', 'dev')
app = create_app('webapp.config.%sConfig' % env.capitalize())
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command("server", Server())
#manager.add_command("show-urls", ShowUrls())
manager.add_command('db', MigrateCommand)
@manager.shell
def make_shell_context():
return dict(
app=app,
db=db,
User=User,
Post=Post,
Tag=Tag,
Comment=Comment
)
if __name__ == "__main__":
manager.run()
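
# Illustrative invocations (not in the original file), based on the commands
# registered above; "shell" comes from the @manager.shell decorator:
#   python manage.py server         # run the development server
#   python manage.py db migrate     # Flask-Migrate commands
#   python manage.py shell          # REPL with names from make_shell_context()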
| [
"[email protected]"
] | |
bb4411845beac8ed6a855d3894786bb21f41fa05 | 5179b07b8d1a31df18612ce55d35c56b851cead8 | /tools/train.py | b0290aace7813a3edf21acd4895698b235e05300 | [
"Apache-2.0"
] | permissive | hamidehkerdegari/VFS | 3e9c427c4a8ae0a6b66a3a1378bac5c6f9daaf51 | 8e055cc191578706f05b7484facf44be6fb1525a | refs/heads/master | 2023-08-24T09:40:46.678233 | 2021-09-26T18:24:38 | 2021-09-26T18:24:38 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 6,658 | py | import argparse
import copy
import os
import os.path as osp
import time
import warnings
import mmcv
import torch
from mmcv import Config, DictAction
from mmcv.runner import init_dist, set_random_seed
from mmaction import __version__
from mmaction.apis import train_model
from mmaction.datasets import build_dataset
from mmaction.models import build_model
from mmaction.utils import collect_env, get_root_logger
def parse_args():
parser = argparse.ArgumentParser(description='Train a recognizer')
parser.add_argument('config', help='train config file path')
parser.add_argument(
'--load-from', help='the checkpoint file to load weights from')
parser.add_argument('--work-dir', help='the dir to save logs and models')
parser.add_argument(
'--resume-from', help='the checkpoint file to resume from')
parser.add_argument(
'--auto-resume',
action='store_true',
help='automatically resume training')
group_gpus = parser.add_mutually_exclusive_group()
group_gpus.add_argument(
'--gpus',
type=int,
help='number of gpus to use '
'(only applicable to non-distributed training)')
group_gpus.add_argument(
'--gpu-ids',
type=int,
nargs='+',
help='ids of gpus to use '
'(only applicable to non-distributed training)')
parser.add_argument('--seed', type=int, default=None, help='random seed')
parser.add_argument(
'--deterministic',
action='store_true',
help='whether to set deterministic options for CUDNN backend.')
parser.add_argument(
'--options', nargs='+', action=DictAction, help='custom options')
parser.add_argument(
'--launcher',
choices=['none', 'pytorch', 'slurm', 'mpi'],
default='none',
help='job launcher')
parser.add_argument('--local_rank', type=int, default=0)
parser.add_argument('--suffix', type=str, help='work_dir suffix')
parser.add_argument(
'--disable-wandb', action='store_true', help='disable wandb')
args = parser.parse_args()
if 'LOCAL_RANK' not in os.environ:
os.environ['LOCAL_RANK'] = str(args.local_rank)
return args
def main():
args = parse_args()
cfg = Config.fromfile(args.config)
if args.options is not None:
cfg.merge_from_dict(args.options)
# set cudnn_benchmark
if cfg.get('cudnn_benchmark', False):
print('cudnn_benchmark=True')
torch.backends.cudnn.benchmark = True
# work_dir is determined in this priority:
# CLI > config file > default (base filename)
if args.work_dir is not None:
# update configs according to CLI args if args.work_dir is not None
cfg.work_dir = args.work_dir
elif cfg.get('work_dir', None) is None:
# use config filename as default work_dir if cfg.work_dir is None
cfg.work_dir = osp.join('./work_dirs',
osp.splitext(osp.basename(args.config))[0])
if args.suffix is not None:
cfg.work_dir = f'{cfg.work_dir}-{args.suffix}'
for i, h in enumerate(cfg.log_config.hooks):
if h.type == 'WandbLoggerHook':
if args.disable_wandb:
cfg.log_config.hooks.pop(i)
break
if args.suffix is not None:
wandb_dir = cfg.log_config.hooks[i].init_kwargs.dir
cfg.log_config.hooks[i].init_kwargs.dir = f'{wandb_dir}-' \
f'{args.suffix}'
mmcv.mkdir_or_exist(cfg.log_config.hooks[i].init_kwargs.dir)
if args.load_from is not None:
cfg.load_from = args.load_from
if args.resume_from is not None:
cfg.resume_from = args.resume_from
elif args.auto_resume:
if osp.exists(osp.join(cfg.work_dir, 'latest.pth')):
cfg.resume_from = osp.join(cfg.work_dir, 'latest.pth')
if args.gpu_ids is not None:
cfg.gpu_ids = args.gpu_ids
else:
cfg.gpu_ids = range(1) if args.gpus is None else range(args.gpus)
# init distributed env first, since logger depends on the dist info.
if args.launcher == 'none':
distributed = False
else:
distributed = True
init_dist(args.launcher, **cfg.dist_params)
# create work_dir
mmcv.mkdir_or_exist(osp.abspath(cfg.work_dir))
# dump config
cfg.dump(osp.join(cfg.work_dir, osp.basename(args.config)))
# init logger before other steps
timestamp = time.strftime('%Y%m%d_%H%M%S', time.localtime())
log_file = osp.join(cfg.work_dir, f'{timestamp}.log')
logger = get_root_logger(log_file=log_file, log_level=cfg.log_level)
# init the meta dict to record some important information such as
# environment info and seed, which will be logged
meta = dict()
# log env info
env_info_dict = collect_env()
env_info = '\n'.join([f'{k}: {v}' for k, v in env_info_dict.items()])
dash_line = '-' * 60 + '\n'
logger.info('Environment info:\n' + dash_line + env_info + '\n' +
dash_line)
meta['env_info'] = env_info
# log some basic info
logger.info(f'Distributed training: {distributed}')
logger.info(f'Config: {cfg.text}')
logger.info(f'Config.pretty_text: {cfg.pretty_text}')
# set random seeds
if args.seed is not None:
logger.info('Set random seed to {}, deterministic: {}'.format(
args.seed, args.deterministic))
set_random_seed(args.seed, deterministic=args.deterministic)
cfg.seed = args.seed
meta['seed'] = args.seed
meta['exp_name'] = osp.basename(args.config)
model = build_model(
cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
logger.info(f'Model: {str(model)}')
datasets = [build_dataset(cfg.data.train)]
if len(cfg.workflow) == 2:
        if getattr(args, 'validate', False):  # parse_args() defines no --validate flag, so guard the lookup
warnings.warn('val workflow is duplicated with `--validate`, '
'it is recommended to use `--validate`. see '
'https://github.com/open-mmlab/mmaction2/pull/123')
val_dataset = copy.deepcopy(cfg.data.val)
datasets.append(build_dataset(val_dataset))
if cfg.checkpoint_config is not None:
# save mmaction version, config file content and class names in
# checkpoints as meta data
cfg.checkpoint_config.meta = dict(
mmaction_version=__version__, config=cfg.text)
train_model(
model,
datasets,
cfg,
distributed=distributed,
validate=False,
timestamp=timestamp,
meta=meta)
if __name__ == '__main__':
main()
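
# Illustrative CLI sketch (not part of the original file); the config path and
# work dir below are placeholders:
#   python tools/train.py configs/some_config.py --work-dir work_dirs/demo \
#       --seed 0 --deterministic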
| [
"[email protected]"
] | |
8a73c785a44ece6263c3e40dfde840832bed6655 | 65c03709b91ce8f006641b30d481b4fda651520e | /Coding/3_indexing_slicing.py | a52c46b665b5ac657b828965eb9a307d71a3bd84 | [] | no_license | ahad-emu/python-code | 332121ad289b169ca8099c88bde13d7121be1030 | 135805c78de38eaf1bd5500b44625b36b7b653c0 | refs/heads/master | 2020-09-09T01:01:41.313964 | 2020-07-04T16:31:37 | 2020-07-04T16:31:37 | 221,296,928 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 502 | py | #indexing....
my_string = "hello World"
print(my_string)
print(my_string[0]) #index zero
print(my_string[7]) #index seven
print(my_string[8]) #index eight
print(my_string[-1]) #last index
print(my_string[-2]) #second last index
#slicing....
my_string = "ABCDEFGHIJKL"
print(my_string)
print(my_string[2:]) #index two to last
print(my_string[:3]) #index zero to two
print(my_string[2:6]) #index 2 to 5
print(my_string[::2]) #one step jump
print(my_string[::-1]) #reverse
| [
"[email protected]"
] | |
d1194035877ccf46cd000542fa0cb83f128378d8 | 163bbb4e0920dedd5941e3edfb2d8706ba75627d | /Code/CodeRecords/2847/60900/255175.py | 9f7f8927fa27a8621f5be9e8716e364de835126c | [] | no_license | AdamZhouSE/pythonHomework | a25c120b03a158d60aaa9fdc5fb203b1bb377a19 | ffc5606817a666aa6241cfab27364326f5c066ff | refs/heads/master | 2022-11-24T08:05:22.122011 | 2020-07-28T16:21:24 | 2020-07-28T16:21:24 | 259,576,640 | 2 | 1 | null | null | null | null | UTF-8 | Python | false | false | 192 | py | n = input()
str1 = input()
nums = str1.split(" ")
str2 = input()
nums2 = str2.split(" ")
count = 0
for i in range(int(nums2[0]),int(nums2[1])):
count = count + int(nums[i-1])
print(count) | [
"[email protected]"
] | |
fbf24e42c6d7e8f22c1daee7c96ee466bdb31af8 | 7dc05dc9ba548cc97ebe96ed1f0dab8dfe8d8b81 | /branches/0.4/pida/core/application.py | 94dcd60a715f1aa4cab7fa59b29e7d1b46b9eb49 | [] | no_license | BackupTheBerlios/pida-svn | b68da6689fa482a42f5dee93e2bcffb167a83b83 | 739147ed21a23cab23c2bba98f1c54108f8c2516 | refs/heads/master | 2020-05-31T17:28:47.927074 | 2006-05-18T21:42:32 | 2006-05-18T21:42:32 | 40,817,392 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 10,003 | py | # -*- coding: utf-8 -*-
# vim:set shiftwidth=4 tabstop=4 expandtab textwidth=79:
#Copyright (c) 2005 Ali Afshar [email protected]
#Permission is hereby granted, free of charge, to any person obtaining a copy
#of this software and associated documentation files (the "Software"), to deal
#in the Software without restriction, including without limitation the rights
#to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
#copies of the Software, and to permit persons to whom the Software is
#furnished to do so, subject to the following conditions:
#The above copyright notice and this permission notice shall be included in
#all copies or substantial portions of the Software.
#THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
#IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
#FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
#AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
#LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
#OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
#SOFTWARE.
# system import(s)
import os
import sys
import optparse
import warnings
def die(message):
"""Die in a command line way."""
print message
print 'Exiting. (this is fatal)'
sys.exit(1)
# First gtk import, let's check it
try:
import gtk
major, minor, rev = gtk.pygtk_version
if major < 2 or minor < 6:
die('PIDA requires PyGTK >= 2.6. It only found %s.%s'
% (major, minor))
except ImportError:
die('PIDA requires Python GTK bindings. They were not found.')
# the threads evilness
gtk.threads_init()
def die_gui(message):
"""Die in a GUI way."""
mu = ('<b>There was an error starting PIDA</b>\n\n'
'%s\n\n<i>This is fatal</i>' % message)
dlg = gtk.MessageDialog(parent=None,
flags=0,
type=gtk.MESSAGE_ERROR,
buttons=gtk.BUTTONS_CLOSE)
dlg.set_markup(mu)
dlg.run()
die(message)
# Python 2.4
major, minor = sys.version_info[:2]
if major < 2 or minor < 4:
die_gui('Python 2.4 is required to run PIDA. Only %s.%s was found' %
(major, minor))
# Setuptools is needed to run PIDA
try:
import setuptools
import pkg_resources
pkg_resources.require('pida')
except ImportError:
raise
die_gui('PIDA requires setuptools to be installed.')
# This can test if PIDA is installed
try:
import pida.core.boss as boss
import pida.pidagtk.debugwindow as debugwindow
except ImportError:
die_gui('PIDA could not import itself.')
# Start lock threads here because an exception may be raised
# and the dialog would be frozen
gtk.threads_enter()
# Now we can use a gui exception hook
old_excepthook = sys.excepthook
sys.excepthook = debugwindow.show
def get_version():
from pkg_resources import resource_string
try:
version_file = resource_string('pida', 'data/version')
except:
version_file = 'unversioned'
return version_file
pida_version = get_version()
class environment(object):
"""Handle environment variable and command line arguments"""
def __init__(self):
self.__editorname = None
self.__parseargs()
def __parseargs(self):
home_dir_option = None
default_home = os.path.join(os.path.expanduser('~'), '.pida')
if default_home == os.path.join('~', '.pida'):
# When on win32
from win32com.shell import shell, shellcon
default_home = shell.SHGetSpecialFolderLocation(
0,
shellcon.CSIDL_APPDATA
)
default_home = shell.SHGetPathFromIDList(default_home)
default_home = os.path.join(default_home, "Pida")
del shell
del shellcon
op = optparse.OptionParser()
op.add_option('-d', '--home-directory', type='string', nargs=1,
action='store',
help=('The location of the pida home directory. '
'If this directory does not exist, it will be created. '
'Default: %s' % default_home),
default=default_home)
op.add_option('-o', '--option', type='string', nargs=1,
action='append',
help=('Set an option. Options should be in the form: '
'servicename/group/name=value. '
'For example (without quotes): '
'"pida -o editormanager/general/editor_type=Vim". '
'More than one option can be set by repeated use of -o.'))
op.add_option('-v', '--version', action='store_true',
help='Print version information and exit.')
op.add_option('-D', '--debug', action='store_true',
help=('Run PIDA with added debug information. '
'This merely sets the environment variables: '
'PIDA_DEBUG=1 and PIDA_LOG_STDERR=1, '
'and so the same effect may be achieved by setting them.'))
op.add_option('-r', '--remote', action='store_true',
help=('Run PIDA remotely to open a file in an existing instance '
'of PIDA. Usage pida -r <filename>.'))
op.add_option('-F', '--first-run-wizard', action='store_true',
help='Run the PIDA first time wizard')
op.add_option('-t', '--testing-mode', action='store_true',
                  help='Run the PIDA self test')
opts, args = op.parse_args()
envhome = self.__parseenv()
if envhome is not None:
home_dir_option = envhome
else:
home_dir_option = opts.home_directory
self.__home_dir = home_dir_option
self.__create_home_tree(self.__home_dir)
self.__args = args
self.opts = opts
def __parseenv(self):
if 'PIDA_HOME' in os.environ:
return os.environ['PIDA_HOME']
def __create_home_tree(self, root):
dirs = {}
self.__mkdir(root)
for name in ['conf', 'log', 'run', 'vcs', 'sockets', 'data',
'projects', 'library']:
path = os.path.join(root, name)
self.__mkdir(path)
dirs[name] = path
return dirs
def __mkdir(self, path):
if not os.path.exists(path):
os.mkdir(path)
def get_positional_args(self):
return self.__args
positional_args = property(get_positional_args)
def get_home_dir(self):
return self.__home_dir
home_dir = property(get_home_dir)
def get_version(self):
return pida_version
version = property(get_version)
def override_configuration_system(self, services):
if self.__editorname:
svc = services.get('editormanager')
svc.set_option('general', 'type', self.__editorname)
#svc.options.save()
if not self.opts.option:
return
for opt in self.opts.option:
if '=' in opt:
name, value = opt.split('=', 1)
if '/' in name:
parts = name.split('/', 3)
if len(parts) == 3:
service, group, option = parts
try:
svc = services.get(service)
svc.options.get(group).get(option).load(value)
except:
pass
def override_editor_option(self, editorname):
self.__editorname = editorname
class application(object):
"""The pIDA Application."""
def __init__(self,
bosstype=boss.boss,
mainloop=gtk.main,
mainstop=gtk.main_quit,
environment=environment()):
self.__mainloop = mainloop
self.__mainstop = mainstop
self.__env = environment
self.__boss = bosstype(application=self, env=self.__env)
self.boss = self.__boss
self.env = self.__env
def start(self):
"""Start PIDA."""
self.__boss.start()
self.__mainloop()
def stop(self):
"""Stop PIDA."""
self.__mainstop()
def run_pida(env, bosstype, mainloop, mainstop):
if run_firstrun(env):
app = application(bosstype, mainloop, mainstop, env)
app.start()
return 0
else:
return 1
def run_version(env, *args):
print 'PIDA, version %s' % pida_version
return 0
def run_remote(env, *args):
import pida.utils.pidaremote as pidaremote
pidaremote.main(env.home_dir, env.positional_args)
return 0
def run_firstrun(env, *args):
first_filaname = os.path.join(env.home_dir, '.firstrun')
if not os.path.exists(first_filaname) or env.opts.first_run_wizard:
import pida.utils.firstrun as firstrun
ftw = firstrun.FirstTimeWindow()
response, editor = ftw.run(first_filaname)
if response == gtk.RESPONSE_ACCEPT:
if editor is None:
raise RuntimeError('No Working Editors')
else:
env.override_editor_option(editor)
return True
else:
return False
else:
return True
def main(bosstype=boss.boss, mainloop=gtk.main, mainstop=gtk.main_quit):
warnings.filterwarnings("ignore", category=DeprecationWarning)
env = environment()
if env.opts.debug:
os.environ['PIDA_DEBUG'] = '1'
os.environ['PIDA_LOG_STDERR'] = '1'
if env.opts.testing_mode:
sys.excepthook = old_excepthook
if env.opts.version is not None:
run_func = run_version
elif env.opts.remote:
run_func = run_remote
else:
run_func = run_pida
exit_val = run_func(env, bosstype, mainloop, mainstop)
gtk.threads_leave()
sys.exit(exit_val)
if __name__ == '__main__':
main()
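
# Illustrative environment knobs (not in the original file), taken from the
# code above: PIDA_HOME overrides the home directory, and --debug simply sets
# PIDA_DEBUG=1 and PIDA_LOG_STDERR=1, e.g. (launcher invocation assumed):
#   PIDA_HOME=~/alt-pida PIDA_DEBUG=1 python pida/core/application.py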
| [
"aafshar@ef0b12da-61f9-0310-ba38-b2629ec279a7"
] | aafshar@ef0b12da-61f9-0310-ba38-b2629ec279a7 |
cc8f3b6012f30c1bdad4f411f454e6e816b04bde | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02549/s176160941.py | bdc2e8e51f883ca3eca69259dc2774ce9724f789 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 491 | py | N, K = map(int, input().split())
L = [0] * K
R = [0] * K
for i in range(0, K):
L[i], R[i] = map(int, input().split())
moves = [0] * N      # moves[i]: number of ways to reach cell i
moves[0] = 1
rui_wa = [0] * N     # running prefix sums of moves, kept modulo 998244353
rui_wa[0] = 1
for i in range(1, N):
    for j in range(0, K):
        # arrivals at i via a step of length L[j]..R[j] equal the prefix-sum
        # of moves over the window [i - R[j], i - L[j]]
        l = max(i - L[j], 0)
        r = max(i - R[j], 0)
        if i - L[j] < 0:
            continue
        moves[i] += (rui_wa[l] - rui_wa[r - 1]) % 998244353
    rui_wa[i] = (moves[i] + rui_wa[i - 1]) % 998244353
print(moves[N - 1] % 998244353)
| [
"[email protected]"
] | |
5de0b81f7eb9ffcb6f37c172ee267011003055f3 | 8a03b8459902d1bf0806f8d3387fb962bb57cf58 | /User_create/Negative_changepwd.py | b654fadc05e357cbb963c843646791c0392766c4 | [] | no_license | chetandg123/cQube | f95a0e86b1e98cb418de209ad26ae2ba463cfcbc | a862a1cdf46faaaff5cad49d78c4e5f0454a6407 | refs/heads/master | 2022-07-18T12:43:06.839896 | 2020-05-22T13:23:52 | 2020-05-22T13:23:52 | 258,089,042 | 0 | 0 | null | 2020-05-08T16:28:26 | 2020-04-23T03:55:52 | HTML | UTF-8 | Python | false | false | 1,828 | py | import time
import unittest
from selenium import webdriver
from Data.Paramters import Data
class Click_ChangePwd(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Chrome(Data.Path)
self.driver.maximize_window()
self.driver.implicitly_wait(10)
self.driver.get(Data.URL)
self.driver.find_element_by_xpath(Data.email).send_keys(Data.username)
self.driver.find_element_by_xpath(Data.pwd).send_keys(Data.password)
self.driver.find_element_by_xpath(Data.loginbtn).click()
time.sleep(5)
def test_set_negative_newpwd(self):
self.driver.find_element_by_xpath(Data.Dashboard).click()
time.sleep(3)
self.driver.find_element_by_xpath("/html/body/app-root/app-home/mat-sidenav-container/mat-sidenav/div/mat-nav-list/mat-list/mat-list-item/div/button/span/mat-icon").click()
time.sleep(3)
self.driver.find_element_by_xpath("/html/body/app-root/app-home/mat-sidenav-container/mat-sidenav/div/mat-nav-list/mat-list/div/a[2]/div/span").click()
pwd =self.driver.find_element_by_xpath("//h2").text
self.assertEqual(pwd,"Change Password","Change password is not found!..")
self.driver.find_element_by_xpath("//input[@name='newPasswd']").send_keys("tibil123")
time.sleep(2)
self.driver.find_element_by_xpath("//input[@name='cnfpass']").send_keys("tibil12")
time.sleep(2)
self.driver.find_element_by_xpath("//button[@type='submit']").click()
time.sleep(3)
errormsg = self.driver.find_element_by_xpath("//p").text
print(errormsg)
self.assertEqual(errormsg,"Password not matched" ,"Matching password!")
def tearDown(self):
time.sleep(5)
self.driver.close()
if __name__ == "__main__":
unittest.main() | [
"[email protected]"
] | |
7229a9c285b03df22f176624c5e0f5b54b27a88d | a2fab78b021469748337bdbe46d60f4b2dccf6b9 | /day04/03.字符串的遍历.py | c5d627376bea9ed9bd537324019d43ced7a0f603 | [] | no_license | yywecanwin/PythonLearning | 06175886b42f6ec6be5ee8fa379365779e8e14e6 | f59d381692f22b3c7cf605aec88500f6c0267ffc | refs/heads/master | 2020-08-01T13:03:17.458829 | 2020-02-11T02:53:33 | 2020-02-11T02:53:33 | 211,006,180 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 628 | py | # -*- coding: utf-8 -*-
# author:yaoyao time:2019/9/28
"""
字符串的遍历:
一个一个的得到里面的元素
"""
s = "hello python"
"""
# 遍历的方式1:
# 1.定义一个变量i 表示元素的索引,赋值为0,因为元素的索引是从0开始的
i = 0
# 2.while循环遍历字符串
while i <= len(s)-1:
#3.在循环中, 根据索引得到元素, 把元素打印出来
print(s[i])
# 4.在循环中,让i加1,是为了让索引加1,便于下次循环时得到下一个元素
i += 1
"""
"""
for 变量 in range()函数或者容器
"""
# 遍历方式2:for循环
for c in s:
print(c)
| [
"[email protected]"
] | |
e03b7ef67849e583abb795e43e173297706316ff | 798960eb97cd1d46a2837f81fb69d123c05f1164 | /symphony/cli/pyworkforce/graphql/input/check_list_category.py | 8ab69aef97666923166c55f03daa8d9166c133bc | [
"BSD-3-Clause",
"Apache-2.0"
] | permissive | kyaaqba/magma | 36d5fa00ce4f827e6ca5ebd82d97a3d36e5f5b5b | fdb7be22a2076f9a9b158c9670a9af6cad68b85f | refs/heads/master | 2023-01-27T12:04:52.393286 | 2020-08-20T20:23:50 | 2020-08-20T20:23:50 | 289,102,268 | 0 | 0 | NOASSERTION | 2020-08-20T20:18:42 | 2020-08-20T20:18:41 | null | UTF-8 | Python | false | false | 590 | py | #!/usr/bin/env python3
# @generated AUTOGENERATED file. Do not Change!
from dataclasses import dataclass
from datetime import datetime
from functools import partial
from gql.gql.datetime_utils import DATETIME_FIELD
from numbers import Number
from typing import Any, Callable, List, Mapping, Optional
from dataclasses_json import DataClassJsonMixin
from ..input.check_list_item import CheckListItemInput
@dataclass
class CheckListCategoryInput(DataClassJsonMixin):
title: str
checkList: List[CheckListItemInput]
id: Optional[str] = None
description: Optional[str] = None
| [
"[email protected]"
] | |
6a51492ded638f3df2d2790b30ce3d10c9e269b9 | 5eddc2a278cb8f54da00db186c784e03a7b3011f | /csaapi/apps/farm_site/services.py | 5fa5b1565b3fc591b00038169efacb6de7334198 | [] | no_license | quinceleaf/csa-member-management | 350a48262cead1f03199c5c021a958fb410a791b | 8df57aa190935e79916b64d2a3de9e4e6c2d357d | refs/heads/main | 2023-06-18T11:10:47.633613 | 2021-07-20T03:47:55 | 2021-07-20T03:47:55 | 387,568,813 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 88 | py |
def create_subscription(*, subscription=None):
    # NOTE: "subscription" is an assumed placeholder keyword-only parameter;
    # the stub's signature and the missing colon did not parse as written.
    pass

# fan out payments
# fan out deliveries | [
"[email protected]"
] | |
9ac78261b3e0bfe904692b30ec71925efb1b2fd5 | e203ddace08580170e3b4de9c79588209e857c1c | /books.py | 23233198dc918f7183dbddd721d36fc2b0141ebf | [] | no_license | stradtkt/OOPTreehouse-Python | e17f3fd48840049b8b741aa0e30e54d1409804b2 | 84e0ef2142118bf44c416a3b1dde3519ff57fd15 | refs/heads/main | 2023-02-26T15:03:27.053205 | 2021-02-04T13:04:26 | 2021-02-04T13:04:26 | 334,620,181 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 457 | py | class Book:
def __init__(self, title, author):
self.title = title
self.author = author
def __str__(self):
return '{}: {}'.format(self.title, self.author)
class Bookcase:
def __init__(self, books=None):
self.books = books
@classmethod
def create_bookcase(cls, book_list):
books = []
for title, author in book_list:
books.append(Book(title, author))
return cls(books) | [
"[email protected]"
] | |
24ee0c3b5ba31c62359bb82634292671f9df0b24 | e6b4b9dcca11d6a8abd110cd681b2712f9843030 | /src/env/dm_control/dm_control/composer/observation/observable/base_test.py | 0f519c2ba2e7da274db7fe54fd6ede820fd6dc34 | [
"MIT",
"Apache-2.0"
] | permissive | nicklashansen/svea-vit | a1b1d74fba88aaa94c876d354e7d6ed60cd3f064 | 33d3ea2682409ee82bf9c5129ceaf06ab01cd48e | refs/heads/main | 2023-07-21T18:35:08.439052 | 2023-07-11T20:09:50 | 2023-07-11T20:09:50 | 379,015,671 | 16 | 3 | MIT | 2023-07-11T20:09:52 | 2021-06-21T17:43:32 | Python | UTF-8 | Python | false | false | 5,914 | py | # Copyright 2018 The dm_control Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Tests for observable."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Internal dependencies.
from absl.testing import absltest
from dm_control import mujoco
from dm_control.composer.observation import fake_physics
from dm_control.composer.observation.observable import base
import numpy as np
import six
_MJCF = """
<mujoco>
<worldbody>
<light pos="0 0 1"/>
<body name="body" pos="0 0 0">
<joint name="my_hinge" type="hinge" pos="-.1 -.2 -.3" axis="1 -1 0"/>
<geom name="my_box" type="box" size=".1 .2 .3" rgba="0 0 1 1"/>
<geom name="small_sphere" type="sphere" size=".12" pos=".1 .2 .3"/>
</body>
<camera name="world" mode="targetbody" target="body" pos="1 1 1" />
</worldbody>
</mujoco>
"""
class _FakeBaseObservable(base.Observable):
def _callable(self, physics):
pass
class ObservableTest(absltest.TestCase):
def testBaseProperties(self):
fake_observable = _FakeBaseObservable(update_interval=42,
buffer_size=5,
delay=10,
aggregator=None,
corruptor=None)
self.assertEqual(fake_observable.update_interval, 42)
self.assertEqual(fake_observable.buffer_size, 5)
self.assertEqual(fake_observable.delay, 10)
fake_observable.update_interval = 48
self.assertEqual(fake_observable.update_interval, 48)
fake_observable.buffer_size = 7
self.assertEqual(fake_observable.buffer_size, 7)
fake_observable.delay = 13
self.assertEqual(fake_observable.delay, 13)
enabled = not fake_observable.enabled
fake_observable.enabled = not fake_observable.enabled
self.assertEqual(fake_observable.enabled, enabled)
def testGeneric(self):
physics = fake_physics.FakePhysics()
repeated_observable = base.Generic(
fake_physics.FakePhysics.repeated, update_interval=42)
repeated_observation = repeated_observable.observation_callable(physics)()
self.assertEqual(repeated_observable.update_interval, 42)
np.testing.assert_array_equal(repeated_observation, [0, 0])
def testMujocoFeature(self):
physics = mujoco.Physics.from_xml_string(_MJCF)
hinge_observable = base.MujocoFeature(
kind='qpos', feature_name='my_hinge')
hinge_observation = hinge_observable.observation_callable(physics)()
np.testing.assert_array_equal(
hinge_observation, physics.named.data.qpos['my_hinge'])
box_observable = base.MujocoFeature(
kind='geom_xpos', feature_name='small_sphere', update_interval=5)
box_observation = box_observable.observation_callable(physics)()
self.assertEqual(box_observable.update_interval, 5)
np.testing.assert_array_equal(
box_observation, physics.named.data.geom_xpos['small_sphere'])
observable_from_callable = base.MujocoFeature(
kind='geom_xpos', feature_name=lambda: ['my_box', 'small_sphere'])
observation_from_callable = (
observable_from_callable.observation_callable(physics)())
np.testing.assert_array_equal(
observation_from_callable,
physics.named.data.geom_xpos[['my_box', 'small_sphere']])
def testMujocoCamera(self):
physics = mujoco.Physics.from_xml_string(_MJCF)
camera_observable = base.MujocoCamera(
camera_name='world', height=480, width=640, update_interval=7)
self.assertEqual(camera_observable.update_interval, 7)
camera_observation = camera_observable.observation_callable(physics)()
np.testing.assert_array_equal(
camera_observation, physics.render(480, 640, 'world'))
self.assertEqual(camera_observation.shape,
camera_observable.array_spec.shape)
self.assertEqual(camera_observation.dtype,
camera_observable.array_spec.dtype)
camera_observable.height = 300
camera_observable.width = 400
camera_observation = camera_observable.observation_callable(physics)()
self.assertEqual(camera_observable.height, 300)
self.assertEqual(camera_observable.width, 400)
np.testing.assert_array_equal(
camera_observation, physics.render(300, 400, 'world'))
self.assertEqual(camera_observation.shape,
camera_observable.array_spec.shape)
self.assertEqual(camera_observation.dtype,
camera_observable.array_spec.dtype)
def testCorruptor(self):
physics = fake_physics.FakePhysics()
def add_twelve(old_value, random_state):
del random_state # Unused.
return [x + 12 for x in old_value]
repeated_observable = base.Generic(
fake_physics.FakePhysics.repeated, corruptor=add_twelve)
corrupted = repeated_observable.observation_callable(
physics=physics, random_state=None)()
np.testing.assert_array_equal(corrupted, [12, 12])
def testInvalidAggregatorName(self):
name = 'invalid_name'
with six.assertRaisesRegex(self, KeyError, 'Unrecognized aggregator name'):
_ = _FakeBaseObservable(update_interval=3, buffer_size=2, delay=1,
aggregator=name, corruptor=None)
if __name__ == '__main__':
absltest.main()
| [
"[email protected]"
] | |
a48262df7b31b657505b623ed8035c6792e85210 | c0ffc02a5c72bea9a86d15e4a1ff01a7b67b6858 | /2nd11.py | 6c0435486d72ef0e986b5f69acf172236e6c785b | [] | no_license | db2398/2nd | 6f39d05a0b3f9f2aba050f35f9a9e83ba7e1511f | 404942c046ab894df1b52016ac7d4d49651f8295 | refs/heads/master | 2020-06-11T16:53:30.291296 | 2019-07-01T09:13:00 | 2019-07-01T09:13:00 | 194,029,475 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 55 | py | c,d=map(int,input().split())
result=c**d
print(result)
| [
"[email protected]"
] | |
d2dc956bbc48eb170fbbda451cf3630d7b8168b1 | 5545d3c3e910ccb5b45b2277a71ad3c3ea3caedc | /jamenson/runtime/Attic/runtime.py | 85f8fe28ad0310322de14198533d79ebdb9fe6a4 | [
"Apache-2.0"
] | permissive | matthagy/Jamenson | 61de19c71da6e133bf7d8efbb933a1036cf1e6f5 | 18a0fdd60b3d56ed4a6d4e792132535324490634 | refs/heads/master | 2016-09-11T04:31:28.895242 | 2013-04-04T00:14:44 | 2013-04-04T00:14:44 | 1,781,863 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,969 | py |
'''objects used by runtime
'''
from itertools import count
import string
class symbol(object):
#instance cache for `is based comparisions and `id based hashing
_cache = {}
__slots__ = ['printForm']
@classmethod
def raw(cls, printForm):
self = object.__new__(cls)
self.printForm = printForm
return self
def __new__(cls, printForm):
try:
return cls._cache[printForm]
except KeyError:
self = cls._cache[printForm] = cls.raw(printForm)
return self
def __repr__(self):
return 'symbol(%s)' % (self.printForm)
def __str__(self):
return bprint(self)
def __reduce__(self):
if gensymbolp(self):
return (gensym, (self.printForm[2:],))
else:
return (symbol, (self.printForm,))
def reset_gensym_counter(start=0):
global gensym_counter
gensym_counter = iter(count(start)).next
reset_gensym_counter()
def gensym(base='gensym'):
return symbol.raw('#:%s%d' % (base,gensym_counter()))
def gensymbolp(op):
return op.printForm not in symbol._cache
class cons(object):
__slots__ = 'car cdr'.split()
def __init__(self, car, cdr):
self.car = car
self.cdr = cdr
def __iter__(self):
op = self
while op is not nil:
if not isinstance(op, cons):
raise TypeError("iterating over non-cons cdr")
yield op.car
op = op.cdr
def __nonzero__(self):
return self is not nil
def __repr__(self):
return str(self)
#if self is nil:
# return 'nil'
#return 'cons(%r, %r)' % (self.car, self.cdr)
def __str__(self):
return bprint(self)
def __reduce__(self):
if self is nil:
return (load_nil, ())
else:
return (cons, (self.car, self.cdr))
def __eq__(self, other):
if not isinstance(other, cons):
return NotImplemented
return self is other or (self.car == other.car and
self.cdr == other.cdr)
nil = cons(None, None)
nil.car = nil
nil.cdr = nil
def load_nil():
return nil
def clist(*seq):
head = acc = nil
for op in seq:
cell = cons(op, nil)
if acc is nil:
head = cell
else:
acc.cdr = cell
acc = cell
return head
def bprint(op):
acc = []
bprint_collect_parts(acc.append, set(), op)
return ''.join(acc)
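
# Illustrative round-trip (not in the original file), using the helpers
# defined in this module:
#   bprint(clist(symbol('a'), 1, "hi"))  # -> '(a 1 "hi")'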
noQuoteChars = set(string.ascii_letters +
string.digits +
string.punctuation + ' ') - set('"')
escapeChars = {
'\n': '\\n',
'\t': '\\t',
'"': '\\"'}
qsymbol = symbol('%quote')
def bprint_collect_parts(emit, memo, op):
if isinstance(op, symbol):
emit(op.printForm)
elif op is nil:
emit('nil')
elif isinstance(op, cons):
if op.car is qsymbol:
assert op.cdr.cdr is nil, 'bad quote %r' % (op.cdr,)
emit("'")
bprint_collect_parts(emit, memo, op.cdr.car)
return
if id(op) in memo:
emit('#<circular cons>')
return
memo.add(id(op))
emit('(')
first = True
while op is not nil:
if first:
first = False
else:
emit(' ')
bprint_collect_parts(emit, memo, op.car)
if isinstance(op.cdr, cons):
op = op.cdr
else:
emit(' . ')
bprint_collect_parts(emit, memo, op.cdr)
break
emit(')')
elif isinstance(op, (int,long,float)):
emit(str(op))
elif op is None or op is False or op is True:
emit(str(op).lower())
elif isinstance(op, str):
emit('"')
for c in op:
if c in noQuoteChars:
emit(c)
elif c in escapeChars:
emit(escapeChars[c])
else:
emit('\\x%x' % ord(c))
emit('"')
else:
emit('#<')
emit(repr(op))
emit('>')
class MacroFunction(object):
__slots__ = ['func', 'robust']
def __init__(self, func, robust=False):
self.func = func
self.robust = robust
def __call__(self, *args, **kwds):
raise RuntimeError("cannot directly call macro %s" % self.func.__name__)
def macroExpand(self, translator, *args, **kwds):
return self.func(translator, *args, **kwds)
def __getstate__(self):
return self.func, self.robust
def __setstate__(self, state):
self.func, self.robust = state
import types
class obj(object):
def __init__(self, **kwds):
vars(self).update(kwds)
def __repr__(self):
return '(%s %s)' % (self.__class__.__name__,
' '.join(":%s %r" % t
for t in vars(self).iteritems()))
| [
"[email protected]"
] | |
042e1d38d801465d0ca7ae7a6feda110a7e5825c | 5cea76d53779d466f19a5cf0b51e003586cc4a7b | /python开发技术详解/源文件/02/2.4/2.4.1/number_type.py | 12972ea330682a3ae610b87a14be45e5770f2447 | [] | no_license | evan886/python | 40152fdb4885876189580141abe27a983d04e04d | d33e996e93275f6b347ecc2d30f8efe05accd10c | refs/heads/master | 2021-06-28T12:35:10.793186 | 2021-05-26T14:33:40 | 2021-05-26T14:33:40 | 85,560,342 | 2 | 1 | null | 2017-10-11T05:31:06 | 2017-03-20T09:51:50 | JavaScript | GB18030 | Python | false | false | 326 | py | #!/usr/bin/python
# -*- coding: UTF-8 -*-
# The two bindings of i below do not refer to the same object
i = 1
print id(i)
i = 2
print id(i)
# Integer
i = 1
print type(i)
# Long integer
l = 9999999990
print type(l)
# Float
f = 1.2
print type(f)
# Boolean
b = True
print type(b)
# Complex number
c = 7 + 8j
print type(c)
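# Expected output sketch (illustrative; Python 2 semantics assumed by the
# print statements above): <type 'int'>, <type 'long'>, <type 'float'>,
# <type 'bool'> and <type 'complex'>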
| [
"[email protected]"
] | |
f04777412a8523157317d3eac4f93709fc5b3593 | 1da23d3bc4a7e21d81fe26c6b9f2b7f50711239b | /server/rating/calculation/online.py | 54cb691486cf77569c23edf725df62292f77533f | [
"MIT"
] | permissive | eIGato/mahjong-portal | 42dc62d3f98656ba15c02c3060f351f03ac3304a | 550a2a872c4287adab6ce30c3440dc2141430a20 | refs/heads/master | 2021-07-10T01:52:35.089662 | 2020-10-21T11:45:40 | 2020-10-21T11:45:40 | 212,129,601 | 0 | 0 | MIT | 2019-10-01T15:19:36 | 2019-10-01T15:19:36 | null | UTF-8 | Python | false | false | 573 | py | from player.models import Player
from rating.calculation.rr import RatingRRCalculation
from tournament.models import Tournament, TournamentResult
class RatingOnlineCalculation(RatingRRCalculation):
TOURNAMENT_TYPES = [Tournament.ONLINE]
SECOND_PART_MIN_TOURNAMENTS = 3
def get_players(self):
player_ids = TournamentResult.objects.filter(tournament__tournament_type=Tournament.ONLINE).values_list(
"player_id", flat=True
)
return Player.objects.filter(id__in=player_ids).exclude(is_replacement=True).exclude(is_hide=True)
| [
"[email protected]"
] | |
766acc5663cd498b1b0e9bc3c0a1d75f176b8b8b | 83003007b7bc12493e2bca2b5c78be5ea86df56c | /Day56-Day70/Day60/rabbit.py | df44054acbf7a81a072a6cb377f8dbb2ea4dd6e6 | [] | no_license | a6361117/code | fa7fe2f33c522ad38d92e6c429b50ef8a271bb1e | bd8bf877416acc5400dbda90212b7e83020ff643 | refs/heads/master | 2022-09-07T22:22:24.765271 | 2020-05-26T14:27:47 | 2020-05-26T14:27:47 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,264 | py |
# Draw a rabbit
from turtle import *
speed(10)
# Rabbit's face
color('pink')
pensize(5)
circle(radius=100)  # face
# Eyes
pencolor('black')
# Left eye
pu()
goto(-45,92)
pd()
begin_fill()
color((0,0,0),(0,0,0.1))
circle(radius=15)
# Right eye
pu()
goto(45,92)
pd()
circle(radius=15)
end_fill()
# Nose
pu()
goto(20,60)
color('pink')
pd()
begin_fill()
goto(-20,60)
goto(0,45)
goto(20,60)
end_fill()
# Mouth
goto(0,45)
goto(0,40)
seth(-90)
circle(10,120)
pu()
goto(0,40)
seth(-90)
pd()
circle(-10,120)
# Rabbit's ears
# Left ear
pu()
goto(-60,180)#
seth(200)
pd()
circle(radius=350,extent=90)
goto(-98,110)
# Right ear
pu()
goto(60,180)#
seth(-20)
pd()
circle(radius=-350,extent=90)
goto(98,110)
# Rabbit's body
pu()
goto(20,3)
seth(-25)
pd()
circle(radius=-250,extent=25)
circle(radius=-135,extent=260)
seth(50)
circle(radius=-250,extent=25)
## Rabbit's arms
# Left arm
pu()
seth(180)
goto(-30,-3)
pd()
# short arms (alternate version, commented out below)
##circle(radius=270,extent=20)
##circle(radius=20,extent=190)
circle(radius=248,extent=30)
circle(radius=29,extent=185)
# Right arm
pu()
seth(0)
goto(30,-3)
pd()
circle(radius=-248,extent=30)
circle(radius=-27,extent=184)
## Rabbit's feet
## Left foot
pu()
goto(-162,-260)#
pd()
seth(0)
circle(radius=41)
# Right foot
pu()
goto(164,-260)
pd()
circle(radius=41)
done()
| [
"[email protected]"
] | |
fc3617765023ab1000296d388685479f6ba1ca6f | 743d1918178e08d4557abed3a375c583130a0e06 | /src/CPSCAnalysis/getCPSCRelated.py | e63093d367e5958dd952311a6b852f55229f43a2 | [] | no_license | aquablue1/dns_probe | 2a027c04e0928ec818a82c5bf04f485a883cfcb3 | edd4dff9bea04092ac76c17c6e77fab63f9f188f | refs/heads/master | 2020-03-25T19:40:07.346354 | 2018-11-17T05:31:43 | 2018-11-17T05:31:43 | 144,094,014 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,508 | py | """
" Get the original CPSC related DNS traffic from original data files.
" Since CPSC DNS (ns1/2.cpsc.ucalgary.ca) mostly involved in the inbound traffic.
" Therefore only the inbound traffic is considered.
" By Zhengping on 2018-08-10
"""
from src.util.FolderReader import folderReader
from src.util.FileReader import fileReader
from src.util.FileWriter import batchFileWriter
from src.util.DNSFieldLocMap import FieldToLoc
import os
def doHourlyCPSCRelatedGen(inputFilename):
inputFile = fileReader(inputFilename)
checkedNames = ["ns1.cpsc.ucalgary.ca", "ns2.cpsc.ucalgary.ca", "mirror.cpsc.ucalgary.ca"]
ret_list = []
for line in inputFile:
queriedName = line.split("\t")[FieldToLoc["query"]]
if queriedName in checkedNames:
ret_list.append(line)
return ret_list
def doDailyCPSCRelatedGen(inputFolder, outputFolder):
    filenames = folderReader(inputFolder, date)  # `date` is the module-level variable set under __main__
outputHandler = batchFileWriter(outputFolder)
for filename in filenames:
outputFilename = "CPSCRow_%s" % filename.split("/")[-1]
hourlyRowData = doHourlyCPSCRelatedGen(filename)
for line in hourlyRowData:
outputHandler.writeString(outputFilename, line+"\n")
if __name__ == '__main__':
date = "2018-07-01"
inputFolder = "../../data/%s/inbound" % date
outputFolder = "../../result/CPSCRow/%s/" % date
if not os.path.exists(outputFolder):
os.makedirs(outputFolder)
doDailyCPSCRelatedGen(inputFolder, outputFolder)
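    # Illustrative layout implied by the paths above (not in the original
    # file): hourly logs under ../../data/2018-07-01/inbound/ are filtered
    # into ../../result/CPSCRow/2018-07-01/CPSCRow_<original filename>.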
| [
"[email protected]"
] | |
c15ff70830104dc267e24f059b88cd1002f1879d | ecae7275fd43ec93ca5771083e05ae864685faf9 | /DataScience/pandas/2column1.py | eb1bc2f91c3de96c00fb9272b9179e11d6d5d730 | [] | no_license | shamoldas/pythonBasic | 104ca8d50099c2f511802db1f161f6d050f879cc | 3a7252a15f6d829f55700ec2ff7f7d153f3ec663 | refs/heads/main | 2023-01-09T06:38:55.357476 | 2020-11-11T12:27:31 | 2020-11-11T12:27:31 | 311,960,017 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 314 | py |
# importing pandas
import pandas as pd
df = pd.DataFrame({'Last': ['Gaitonde', 'Singh', 'Mathur'],
'First': ['Ganesh', 'Sartaj', 'Anjali']})
print('Before Join')
print(df, '\n')
print('After join')
df['Name'] = df['First'].str.cat(df['Last'], sep =" ")
print(df)
| [
"[email protected]"
] | |
a0ba64b046817c1d4b87a37d70ac854c54c543fe | 53fab060fa262e5d5026e0807d93c75fb81e67b9 | /backup/user_192/ch160_2020_06_19_20_23_54_764349.py | e8c31d19b6bb9456664ada3169ee602ac3e1ff52 | [] | no_license | gabriellaec/desoft-analise-exercicios | b77c6999424c5ce7e44086a12589a0ad43d6adca | 01940ab0897aa6005764fc220b900e4d6161d36b | refs/heads/main | 2023-01-31T17:19:42.050628 | 2020-12-16T05:21:31 | 2020-12-16T05:21:31 | 306,735,108 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 218 | py | import math
max_diff = 0
for x in range(0, 91):
    sin_x = math.sin(math.radians(x))  # math.sin expects radians; x is in degrees
    bhaskara = (4*x*(180 - x))/(40500 - x*(180 - x))
    diff = abs(bhaskara - sin_x)
    if diff > max_diff:
        max_diff = diff
print(max_diff)
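# Sanity note: the largest absolute error of Bhaskara's sine approximation
# over 0-90 degrees is known to be on the order of 2e-3.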
| [
"[email protected]"
] | |
ce01006fc28f38174aeae02dffe49f0214c5ae14 | 9554891e5e91fa9d7f75df0f28ae1d220c552478 | /tests/settings.py | 0bfc93f139030f93750f7d8315cca6601c124b85 | [
"MIT"
] | permissive | kmmbvnr/django-polymodels | 2e79cd72c68935a7e83953e0864ced1cb4a530c5 | 7a9b64b1851fea23a64d3d9421a69911e1669a49 | refs/heads/master | 2022-06-21T04:27:15.836175 | 2020-05-07T03:12:18 | 2020-05-07T10:36:06 | 261,932,926 | 1 | 0 | MIT | 2020-05-07T02:44:49 | 2020-05-07T02:44:48 | null | UTF-8 | Python | false | false | 245 | py | from __future__ import unicode_literals
SECRET_KEY = 'not-anymore'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
},
}
INSTALLED_APPS = [
'django.contrib.contenttypes',
'polymodels',
'tests',
]
| [
"[email protected]"
] | |
2958f0f909860b6534a0178f12383d7da22b1669 | 4bd4bacecee33cada173e427b5ecb1d758bafaad | /src/scalarizr/externals/chef/auth.py | ceb60cae41ffdfa7437210aa80b15e234cc31fef | [] | no_license | kenorb-contrib/scalarizr | 3f2492b20910c42f6ab38749545fdbb79969473f | 3cc8b64d5a1b39c4cf36f5057f1a6a84a9a74c83 | refs/heads/master | 2022-11-26T10:00:58.706301 | 2017-11-02T16:41:34 | 2017-11-02T16:41:34 | 108,550,233 | 0 | 2 | null | 2020-07-24T11:05:36 | 2017-10-27T13:33:46 | Python | UTF-8 | Python | false | false | 2,435 | py | from __future__ import with_statement
import base64
import datetime
import hashlib
import re
def _ruby_b64encode(value):
"""The Ruby function Base64.encode64 automatically breaks things up
into 60-character chunks.
"""
b64 = base64.b64encode(value)
for i in xrange(0, len(b64), 60):
yield b64[i:i+60]
def ruby_b64encode(value):
return '\n'.join(_ruby_b64encode(value))
def sha1_base64(value):
"""An implementation of Mixlib::Authentication::Digester."""
return ruby_b64encode(hashlib.sha1(value).digest())
class UTC(datetime.tzinfo):
"""UTC timezone stub."""
ZERO = datetime.timedelta(0)
def utcoffset(self, dt):
return self.ZERO
def tzname(self, dt):
return 'UTC'
def dst(self, dt):
return self.ZERO
utc = UTC()
def canonical_time(timestamp):
if timestamp.tzinfo is not None:
timestamp = timestamp.astimezone(utc).replace(tzinfo=None)
return timestamp.replace(microsecond=0).isoformat() + 'Z'
canonical_path_regex = re.compile(r'/+')
def canonical_path(path):
path = canonical_path_regex.sub('/', path)
if len(path) > 1:
path = path.rstrip('/')
return path
def canonical_request(http_method, path, hashed_body, timestamp, user_id):
# Canonicalize request parameters
http_method = http_method.upper()
path = canonical_path(path)
if isinstance(timestamp, datetime.datetime):
timestamp = canonical_time(timestamp)
hashed_path = sha1_base64(path)
return ('Method:%(http_method)s\n'
'Hashed Path:%(hashed_path)s\n'
'X-Ops-Content-Hash:%(hashed_body)s\n'
'X-Ops-Timestamp:%(timestamp)s\n'
'X-Ops-UserId:%(user_id)s' % vars())
def sign_request(key, http_method, path, body, host, timestamp, user_id):
"""Generate the needed headers for the Opscode authentication protocol."""
timestamp = canonical_time(timestamp)
hashed_body = sha1_base64(body or '')
# Simple headers
headers = {
'x-ops-sign': 'version=1.0',
'x-ops-userid': user_id,
'x-ops-timestamp': timestamp,
'x-ops-content-hash': hashed_body,
}
# Create RSA signature
req = canonical_request(http_method, path, hashed_body, timestamp, user_id)
sig = _ruby_b64encode(key.private_encrypt(req))
for i, line in enumerate(sig):
headers['x-ops-authorization-%s'%(i+1)] = line
return headers
| [
"[email protected]"
] | |
33a33cfd3f32dd9321b486aeb4d948593d5c76b2 | b15178f2ec828894c3b2d31b3ff6164be37ab875 | /setup.py | a511bad7d960a83c9af9d54df61c11eb837181ee | [
"CC0-1.0",
"LicenseRef-scancode-public-domain"
] | permissive | biomodels/BIOMD0000000007 | 08e9de5d8d6745cde85d337c385e0f41f53906d3 | 1c03559e6e807621fa757386ea03dfae2c0ca312 | refs/heads/master | 2021-01-25T06:05:51.198922 | 2014-10-16T05:13:44 | 2014-10-16T05:13:44 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 377 | py | from setuptools import setup, find_packages
setup(name='BIOMD0000000007',
version=20140916,
description='BIOMD0000000007 from BioModels',
url='http://www.ebi.ac.uk/biomodels-main/BIOMD0000000007',
maintainer='Stanley Gu',
      maintainer_email='[email protected]',
packages=find_packages(),
package_data={'': ['*.xml', 'README.md']},
) | [
"[email protected]"
] | |
d827e99e9bfe24739b29b9efd7b67641f05c3576 | ff3e0d75fda9a1a94fd8ba7618c0aab499b8393d | /musicians/migrations/0004_auto_20200813_0055.py | 255088a50889f0134f21340a8b9558fc20ab73a7 | [
"MIT"
] | permissive | victorsemenov1980/DjangoFullStack | bbe2897c20633b3eba8db807442eb0921668e6f1 | 655a3a9980057913c1aeeb1cd54683ccf12ad901 | refs/heads/master | 2023-04-05T23:34:13.836215 | 2021-04-22T18:08:51 | 2021-04-22T18:08:51 | 289,705,449 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,423 | py | # Generated by Django 3.1 on 2020-08-13 00:55
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('musicians', '0003_info'),
]
operations = [
migrations.RemoveField(
model_name='service',
name='Featured',
),
migrations.RemoveField(
model_name='service',
name='Featured_Price',
),
migrations.RemoveField(
model_name='service',
name='Price_hour',
),
migrations.RemoveField(
model_name='service',
name='Price_service',
),
migrations.AddField(
model_name='main',
name='Bio',
field=models.TextField(default='none'),
preserve_default=False,
),
migrations.AddField(
model_name='main',
name='Instrument',
field=models.CharField(default='none', max_length=255),
preserve_default=False,
),
migrations.AddField(
model_name='main',
name='Organization',
field=models.CharField(default='none', max_length=255),
preserve_default=False,
),
migrations.AddField(
model_name='service',
name='Description',
field=models.CharField(blank=True, max_length=255),
),
]
| [
"[email protected]"
] | |
0ab52593e61a8c030d9e303a4c84011ce9f94f21 | 75e24fc71cf0833bb6040fa5037a0523c67d4581 | /nlplingo/active_learning/metrics.py | 5c880ba632dbf6cfbae101db65920c9732147a90 | [
"Apache-2.0"
] | permissive | BBN-E/nlplingo | 53d5ff2aa17d03a1c6db8afc8ed2b0cf683b1c55 | 32ff17b1320937faa3d3ebe727032f4b3e7a353d | refs/heads/main | 2022-12-19T19:28:11.666850 | 2020-10-09T01:16:32 | 2020-10-09T01:16:32 | 302,090,268 | 3 | 1 | null | null | null | null | UTF-8 | Python | false | false | 467 | py | import numpy as np
def best_vs_second_best(predictions):
"""Computes best vs second best metric
:type predictions: numpy.nparray
:rtype: numpy.nparray
"""
pred_sorted_arg = np.argsort(-predictions, axis=1)
best_vs_second_best_score = 1 - abs(
predictions[range(predictions.shape[0]), pred_sorted_arg[:, 0]] -
predictions[range(predictions.shape[0]), pred_sorted_arg[:, 1]]
)
return best_vs_second_best_score
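# Example (sketch): for predictions = np.array([[0.7, 0.2, 0.1]]) the score is
# 1 - |0.7 - 0.2| = 0.5; scores near 1 mean the model is torn between its top
# two labels, which makes the sample a strong active-learning candidate.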
| [
"[email protected]"
] | |
a2ae33df39f4c18bf1122e51783c1b3641f8a71b | 0a004fc3fe8e36fd7ce0ed2cc7e8140982315e03 | /unsupervised_learning/0x00-dimensionality_reduction/0-pca.py | 96f2f628a740e86a328e4e2a17f3fdae39d1650a | [] | no_license | pafuentess/holbertonschool-machine_learning | 266ed4f05e106e194cdafe39544e48904f6538f4 | 3bffd1391b3fc790f0137d0afbe90eb8e2f7d713 | refs/heads/master | 2023-03-26T15:12:14.721409 | 2021-03-20T20:28:15 | 2021-03-20T20:28:15 | 279,388,813 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 284 | py | #!/usr/bin/env python3
""" doc """
import numpy as np
def pca(X, var=0.95):
""" doc """
U, sigma, V = np.linalg.svd(X)
a_sum = np.cumsum(sigma)
dim = [i for i in range(len(sigma)) if((a_sum[i]) / a_sum[-1]) >= var]
ndim = dim[0] + 1
return V.T[:, :ndim]
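# Usage sketch (assumes X is already zero-centered):
#   W = pca(X, 0.95)             # W: (n_features, ndim)
#   X_reduced = np.matmul(X, W)  # project onto the kept components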
| [
"[email protected]"
] | |
6c13a2bb9c012badbf065b7117c98cf2344d8b14 | f7f834e68ce816011ae30be0883deef090fbeeed | /camp/Z_Template_2018/Day 5 - Space Invaders/space_invaders.py | be8cc7bd451a55706eed78c51f0099e5ac7b5db7 | [] | no_license | Rosebotics/PythonGameDesign2019 | 97b568cf999dea8642e254a22e528539946118e3 | 2f03476df940257adc2928f0c985c01daa5166f4 | refs/heads/master | 2020-06-04T04:42:35.656392 | 2019-06-22T16:21:57 | 2019-06-22T16:21:57 | 191,875,778 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 4,301 | py | import pygame, sys, random, time
from pygame.locals import *
class Missile:
    def __init__(self, screen, x):
        # Save the screen and x; start at y 591 (just above the fighter), not yet exploded
        self.screen = screen
        self.x = x
        self.y = 591
        self.exploded = False
    def move(self):
        # Move the missile up 5
        self.y = self.y - 5
    def draw(self):
        # Draw a red line from x, y that is 8 pixels in height
        pygame.draw.line(self.screen, (255, 0, 0), (self.x, self.y), (self.x, self.y + 8))
class Fighter:
def __init__(self, screen, x, y):
self.screen = screen
self.image = pygame.image.load("fighter.png").convert()
self.image.set_colorkey((255, 255, 255))
self.x = x
self.y = y
self.missiles = []
def draw(self):
self.screen.blit(self.image, (self.x, self.y))
def fire(self):
self.missiles.append(Missile(self.screen, self.x + 50))
    def remove_exploded_missiles(self):
for k in range(len(self.missiles) - 1, -1, -1):
if self.missiles[k].exploded or self.missiles[k].y < 0:
del self.missiles[k]
class Badguy:
def __init__(self, screen, x, y):
self.dead = False
self.screen = screen
self.x = x
self.y = y
self.image = pygame.image.load("badguy.png").convert()
self.image.set_colorkey((0, 0, 0))
self.original_x = x
self.moving_right = True
def move(self):
if self.moving_right:
self.x = self.x + 2
if self.x > self.original_x + 100:
self.moving_right = False
else:
self.x = self.x - 2
if self.x < self.original_x - 100:
self.moving_right = True
def draw(self):
self.screen.blit(self.image, (self.x, self.y))
def hit_by(self, missile):
return pygame.Rect(self.x, self.y, 70, 45).collidepoint(missile.x, missile.y)
class EnemyFleet:
def __init__(self, screen, enemy_rows):
self.badguys = []
for j in range(enemy_rows):
for k in range(8):
self.badguys.append(Badguy(screen, 80 * k, 50 * j + 20))
@property
def is_defeated(self):
return len(self.badguys) == 0
def move(self):
for badguy in self.badguys:
badguy.move()
def draw(self):
for badguy in self.badguys:
badguy.draw()
def remove_dead_badguys(self):
for k in range(len(self.badguys) - 1, -1, -1):
if self.badguys[k].dead:
del self.badguys[k]
def main():
pygame.init()
clock = pygame.time.Clock()
pygame.display.set_caption("Space Invaders")
screen = pygame.display.set_mode((640, 650))
# TODO: Set enemy_rows to an initial value of 3.
# TODO: Create an EnemyFleet object (called enemy) with the screen and enemy_rows
# TODO: Create a Fighter (called fighter) at location 320, 590
while True:
clock.tick(60)
for event in pygame.event.get():
pressed_keys = pygame.key.get_pressed()
# TODO: If the event type is KEYDOWN and pressed_keys[K_SPACE} is True, then fire a missile
if event.type == QUIT:
sys.exit()
screen.fill((0, 0, 0))
pressed_keys = pygame.key.get_pressed()
# TODO: If K_LEFT is pressed move the fighter left 3
# TODO: If K_RIGHT is pressed move the fighter right 3
# TODO: Draw the fighter
# TODO: Move the enemy
# TODO: Draw the enemy
# TODO: For each missle in the fighter missiles
# TODO: Move the missle
# TODO: Draw the missle
# TODO: For each badguy in the enemy badguys
# TODO: For each missle in the fighter missiles
# TODO: If the badguy is hit by the missle
# TODO: Mark the badguy as dead = True
# TODO: Mark the missile as exploded = True
# TODO: Use the fighter to remove exploded missiles
# TODO: Use the enemy to remove dead badguys
# TODO: If the enemy id_defeated
# TODO: Increment the enemy_rows
# TODO: Create a new enemy with the screen and enemy_rows
pygame.display.update()
main()
| [
"[email protected]"
] | |
7c7ec50d29b03c3642ab2ceba8b96c4be5487afb | 669e9241b02bdaa303fbc2fd4023b90d4d179a59 | /Basketball Scoreboard/challenge1.py | 72070c13f348ee839784ae72678555d7d2e7e973 | [] | no_license | benjaminpotter/HatchProjects | 0854cf46ae7c3781468116a5d63b703dd54ae68c | 7f6a948d3474c755d071751b725c059e6c7f3553 | refs/heads/master | 2022-01-28T16:58:03.449073 | 2019-08-16T13:47:30 | 2019-08-16T13:47:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 997 | py | def setup():
size(400, 400)
threePoint = 0
fieldGoal = 0
freeThrow = 0
def drawScoreboard():
global threePoint, fieldGoal, freeThrow
background(0, 0, 0)
noFill()
stroke(255, 0, 0)
rect(30, 337, 110, 34)
rect(155, 337, 110, 34)
rect(278, 337, 116, 34)
fill(255)
textSize(22)
text("3-Point", 50, 361)
text("Field Goal", 160, 361)
text("Free Throw", 279, 361)
textSize(150)
fill(threePoint * 1.2 + 30, fieldGoal * 1.3 + 30, freeThrow * 1.8 + 30)
text(threePoint * 3 + fieldGoal * 2 + freeThrow, 116, 200)
def addPoints():
global threePoint, fieldGoal, freeThrow
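    # hit-test the click against the three button rectangles drawn in drawScoreboard()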
if mouseX > 30 and mouseX < 140 and mouseY > 337 and mouseY < 371:
threePoint += 1
elif mouseX > 155 and mouseX < 265 and mouseY > 337 and mouseY < 371:
fieldGoal += 1
elif mouseX > 278 and mouseX < 388 and mouseY > 337 and mouseY < 371:
freeThrow += 1
def draw():
drawScoreboard()
def mousePressed():
addPoints() | [
"[email protected]"
] | |
08b01af01392cb5b5e0ab0605c707494fea4e10e | 05c9f1af21a698e09f7ec37a075624250e907262 | /samples/cloud_loadbalancers/session_persistence.py | 65361528513dff78dabf813b885ccaf5a90b79a5 | [
"Apache-2.0"
] | permissive | pycontribs/pyrax | 5f5a1d6816f5a831b1ae4b74ffaf438a1c0269a6 | 2397136b75e6fcc906ee406e9c1bc7aaef94387a | refs/heads/master | 2023-08-28T16:43:21.037208 | 2022-09-21T15:14:38 | 2022-09-21T15:14:38 | 5,975,139 | 10 | 27 | Apache-2.0 | 2021-07-12T21:23:11 | 2012-09-27T01:05:57 | Python | UTF-8 | Python | false | false | 1,492 | py | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright (c)2012 Rackspace US, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import os
import sys
import pyrax
pyrax.set_setting("identity_type", "rackspace")
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
clb = pyrax.cloud_loadbalancers
try:
lb = clb.list()[0]
except IndexError:
print("You do not have any load balancers yet.")
print("Please create one and then re-run this script.")
sys.exit()
print("Load Balancer:", lb)
orig = lb.session_persistence
print("Current setting of session persistence:", orig or '""')
print()
if orig:
print("Clearing...")
lb.session_persistence = ""
else:
print("Setting persistence to HTTP_COOKIE...")
lb.session_persistence = "HTTP_COOKIE"
print("New setting of session persistence:", lb.session_persistence or '""')
| [
"[email protected]"
] | |
67e621ecfca50542026a0bc3eba12f59122ad3b5 | efd3564def48ae6e5fff6068da21fc61f88486ee | /iam/models.py | fe44011c6227b49f36f6ae826fa39489099e4904 | [
"MIT"
] | permissive | druuu/IAM-Manager | 0c4e4f75879d44f4519e3c4655778f532e4455cb | 5ed542ed52ff6e18ea70122510fc9d5e6998159d | refs/heads/master | 2021-01-16T19:18:10.412258 | 2016-05-12T10:02:14 | 2016-05-12T10:02:14 | 58,738,368 | 0 | 0 | null | 2016-05-13T12:29:36 | 2016-05-13T12:29:36 | null | UTF-8 | Python | false | false | 115 | py | from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.models import User
| [
"[email protected]"
] | |
7ff4f342c296f14581f59bf952c57db0709b0254 | 0cc4eb3cb54f8394c127ace62d3108fdb5230c85 | /.spack-env/view/lib/python3.7/site-packages/jedi/third_party/typeshed/third_party/2and3/Crypto/PublicKey/__init__.pyi | 26410a457f1a06184022094247938b2894f9cbe2 | [] | no_license | jacobmerson/spack-develop-env | 5b2d76f58c0b64ae97c64f77a3c4d33a770c71c8 | 5fca20ca343b1a76f05fc635c87f94ed25417d94 | refs/heads/master | 2022-07-04T02:22:50.264727 | 2020-05-06T05:13:50 | 2020-05-06T05:13:50 | 261,657,112 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 213 | pyi | /lore/mersoj/spack/spack/opt/spack/linux-rhel7-x86_64/gcc-7.3.0/py-jedi-0.17.0-zugnvpgjfmuk5x4rfhhxlsknl2g226yt/lib/python3.7/site-packages/jedi/third_party/typeshed/third_party/2and3/Crypto/PublicKey/__init__.pyi | [
"[email protected]"
] | |
570cc838272c8d6af88062cc6f7e249fd0b36979 | ea57ef44636ce151b3ef5322466cdfcb02482515 | /pendulum/constants.py | abc6ec06eacd7553dcf6ee58a8d094672a79966c | [
"MIT"
] | permissive | Sn3akyP3t3/pendulum | acb3dc5067576c4569a08b1d8a8ecfce918b4724 | 7ce170bdc64199d74e09e347402983f1bb015f63 | refs/heads/master | 2020-03-22T01:15:01.160870 | 2018-07-01T15:49:09 | 2018-07-01T15:49:09 | 139,292,657 | 0 | 0 | MIT | 2018-07-01T01:46:00 | 2018-07-01T01:46:00 | null | UTF-8 | Python | false | false | 2,836 | py | # The day constants
SUNDAY = 0
MONDAY = 1
TUESDAY = 2
WEDNESDAY = 3
THURSDAY = 4
FRIDAY = 5
SATURDAY = 6
# Number of X in Y.
YEARS_PER_CENTURY = 100
YEARS_PER_DECADE = 10
MONTHS_PER_YEAR = 12
WEEKS_PER_YEAR = 52
DAYS_PER_WEEK = 7
HOURS_PER_DAY = 24
MINUTES_PER_HOUR = 60
SECONDS_PER_MINUTE = 60
SECONDS_PER_HOUR = MINUTES_PER_HOUR * SECONDS_PER_MINUTE
SECONDS_PER_DAY = HOURS_PER_DAY * SECONDS_PER_HOUR
US_PER_SECOND = 1000000
# Formats
ATOM = 'YYYY-MM-DDTHH:mm:ssZ'
COOKIE = 'dddd, DD-MMM-YYYY HH:mm:ss zz'
ISO8601 = 'YYYY-MM-DDTHH:mm:ssZ'
ISO8601_EXTENDED = 'YYYY-MM-DDTHH:mm:ss.SSSSSSZ'
RFC822 = 'ddd, DD MMM YY HH:mm:ss ZZ'
RFC850 = 'dddd, DD-MMM-YY HH:mm:ss zz'
RFC1036 = 'ddd, DD MMM YY HH:mm:ss ZZ'
RFC1123 = 'ddd, DD MMM YYYY HH:mm:ss ZZ'
RFC2822 = 'ddd, DD MMM YYYY HH:mm:ss ZZ'
RFC3339 = ISO8601
RFC3339_EXTENDED = ISO8601_EXTENDED
RSS = 'ddd, DD MMM YYYY HH:mm:ss ZZ'
W3C = ISO8601
EPOCH_YEAR = 1970
DAYS_PER_N_YEAR = 365
DAYS_PER_L_YEAR = 366
USECS_PER_SEC = 1000000
SECS_PER_MIN = 60
SECS_PER_HOUR = 60 * SECS_PER_MIN
SECS_PER_DAY = SECS_PER_HOUR * 24
# 400-year chunks always have 146097 days (20871 weeks).
SECS_PER_400_YEARS = 146097 * SECS_PER_DAY
# The number of seconds in an aligned 100-year chunk, for those that
# do not begin with a leap year and those that do respectively.
SECS_PER_100_YEARS = (
(76 * DAYS_PER_N_YEAR + 24 * DAYS_PER_L_YEAR) * SECS_PER_DAY,
(75 * DAYS_PER_N_YEAR + 25 * DAYS_PER_L_YEAR) * SECS_PER_DAY
)
# The number of seconds in an aligned 4-year chunk, for those that
# do not begin with a leap year and those that do respectively.
SECS_PER_4_YEARS = (
(4 * DAYS_PER_N_YEAR + 0 * DAYS_PER_L_YEAR) * SECS_PER_DAY,
(3 * DAYS_PER_N_YEAR + 1 * DAYS_PER_L_YEAR) * SECS_PER_DAY
)
# The number of seconds in non-leap and leap years respectively.
SECS_PER_YEAR = (
DAYS_PER_N_YEAR * SECS_PER_DAY,
DAYS_PER_L_YEAR * SECS_PER_DAY
)
DAYS_PER_YEAR = (
DAYS_PER_N_YEAR,
DAYS_PER_L_YEAR
)
# The month lengths in non-leap and leap years respectively.
DAYS_PER_MONTHS = (
(-1, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31),
(-1, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
)
# The day offsets of the beginning of each (1-based) month in non-leap
# and leap years respectively.
# For example, in a leap year there are 335 days before December.
MONTHS_OFFSETS = (
(-1, 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365),
(-1, 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366)
)
DAY_OF_WEEK_TABLE = (
0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4
)
TM_SUNDAY = 0
TM_MONDAY = 1
TM_TUESDAY = 2
TM_WEDNESDAY = 3
TM_THURSDAY = 4
TM_FRIDAY = 5
TM_SATURDAY = 6
TM_JANUARY = 0
TM_FEBRUARY = 1
TM_MARCH = 2
TM_APRIL = 3
TM_MAY = 4
TM_JUNE = 5
TM_JULY = 6
TM_AUGUST = 7
TM_SEPTEMBER = 8
TM_OCTOBER = 9
TM_NOVEMBER = 10
TM_DECEMBER = 11
| [
"[email protected]"
] | |
1e1d7ca3bfe15837aaed003514b62088a040f6d2 | 868ac4e558cf5fe945e8b557564f34f79b3ad01e | /purity_fb/purity_fb_1dot11/models/snmp_agent_response.py | 3eb0a329ee36e940b618e7040ff1ee601a4825ff | [
"Apache-2.0"
] | permissive | mabdelhafez/purity_fb_python_client | f4253ce8497fb3cff648e0a0cd1e567f48129fa7 | a9856875b3df43b4302a2e4addd1a6b71f51f5ce | refs/heads/master | 2022-04-20T09:24:22.031408 | 2020-04-20T22:11:32 | 2020-04-20T22:15:44 | 257,372,596 | 0 | 0 | NOASSERTION | 2020-04-20T18:40:24 | 2020-04-20T18:40:23 | null | UTF-8 | Python | false | false | 4,171 | py | # coding: utf-8
"""
Pure Storage FlashBlade REST 1.11 Python SDK
Pure Storage FlashBlade REST 1.11 Python SDK, developed by [Pure Storage, Inc](http://www.purestorage.com/). Documentations can be found at [purity-fb.readthedocs.io](http://purity-fb.readthedocs.io/).
OpenAPI spec version: 1.11
Contact: [email protected]
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from pprint import pformat
from six import iteritems
import re
class SnmpAgentResponse(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
"""
"""
Attributes:
swagger_types (dict): The key is attribute name
and the value is attribute type.
attribute_map (dict): The key is attribute name
and the value is json key in definition.
"""
swagger_types = {
'pagination_info': 'PaginationInfo',
'items': 'list[SnmpAgent]'
}
attribute_map = {
'pagination_info': 'pagination_info',
'items': 'items'
}
def __init__(self, pagination_info=None, items=None):
"""
SnmpAgentResponse - a model defined in Swagger
"""
self._pagination_info = None
self._items = None
if pagination_info is not None:
self.pagination_info = pagination_info
if items is not None:
self.items = items
@property
def pagination_info(self):
"""
Gets the pagination_info of this SnmpAgentResponse.
pagination information, only available in GET requests
:return: The pagination_info of this SnmpAgentResponse.
:rtype: PaginationInfo
"""
return self._pagination_info
@pagination_info.setter
def pagination_info(self, pagination_info):
"""
Sets the pagination_info of this SnmpAgentResponse.
pagination information, only available in GET requests
:param pagination_info: The pagination_info of this SnmpAgentResponse.
:type: PaginationInfo
"""
self._pagination_info = pagination_info
@property
def items(self):
"""
Gets the items of this SnmpAgentResponse.
A list of SNMP agents.
:return: The items of this SnmpAgentResponse.
:rtype: list[SnmpAgent]
"""
return self._items
@items.setter
def items(self, items):
"""
Sets the items of this SnmpAgentResponse.
A list of SNMP agents.
:param items: The items of this SnmpAgentResponse.
:type: list[SnmpAgent]
"""
self._items = items
def to_dict(self):
"""
Returns the model properties as a dict
"""
result = {}
for attr, _ in iteritems(self.swagger_types):
value = getattr(self, attr)
if isinstance(value, list):
result[attr] = list(map(
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
value
))
elif hasattr(value, "to_dict"):
result[attr] = value.to_dict()
elif isinstance(value, dict):
result[attr] = dict(map(
lambda item: (item[0], item[1].to_dict())
if hasattr(item[1], "to_dict") else item,
value.items()
))
else:
result[attr] = value
return result
def to_str(self):
"""
Returns the string representation of the model
"""
return pformat(self.to_dict())
def __repr__(self):
"""
For `print` and `pprint`
"""
return self.to_str()
def __eq__(self, other):
"""
Returns true if both objects are equal
"""
if not isinstance(other, SnmpAgentResponse):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
"""
Returns true if both objects are not equal
"""
return not self == other
| [
"[email protected]"
] | |
52890da7dbeb50e2962006085b369e2b130e3485 | f5ffd566166948c4202eb1e66bef44cf55a70033 | /test/test_array_of_role_no_i_ds.py | ce70095d1ed2d0425958c0329fc4b10467e24d33 | [] | no_license | skyportal/skyportal_client | ed025ac6d23589238a9c133d712d4f113bbcb1c9 | 15514e4dfb16313e442d06f69f8477b4f0757eaa | refs/heads/master | 2023-02-10T02:54:20.757570 | 2021-01-05T02:18:03 | 2021-01-05T02:18:03 | 326,860,562 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 2,321 | py | """
Fritz: SkyPortal API
SkyPortal provides an API to access most of its underlying functionality. To use it, you will need an API token. This can be generated via the web application from your profile page or, if you are an admin, you may use the system provisioned token stored inside of `.tokens.yaml`. ### Accessing the SkyPortal API Once you have a token, you may access SkyPortal programmatically as follows. #### Python ```python import requests token = 'ea70a5f0-b321-43c6-96a1-b2de225e0339' def api(method, endpoint, data=None): headers = {'Authorization': f'token {token}'} response = requests.request(method, endpoint, json=data, headers=headers) return response response = api('GET', 'http://localhost:5000/api/sysinfo') print(f'HTTP code: {response.status_code}, {response.reason}') if response.status_code in (200, 400): print(f'JSON response: {response.json()}') ``` #### Command line (curl) ```shell curl -s -H 'Authorization: token ea70a5f0-b321-43c6-96a1-b2de225e0339' http://localhost:5000/api/sysinfo ``` ### Response In the above examples, the SkyPortal server is located at `http://localhost:5000`. In case of success, the HTTP response is 200: ``` HTTP code: 200, OK JSON response: {'status': 'success', 'data': {}, 'version': '0.9.dev0+git20200819.84c453a'} ``` On failure, it is 400; the JSON response has `status=\"error\"` with the reason for the failure given in `message`: ```js { \"status\": \"error\", \"message\": \"Invalid API endpoint\", \"data\": {}, \"version\": \"0.9.1\" } ``` # Authentication <!-- ReDoc-Inject: <security-definitions> --> # noqa: E501
The version of the OpenAPI document: 0.9.dev0+git20201221.76627dd
Generated by: https://openapi-generator.tech
"""
import sys
import unittest
import openapi_client
from openapi_client.model.array_of_role_no_i_ds import ArrayOfRoleNoIDs
class TestArrayOfRoleNoIDs(unittest.TestCase):
"""ArrayOfRoleNoIDs unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testArrayOfRoleNoIDs(self):
"""Test ArrayOfRoleNoIDs"""
# FIXME: construct object with mandatory attributes with example values
# model = ArrayOfRoleNoIDs() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| [
"[email protected]"
] | |
293ff497bc9c02162313472b028ec2ddb6e186bc | dd7dc458691dcff1b2493c927acd62695c2187c4 | /lib/python2.7/site-packages/envisage/ui/workbench/workbench_plugin.py | 224c2068f00fc03f60552f917b2f9ce3c91fd991 | [] | no_license | stephenosullivan/science | 16e0c7fb441af29810cad630e6187961ad57398e | 164e82df0655337ac4966273d9cc489d002d8987 | refs/heads/master | 2021-03-27T09:52:05.330679 | 2015-07-25T04:51:25 | 2015-07-25T04:51:25 | 39,672,995 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,048 | py | """ The Envisage workbench plugin. """
# Enthought library imports.
from envisage.api import ExtensionPoint, Plugin, ServiceOffer
from traits.api import Callable, List
# This module's package.
PKG = '.'.join(__name__.split('.')[:-1])
class WorkbenchPlugin(Plugin):
""" The Envisage workbench plugin.
The workbench plugin uses the PyFace workbench to provide the basis of an
IDE-like user interface. The interface is made up of perspectives, views
and editors.
Note that this is not intended to be a 'general-purpose' plugin for user
interfaces - it provides an IDE-like style and that is all. If your
application requires another style of interface then write another plugin
(you can still re-use all the menu, group and action contribution stuff!).
"""
# The Ids of the extension points that this plugin offers.
ACTION_SETS = PKG + '.action_sets'
PERSPECTIVES = PKG + '.perspectives'
PREFERENCES_PAGES = PKG + '.preferences_pages'
WORKBENCH_SERVICE_OFFERS = PKG + '.service_offers'
VIEWS = PKG + '.views'
# The Ids of the extension points that this plugin contributes to.
PREFERENCES = 'envisage.preferences'
SERVICE_OFFERS = 'envisage.service_offers'
#### 'IPlugin' interface ##################################################
# The plugin's unique identifier.
id = 'envisage.ui.workbench'
# The plugin's name (suitable for displaying to the user).
name = 'Workbench'
#### Extension points offered by this plugin ##############################
action_sets = ExtensionPoint(
List(Callable), id=ACTION_SETS, desc="""
An action set contains the toobars, menus, groups and actions that you
would like to add to top-level workbench windows (i.e. the main
application window). You can create new toolbars, menus and groups
and/or add to existing ones.
Each contribution to this extension point must be a factory that
creates an action set, where 'factory' means any callable with the
following signature::
callable(**traits) -> IActionSet
The easiest way to contribute such a factory is to create a class
that derives from 'envisage.ui.action.api.ActionSet'.
"""
)
perspectives = ExtensionPoint(
List(Callable), id=PERSPECTIVES, desc="""
A perspective is simply an arrangment of views around the (optionally
hidden) editor area.
Each contribution to this extension point must be a factory that
creates a perspective, where 'factory' means any callable with the
following signature::
callable(**traits) -> IPerspective
The easiest way to contribute such a factory is to create a class
that derives from 'pyface.workbench.api.IPerspective'.
"""
)
preferences_pages = ExtensionPoint(
List(Callable), id=PREFERENCES_PAGES, desc="""
A preferences page appears in the preferences dialog to allow the user
to manipulate some preference values.
Each contribution to this extension point must be a factory that
creates a preferences page, where 'factory' means any callable with the
following signature::
callable(**traits) -> IPreferencesPage
The easiest way to contribute such a factory is to create a class
that derives from 'apptools.preferences.ui.api.IPreferencesPage'.
"""
)
service_offers = ExtensionPoint(
List(ServiceOffer),
id = WORKBENCH_SERVICE_OFFERS,
desc = """
Services are simply objects that a plugin wants to make available to
other plugins. This extension point allows you to offer 'per
window' services that are created 'on-demand' (where 'on demand' means
the first time somebody looks up a service of the appropriate
protocol).
e.g.
my_service_offer = ServiceOffer(
protocol = 'acme.IMyService',
factory = an_object_or_a_callable_that_creates_one,
properties = {'a dictionary' : 'that is passed to the factory'}
)
        Any properties specified are passed as keyword arguments to the
factory, i.e. the factory signature is::
callable(**properties)
"""
)
views = ExtensionPoint(
List(Callable), id=VIEWS, desc="""
A view provides information to the user to support their current
task. Views can contain anything you like(!) and are arranged around
the (optionally hidden) editor area. The user can re-arrange views as
he/she sees fit.
Each contribution to this extension point must be a factory that
creates a view, where 'factory' means any callable with the following
signature::
callable(**traits) -> IView
The easiest way to contribute such a factory is to create a class
that derives from 'pyface.workbench.api.View'.
It is also common to use a simple function (especially when a view
is a representation of a service) e.g::
def foo_view_factory(**traits):
' Create a view that is a representation of a service. '
foo = self.application.get_service('IFoo')
return FooView(foo=foo, **traits)
"""
)
#### Contributions to extension points made by this plugin ################
my_action_sets = List(contributes_to=ACTION_SETS)
def _my_action_sets_default(self):
""" Trait initializer. """
from default_action_set import DefaultActionSet
return [DefaultActionSet]
my_preferences = List(contributes_to=PREFERENCES)
def _my_preferences_default(self):
""" Trait initializer. """
return ['pkgfile://envisage.ui.workbench/preferences.ini']
my_preferences_pages = List(contributes_to=PREFERENCES_PAGES)
def _my_preferences_pages_default(self):
""" Trait initializer. """
from workbench_preferences_page import WorkbenchPreferencesPage
return [WorkbenchPreferencesPage]
my_service_offers = List(contributes_to=SERVICE_OFFERS)
def _my_service_offers_default(self):
""" Trait initializer. """
preferences_manager_service_offer = ServiceOffer(
protocol = 'apptools.preferences.ui.preferences_manager'
'.PreferencesManager',
factory = self._create_preferences_manager_service
)
workbench_service_offer = ServiceOffer(
protocol = 'envisage.ui.workbench.workbench.Workbench',
factory = self._create_workbench_service
)
return [preferences_manager_service_offer, workbench_service_offer]
###########################################################################
# Private interface.
###########################################################################
def _create_preferences_manager_service(self, **properties):
""" Factory method for the preferences manager service. """
from apptools.preferences.ui.api import PreferencesManager
preferences_manager = PreferencesManager(
pages=[factory() for factory in self.preferences_pages]
)
return preferences_manager
def _create_workbench_service(self, **properties):
""" Factory method for the workbench service. """
# We don't actually create the workbench here, we just return a
# reference to it.
#
# fixme: This guard is really just for testing when we have the
# workbench plugin as a source egg (i.e. if the egg is on our path
# then we get the plugin for any egg-based application, even if it is
# not a workbench application!).
return getattr(self.application, 'workbench', None)
### EOF ######################################################################
| [
"[email protected]"
] | |
e15921c3602f09639e1a75b780376560ca94e509 | 0dc816af0b9feecc4ba672eca979654caa0c91bc | /main/ordinance/views.py | a7404d9f30bb4aefab716237a5b288cab1a41885 | [] | no_license | Stelmaszv/remote-learning | b57589ed5bde8387c0d114951b13ad37ebf80f68 | ae567c473e50826edb98a4b434e63cc446be0852 | refs/heads/master | 2022-11-25T17:08:15.658486 | 2020-08-07T14:43:59 | 2020-08-07T14:43:59 | 256,490,629 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 8,921 | py | from core.baseview import baseCreate,baseListView,baseShowView,baseUpdateView;
from core.decorators import login_required,login_manager,login_educator
from authorization.forms import educator,manager
from .forms import Lesson as LessonForm,TasksSolution,TasksSetRote,AccountForm,DashbordForm
from .models import Lesson,Tasks,Classroom,Dashbord,DashbordType
from authorization.models import Account,AccountType
from authorization.formMenager import passwordGeneartor
from helpel import email
from django.shortcuts import redirect,render
import datetime
class add_Student(baseCreate):
template_name = 'ordinance/addperson.html'
success_url = '/ordinance/myStudents/'
form=educator
getObject = Account
@login_educator
def get(self, request, *args, **kwargs) ->baseCreate:
return self.addGet(request)
def postSave(self, request, *args, **kwargs)-> None:
item = Account.objects.latest('id')
item.username = self.form.cleaned_data['first_name'] + ' ' + self.form.cleaned_data['last_name']
password = passwordGeneartor().setPassword()
print(password)
item.set_password(password)
item.staff = True
item.is_student = request.user.is_educator
item.save()
email().sent('Dane do konta', 'kotek',['[email protected]'])
class add_Personel(baseCreate):
template_name = 'ordinance/addperson.html'
success_url = '/ordinance/myPersonel/'
form=manager
@login_manager
def get(self, request, *args, **kwargs) ->baseCreate:
return self.addGet(request)
def postSave(self, request, *args, **kwargs) -> None:
item = Account.objects.latest('id')
item.username = self.form.cleaned_data['first_name'] + ' ' + self.form.cleaned_data['last_name']
password = passwordGeneartor().setPassword()
print(password)
item.set_password(password)
item.staff = True
item.save()
email().sent('Dane do konta', 'kotek', ['[email protected]'])
class addDashbord(baseCreate):
template_name = 'ordinance/addLesson.html'
success_url = '/'
form = DashbordForm
def postSave(self, request, *args, **kwargs) -> None:
Type = DashbordType.objects.get(name='normal')
self.item.author=request.user
self.item.type=Type
self.item.save()
class addLesson(baseCreate):
template_name = 'ordinance/addLesson.html'
success_url = '/'
form=LessonForm
def get(self, request, *args, **kwargs)->baseCreate:
self.form.email=request.user.email
return self.addGet(request)
def post(self,request, *args, **kwargs)->baseCreate:
self.form.email = request.user.email
print(request)
return self.addPost(request)
def postSave(self, request, *args, **kwargs) -> None:
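        # create one empty Task per student in the lesson's classroom,
        # then publish the lesson to the student dashboard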
        classroom = Classroom.objects.get(name=self.item.classroom).students.all()
        for student in classroom:
task = Tasks(student=student, data_recived=False,lessons=self.item,rote=0)
task.save()
self.item.tasks.add(task)
self.item.save()
Type=DashbordType.objects.get(name='lesson')
place=AccountType.objects.get(name='student')
dashbord=Dashbord(theme=self.item.theme,description=self.item.description,place=place,lesson=self.item,type=Type,author=request.user)
dashbord.save()
class myStudents(baseListView):
template_name = 'ordinance/myStudents.html'
@login_educator
def get(self, request, *args, **kwargs)->baseListView:
return self.addGet(request)
def setContext(self, request)->baseListView:
self.context = {
'items': Account.objects.filter(is_student__name=request.user.is_educator).order_by('-last_name')
}
class myLesson(baseListView):
template_name = 'ordinance/myLessons.html'
def get(self, request, *args, **kwargs)->baseListView:
return self.addGet(request)
def setContext(self, request)->baseListView:
self.context = {
'items': Lesson.objects.filter(teacher=request.user).order_by('-data')
}
class myTask(baseListView):
template_name = 'ordinance/myTasks.html'
def get(self, request, *args, **kwargs)->baseListView:
return self.addGet(request)
def setContext(self, request)->baseListView:
self.context = {
'items': self.set_Data(self.set_Objects(request),request)
}
def set_Data(self,objects,request)->list:
for item in objects:
for task in item.tasks.all():
if task.student == request.user:
item.idAction=task.id
item.stan = 'ToAceptRecived'
if task.data_recived == True:
item.stan = 'ConfirmRecived'
if task.rote>0:
item.stan = 'rote'
item.rote = task.rote
return objects
def set_Objects(self,request)->list:
lesson = Lesson.objects.all()
lessonNewArray=[];
for item in lesson:
if item.classroom == request.user.is_student:
lessonNewArray.append(item)
return lessonNewArray
class sentSolution(baseUpdateView):
success_url = '/'
template_name = 'ordinance/sentSolution.html'
getObject = Tasks
form = TasksSolution
def setContext(self,request, *args, **kwarg)->baseUpdateView:
self.context={
'item':Tasks.objects.get(id=self.kwargs.get("id")),
'form':self.form
}
class setRote(baseUpdateView):
success_url = '/'
template_name = 'ordinance/sentSolution.html'
getObject = Tasks
form = TasksSetRote
def postSave(self, request, *args, **kwargs)-> None:
self.item.rotedata=datetime.datetime.now()
self.item.save()
class myRotes(baseListView):
getObject = Tasks
template_name = 'ordinance/myrotes.html'
def get(self, request, *args, **kwargs)->baseListView:
return self.addGet(request)
def setContext(self,request)->baseListView:
self.context={
'items':self.get_object(request),
}
def get_object(self,request):
query=self.getObject.objects.filter(student__email=request.user.email)
return query
class ShowLesson(baseShowView):
template_name='ordinance/showlesson.html'
getObject=Lesson
def setContext(self,request)->baseShowView:
self.context={
'context':self.get_object(),
'students':self.get_students()
}
def get_students(self)->list:
tasks=self.get_object().tasks.all()
for task in tasks:
task.status = 'ToAceptRecived'
if task.data_recived == True:
task.status= 'ConfirmRecived'
if task.taskfile:
task.status = ''
return tasks
class sentMess(baseUpdateView):
success_url = '/ordinance/myStudents/'
template_name = 'ordinance/sentMess.html'
getObject = Account
form = AccountForm
def post(self,request, *args, **kwargs)->baseUpdateView:
self.setContext(request)
self.form = self.setform(request)
if self.form.is_valid():
email().sent(self.form.cleaned_data['subject'], self.form.cleaned_data['message'], [self.get_object().email])
return redirect(self.success_url)
else:
self.setContext(request)
return render(request, self.template_name, self.context)
class passwordReset(baseShowView):
template_name = 'ordinance/showlesson.html'
success_url = '/ordinance/myStudents/'
getObject = Account
def get(self, request, *args, **kwargs)->baseShowView:
password = passwordGeneartor().setPassword()
print(password)
item = Account.objects.get(id=self.kwargs.get("id"))
mess= 'Email : '+item.email+' hasło: '+password
email().sent('Nowe hasło', mess, [item.email])
item.set_password(password)
item.save()
return redirect(self.success_url)
class ConfirmRecivedLesson(baseUpdateView):
getObject = Tasks
template_name = 'ordinance/showlesson.html'
def get(self, request, *args, **kwargs)->baseUpdateView:
id_ = self.kwargs.get("id")
item=Tasks.objects.get(id=id_)
item.data_recived=True
item.save()
self.success_url = '/ordinance/ShowLesson/'+str(item.lessons.id)
return redirect(self.success_url)
class myPersonel(baseListView):
template_name = 'ordinance/myPersonel.html'
@login_manager
def get(self, request, *args, **kwargs)->baseListView:
return self.addGet(request)
def setContext(self, request)->baseListView:
self.context = {
'items': Account.objects.filter(is_student__name=request.user.is_educator).order_by('-last_name')
}
| [
"[email protected]"
] | |
995f17f49f0cc20090ed4da3fc31fdabd4c2e5df | 6a61ef12621c8a917d160db62415487fe2c469f7 | /aliyun-python-sdk-outboundbot/aliyunsdkoutboundbot/request/v20191226/DeleteJobGroupRequest.py | 6edfb7caf350c296ba47360d1600bde52a8e0e09 | [
"Apache-2.0"
] | permissive | zhangwp-cn/aliyun-openapi-python-sdk | f0b15369665a956490534c942676ed15410196f7 | a560e38f97351db05d13f0588f7bdfb4292ed3ae | refs/heads/master | 2022-09-08T13:31:26.842867 | 2020-06-04T03:23:30 | 2020-06-04T03:23:30 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,607 | py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#
# http://www.apache.org/licenses/LICENSE-2.0
#
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from aliyunsdkcore.request import RpcRequest
from aliyunsdkoutboundbot.endpoint import endpoint_data
class DeleteJobGroupRequest(RpcRequest):
def __init__(self):
RpcRequest.__init__(self, 'OutboundBot', '2019-12-26', 'DeleteJobGroup','outboundbot')
if hasattr(self, "endpoint_map"):
setattr(self, "endpoint_map", endpoint_data.getEndpointMap())
if hasattr(self, "endpoint_regional"):
setattr(self, "endpoint_regional", endpoint_data.getEndpointRegional())
def get_InstanceId(self):
return self.get_query_params().get('InstanceId')
def set_InstanceId(self,InstanceId):
self.add_query_param('InstanceId',InstanceId)
def get_JobGroupId(self):
return self.get_query_params().get('JobGroupId')
def set_JobGroupId(self,JobGroupId):
self.add_query_param('JobGroupId',JobGroupId) | [
"[email protected]"
] | |
b1c06b4f2309fc8aaaf3b6ce7edbcf6c31ace1aa | e456ec1f174091a1024dd0ebd7c8f011b3399367 | /Test.py | 0045530c15143de17da121041d45b4836a662166 | [] | no_license | Gummy27/Forritun | 414e98c0020bdf71a8c2a9b3757ece19c0d01172 | 6a312db6e5a451fac1e6830d7e249663739f15f2 | refs/heads/master | 2023-02-15T03:01:00.307741 | 2021-01-07T11:01:37 | 2021-01-07T11:01:37 | 177,342,511 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 57 | py | s1 = "1"
s2 = "2"
print(f'Here comes the number 1: {1}') | [
"[email protected]"
] | |
0b4681cbbbd15b1ae82f979dfb0855a484f541fc | 8e3b452b08139f25be824fae2b8b7aabb158d888 | /6.00.1.x/Week3/Lecture5/lectureCode_Lec5-towers.py | 13861370c38b6bf6a8bbf93b0af680633678f9d6 | [] | no_license | prasannabe2004/MITx | d38a11e38a0abb73ffa37dccb363f779011155ab | 1954b5fc31004c94f46fc8194b7fa773108c4493 | refs/heads/master | 2020-05-16T19:14:00.963550 | 2015-08-07T18:50:12 | 2015-08-07T18:50:12 | 25,537,861 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 285 | py | def printMove(fr, to):
print('move from ' + str(fr) + ' to ' + str(to))
def Towers(n, fr, to, spare):
if n == 1:
printMove(fr, to)
else:
Towers(n-1, fr, spare, to)
Towers(1, fr, to, spare)
Towers(n-1, spare, to, fr)
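# A tower of n disks always takes 2**n - 1 moves, so the call below prints 31 moves.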
Towers(5, 'f','t','s') | [
"[email protected]"
] | |
2e9e653a3ba5f6b2d39e8bc2a9b81531627f0d53 | be5c86e8fe3f5836b7d2097dd5272c72b5b28f15 | /binary-search/Python/0069-sqrtx(调试代码).py | 34fb4dc1e8fd789b231dfc3dc042a189448bc516 | [
"Apache-2.0"
] | permissive | lemonnader/LeetCode-Solution-Well-Formed | d24674898ceb5441c036016dc30afc58e4a1247a | baabdb1990fd49ab82a712e121f49c4f68b29459 | refs/heads/master | 2021-04-23T18:49:40.337569 | 2020-03-24T04:50:27 | 2020-03-24T04:50:27 | 249,972,064 | 1 | 0 | Apache-2.0 | 2020-03-25T12:26:25 | 2020-03-25T12:26:24 | null | UTF-8 | Python | false | false | 1,303 | py | class Solution:
def mySqrt(self, x: int) -> int:
if x == 0:
return 0
left = 1
right = x // 2
while left < right:
            # Debug start: sleep 1 second each iteration so the interval endpoints are easy to watch
import time
time.sleep(1)
            print('debug: endpoints, midpoint, and branch taken: left = {} , right = {} , '.format(left, right), end='')
            # debug end
            # Deliberate bug: the left branch (left = mid) does not shrink the interval, so the midpoint here should be the right (upper) midpoint
# mid = left + (right - left) // 2
mid = (left + right) >> 1
            # debug
print('mid = {} ,'.format(mid), end=' ')
square = mid * mid
if square > x:
                # debug
                print('taking the right = mid - 1 branch.')
right = mid - 1
else:
                # debug
                print('taking the left = mid branch.')
left = mid
return left
if __name__ == '__main__':
    # When x = 8 the code happens to return the correct answer; x = 9 below makes this buggy midpoint choice loop forever
x = 9
solution = Solution()
res = solution.mySqrt(x)
print(res)
| [
"[email protected]"
] | |
21e37b4f7a6e38423629ff7f88949c775997a74a | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p02536/s375378291.py | 93ecd40e03e6ad422973faf79ca95508b26c6569 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,207 | py | import sys
sys.setrecursionlimit(10 ** 9)
class UnionFind():
def __init__(self, n):
self.n = n
self.root = [-1]*(n+1)
self.rank = [0]*(n+1)
    def find(self, x):  # find the root of x's set
if self.root[x] < 0:
return x
else:
            self.root[x] = self.find(self.root[x])  # recurse, compressing the path
return self.root[x]
def unite(self, x, y):
x = self.find(x)
y = self.find(y)
if x == y:
return
        elif self.rank[x] > self.rank[y]:  # attach the shallower tree under the deeper one
            self.root[x] += self.root[y]
            self.root[y] = x  # make x the parent of y
else:
self.root[y] += self.root[x]
self.root[x] = y
if self.rank[x] == self.rank[y]:
self.rank[y] += 1
    def issame(self, x, y):  # are x and y in the same set?
return self.find(x) == self.find(y)
    def count(self, x):  # number of elements in x's set
return (-1)*self.root[self.find(x)]
n, m = map(int, input().split())
uf = UnionFind(n)
for i in range(m):
a, b = map(int, input().split())
uf.unite(a-1, b-1)
ans = set()
for i in range(n):
ans.add(uf.find(i))
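# each distinct root is one connected component; linking k components needs k-1 extra roads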
print(len(ans)-1) | [
"[email protected]"
] | |
e1508f8201b4113f896bf0ace8208bf541a2431b | de4d88db6ea32d20020c169f734edd4b95c3092d | /aiotdlib/api/types/sponsored_message.py | d0baa01e34edffdfdd0b1242e871c3ddd8921c86 | [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
] | permissive | thiagosm/aiotdlib | 5cc790a5645f7e4cc61bbd0791433ed182d69062 | 4528fcfca7c5c69b54a878ce6ce60e934a2dcc73 | refs/heads/main | 2023-08-15T05:16:28.436803 | 2021-10-18T20:41:27 | 2021-10-18T20:41:27 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,493 | py | # =============================================================================== #
# #
# This file has been generated automatically!! Do not change this manually! #
# #
# =============================================================================== #
from __future__ import annotations
import typing
from pydantic import Field
from .internal_link_type import InternalLinkType
from .message_content import MessageContent
from ..base_object import BaseObject
class SponsoredMessage(BaseObject):
"""
Describes a sponsored message
:param id: Unique sponsored message identifier
:type id: :class:`int`
:param sponsor_chat_id: Chat identifier
:type sponsor_chat_id: :class:`int`
:param link: An internal link to be opened when the sponsored message is clicked; may be null. If null, the sponsor chat needs to be opened instead, defaults to None
:type link: :class:`InternalLinkType`, optional
:param content: Content of the message
:type content: :class:`MessageContent`
"""
ID: str = Field("sponsoredMessage", alias="@type")
id: int
sponsor_chat_id: int
link: typing.Optional[InternalLinkType] = None
content: MessageContent
@staticmethod
def read(q: dict) -> SponsoredMessage:
return SponsoredMessage.construct(**q)
| [
"[email protected]"
] | |
58735fe65a67b7f724a8be2f26ad1e17b44edd41 | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03285/s132035467.py | cb2b23dc5e61a4328b3cff2153a67f2c568cb830 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 118 | py | n=int(input())
for i in range(n):
for j in range(n):
if i*4+j*7==n:
print('Yes')
exit()
print('No') | [
"[email protected]"
] | |
af66f3e9667cc2d7a9aca8543be26bbdbeffb849 | af9c0aafa10b7901533de0b32177ab80b4782d3f | /notes/code/youtube/comments_one_video.py | 0ae8f2715bd2cd2765d7e2162e6561247db18f41 | [
"MIT"
] | permissive | Akramz/msds692 | d1d33298b7599950e95838c0fc9ddbd47a98ed5b | 42f4c2a0dc7569152bac2439e9b6385f2f101f7b | refs/heads/master | 2023-01-25T00:44:11.197544 | 2020-12-05T22:05:14 | 2020-12-05T22:05:14 | 319,362,758 | 1 | 0 | MIT | 2020-12-07T15:31:12 | 2020-12-07T15:31:11 | null | UTF-8 | Python | false | false | 708 | py | import sys
from googleapiclient.discovery import build
DEVELOPER_KEY = sys.argv[1]
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
video_id = "gU_gYzwTbYQ" # bonkers the cat
# code from https://developers.google.com/youtube/v3/docs/comments/list
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)
results = youtube.commentThreads().list(
part="snippet",
videoId=video_id,
textFormat="plainText"
).execute()
for item in results["items"]:
comment = item["snippet"]["topLevelComment"]
author = comment["snippet"]["authorDisplayName"]
text = comment["snippet"]["textDisplay"]
print("Comment by %s: %s" % (author, text))
| [
"[email protected]"
] | |
1b14e0893000f94e90a7478eb66d700400cb0141 | 7882860350c714e6c08368288dab721288b8d9db | /1일차/if(8번문제).py | 9db67865be7d0871db81bafb600eeaa1d088a3f2 | [] | no_license | park-seonju/Algorithm | 682fca984813a54b92a3f2ab174e4f05a95921a8 | 30e5bcb756e9388693624e8880e57bc92bfda969 | refs/heads/master | 2023-08-11T18:23:49.644259 | 2021-09-27T10:07:49 | 2021-09-27T10:07:49 | 388,741,922 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 192 | py | result=[]
for i in range(100,301):
    a=int(i/100) # hundreds digit (must cast to int)
    b=int(i/10)  # hundreds+tens prefix; its parity equals the tens digit's parity
if(a % 2 == 0 and b % 2 == 0 and i % 2 == 0):
result.append(str(i))
print(",".join(result))
| [
"[email protected]"
] | |
76283425866e43198277a6f4f43dcc74ae590214 | e1009433697344f0ce6ec953f086be698fa4e6c4 | /parsmodel.py | 10d1dbeb9a2e033a4397f7c7bf345fec03e56af2 | [] | no_license | bladas/online-store | 7e848bad1137cf7886cec6bf7563867e5f8f5e36 | 6fd68e0d1318b796b05a94fa5547d5e87a2b0172 | refs/heads/master | 2023-05-02T07:11:55.614313 | 2020-01-06T14:20:19 | 2020-01-06T14:20:19 | 216,340,778 | 0 | 0 | null | 2023-04-21T20:38:49 | 2019-10-20T10:00:46 | Python | UTF-8 | Python | false | false | 2,339 | py | import json
from home.models import Category, UnderCategory, Product
def create(json, Category, UnderCategory, Product):
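    # Note: Django's Model.objects.get_or_create() would express this
    # get-then-create-on-miss pattern more directly.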
with open('citrus.json', 'r') as json_file:
data = json.load(json_file)
for elem in data:
print(elem.get('name'))
print(elem.get('category'))
print(elem.get('undercategory'))
print(elem.get('price'))
# new_category = Category.objects.create(title=elem.get('category'))
# new_uc = UnderCategory.objects.create(title=elem.get('undercategory'), category=new_category)
# new_product = Product.objects.create(name=elem.get('name'), ucategory=new_uc)
# new_category.save()
# new_uc.save()
# new_product = Product.objects.create(name=elem.get('name'), ucategory=new_uc)
try:
category = Category.objects.get(title=elem.get('category'))
try:
ucategory = UnderCategory.objects.get(title=elem.get('undercategory'), category=category)
new_product = Product.objects.create(name=elem.get('name'), ucategory=ucategory,
price=elem.get('price'))
new_product.save()
except:
                    new_uc = UnderCategory.objects.create(title=elem.get('undercategory'), category=category)
new_uc.save()
new_product = Product.objects.create(name=elem.get('name'), ucategory=new_uc,
price=elem.get('price'))
new_product.save()
except:
new_category = Category.objects.create(title=elem.get('category'))
new_category.save()
try:
print(UnderCategory.objects.get(title=elem.get('undercategory'), category=new_category))
except:
new_uc = UnderCategory.objects.create(title=elem.get('undercategory'), category=new_category)
new_uc.save()
new_product = Product.objects.create(name=elem.get('name'), ucategory=new_uc,price=elem.get('price'))
new_product.save()
# print(create())
create(json, Category, UnderCategory, Product)
| [
"[email protected]"
] | |
56d37d047190975695cb0168c225c11656be6066 | d94b6845aeeb412aac6850b70e22628bc84d1d6d | /routing_transformer/routing_tf_api.py | ddc35172d2adda48e5cb8cb0ef32aaa4146d4629 | [
"CC-BY-4.0",
"Apache-2.0"
] | permissive | ishine/google-research | 541aea114a68ced68736340e037fc0f8257d1ea2 | c1ae273841592fce4c993bf35cdd0a6424e73da4 | refs/heads/master | 2023-06-08T23:02:25.502203 | 2023-05-31T01:00:56 | 2023-05-31T01:06:45 | 242,478,569 | 0 | 0 | Apache-2.0 | 2020-06-23T01:55:11 | 2020-02-23T07:59:42 | Jupyter Notebook | UTF-8 | Python | false | false | 7,727 | py | # coding=utf-8
# Copyright 2023 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pdb
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
tf.get_logger().setLevel('ERROR')
from tensor2tensor import models
from tensor2tensor import problems
from tensor2tensor.utils import trainer_lib
from tensor2tensor.utils import hparams_lib
from tensor2tensor.utils import registry
from tensor2tensor.utils import metrics
from tensor2tensor.data_generators import text_encoder
from tensor2tensor.data_generators import problem
from routing_transformer.problems import pg19
from tensorflow.compat.v1 import estimator as tf_estimator
from tqdm import tqdm
from routing_transformer.sparse_transformer import SparseTransformer
import numpy as np
import random
from scipy.special import log_softmax
VOCAB_PATH = "/mnt/nfs/work1/miyyer/simengsun/in-book-retrieval/RT-data/vocab.pg19_length8k.32768.subwords"
HPARAMS_PATH = "/mnt/nfs/work1/miyyer/simengsun/in-book-retrieval/RT-models/rt-checkpoint/hparams.json"
CKPT_PATH = "/mnt/nfs/work1/miyyer/simengsun/in-book-retrieval/RT-models/rt-checkpoint/ckpt-3530000"
MAX_SEQUENCE_LENGTH = 8192
class SparseTransformerWrapper(object):
def __init__(self, max_seq_length=None):
# Load hyperparameters
self.max_seq_length = max_seq_length or MAX_SEQUENCE_LENGTH
# Needed since RT uses blocks of size 256
assert self.max_seq_length % 256 == 0
hparams = hparams_lib.create_hparams_from_json(HPARAMS_PATH)
hparams.use_tpu = False
hparams = zero_dropout(hparams)
# Build TF1 graph of model
sptf_model = SparseTransformer(hparams, tf_estimator.ModeKeys.EVAL)
self.input_nodes = {
"targets": tf.placeholder(tf.int32, [None, self.max_seq_length])
}
self.output_nodes = sptf_model.body(self.input_nodes)
# Map the checkpoint variables to the graph
init_from_checkpoint(CKPT_PATH, variable_prefix="sparse_transformer/body")
# create a session object, and actually initialize the graph
self.sess = tf.Session()
self.sess.run(tf.global_variables_initializer())
self.encoder = text_encoder.SubwordTextEncoder(VOCAB_PATH)
def forward(self, sentences, encode_sentences=True, relevant_subsequences=None):
encoded_sents = []
encoded_seqs_no_pad = []
if encode_sentences:
for sent in sentences:
encoded = []
for line in sent.split("\n"):
new_tokens = self.encoder.encode(line.strip())
if len(encoded) + len(new_tokens) >= self.max_seq_length:
break
encoded.extend(new_tokens)
encoded.append(text_encoder.EOS_ID)
encoded_seqs_no_pad.append(encoded)
# pad shorter sequences to the full length
encoded = encoded + [text_encoder.PAD_ID for _ in range(self.max_seq_length - len(encoded))]
assert len(encoded) == self.max_seq_length
encoded_sents.append(encoded)
else:
# assume sentences are encoded, pad/truncate them
for sent in sentences:
sent = sent[:self.max_seq_length]
encoded_seqs_no_pad.append(sent)
sent = sent + [text_encoder.PAD_ID for _ in range(self.max_seq_length - len(sent))]
encoded_sents.append(sent)
feed_dict = {
self.input_nodes["targets"]: np.array(encoded_sents)
}
outputs = self.sess.run(self.output_nodes, feed_dict=feed_dict)
return_outputs = {
"logits": np.squeeze(outputs[0], axis=(2, 3)),
"loss": outputs[1]["training"],
"encoded_seqs_no_pad": encoded_seqs_no_pad
}
if relevant_subsequences is not None:
for i, rss in enumerate(relevant_subsequences):
encoded_subseq = self.encoder.encode(rss)
positions = find_sub_list(encoded_subseq, encoded_sents[i])
misaligned_prefix_length = 0
while positions is None:
misaligned_prefix_length += 1
encoded_subseq = encoded_subseq[1:]
positions = find_sub_list(encoded_subseq, encoded_sents[i])
start, end = positions[-1]
relevant_logits = return_outputs["logits"][i][start:end]
log_probs = log_softmax(relevant_logits, axis=1)
gold_log_probs = [lp[index] for index, lp in zip(encoded_subseq, log_probs)]
return_outputs["subseq_log_loss"] = -1 * np.mean(gold_log_probs)
return_outputs["misaligned_prefix_length"] = misaligned_prefix_length
return return_outputs
def close(self):
self.sess.close()
def find_sub_list(sl, l):
"""Find sub-string, so as to be able to compute ppl of a sub-string."""
sll=len(sl)
matches = []
for ind in (i for i,e in enumerate(l) if e == sl[0]):
if l[ind:ind + sll] == sl:
matches.append(
(ind, ind + sll)
)
if matches:
return matches
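# Example: the returned spans are (start, end) with an exclusive end, e.g.
#   find_sub_list([2, 3], [1, 2, 3, 0, 2, 3]) -> [(1, 3), (4, 6)]
# When there is no match the function implicitly returns None, which the
# `while positions is None` loop in forward() above relies on.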
def zero_dropout(hparams):
hparams.input_dropout = 0.0
hparams.dropout = 0.0
hparams.relu_dropout = 0.0
hparams.attention_dropout = 0.0
hparams.layer_prepostprocess_dropout = 0.0
return hparams
def log_variables(name, var_names):
tf.logging.info("%s (%d total): %s", name, len(var_names),
random.sample(var_names, min(len(var_names), 5)))
def init_from_checkpoint(checkpoint_path,
checkpoint_prefix=None,
variable_prefix=None,
target_variables=None):
"""Initializes all of the variables using `init_checkpoint."""
tf.logging.info("Loading variables from %s", checkpoint_path)
checkpoint_variables = {
name: name for name, _ in tf.train.list_variables(checkpoint_path) if "Adafactor" not in name
}
if target_variables is None:
target_variables = tf.trainable_variables()
target_variables = {var.name.split(":")[0]: var for var in target_variables}
if checkpoint_prefix is not None:
checkpoint_variables = {
checkpoint_prefix + "/" + name: varname
for name, varname in checkpoint_variables.items()
}
if variable_prefix is not None:
target_variables = {
variable_prefix + "/" + name: var
for name, var in target_variables.items()
}
checkpoint_var_names = set(checkpoint_variables.keys())
target_var_names = set(target_variables.keys())
intersected_var_names = target_var_names & checkpoint_var_names
assignment_map = {
checkpoint_variables[name]: target_variables[name]
for name in intersected_var_names
}
tf.train.init_from_checkpoint(checkpoint_path, assignment_map)
log_variables("Loaded variables", intersected_var_names)
log_variables("Uninitialized variables", target_var_names - checkpoint_var_names)
log_variables("Unused variables", checkpoint_var_names - target_var_names)
| [
"[email protected]"
] | |
8365348c8c8d72df578af246b3fea656a5feed86 | 727f1bc2205c88577b419cf0036c029b8c6f7766 | /out-bin/py/google/fhir/labels/bundle_to_label.runfiles/com_google_fhir/external/pypi__apache_beam_2_9_0/apache_beam/runners/direct/bundle_factory.py | 032aadc4fe49359d4995e2916d7a25262bdded85 | [
"Apache-2.0"
] | permissive | rasalt/fhir | 55cf78feed3596a3101b86f9e9bbf6652c6ed4ad | d49883cc4d4986e11ca66058d5a327691e6e048a | refs/heads/master | 2020-04-13T00:16:54.050913 | 2019-01-15T14:22:15 | 2019-01-15T14:22:15 | 160,260,223 | 0 | 0 | Apache-2.0 | 2018-12-03T22:07:01 | 2018-12-03T22:07:01 | null | UTF-8 | Python | false | false | 154 | py | /home/rkharwar/.cache/bazel/_bazel_rkharwar/c4bcd65252c8f8250f091ba96375f9a5/external/pypi__apache_beam_2_9_0/apache_beam/runners/direct/bundle_factory.py | [
"[email protected]"
] | |
f951e1cff4773f3d7bbafa8a8da8f51e39292a6b | 6fa7f99d3d3d9b177ef01ebf9a9da4982813b7d4 | /cyzbSvpfSzDjGi4TB_6.py | 3dc41e9ce3d55a84ccef17f3ab0f837b05e5f6c6 | [] | no_license | daniel-reich/ubiquitous-fiesta | 26e80f0082f8589e51d359ce7953117a3da7d38c | 9af2700dbe59284f5697e612491499841a6c126f | refs/heads/master | 2023-04-05T06:40:37.328213 | 2021-04-06T20:17:44 | 2021-04-06T20:17:44 | 355,318,759 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 74 | py |
def harmonic(n):
return round(sum([1/x for x in range(1, n+1)] ), 3 )
| [
"[email protected]"
] | |
d01c60729dde3704ca76f6163bb970a349a73025 | 2ee66e6485f0c68adb6367b3697c4a3cb32c9b7e | /tests/test_mpi.py | b945048d2f32120065d20655192115c3e52a001e | [
"MIT"
] | permissive | yuriyi/devito | 8836f741b360946ce2b439b88b78bf267279e90b | d0a29b3ad3653c20f88b35c2780471cef107b7a2 | refs/heads/master | 2020-04-01T05:49:17.221770 | 2018-10-13T18:34:46 | 2018-10-13T18:34:46 | 152,920,977 | 0 | 0 | MIT | 2018-10-13T22:52:12 | 2018-10-13T22:52:12 | null | UTF-8 | Python | false | false | 38,484 | py | import numpy as np
import pytest
from conftest import skipif_yask
from devito import (Grid, Constant, Function, TimeFunction, SparseFunction,
SparseTimeFunction, Dimension, ConditionalDimension,
SubDimension, Eq, Inc, Operator)
from devito.ir.iet import Call, Conditional, FindNodes
from devito.mpi import MPI, copy, sendrecv, update_halo
from devito.parameters import configuration
from devito.types import LEFT, RIGHT
@skipif_yask
class TestPythonMPI(object):
@pytest.mark.parallel(nprocs=[2, 4])
def test_partitioning(self):
grid = Grid(shape=(15, 15))
f = Function(name='f', grid=grid)
distributor = grid.distributor
expected = { # nprocs -> [(rank0 shape), (rank1 shape), ...]
2: [(15, 8), (15, 7)],
4: [(8, 8), (8, 7), (7, 8), (7, 7)]
}
assert f.shape == expected[distributor.nprocs][distributor.myrank]
@pytest.mark.parallel(nprocs=[2, 4])
def test_partitioning_fewer_dims(self):
"""Test domain decomposition for Functions defined over a strict subset
of grid-decomposed dimensions."""
size_x, size_y = 16, 16
grid = Grid(shape=(size_x, size_y))
x, y = grid.dimensions
# A function with fewer dimensions that in `grid`
f = Function(name='f', grid=grid, dimensions=(y,), shape=(size_y,))
distributor = grid.distributor
expected = { # nprocs -> [(rank0 shape), (rank1 shape), ...]
2: [(8,), (8,)],
4: [(8,), (8,), (8,), (8,)]
}
assert f.shape == expected[distributor.nprocs][distributor.myrank]
@pytest.mark.parallel(nprocs=9)
def test_neighborhood_2d(self):
grid = Grid(shape=(3, 3))
x, y = grid.dimensions
distributor = grid.distributor
# Rank map:
# ---------------y
# | 0 | 1 | 2 |
# -------------
# | 3 | 4 | 5 |
# -------------
# | 6 | 7 | 8 |
# -------------
# |
# x
expected = {
0: {x: {LEFT: MPI.PROC_NULL, RIGHT: 3}, y: {LEFT: MPI.PROC_NULL, RIGHT: 1}},
1: {x: {LEFT: MPI.PROC_NULL, RIGHT: 4}, y: {LEFT: 0, RIGHT: 2}},
2: {x: {LEFT: MPI.PROC_NULL, RIGHT: 5}, y: {LEFT: 1, RIGHT: MPI.PROC_NULL}},
3: {x: {LEFT: 0, RIGHT: 6}, y: {LEFT: MPI.PROC_NULL, RIGHT: 4}},
4: {x: {LEFT: 1, RIGHT: 7}, y: {LEFT: 3, RIGHT: 5}},
5: {x: {LEFT: 2, RIGHT: 8}, y: {LEFT: 4, RIGHT: MPI.PROC_NULL}},
6: {x: {LEFT: 3, RIGHT: MPI.PROC_NULL}, y: {LEFT: MPI.PROC_NULL, RIGHT: 7}},
7: {x: {LEFT: 4, RIGHT: MPI.PROC_NULL}, y: {LEFT: 6, RIGHT: 8}},
8: {x: {LEFT: 5, RIGHT: MPI.PROC_NULL}, y: {LEFT: 7, RIGHT: MPI.PROC_NULL}},
}
assert expected[distributor.myrank] == distributor.neighbours
@pytest.mark.parallel(nprocs=2)
def test_halo_exchange_bilateral(self):
"""
Test halo exchange between two processes organised in a 1x2 cartesian grid.
The initial ``data_with_halo`` looks like:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 0 0 0 0 0 0 0 0 0 0 0
After the halo exchange, the following is expected and tested for:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 0 0 0 0 0 0 0 0 0 0 0
"""
grid = Grid(shape=(12, 12))
f = Function(name='f', grid=grid)
distributor = grid.distributor
f.data[:] = distributor.myrank + 1
# Now trigger a halo exchange...
f.data_with_halo # noqa
if distributor.myrank == 0:
assert np.all(f.data_ro_with_halo[1:-1, -1] == 2.)
assert np.all(f.data_ro_with_halo[:, 0] == 0.)
else:
assert np.all(f.data_ro_with_halo[1:-1, 0] == 1.)
assert np.all(f.data_ro_with_halo[:, -1] == 0.)
assert np.all(f.data_ro_with_halo[0] == 0.)
assert np.all(f.data_ro_with_halo[-1] == 0.)
@pytest.mark.parallel(nprocs=2)
def test_halo_exchange_bilateral_asymmetric(self):
"""
Test halo exchange between two processes organised in a 1x2 cartesian grid.
        In this test, the sizes of the left and right halo regions differ.
The initial ``data_with_halo`` looks like:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 0 0 0 2 2 2 2 0
0 0 1 1 1 1 0 0 0 2 2 2 2 0
0 0 1 1 1 1 0 0 0 2 2 2 2 0
0 0 1 1 1 1 0 0 0 2 2 2 2 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
After the halo exchange, the following is expected and tested for:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 2 1 1 2 2 2 2 0
0 0 1 1 1 1 2 1 1 2 2 2 2 0
0 0 1 1 1 1 2 1 1 2 2 2 2 0
0 0 1 1 1 1 2 1 1 2 2 2 2 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0
"""
grid = Grid(shape=(12, 12))
f = Function(name='f', grid=grid, space_order=(1, 2, 1))
distributor = grid.distributor
f.data[:] = distributor.myrank + 1
# Now trigger a halo exchange...
f.data_with_halo # noqa
if distributor.myrank == 0:
assert np.all(f.data_ro_with_halo[2:-1, -1] == 2.)
assert np.all(f.data_ro_with_halo[:, 0:2] == 0.)
else:
assert np.all(f.data_ro_with_halo[2:-1, 0:2] == 1.)
assert np.all(f.data_ro_with_halo[:, -1] == 0.)
assert np.all(f.data_ro_with_halo[0:2] == 0.)
assert np.all(f.data_ro_with_halo[-1] == 0.)
@pytest.mark.parallel(nprocs=4)
def test_halo_exchange_quadrilateral(self):
"""
Test halo exchange between four processes organised in a 2x2 cartesian grid.
The initial ``data_with_halo`` looks like:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 1 1 1 1 0 0 2 2 2 2 0
0 0 0 0 0 0 0 0 0 0 0 0
rank2 rank3
0 0 0 0 0 0 0 0 0 0 0 0
0 3 3 3 3 0 0 4 4 4 4 0
0 3 3 3 3 0 0 4 4 4 4 0
0 3 3 3 3 0 0 4 4 4 4 0
0 3 3 3 3 0 0 4 4 4 4 0
0 0 0 0 0 0 0 0 0 0 0 0
After the halo exchange, the following is expected and tested for:
rank0 rank1
0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 1 1 1 1 2 1 2 2 2 2 0
0 3 3 3 3 4 3 4 4 4 4 0
rank2 rank3
0 1 1 1 1 2 1 2 2 2 2 0
0 3 3 3 3 4 3 4 4 4 4 0
0 3 3 3 3 4 3 4 4 4 4 0
0 3 3 3 3 4 3 4 4 4 4 0
0 3 3 3 3 4 3 4 4 4 4 0
0 0 0 0 0 0 0 0 0 0 0 0
"""
grid = Grid(shape=(12, 12))
f = Function(name='f', grid=grid)
distributor = grid.distributor
f.data[:] = distributor.myrank + 1
# Now trigger a halo exchange...
f.data_with_halo # noqa
if distributor.myrank == 0:
assert np.all(f.data_ro_with_halo[0] == 0.)
assert np.all(f.data_ro_with_halo[:, 0] == 0.)
assert np.all(f.data_ro_with_halo[1:-1, -1] == 2.)
assert np.all(f.data_ro_with_halo[-1, 1:-1] == 3.)
assert f.data_ro_with_halo[-1, -1] == 4.
elif distributor.myrank == 1:
assert np.all(f.data_ro_with_halo[0] == 0.)
assert np.all(f.data_ro_with_halo[:, -1] == 0.)
assert np.all(f.data_ro_with_halo[1:-1, 0] == 1.)
assert np.all(f.data_ro_with_halo[-1, 1:-1] == 4.)
assert f.data_ro_with_halo[-1, 0] == 3.
elif distributor.myrank == 2:
assert np.all(f.data_ro_with_halo[-1] == 0.)
assert np.all(f.data_ro_with_halo[:, 0] == 0.)
assert np.all(f.data_ro_with_halo[1:-1, -1] == 4.)
assert np.all(f.data_ro_with_halo[0, 1:-1] == 1.)
assert f.data_ro_with_halo[0, -1] == 2.
else:
assert np.all(f.data_ro_with_halo[-1] == 0.)
assert np.all(f.data_ro_with_halo[:, -1] == 0.)
assert np.all(f.data_ro_with_halo[1:-1, 0] == 3.)
assert np.all(f.data_ro_with_halo[0, 1:-1] == 2.)
assert f.data_ro_with_halo[0, 0] == 1.
@skipif_yask
@pytest.mark.parallel(nprocs=[2, 4])
def test_ctypes_neighbours(self):
grid = Grid(shape=(4, 4))
distributor = grid.distributor
PN = MPI.PROC_NULL
attrs = ['xleft', 'xright', 'yleft', 'yright']
expected = { # nprocs -> [(rank0 xleft xright ...), (rank1 xleft ...), ...]
2: [(PN, PN, PN, 1), (PN, PN, 0, PN)],
4: [(PN, 2, PN, 1), (PN, 3, 0, PN), (0, PN, PN, 3), (1, PN, 2, PN)]
}
mapper = dict(zip(attrs, expected[distributor.nprocs][distributor.myrank]))
_, _, obj = distributor._C_neighbours
assert all(getattr(obj.value._obj, k) == v for k, v in mapper.items())
@skipif_yask
class TestCodeGeneration(object):
def test_iet_copy(self):
grid = Grid(shape=(4, 4))
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
iet = copy(f, [t])
assert str(iet.parameters) == """\
(buf(buf_x, buf_y), buf_x_size, buf_y_size, dat(dat_time, dat_x, dat_y),\
dat_time_size, dat_x_size, dat_y_size, otime, ox, oy)"""
assert """\
for (int x = 0; x <= buf_x_size - 1; x += 1)
{
for (int y = 0; y <= buf_y_size - 1; y += 1)
{
buf[x][y] = dat[otime][x + ox][y + oy];
}
}""" in str(iet)
def test_iet_sendrecv(self):
grid = Grid(shape=(4, 4))
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
iet = sendrecv(f, [t])
assert str(iet.parameters) == """\
(dat(dat_time, dat_x, dat_y), dat_time_size, dat_x_size, dat_y_size,\
buf_x_size, buf_y_size, ogtime, ogx, ogy, ostime, osx, osy, fromrank, torank, comm)"""
assert str(iet.body[0]) == """\
float (*restrict dat)[dat_x_size][dat_y_size] __attribute__((aligned(64))) =\
(float (*)[dat_x_size][dat_y_size]) dat_vec;
float bufs[buf_x_size][buf_y_size] __attribute__((aligned(64)));
MPI_Request rrecv;
float bufg[buf_x_size][buf_y_size] __attribute__((aligned(64)));
MPI_Request rsend;
MPI_Status srecv;
MPI_Irecv((float*)bufs,buf_x_size*buf_y_size,MPI_FLOAT,fromrank,13,comm,&rrecv);
gather_f((float*)bufg,buf_x_size,buf_y_size,(float*)dat,dat_time_size,dat_x_size,\
dat_y_size,ogtime,ogx,ogy);
MPI_Isend((float*)bufg,buf_x_size*buf_y_size,MPI_FLOAT,torank,13,comm,&rsend);
MPI_Wait(&rsend,MPI_STATUS_IGNORE);
MPI_Wait(&rrecv,&srecv);
if (fromrank != MPI_PROC_NULL)
{
scatter_f((float*)bufs,buf_x_size,buf_y_size,(float*)dat,dat_time_size,dat_x_size,\
dat_y_size,ostime,osx,osy);
}"""
@pytest.mark.parallel(nprocs=1)
def test_iet_update_halo(self):
grid = Grid(shape=(4, 4))
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
iet = update_halo(f, [t])
assert str(iet.parameters) == """\
(f(t, x, y), mxl, mxr, myl, myr, comm, nb, otime, t_size, x_size, y_size)"""
assert """\
MPI_Comm *comm = (MPI_Comm*) _comm;
struct neighbours *nb = (struct neighbours*) _nb;
if (mxl)
{
sendrecv(f_vec,t_size,x_size + 1 + 1,y_size + 1 + 1,1,y_size + 1 + 1,\
otime,1,0,otime,x_size + 1,0,nb->xright,nb->xleft,comm);
}
if (mxr)
{
sendrecv(f_vec,t_size,x_size + 1 + 1,y_size + 1 + 1,1,y_size + 1 + 1,\
otime,x_size,0,otime,0,0,nb->xleft,nb->xright,comm);
}
if (myl)
{
sendrecv(f_vec,t_size,x_size + 1 + 1,y_size + 1 + 1,x_size + 1 + 1,1,\
otime,0,1,otime,0,y_size + 1,nb->yright,nb->yleft,comm);
}
if (myr)
{
sendrecv(f_vec,t_size,x_size + 1 + 1,y_size + 1 + 1,x_size + 1 + 1,1,\
otime,0,y_size,otime,0,0,nb->yleft,nb->yright,comm);
}"""
@skipif_yask
class TestSparseFunction(object):
@pytest.mark.parallel(nprocs=4)
@pytest.mark.parametrize('coords,expected', [
([(1., 1.), (1., 3.), (3., 1.), (3., 3.)], (0, 1, 2, 3)),
])
def test_ownership(self, coords, expected):
"""Given a sparse point ``p`` with known coordinates, this test checks
that the MPI rank owning ``p`` is retrieved correctly."""
grid = Grid(shape=(4, 4), extent=(4.0, 4.0))
sf = SparseFunction(name='sf', grid=grid, npoint=4, coordinates=coords)
assert len(sf.gridpoints) == len(expected)
assert all(sf._is_owned(i) == (j == grid.distributor.myrank)
for i, j in zip(sf.gridpoints, expected))
@pytest.mark.parallel(nprocs=4)
def test_scatter_gather(self):
"""
Test scattering and gathering of sparse data from and to a single MPI rank.
The initial data distribution looks like:
rank0 rank1 rank2 rank3
[0, 1, 2, 3] [] [] []
Logically (i.e., given point coordinates and domain decomposition), 0 belongs
to rank0, 1 belongs to rank1, etc. Thus, after scattering, the data distribution
is expected to be:
rank0 rank1 rank2 rank3
[0] [1] [2] [3]
Then, locally on each rank, some trivial computation is performed, and we obtain:
rank0 rank1 rank2 rank3
[0] [2] [4] [6]
Finally, we gather the data values and we get:
rank0 rank1 rank2 rank3
[0, 2, 4, 6] [] [] []
"""
grid = Grid(shape=(4, 4), extent=(4.0, 4.0))
# Initialization
if grid.distributor.myrank == 0:
coords = [(1., 1.), (1., 3.), (3., 1.), (3., 3.)]
else:
coords = []
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
sf.data[:] = list(range(len(coords)))
# Scatter
data = sf._dist_scatter()[sf]
assert len(data) == 1
assert data[0] == grid.distributor.myrank
# Do some local computation
data = data*2
# Gather
sf._dist_gather(data)
if grid.distributor.myrank == 0:
assert np.all(sf.data == [0, 2, 4, 6])
else:
assert not sf.data
@skipif_yask
class TestOperatorSimple(object):
@pytest.mark.parallel(nprocs=[2, 4, 8, 16, 32])
def test_trivial_eq_1d(self):
grid = Grid(shape=(32,))
x = grid.dimensions[0]
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
f.data_with_halo[:] = 1.
op = Operator(Eq(f.forward, f[t, x-1] + f[t, x+1] + 1))
op.apply(time=1)
assert np.all(f.data_ro_domain[1] == 3.)
if f.grid.distributor.myrank == 0:
assert f.data_ro_domain[0, 0] == 5.
assert np.all(f.data_ro_domain[0, 1:] == 7.)
elif f.grid.distributor.myrank == f.grid.distributor.nprocs - 1:
assert f.data_ro_domain[0, -1] == 5.
assert np.all(f.data_ro_domain[0, :-1] == 7.)
else:
assert np.all(f.data_ro_domain[0] == 7.)
@pytest.mark.parallel(nprocs=2)
def test_trivial_eq_1d_save(self):
grid = Grid(shape=(32,))
x = grid.dimensions[0]
time = grid.time_dim
f = TimeFunction(name='f', grid=grid, save=5)
f.data_with_halo[:] = 1.
op = Operator(Eq(f.forward, f[time, x-1] + f[time, x+1] + 1))
op.apply()
time_M = op._prepare_arguments()['time_M']
assert np.all(f.data_ro_domain[1] == 3.)
glb_pos_map = f.grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
assert np.all(f.data_ro_domain[-1, time_M:] == 31.)
else:
assert np.all(f.data_ro_domain[-1, :-time_M] == 31.)
@pytest.mark.parallel(nprocs=4)
def test_trivial_eq_2d(self):
grid = Grid(shape=(8, 8,))
x, y = grid.dimensions
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid, space_order=1)
f.data_with_halo[:] = 1.
eqn = Eq(f.forward, f[t, x-1, y] + f[t, x+1, y] + f[t, x, y-1] + f[t, x, y+1])
op = Operator(eqn)
op.apply(time=1)
# Expected computed values
corner, side, interior = 10., 13., 16.
glb_pos_map = f.grid.distributor.glb_pos_map
assert np.all(f.data_ro_interior[0] == interior)
if LEFT in glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert f.data_ro_domain[0, 0, 0] == corner
assert np.all(f.data_ro_domain[0, 1:, :1] == side)
assert np.all(f.data_ro_domain[0, :1, 1:] == side)
elif LEFT in glb_pos_map[x] and RIGHT in glb_pos_map[y]:
assert f.data_ro_domain[0, 0, -1] == corner
assert np.all(f.data_ro_domain[0, :1, :-1] == side)
assert np.all(f.data_ro_domain[0, 1:, -1:] == side)
elif RIGHT in glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert f.data_ro_domain[0, -1, 0] == corner
assert np.all(f.data_ro_domain[0, -1:, 1:] == side)
assert np.all(f.data_ro_domain[0, :-1, :1] == side)
else:
assert f.data_ro_domain[0, -1, -1] == corner
assert np.all(f.data_ro_domain[0, :-1, -1:] == side)
assert np.all(f.data_ro_domain[0, -1:, :-1] == side)
@pytest.mark.parallel(nprocs=4)
def test_multiple_eqs_funcs(self):
grid = Grid(shape=(12,))
x = grid.dimensions[0]
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
f.data_with_halo[:] = 0.
g = TimeFunction(name='g', grid=grid)
g.data_with_halo[:] = 0.
op = Operator([Eq(f.forward, f[t, x+1] + g[t, x-1] + 1),
Eq(g.forward, f[t, x-1] + g[t, x+1] + 1)])
op.apply(time=1)
assert np.all(f.data_ro_domain[1] == 1.)
if f.grid.distributor.myrank == 0:
assert f.data_ro_domain[0, 0] == 2.
assert np.all(f.data_ro_domain[0, 1:] == 3.)
elif f.grid.distributor.myrank == f.grid.distributor.nprocs - 1:
assert f.data_ro_domain[0, -1] == 2.
assert np.all(f.data_ro_domain[0, :-1] == 3.)
else:
assert np.all(f.data_ro_domain[0] == 3.)
# Also check that there are no redundant halo exchanges. Here, only
# two are expected before the `x` Iteration, one for `f` and one for `g`
calls = FindNodes(Call).visit(op)
assert len(calls) == 2
def test_nostencil_implies_nohaloupdate(self):
grid = Grid(shape=(12,))
f = TimeFunction(name='f', grid=grid)
g = Function(name='g', grid=grid)
op = Operator([Eq(f.forward, f + 1.),
Eq(g, f + 1.)])
calls = FindNodes(Call).visit(op)
assert len(calls) == 0
@pytest.mark.parallel(nprocs=1)
def test_stencil_nowrite_implies_haloupdate(self):
grid = Grid(shape=(12,))
x = grid.dimensions[0]
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
g = Function(name='g', grid=grid)
op = Operator(Eq(g, f[t, x-1] + f[t, x+1] + 1.))
calls = FindNodes(Call).visit(op)
assert len(calls) == 1
@pytest.mark.parallel(nprocs=1)
def test_avoid_redundant_haloupdate(self):
grid = Grid(shape=(12,))
x = grid.dimensions[0]
t = grid.stepping_dim
i = Dimension(name='i')
j = Dimension(name='j')
f = TimeFunction(name='f', grid=grid)
g = Function(name='g', grid=grid)
op = Operator([Eq(f.forward, f[t, x-1] + f[t, x+1] + 1.),
Inc(f[t+1, i], 1.), # no halo update as it's an Inc
Eq(g, f[t, j] + 1)]) # access `f` at `t`, not `t+1`!
calls = FindNodes(Call).visit(op)
assert len(calls) == 1
@pytest.mark.parallel(nprocs=2)
def test_redo_haloupdate_due_to_antidep(self):
grid = Grid(shape=(12,))
x = grid.dimensions[0]
t = grid.stepping_dim
f = TimeFunction(name='f', grid=grid)
g = TimeFunction(name='g', grid=grid)
op = Operator([Eq(f.forward, f[t, x-1] + f[t, x+1] + 1.),
Eq(g.forward, f[t+1, x-1] + f[t+1, x+1] + g)])
op.apply(time=0)
calls = FindNodes(Call).visit(op)
assert len(calls) == 2
assert np.all(f.data_ro_domain[1] == 1.)
glb_pos_map = f.grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
assert np.all(g.data_ro_domain[1, 1:] == 2.)
else:
assert np.all(g.data_ro_domain[1, :-1] == 2.)
    def test_haloupdate_not_required(self):
grid = Grid(shape=(4, 4))
u = TimeFunction(name='u', grid=grid, space_order=4, time_order=2, save=None)
v = TimeFunction(name='v', grid=grid, space_order=0, time_order=0, save=5)
g = Function(name='g', grid=grid, space_order=0)
i = Function(name='i', grid=grid, space_order=0)
shift = Constant(name='shift', dtype=np.int32)
step = Eq(u.forward, u - u.backward + 1)
g_inc = Inc(g, u * v.subs(grid.time_dim, grid.time_dim - shift))
i_inc = Inc(i, (v*v).subs(grid.time_dim, grid.time_dim - shift))
op = Operator([step, g_inc, i_inc])
# No stencil in the expressions, so no halo update required!
calls = FindNodes(Call).visit(op)
assert len(calls) == 0
@skipif_yask
class TestOperatorAdvanced(object):
@pytest.mark.parallel(nprocs=[4])
def test_injection_wodup(self):
"""
Test injection operator when the sparse points don't need to be replicated
("wodup" -> w/o duplication) over multiple MPI ranks.
"""
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
f = Function(name='f', grid=grid, space_order=0)
f.data[:] = 0.
if grid.distributor.myrank == 0:
coords = [(0.5, 0.5), (0.5, 2.5), (2.5, 0.5), (2.5, 2.5)]
else:
coords = []
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
sf.data[:] = 4.
# This is the situation at this point
# O is a grid point
# * is a sparse point
#
# O --- O --- O --- O
# | * | | * |
# O --- O --- O --- O
# | | | |
# O --- O --- O --- O
# | * | | * |
# O --- O --- O --- O
op = Operator(sf.inject(field=f, expr=sf + 1))
op.apply()
assert np.all(f.data == 1.25)
@pytest.mark.parallel(nprocs=4)
def test_injection_wodup_wtime(self):
"""
Just like ``test_injection_wodup``, but using a SparseTimeFunction
instead of a SparseFunction. Hence, the data scattering/gathering now
has to correctly pack/unpack multidimensional arrays.
"""
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
save = 3
f = TimeFunction(name='f', grid=grid, save=save, space_order=0)
f.data[:] = 0.
if grid.distributor.myrank == 0:
coords = [(0.5, 0.5), (0.5, 2.5), (2.5, 0.5), (2.5, 2.5)]
else:
coords = []
sf = SparseTimeFunction(name='sf', grid=grid, nt=save,
npoint=len(coords), coordinates=coords)
sf.data[0, :] = 4.
sf.data[1, :] = 8.
sf.data[2, :] = 12.
op = Operator(sf.inject(field=f, expr=sf + 1))
op.apply()
assert np.all(f.data[0] == 1.25)
assert np.all(f.data[1] == 2.25)
assert np.all(f.data[2] == 3.25)
@pytest.mark.parallel(nprocs=[4])
def test_injection_dup(self):
"""
Test injection operator when the sparse points are replicated over
multiple MPI ranks.
"""
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
f.data[:] = 0.
if grid.distributor.myrank == 0:
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
else:
coords = []
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
sf.data[:] = 4.
# Global view (left) and local view (right, after domain decomposition)
# O is a grid point
# x is a halo point
# A, B, C, D are sparse points
# Rank0 Rank1
# O --- O --- O --- O O --- O --- x x --- O --- O
# | A | | | | A | | | | |
# O --- O --- O --- O O --- O --- x x --- O --- O
# | | C | B | --> | | C | | C | B |
# O --- O --- O --- O x --- x --- x x --- x --- x
# | | D | | Rank2 Rank3
# O --- O --- O --- O x --- x --- x x --- x --- x
# | | C | | C | B |
# O --- O --- x x --- O --- O
# | | D | | D | |
# O --- O --- x x --- O --- O
#
# Expected `f.data` (global view)
#
# 1.25 --- 1.25 --- 0.00 --- 0.00
# | | | |
# 1.25 --- 2.50 --- 2.50 --- 1.25
# | | | |
# 0.00 --- 2.50 --- 3.75 --- 1.25
# | | | |
# 0.00 --- 1.25 --- 1.25 --- 0.00
op = Operator(sf.inject(field=f, expr=sf + 1))
op.apply()
glb_pos_map = grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x] and LEFT in glb_pos_map[y]: # rank0
assert np.all(f.data_ro_domain == [[1.25, 1.25], [1.25, 2.5]])
elif LEFT in glb_pos_map[x] and RIGHT in glb_pos_map[y]: # rank1
assert np.all(f.data_ro_domain == [[0., 0.], [2.5, 1.25]])
elif RIGHT in glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert np.all(f.data_ro_domain == [[0., 2.5], [0., 1.25]])
elif RIGHT in glb_pos_map[x] and RIGHT in glb_pos_map[y]:
assert np.all(f.data_ro_domain == [[3.75, 1.25], [1.25, 0.]])
@pytest.mark.parallel(nprocs=[4])
def test_interpolation_wodup(self):
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
f = Function(name='f', grid=grid, space_order=0)
f.data[:] = 4.
if grid.distributor.myrank == 0:
coords = [(0.5, 0.5), (0.5, 2.5), (2.5, 0.5), (2.5, 2.5)]
else:
coords = []
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
sf.data[:] = 0.
# This is the situation at this point
# O is a grid point
# * is a sparse point
#
# O --- O --- O --- O
# | * | | * |
# O --- O --- O --- O
# | | | |
# O --- O --- O --- O
# | * | | * |
# O --- O --- O --- O
op = Operator(sf.interpolate(expr=f))
op.apply()
assert np.all(sf.data == 4.)
@pytest.mark.parallel(nprocs=[4])
def test_interpolation_dup(self):
"""
Test interpolation operator when the sparse points are replicated over
multiple MPI ranks.
"""
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
# Init Function+data
f = Function(name='f', grid=grid)
glb_pos_map = grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
f.data[:] = [[1., 1.], [2., 2.]]
else:
f.data[:] = [[3., 3.], [4., 4.]]
if grid.distributor.myrank == 0:
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
else:
coords = []
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
sf.data[:] = 0.
# Global view (left) and local view (right, after domain decomposition)
# O is a grid point
# x is a halo point
# A, B, C, D are sparse points
# Rank0 Rank1
# O --- O --- O --- O O --- O --- x x --- O --- O
# | A | | | | A | | | | |
# O --- O --- O --- O O --- O --- x x --- O --- O
# | | C | B | --> | | C | | C | B |
# O --- O --- O --- O x --- x --- x x --- x --- x
# | | D | | Rank2 Rank3
# O --- O --- O --- O x --- x --- x x --- x --- x
# | | C | | C | B |
# O --- O --- x x --- O --- O
# | | D | | D | |
# O --- O --- x x --- O --- O
#
# The initial `f.data` is (global view)
#
# 1. --- 1. --- 1. --- 1.
# | | | |
# 2. --- 2. --- 2. --- 2.
# | | | |
# 3. --- 3. --- 3. --- 3.
# | | | |
# 4. --- 4. --- 4. --- 4.
#
# Expected `sf.data` (global view)
#
# 1.5 --- 2.5 --- 2.5 --- 3.5
op = Operator(sf.interpolate(expr=f))
op.apply()
if grid.distributor.myrank == 0:
assert np.all(sf.data == [1.5, 2.5, 2.5, 3.5])
else:
assert sf.data.size == 0
@pytest.mark.parallel(nprocs=2)
def test_subsampling(self):
grid = Grid(shape=(40,))
x = grid.dimensions[0]
t = grid.stepping_dim
time = grid.time_dim
nt = 9
f = TimeFunction(name='f', grid=grid)
f.data_with_halo[:] = 1.
# Setup subsampled function
factor = 4
nsamples = (nt+factor-1)//factor
times = ConditionalDimension('t_sub', parent=time, factor=factor)
fsave = TimeFunction(name='fsave', grid=grid, save=nsamples, time_dim=times)
eqns = [Eq(f.forward, f[t, x-1] + f[t, x+1]), Eq(fsave, f)]
op = Operator(eqns)
op.apply(time=nt-1)
assert np.all(f.data_ro_domain[0] == fsave.data_ro_domain[nsamples-1])
glb_pos_map = f.grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
assert np.all(fsave.data_ro_domain[nsamples-1, nt-1:] == 256.)
else:
assert np.all(fsave.data_ro_domain[nsamples-1, :-(nt-1)] == 256.)
# Also check there are no redundant halo exchanges
calls = FindNodes(Call).visit(op)
assert len(calls) == 1
# In particular, there is no need for a halo exchange within the conditional
conditional = FindNodes(Conditional).visit(op)
assert len(conditional) == 1
assert len(FindNodes(Call).visit(conditional[0])) == 0
@pytest.mark.parallel(nprocs=2)
def test_arguments_subrange(self):
"""
Test op.apply when a subrange is specified for a distributed dimension.
"""
grid = Grid(shape=(16,))
x = grid.dimensions[0]
f = TimeFunction(name='f', grid=grid)
op = Operator(Eq(f.forward, f + 1.))
op.apply(time=0, x_m=4, x_M=11)
glb_pos_map = f.grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
assert np.all(f.data_ro_domain[1, :4] == 0.)
assert np.all(f.data_ro_domain[1, 4:] == 1.)
else:
assert np.all(f.data_ro_domain[1, :-4] == 1.)
assert np.all(f.data_ro_domain[1, -4:] == 0.)
@pytest.mark.parallel(nprocs=2)
def test_bcs_basic(self):
"""
Test MPI in presence of boundary condition loops. Here, no halo exchange
is expected (as there is no stencil in the computed expression) but we
check that:
* the left BC loop is computed by the leftmost rank only
* the right BC loop is computed by the rightmost rank only
"""
grid = Grid(shape=(20,))
x = grid.dimensions[0]
t = grid.stepping_dim
thickness = 4
u = TimeFunction(name='u', grid=grid, time_order=1)
xleft = SubDimension.left(name='xleft', parent=x, thickness=thickness)
xi = SubDimension.middle(name='xi', parent=x,
thickness_left=thickness, thickness_right=thickness)
xright = SubDimension.right(name='xright', parent=x, thickness=thickness)
t_in_centre = Eq(u[t+1, xi], 1)
leftbc = Eq(u[t+1, xleft], u[t+1, xleft+1] + 1)
rightbc = Eq(u[t+1, xright], u[t+1, xright-1] + 1)
op = Operator([t_in_centre, leftbc, rightbc])
op.apply(time_m=1, time_M=1)
glb_pos_map = u.grid.distributor.glb_pos_map
if LEFT in glb_pos_map[x]:
assert np.all(u.data_ro_domain[0, thickness:] == 1.)
assert np.all(u.data_ro_domain[0, :thickness] == range(thickness+1, 1, -1))
else:
assert np.all(u.data_ro_domain[0, :-thickness] == 1.)
assert np.all(u.data_ro_domain[0, -thickness:] == range(2, thickness+2))
@pytest.mark.parallel(nprocs=9)
def test_nontrivial_operator(self):
"""
Test MPI in a non-trivial scenario: ::
* 9 processes logically organised in a 3x3 cartesian grid (as opposed to
          most tests in this module, which only use 2 or 4 processes);
* star-like stencil expression;
* non-trivial Higdon-like BCs;
* simultaneous presence of TimeFunction(grid), Function(grid), and
Function(dimensions)
"""
size_x, size_y = 9, 9
tkn = 2
# Grid and Dimensions
grid = Grid(shape=(size_x, size_y,))
x, y = grid.dimensions
t = grid.stepping_dim
# SubDimensions to implement BCs
xl, yl = [SubDimension.left('%sl' % d.name, d, tkn) for d in [x, y]]
xi, yi = [SubDimension.middle('%si' % d.name, d, tkn, tkn) for d in [x, y]]
xr, yr = [SubDimension.right('%sr' % d.name, d, tkn) for d in [x, y]]
# Functions
u = TimeFunction(name='f', grid=grid)
m = Function(name='m', grid=grid)
c = Function(name='c', grid=grid, dimensions=(x,), shape=(size_x,))
# Data initialization
u.data_with_halo[:] = 0.
m.data_with_halo[:] = 1.
c.data_with_halo[:] = 0.
# Equations
c_init = Eq(c, 1.)
eqn = Eq(u[t+1, xi, yi], u[t, xi, yi] + m[xi, yi] + c[xi] + 1.)
bc_left = Eq(u[t+1, xl, yi], u[t+1, xl+1, yi] + 1.)
bc_right = Eq(u[t+1, xr, yi], u[t+1, xr-1, yi] + 1.)
bc_top = Eq(u[t+1, xi, yl], u[t+1, xi, yl+1] + 1.)
bc_bottom = Eq(u[t+1, xi, yr], u[t+1, xi, yr-1] + 1.)
op = Operator([c_init, eqn, bc_left, bc_right, bc_top, bc_bottom])
op.apply(time=0)
# Expected (global view):
# 0 0 5 5 5 5 5 0 0
# 0 0 4 4 4 4 4 0 0
# 5 4 3 3 3 3 3 4 5
# 5 4 3 3 3 3 3 4 5
# 5 4 3 3 3 3 3 4 5
# 5 4 3 3 3 3 3 4 5
# 0 0 4 4 4 4 4 0 0
# 0 0 5 5 5 5 5 0 0
        assert np.all(u.data_ro_domain[0] == 0)  # The write occurs at t=1
glb_pos_map = u.grid.distributor.glb_pos_map
        # Check corners
if LEFT in glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[0, 0, 5], [0, 0, 4], [5, 4, 3]])
elif LEFT in glb_pos_map[x] and RIGHT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[5, 0, 0], [4, 0, 0], [3, 4, 5]])
elif RIGHT in glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[5, 4, 3], [0, 0, 4], [0, 0, 5]])
elif RIGHT in glb_pos_map[x] and RIGHT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[3, 4, 5], [4, 0, 0], [5, 0, 0]])
# Check sides
if not glb_pos_map[x] and LEFT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[5, 4, 3], [5, 4, 3], [5, 4, 3]])
elif not glb_pos_map[x] and RIGHT in glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[3, 4, 5], [3, 4, 5], [3, 4, 5]])
elif LEFT in glb_pos_map[x] and not glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[5, 5, 5], [4, 4, 4], [3, 3, 3]])
elif RIGHT in glb_pos_map[x] and not glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == [[3, 3, 3], [4, 4, 4], [5, 5, 5]])
# Check center
if not glb_pos_map[x] and not glb_pos_map[y]:
assert np.all(u.data_ro_domain[1] == 3)
class TestIsotropicAcoustic(object):
"""
Test the acoustic wave model with MPI.
"""
# TODO: Cannot mark the following test as `xfail` since this marker
# doesn't cope well with the `parallel` mark. Leaving it commented out
# for the time being...
# @pytest.mark.parametrize('shape, kernel, space_order, nbpml', [
# # 1 tests with varying time and space orders
# ((60, ), 'OT2', 4, 10),
# ])
# @pytest.mark.parallel(nprocs=2)
# def test_adjoint_F(self, shape, kernel, space_order, nbpml):
# from test_adjoint import TestAdjoint
# TestAdjoint().test_adjoint_F('layers', shape, kernel, space_order, nbpml)
pass
if __name__ == "__main__":
configuration['mpi'] = True
TestOperatorAdvanced().test_interpolation_dup()
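    # These @pytest.mark.parallel tests are normally launched through the
    # project's pytest infrastructure; running this file directly assumes an
    # MPI launcher is available, e.g. (adjust to your environment):
    #     mpiexec -n 4 python test_mpi.py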
| [
"[email protected]"
] | |
6ecd7aef7feeaf0c0a1b5b863f5a9956e43c4838 | 99094cc79bdbb69bb24516e473f17b385847cb3a | /58.Length of Last Word/Solution.py | 6a986db084927025fd5e816d63158989ce2edd7a | [] | no_license | simonxu14/LeetCode_Simon | 7d389bbfafd3906876a3f796195bb14db3a1aeb3 | 13f4595374f30b482c4da76e466037516ca3a420 | refs/heads/master | 2020-04-06T03:33:25.846686 | 2016-09-10T00:23:11 | 2016-09-10T00:23:11 | 40,810,940 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 248 | py | __author__ = 'Simon'
class Solution(object):
def lengthOfLastWord(self, s):
"""
:type s: str
:rtype: int
"""
li = s.split()
if li:
return len(li[-1])
else:
return 0 | [
"[email protected]"
] | |
ff44601100038aba800c66cb8d18e73458d7b4df | bdf86d69efc1c5b21950c316ddd078ad8a2f2ec0 | /venv/Lib/site-packages/twisted/application/runner/_runner.py | 66f1f11ee0f27fe0b61e6dfa8b9fee0befdaa03b | [
"LicenseRef-scancode-unknown-license-reference",
"MIT"
] | permissive | DuaNoDo/PythonProject | 543e153553c58e7174031b910fd6451399afcc81 | 2c5c8aa89dda4dec2ff4ca7171189788bf8b5f2c | refs/heads/master | 2020-05-07T22:22:29.878944 | 2019-06-14T07:44:35 | 2019-06-14T07:44:35 | 180,941,166 | 1 | 1 | null | 2019-06-04T06:27:29 | 2019-04-12T06:05:42 | Python | UTF-8 | Python | false | false | 5,763 | py | # -*- test-case-name: twisted.application.runner.test.test_runner -*-
# Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Twisted application runner.
"""
from os import kill
from signal import SIGTERM
from sys import stderr
from attr import attrib, attrs, Factory
from twisted.logger import (
globalLogBeginner, textFileLogObserver,
FilteringLogObserver, LogLevelFilterPredicate,
LogLevel, Logger,
)
from ._exit import exit, ExitStatus
from ._pidfile import nonePIDFile, AlreadyRunningError, InvalidPIDFileError
@attrs(frozen=True)
class Runner(object):
"""
Twisted application runner.
@cvar _log: The logger attached to this class.
@type _log: L{Logger}
@ivar _reactor: The reactor to start and run the application in.
@type _reactor: L{IReactorCore}
@ivar _pidFile: The file to store the running process ID in.
@type _pidFile: L{IPIDFile}
@ivar _kill: Whether this runner should kill an existing running
instance of the application.
@type _kill: L{bool}
@ivar _defaultLogLevel: The default log level to start the logging
system with.
@type _defaultLogLevel: L{constantly.NamedConstant} from L{LogLevel}
@ivar _logFile: A file stream to write logging output to.
@type _logFile: writable file-like object
@ivar _fileLogObserverFactory: A factory for the file log observer to
use when starting the logging system.
@type _pidFile: callable that takes a single writable file-like object
argument and returns a L{twisted.logger.FileLogObserver}
@ivar _whenRunning: Hook to call after the reactor is running;
this is where the application code that relies on the reactor gets
called.
@type _whenRunning: callable that takes the keyword arguments specified
by C{whenRunningArguments}
@ivar _whenRunningArguments: Keyword arguments to pass to
C{whenRunning} when it is called.
@type _whenRunningArguments: L{dict}
@ivar _reactorExited: Hook to call after the reactor exits.
@type _reactorExited: callable that takes the keyword arguments
specified by C{reactorExitedArguments}
@ivar _reactorExitedArguments: Keyword arguments to pass to
C{reactorExited} when it is called.
@type _reactorExitedArguments: L{dict}
"""
_log = Logger()
_reactor = attrib()
_pidFile = attrib(default=nonePIDFile)
_kill = attrib(default=False)
_defaultLogLevel = attrib(default=LogLevel.info)
_logFile = attrib(default=stderr)
_fileLogObserverFactory = attrib(default=textFileLogObserver)
_whenRunning = attrib(default=lambda **_: None)
_whenRunningArguments = attrib(default=Factory(dict))
_reactorExited = attrib(default=lambda **_: None)
_reactorExitedArguments = attrib(default=Factory(dict))
def run(self):
"""
Run this command.
"""
pidFile = self._pidFile
self.killIfRequested()
try:
with pidFile:
self.startLogging()
self.startReactor()
self.reactorExited()
except AlreadyRunningError:
exit(ExitStatus.EX_CONFIG, "Already running.")
return # When testing, patched exit doesn't exit
def killIfRequested(self):
"""
If C{self._kill} is true, attempt to kill a running instance of the
application.
"""
pidFile = self._pidFile
if self._kill:
if pidFile is nonePIDFile:
exit(ExitStatus.EX_USAGE, "No PID file specified.")
return # When testing, patched exit doesn't exit
try:
pid = pidFile.read()
except EnvironmentError:
exit(ExitStatus.EX_IOERR, "Unable to read PID file.")
return # When testing, patched exit doesn't exit
except InvalidPIDFileError:
exit(ExitStatus.EX_DATAERR, "Invalid PID file.")
return # When testing, patched exit doesn't exit
self.startLogging()
self._log.info("Terminating process: {pid}", pid=pid)
kill(pid, SIGTERM)
exit(ExitStatus.EX_OK)
return # When testing, patched exit doesn't exit
def startLogging(self):
"""
Start the L{twisted.logger} logging system.
"""
logFile = self._logFile
fileLogObserverFactory = self._fileLogObserverFactory
fileLogObserver = fileLogObserverFactory(logFile)
logLevelPredicate = LogLevelFilterPredicate(
defaultLogLevel=self._defaultLogLevel
)
filteringObserver = FilteringLogObserver(
fileLogObserver, [logLevelPredicate]
)
globalLogBeginner.beginLoggingTo([filteringObserver])
def startReactor(self):
"""
Register C{self._whenRunning} with the reactor so that it is called
once the reactor is running, then start the reactor.
"""
self._reactor.callWhenRunning(self.whenRunning)
self._log.info("Starting reactor...")
self._reactor.run()
def whenRunning(self):
"""
Call C{self._whenRunning} with C{self._whenRunningArguments}.
@note: This method is called after the reactor starts running.
"""
self._whenRunning(**self._whenRunningArguments)
def reactorExited(self):
"""
Call C{self._reactorExited} with C{self._reactorExitedArguments}.
@note: This method is called after the reactor exits.
"""
self._reactorExited(**self._reactorExitedArguments)
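# A minimal usage sketch, assuming an application entry point `start(reactor)`;
# `whenRunning` is called with `whenRunningArguments` once the reactor is up:
#
#     from twisted.internet import reactor
#
#     def start(reactor):
#         ...  # set up listeners, timers, etc.
#
#     Runner(
#         reactor=reactor,
#         whenRunning=start,
#         whenRunningArguments={"reactor": reactor},
#     ).run()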
| [
"[email protected]"
] | |
2f6c2bce524bc945e8b1906c4fd08726bca5888c | 41dc19883789f45b6086399a1ae23995f53b4b2c | /BayesMadeSimple/distribution.py | dea353cbb2da5a315f6990d03872b1985e04638a | [
"MIT"
] | permissive | sunny2309/scipy_conf_notebooks | f86179ddcd67168b709c755cc01862ed7c9ab2bd | 30a85d5137db95e01461ad21519bc1bdf294044b | refs/heads/master | 2022-10-28T17:27:42.717171 | 2021-01-25T02:24:05 | 2021-01-25T02:24:05 | 221,385,814 | 2 | 0 | MIT | 2022-10-20T02:55:20 | 2019-11-13T06:12:07 | Jupyter Notebook | UTF-8 | Python | false | false | 16,338 | py | """
Pmf: Represents a Probability Mass Function (PMF).
Cdf: Represents a Cumulative Distribution Function (CDF).
Copyright 2019 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.interpolate import interp1d
def underride(d, **options):
"""Add key-value pairs to d only if key is not in d.
d: dictionary
options: keyword args to add to d
returns: modified d
"""
for key, val in options.items():
d.setdefault(key, val)
return d
class Pmf(pd.Series):
"""Represents a probability Mass Function (PMF)."""
def __init__(self, *args, **kwargs):
"""Initialize a Pmf.
Note: this cleans up a weird Series behavior, which is
that Series() and Series([]) yield different results.
See: https://github.com/pandas-dev/pandas/issues/16737
"""
if args:
super().__init__(*args, **kwargs)
else:
underride(kwargs, dtype=np.float64)
super().__init__([], **kwargs)
def copy(self, **kwargs):
"""Make a copy.
returns: new Pmf
"""
return Pmf(self, **kwargs)
def __getitem__(self, qs):
"""Look up qs and return ps."""
try:
return super().__getitem__(qs)
except (KeyError, ValueError, IndexError):
return 0
@property
def qs(self):
"""Get the quantities.
returns: NumPy array
"""
return self.index.values
@property
def ps(self):
"""Get the probabilities.
returns: NumPy array
"""
return self.values
def _repr_html_(self):
"""Returns an HTML representation of the series.
Mostly used for Jupyter notebooks.
"""
df = pd.DataFrame(dict(probs=self))
return df._repr_html_()
def normalize(self):
"""Make the probabilities add up to 1 (modifies self).
returns: normalizing constant
"""
total = self.sum()
self /= total
return total
def mean(self):
"""Computes expected value.
returns: float
"""
#TODO: error if not normalized
return np.sum(self.ps * self.qs)
def median(self):
"""Median (50th percentile).
returns: float
"""
return self.quantile(0.5)
    # NOTE: this definition is shadowed by the interp1d-based `quantile`
    # defined later in this class; Python keeps the later definition.
    def quantile(self, ps):
"""Quantiles.
Computes the inverse CDF of ps, that is,
the values that correspond to the given probabilities.
returns: float
"""
return self.make_cdf().quantile(ps)
def var(self):
"""Variance of a PMF.
returns: float
"""
m = self.mean()
d = self.qs - m
return np.sum(d**2 * self.ps)
def std(self):
"""Standard deviation of a PMF.
returns: float
"""
return np.sqrt(self.var())
def sample(self, *args, **kwargs):
"""Makes a random sample.
        args: same as pd.Series.sample
        options: same as pd.Series.sample
returns: Series
"""
# TODO: finish this
underride(kwargs, weights=self.ps)
return self.index.sample(*args, **kwargs)
def choice(self, *args, **kwargs):
"""Makes a random sample.
Uses the probabilities as weights unless `p` is provided.
args: same as np.random.choice
options: same as np.random.choice
returns: NumPy array
"""
underride(kwargs, p=self.ps)
return np.random.choice(self.qs, *args, **kwargs)
def bar(self, **options):
"""Makes a bar plot.
options: same as plt.bar
"""
underride(options, label=self.name)
plt.bar(self.qs, self.ps, **options)
def __add__(self, x):
"""Computes the Pmf of the sum of values drawn from self and x.
x: another Pmf or a scalar
returns: new Pmf
"""
if isinstance(x, Pmf):
return pmf_add(self, x)
else:
return Pmf(self.ps, index=self.qs + x)
__radd__ = __add__
def __sub__(self, x):
"""Computes the Pmf of the diff of values drawn from self and other.
x: another Pmf
returns: new Pmf
"""
if isinstance(x, Pmf):
return pmf_sub(self, x)
else:
return Pmf(self.ps, index=self.qs - x)
# TODO: implement rsub
# __rsub__ = __sub__
# TODO: mul, div, truediv, divmod?
def make_joint(self, other, **options):
"""Make joint distribution
:param self:
:param other:
:param options: passed to Pmf constructor
:return: new Pmf
"""
qs = pd.MultiIndex.from_product([self.qs, other.qs])
ps = np.multiply.outer(self.ps, other.ps).flatten()
return Pmf(ps, index=qs, **options)
def marginal(self, i, name=None):
"""Gets the marginal distribution of the indicated variable.
i: index of the variable we want
name: string
Returns: Pmf
"""
# TODO: rewrite this using multiindex operations
pmf = Pmf(name=name)
for vs, p in self.items():
pmf[vs[i]] += p
return pmf
def conditional(self, i, j, val, name=None):
"""Gets the conditional distribution of the indicated variable.
Distribution of vs[i], conditioned on vs[j] = val.
i: index of the variable we want
j: which variable is conditioned on
val: the value the jth variable has to have
name: string
Returns: Pmf
"""
# TODO: rewrite this using multiindex operations
pmf = Pmf(name=name)
for vs, p in self.items():
if vs[j] == val:
pmf[vs[i]] += p
pmf.normalize()
return pmf
def update(self, likelihood, data):
"""Bayesian update.
likelihood: function that takes (data, hypo) and returns
likelihood of data under hypo
data: whatever format like_func understands
returns: normalizing constant
"""
for hypo in self.qs:
self[hypo] *= likelihood(data, hypo)
return self.normalize()
def max_prob(self):
"""Value with the highest probability.
returns: the value with the highest probability
"""
return self.idxmax()
def make_cdf(self, normalize=True):
"""Make a Cdf from the Pmf.
It can be good to normalize the cdf even if the Pmf was normalized,
to guarantee that the last element of `ps` is 1.
returns: Cdf
"""
cdf = Cdf(self.cumsum())
if normalize:
cdf.normalize()
return cdf
def quantile(self, ps):
"""Quantities corresponding to given probabilities.
ps: sequence of probabilities
return: sequence of quantities
"""
cdf = self.sort_index().cumsum()
interp = interp1d(cdf.values, cdf.index,
kind='next',
copy=False,
assume_sorted=True,
bounds_error=False,
fill_value=(self.qs[0], np.nan))
return interp(ps)
def credible_interval(self, p):
"""Credible interval containing the given probability.
p: float 0-1
returns: array of two quantities
"""
tail = (1-p) / 2
ps = [tail, 1-tail]
return self.quantile(ps)
@staticmethod
def from_seq(seq, normalize=True, sort=True, **options):
"""Make a PMF from a sequence of values.
seq: any kind of sequence
normalize: whether to normalize the Pmf, default True
sort: whether to sort the Pmf by values, default True
options: passed to the pd.Series constructor
returns: Pmf object
"""
series = pd.Series(seq).value_counts(sort=False)
options['copy'] = False
pmf = Pmf(series, **options)
if sort:
pmf.sort_index(inplace=True)
if normalize:
pmf.normalize()
return pmf
# Comparison operators
def gt(self, x):
"""Probability that a sample from this Pmf > x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_gt(self, x)
else:
return self[self.qs > x].sum()
__gt__ = gt
def lt(self, x):
"""Probability that a sample from this Pmf < x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_lt(self, x)
else:
return self[self.qs < x].sum()
__lt__ = lt
def ge(self, x):
"""Probability that a sample from this Pmf >= x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_ge(self, x)
else:
return self[self.qs >= x].sum()
__ge__ = ge
def le(self, x):
"""Probability that a sample from this Pmf <= x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_le(self, x)
else:
return self[self.qs <= x].sum()
__le__ = le
def eq(self, x):
"""Probability that a sample from this Pmf == x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_eq(self, x)
else:
return self[self.qs == x].sum()
__eq__ = eq
def ne(self, x):
"""Probability that a sample from this Pmf != x.
x: number
returns: float probability
"""
if isinstance(x, Pmf):
return pmf_ne(self, x)
else:
return self[self.qs != x].sum()
__ne__ = ne
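    # With two Pmfs, the comparison operators return the probability that the
    # relation holds between independent samples, e.g.
    #     a = Pmf.from_seq([1, 2, 3]); b = Pmf.from_seq([2])
    #     a > b   # -> 1/3, P(sample from a exceeds sample from b)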
def pmf_conv(pmf1, pmf2, ufunc):
"""Convolve two PMFs.
pmf1:
pmf2:
ufunc: elementwise function for arrays
returns: new Pmf
"""
qs = ufunc(pmf1.qs, pmf2.qs).flatten()
ps = np.multiply.outer(pmf1.ps, pmf2.ps).flatten()
series = pd.Series(ps).groupby(qs).sum()
return Pmf(series)
def pmf_add(pmf1, pmf2):
"""Distribution of the sum.
pmf1:
pmf2:
returns: new Pmf
"""
return pmf_conv(pmf1, pmf2, np.add.outer)
def pmf_sub(pmf1, pmf2):
"""Distribution of the difference.
pmf1:
pmf2:
returns: new Pmf
"""
return pmf_conv(pmf1, pmf2, np.subtract.outer)
def pmf_outer(pmf1, pmf2, ufunc):
"""Computes the outer product of two PMFs.
pmf1:
pmf2:
ufunc: function to apply to the qs
returns: NumPy array
"""
qs = ufunc.outer(pmf1.qs, pmf2.qs)
ps = np.multiply.outer(pmf1.ps, pmf2.ps)
return qs * ps
def pmf_gt(pmf1, pmf2):
"""Probability that a value from pmf1 is greater than a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.greater)
return outer.sum()
def pmf_lt(pmf1, pmf2):
"""Probability that a value from pmf1 is less than a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.less)
return outer.sum()
def pmf_ge(pmf1, pmf2):
"""Probability that a value from pmf1 is >= than a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.greater_equal)
return outer.sum()
def pmf_le(pmf1, pmf2):
"""Probability that a value from pmf1 is <= than a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.less_equal)
return outer.sum()
def pmf_eq(pmf1, pmf2):
"""Probability that a value from pmf1 equals a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.equal)
return outer.sum()
def pmf_ne(pmf1, pmf2):
"""Probability that a value from pmf1 is <= than a value from pmf2.
pmf1: Pmf object
pmf2: Pmf object
returns: float probability
"""
outer = pmf_outer(pmf1, pmf2, np.not_equal)
return outer.sum()
class Cdf(pd.Series):
"""Represents a Cumulative Distribution Function (CDF)."""
def __init__(self, *args, **kwargs):
"""Initialize a Cdf.
Note: this cleans up a weird Series behavior, which is
that Series() and Series([]) yield different results.
See: https://github.com/pandas-dev/pandas/issues/16737
"""
if args:
super().__init__(*args, **kwargs)
else:
underride(kwargs, dtype=np.float64)
super().__init__([], **kwargs)
def copy(self, **kwargs):
"""Make a copy.
returns: new Cdf
"""
return Cdf(self, **kwargs)
@property
def forward(self):
interp = interp1d(self.qs, self.ps,
kind='previous',
copy=False,
assume_sorted=True,
bounds_error=False,
fill_value=(0,1))
return interp
@property
def inverse(self):
interp = interp1d(self.ps, self.qs,
kind='next',
copy=False,
assume_sorted=True,
bounds_error=False,
fill_value=(self.qs[0], np.nan))
return interp
# calling a Cdf like a function does forward lookup
__call__ = forward
# quantile is the same as an inverse lookup
quantile = inverse
@staticmethod
def from_seq(seq, normalize=True, sort=True, **options):
"""Make a CDF from a sequence of values.
seq: any kind of sequence
normalize: whether to normalize the Cdf, default True
sort: whether to sort the Cdf by values, default True
options: passed to the pd.Series constructor
returns: CDF object
"""
pmf = Pmf.from_seq(seq, normalize=False, sort=sort, **options)
return pmf.make_cdf(normalize=normalize)
@property
def qs(self):
"""Get the quantities.
returns: NumPy array
"""
return self.index.values
@property
def ps(self):
"""Get the probabilities.
returns: NumPy array
"""
return self.values
def _repr_html_(self):
"""Returns an HTML representation of the series.
Mostly used for Jupyter notebooks.
"""
df = pd.DataFrame(dict(probs=self))
return df._repr_html_()
def normalize(self):
"""Make the probabilities add up to 1 (modifies self).
returns: normalizing constant
"""
total = self.ps[-1]
self /= total
return total
def make_pmf(self, normalize=False):
"""Make a Pmf from the Cdf.
returns: Cdf
"""
ps = self.ps
diff = np.ediff1d(ps, to_begin=ps[0])
pmf = Pmf(pd.Series(diff, index=self.index.copy()))
if normalize:
pmf.normalize()
return pmf
def choice(self, *args, **kwargs):
"""Makes a random sample.
Uses the probabilities as weights unless `p` is provided.
args: same as np.random.choice
options: same as np.random.choice
returns: NumPy array
"""
# TODO: Make this more efficient by implementing the inverse CDF method.
pmf = self.make_pmf()
        return pmf.choice(*args, **kwargs)
def mean(self):
"""Expected value.
returns: float
"""
return self.make_pmf().mean()
def var(self):
"""Variance.
returns: float
"""
return self.make_pmf().var()
def std(self):
"""Standard deviation.
returns: float
"""
return self.make_pmf().std()
def median(self):
"""Median (50th percentile).
returns: float
"""
return self.quantile(0.5)
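# A minimal usage sketch: build a PMF from data, do a simple Bayesian update,
# then query the corresponding CDF.
if __name__ == "__main__":
    pmf = Pmf.from_seq([1, 2, 2, 3, 5])
    pmf.update(lambda data, hypo: hypo, data=None)  # likelihood ~ hypo value
    cdf = pmf.make_cdf()
    print(pmf.mean(), cdf.quantile(0.5))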
| [
"[email protected]"
] | |
af7f3350737682e5f14e67fac76f11d84afe65b1 | edd8ad3dcb6ee9b019c999b712f8ee0c468e2b81 | /Python 300/08. Iteration Statement/144.py | 641f2af3c7f13fd89137d69828101b551bc01d56 | [] | no_license | narinn-star/Python | 575cba200de35b9edf3832c4e41ccce657075751 | 14eba211cd3a9e9708a30073ba5b31d21d39eeef | refs/heads/master | 2023-05-25T22:57:26.079294 | 2021-06-07T15:29:39 | 2021-06-07T15:29:39 | 331,647,462 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 88 | py | # for loop - animals
lists = ['dog', 'cat','parrot']
for i in lists:
print(i, len(i)) | [
"[email protected]"
] | |
d1ac6305dbd6d50b835b3c72c2b048137df5ea1f | de24f83a5e3768a2638ebcf13cbe717e75740168 | /moodledata/vpl_data/81/usersdata/212/47049/submittedfiles/dec2bin.py | 6411c5dd71bf68dde30b00d10c31f9fc65086a43 | [] | no_license | rafaelperazzo/programacao-web | 95643423a35c44613b0f64bed05bd34780fe2436 | 170dd5440afb9ee68a973f3de13a99aa4c735d79 | refs/heads/master | 2021-01-12T14:06:25.773146 | 2017-12-22T16:05:45 | 2017-12-22T16:05:45 | 69,566,344 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 138 | py | # -*- coding: utf-8 -*-
p = int(input('enter the value of the smaller number: '))
q = int(input('enter the value of the larger number: '))
n=1%10
print(n)
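# The lines above do not implement the conversion the filename suggests
# (n = 1 % 10 is always 1). A sketch of decimal-to-binary conversion,
# assuming the exercise wants the binary form of each integer from p to q:
def dec2bin(n):
    digits = ''
    while n > 0:
        digits = str(n % 2) + digits
        n = n // 2
    return digits or '0'

# for value in range(p, q + 1):
#     print(dec2bin(value))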
| [
"[email protected]"
] | |
46451d297fa736664316b7c35106ff642cada2ff | cbb7f79a50b05e2ab670ae19bbd1c3b8dead437d | /dict_ordem.py | d24ab507f66b1828b5ff9371ba46aa626fa734e0 | [] | no_license | lfbessegato/Python_Avancado | 3b680d65fe543bd915b5798a85be1f7dadfad4c4 | bb73b99d64f92693a6fe71748f2c24aaabe7d4e1 | refs/heads/master | 2022-09-07T20:28:07.037656 | 2020-05-29T20:24:07 | 2020-05-29T20:24:07 | 265,316,529 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 178 | py | from collections import OrderedDict
# OrderedDict -> maintains insertion order
d = OrderedDict()
d['python'] = 10
d['java'] = 5
d['php'] = 6
d['C'] = 10
for key in d:
print(key, d[key])
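# Plain dicts also keep insertion order on Python 3.7+; OrderedDict still adds
# order-aware equality and reordering helpers, e.g.:
d.move_to_end('python')  # 'python' becomes the last key
print(list(d))           # ['java', 'php', 'C', 'python']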
| [
"[email protected]"
] | |
c5a55686d52aef4a636fcd08c7d52bca631af994 | 8c3ba133fa34cf2f936ba9176459690008e9e1fb | /imagepy/menus/Window/widgets_plgs.py | 4a05938af587349c3a114d2efe75198b21d28d8b | [
"BSD-2-Clause"
] | permissive | qixinbo/imagepy | fcd272b231b3f49fafd51425f46e826a73841c1f | a2722443dfddf2b0b81b44512427b8a273a7424c | refs/heads/master | 2023-03-16T15:58:57.330418 | 2022-09-03T13:35:46 | 2022-09-03T13:35:46 | 519,933,892 | 0 | 0 | BSD-4-Clause | 2022-08-01T02:02:26 | 2022-08-01T02:02:25 | null | UTF-8 | Python | false | false | 532 | py | from sciapp.action import Free
class Widgets(Free):
"""ImageKiller: derived from sciapp.action.Free"""
title = 'Widgets'
asyn = False
def run(self, para = None):
self.app.switch_widget()
class ToolBar(Free):
title = 'Toolbar'
asyn = False
def run(self, para = None):
self.app.switch_toolbar()
class TableWindow(Free):
"""ImageKiller: derived from sciapp.action.Free"""
title = 'Tables Window'
asyn = False
#process
def run(self, para = None):
self.app.switch_table()
plgs = [Widgets, ToolBar, TableWindow] | [
"[email protected]"
] | |
f6fa771d57a3a10af786708c35aa3393e0e40935 | 9c2ca939f29b861afec382cd17a462775a3974d0 | /run_worker.py | fcec489b5ac3ac725751dac7c59693090a0cba6f | [
"BSD-2-Clause"
] | permissive | merrlyne/gchatautorespond | 1e2009823e16289ea2cea709cfee5cd2a3e97459 | a7f8d7b715ca9851a65588a268ce39addb906b6d | refs/heads/master | 2020-03-20T12:49:18.882038 | 2018-03-29T18:38:58 | 2018-03-29T18:38:58 | 137,441,551 | 0 | 1 | null | 2018-06-15T04:38:49 | 2018-06-15T04:38:49 | null | UTF-8 | Python | false | false | 1,564 | py | from gevent import monkey
monkey.patch_all()
import django
django.setup()
import logging
from threading import Thread
from django.conf import settings
from gevent.wsgi import WSGIServer
from raven.contrib.flask import Sentry
from gchatautorespond.lib.chatworker.worker import Worker, app
from gchatautorespond.lib.chatworker.bot import ContextFilter
if __name__ == '__main__':
worker = Worker()
# Loading takes some time; don't block the api while it goes on.
thread = Thread(target=worker.load)
thread.start()
app.config['worker'] = worker
app.config['LOGGER_NAME'] = 'gchatautorespond.worker'
app.config.update({'SENTRY_' + k.upper(): v for (k, v) in settings.RAVEN_CONFIG.items()
if k != 'dsn'})
# Add the ContextFilter to all stream handlers.
# It can't be attached to the loggers since that wouldn't handle subloggers,
# nor can it be attached to null/sentry handlers, since it'd produce output twice.
handlers = set()
for logger_name in settings.LOGGING['loggers']:
logger = logging.getLogger(logger_name)
for handler in logger.handlers:
if isinstance(handler, logging.StreamHandler):
handlers.add(handler)
for handler in handlers:
handler.addFilter(ContextFilter)
if 'dsn' in settings.RAVEN_CONFIG:
sentry = Sentry(app, dsn=settings.RAVEN_CONFIG['dsn'],
logging=True, level=logging.ERROR)
server = WSGIServer(('127.0.0.1', settings.WORKER_PORT), app)
server.serve_forever()
| [
"[email protected]"
] | |
b2a8e001c69a95a4fb2a947d732d78d6d7d8c012 | 632b94beca62f7c8af5ae1d1e8e095a352600429 | /build/ros_controllers/ros_controllers/position_controllers/catkin_generated/pkg.installspace.context.pc.py | 4ddc4e67bff606fc70fdb62976ffda91a4cd6eb2 | [] | no_license | Haoran-Zhao/US_UR3 | d9eb17a7eceed75bc623be4f4db417a38f5a9f8d | a0c25e1daf613bb45dbd08075e3185cb9cd03657 | refs/heads/master | 2020-08-31T07:02:45.403001 | 2020-05-27T16:58:52 | 2020-05-27T16:58:52 | 218,629,020 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 507 | py | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "${prefix}/include".split(';') if "${prefix}/include" != "" else []
PROJECT_CATKIN_DEPENDS = "controller_interface;forward_command_controller".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "-lposition_controllers".split(';') if "-lposition_controllers" != "" else []
PROJECT_NAME = "position_controllers"
PROJECT_SPACE_DIR = "/home/haoran/US_UR3/install"
PROJECT_VERSION = "0.13.6"
| [
"[email protected]"
] | |
2fea31c0cd40ed40aa5a152c571bd75391e2bf24 | b47f2e3f3298388b1bcab3213bef42682985135e | /experiments/heat-3d/tmp_files/6909.py | efaecf45f0b280f386864f84a69acd803b7e70e3 | [
"BSD-2-Clause"
] | permissive | LoopTilingBenchmark/benchmark | 29cc9f845d323431e3d40e878cbfc6d1aad1f260 | 52a3d2e70216552a498fd91de02a2fa9cb62122c | refs/heads/master | 2020-09-25T09:45:31.299046 | 2019-12-04T23:25:06 | 2019-12-04T23:25:06 | 225,975,074 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 375 | py | from chill import *
source('/uufs/chpc.utah.edu/common/home/u1142914/lib/ytopt_vinu/polybench/polybench-code/stencils/heat-3d/kernel.c')
destination('/uufs/chpc.utah.edu/common/home/u1142914/lib/ytopt_vinu/experiments/heat-3d/tmp_files/6909.c')
procedure('kernel_heat_3d')
loop(0)
tile(0,2,8,2)
tile(0,4,64,3)
tile(0,6,128,4)
tile(1,2,8,2)
tile(1,4,64,3)
tile(1,6,128,4)
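# Sketch of intent (parameter roles assumed from CHiLL's tile() interface):
# each tile(stmt, loop, size, level) call tiles one loop of the given
# statement, so loops 2/4/6 of both statements get tile sizes 8/64/128.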
| [
"[email protected]"
] | |
c6061d5f295fed6a46483bf27ca17b45bf838027 | 4c7ea6295a487ec18543e82f66e08a3a2a2fd124 | /apps/logs/action/action_monster_level_reward.py | 8a7e07ee77cf001a6d538be71ca87a390ab9e53c | [] | no_license | robot-nan/GameLogServer | 16217689d88ac5353a61881b03adb1b372cc3e16 | ff2afd6d29e9dce6157a66ff62b4d1ea97d04184 | refs/heads/master | 2021-11-07T21:27:30.494271 | 2015-09-23T15:01:55 | 2015-09-23T15:01:55 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,292 | py | # -*- coding:utf-8 -*-
"""
Monster level reward log
"""
from apps.logs.action import action_base
from apps.utils import game_define
def log(user, gold, stone, equip_str, item_str):
"""
    Build and return the log string
"""
action = game_define.EVENT_ACTION_GET_MONSTER_LEVEL
cur_gold = user.player.get_gold()
cur_stone = user.player.get_stone()
log_lst = action_base.log_base(user)
log_lst.append(str(action))
log_lst.append(str(gold))
log_lst.append(str(cur_gold))
log_lst.append(str(stone))
log_lst.append(str(cur_stone))
log_lst.append(str(equip_str))
log_lst.append(str(item_str))
log_str = '$$'.join(log_lst)
return log_str
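# Example of the '$$'-joined layout produced above (leading fields come from
# action_base.log_base; concrete values are hypothetical):
#   <base fields>$$<action>$$<gold>$$<cur_gold>$$<stone>$$<cur_stone>$$<equip_str>$$<item_str>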
def parse(log_part_lst):
"""
    Parse one log line into a dict
"""
result = dict()
result['action'] = int(log_part_lst[0])
result['add_gold'] = int(log_part_lst[1])
result['cur_gold'] = int(log_part_lst[2])
result['add_stone'] = int(log_part_lst[3])
result['cur_stone'] = int(log_part_lst[4])
result['add_equip_list'] = action_base.get_val(log_part_lst, 5, [], True)
result['add_item_list'] = action_base.get_val(log_part_lst, 6, [], True)
result['old_gold'] = result['cur_gold'] - result['add_gold']
result['old_stone'] = result['cur_stone'] - result['add_stone']
return result | [
"[email protected]"
] | |
96b3f469a19190afd2c4b5b108d36137e49ac8d2 | 055581f9d6c81eda2f73ea05b90b7a2256da1219 | /parts/zodiac/zope/interface/common/tests/test_import_interfaces.py | 876383faf7234e93c6a5f7995278350ec5f54606 | [] | no_license | Tosti770/zodiac | 488a91c3e872a62d09a3ebb22a951dadcbd1c2df | af0380e20eb90699a84e3b7c6cb2085a1fb81667 | refs/heads/master | 2020-04-13T06:54:26.333228 | 2014-03-03T20:10:11 | 2014-03-03T20:10:11 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 121 | py | /home/ruben/zodiac/eggs/zope.interface-4.0.5-py2.7-linux-x86_64.egg/zope/interface/common/tests/test_import_interfaces.py | [
"[email protected]"
] | |
3bc24697dee04be43497c122b3028c6926362734 | b213fbd2f4f628aa0f2387c846673ac68e18aa91 | /Binary_Search/600.py | 4e544b47614dc36d42421f95f4fbc7fd3ea4e675 | [
"MIT"
] | permissive | wilbertgeng/LintCode_exercise | 94309b4451e34f1931fce6c2ae90d0c2e7c41d35 | e7a343b746e98ca3b4bc7b36655af7291f3150db | refs/heads/main | 2023-05-13T06:06:50.887791 | 2021-05-26T20:33:51 | 2021-05-26T20:33:51 | 347,850,106 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,701 | py | """600. Smallest Rectangle Enclosing Black Pixels
"""
class Solution:
"""
@param image: a binary matrix with '0' and '1'
@param x: the location of one of the black pixels
@param y: the location of one of the black pixels
@return: an integer
"""
def minArea(self, image, x, y):
# write your code here
if not image or not image[0]:
return 0
m = len(image)
n = len(image[0])
left = self.findFirst(image, 0, y, self.checkColumn)
right = self.findLast(image, y, n - 1, self.checkColumn)
up = self.findFirst(image, 0, x, self.checkRow)
down = self.findLast(image, x, m - 1, self.checkRow)
return (right - left + 1) * (down - up + 1)
def findFirst(self, image, start, end, checkFunc):
while start + 1 < end:
mid = (start + end) // 2
if not checkFunc(image, mid):
start = mid
else:
end = mid
if checkFunc(image, start):
return start
return end
def findLast(self, image, start, end, checkFunc):
while start + 1 < end:
mid = (start + end) // 2
if not checkFunc(image, mid):
end = mid
else:
start = mid
if checkFunc(image, end):
return end
return start
def checkRow(self, image, row):
for i in range(len(image[0])):
if image[row][i] == "1":
return True
return False
def checkColumn(self, image, col):
for i in range(len(image)):
if image[i][col] == "1":
return True
return False
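# Minimal usage sketch (hypothetical 3x4 image; (0, 2) is a known black pixel):
#   image = ["0010",
#            "0110",
#            "0100"]
#   Solution().minArea(image, 0, 2)  # -> 6: rows 0-2 x columns 1-2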
| [
"[email protected]"
] | |
a1d78a24879670100945b229da0affe9517608fe | c04a7a9627e2914c92d48d473ada6214e9044d9b | /music_spider/common/DbManager.py | d2f4f7289ff12e8ab9a1e32b5ff778e9af2a161b | [] | no_license | xsren/uplooking_spider | b00194399f927a21cb395698fadd076413e39754 | e4d9cfed8c9f28458df5806d583109e58b9391d6 | refs/heads/master | 2020-03-24T11:20:20.500996 | 2018-08-18T09:12:37 | 2018-08-18T09:12:37 | 142,682,770 | 1 | 1 | null | null | null | null | UTF-8 | Python | false | false | 9,791 | py | # coding:utf8
import sys
import time
import traceback
from queue import Queue
from pymongo import UpdateOne, ReplaceOne, InsertOne, UpdateMany
from pymongo.errors import BulkWriteError
from twisted.internet import reactor, defer
# local modules
sys.path.append('../main')
import settings
import common
class DbManager:
def __init__(self, logger, mc, write_queues, task_queues, count_queue):
self.logger = logger
self.mc = mc
self.write_queues = write_queues
self.task_queues = task_queues
self.count_queue = count_queue
def init_write_queue(self, dbName, collName):
if self.write_queues.get(dbName, None) == None:
self.write_queues[dbName] = {}
self.write_queues[dbName][collName] = Queue(maxsize=settings.write_queue_size)
else:
if self.write_queues[dbName].get(collName, None) == None:
self.write_queues[dbName][collName] = Queue(maxsize=settings.write_queue_size)
def init_task_queue(self, dbName, collName):
if self.task_queues.get(dbName, None) == None:
self.task_queues[dbName] = {}
self.task_queues[dbName][collName] = Queue(maxsize=settings.write_queue_size)
else:
if self.task_queues[dbName].get(collName, None) == None:
self.task_queues[dbName][collName] = Queue(maxsize=settings.write_queue_size)
'''
    Common handling code shared by typical site spiders
'''
def cleanup_handle_queue(self):
self.logger.info("clear ... cleanup begin")
try:
reactor.callInThread(self._handle_write_queue, 1)
# self._handle_write_queue(limit=1)
self._handle_count_queue(limit=1)
except BulkWriteError as bwe:
self.logger.error(bwe.details)
# you can also take this component and do more analysis
werrors = bwe.details['writeErrors']
self.logger.error(werrors)
except Exception as e:
self.logger.error(str(e))
traceback.print_exc()
self.logger.info("clear ... cleanup end")
def _handle_write_queue(self, limit):
for dbName, v in self.write_queues.items():
for collName, _queue in v.items():
if _queue.qsize() >= limit:
t0 = time.time()
requests, dups = [], []
qsize = _queue.qsize()
while _queue.qsize() > 0:
try:
tup = _queue.get_nowait()
_queue.task_done()
except Exception as e:
self.logger.error(str(e))
break
if tup[0] in dups:
continue
else:
dups.append(tup[0])
requests.append(tup[1])
if len(requests) > 0:
self.mc[dbName][collName].bulk_write(requests)
t_diff = time.time() - t0
info = "handle_write_queue,db:%s,coll:%s,size:%s,t_diff:%s" % (dbName, collName, qsize, t_diff)
self.logger.info(info)
def _handle_count_queue(self, limit):
if self.count_queue.qsize() >= limit:
t0 = time.time()
requests = []
qsize = self.count_queue.qsize()
while self.count_queue.qsize() > 0:
try:
tmp = self.count_queue.get_nowait()
self.count_queue.task_done()
except Exception as e:
self.logger.error(str(e))
break
requests.append(tmp)
if len(requests) > 0:
self.mc[settings.count_db_name][settings.count_coll_name].bulk_write(requests)
t_diff = time.time() - t0
info = "handle_count_queue,size:%s,t_diff,%s" % (qsize, t_diff)
self.logger.info(info)
@defer.inlineCallbacks
def _common_put_task_to_db(self, dbName, collName, data):
t0 = time.time()
self.init_write_queue(dbName, collName)
        # stats: record how many of these tasks are new
res = yield self.mc[dbName][collName].find({"url": {"$in": list(set([t['url'] for t in data]))}}, {'url': 1})
exists = [r['url'] for r in res]
self.saveCountData(dbName, collName, common.NEW_TASK, len(data) - len(exists))
        # queue an insert for every task not seen before
for t in data:
if t["url"] not in exists:
self.write_queues[dbName][collName].put((t['url'], InsertOne(t)))
t_diff = time.time() - t0
info = "%s, %s, %s" % (dbName, collName, t_diff)
self.logger.debug(info)
defer.returnValue([])
@defer.inlineCallbacks
def _common_get_task_from_db(self, dbName, collName, count):
t0 = time.time()
self.init_task_queue(dbName, collName)
info = '%s, %s, qsize:%s' % (dbName, collName, self.task_queues[dbName][collName].qsize())
self.logger.debug(info)
if self.task_queues[dbName][collName].qsize() <= 0:
t1 = time.time()
tasks = yield self.mc[dbName][collName].find({'status': common.NOT_CRAWL},
limit=count * 10) # .limit(settings.get_tasks_num_one_time)
# tasks = self.mc[dbName][collName].find({'status':common.NOT_CRAWL}, limit=settings.get_tasks_num_one_time)
requests, ts = [], []
for task in tasks:
requests.append(
UpdateMany({'url': task["url"]}, {"$set": {"status": common.CRAWLING, "last_crawl_time": 0}}))
task.pop('_id')
ts.append(task)
if len(requests) > 0:
# self.mc[dbName][collName].bulk_write(requests)
yield self.mc[dbName][collName].bulk_write(requests)
for t in ts:
self.task_queues[dbName][collName].put(t)
t_diff = time.time() - t1
info = "query mongo, %s, %s, get:%s, use time:%s" % (dbName, collName, len(ts), t_diff)
self.logger.debug(info)
ts = []
for x in range(count):
try:
t = self.task_queues[dbName][collName].get_nowait()
self.task_queues[dbName][collName].task_done()
ts.append(t)
except:
# self.logger.error(str(e))
continue
t_diff = time.time() - t0
info = "total, %s, %s, return : %s , use time : %s" % (dbName, collName, len(ts), t_diff)
self.logger.debug(info)
defer.returnValue(ts)
def _common_change_task_status(self, dbName, collName, data):
t0 = time.time()
self.init_write_queue(dbName, collName)
        # stats: count tasks that crawled successfully
success = [t['url'] for t in data if t['status'] == common.CRAWL_SUCCESS]
self.saveCountData(dbName, collName, common.ONE_TASK, len(success))
        # queue a status/last_crawl_time update for each task
for t in data:
# self.logger.debug('url:%s,status:%s'%(t['url'],t['status']))
self.write_queues[dbName][collName].put(
(t['url'],
UpdateMany({'url': t['url']},
{"$set": {'status': t['status'],
'last_crawl_time': time.time()
}})
)
)
t_diff = time.time() - t0
info = "%s, %s, %s" % (dbName, collName, t_diff)
self.logger.debug(info)
def _common_put_data_to_db(self, dbName, collName, data):
"""为了性能,使用的是eplace方法"""
t0 = time.time()
self.init_write_queue(dbName, collName)
        # stats: count stored data items
self.saveCountData(dbName, collName, common.ONE_DATA, len(data))
#
for t in data:
t['crawl_time'] = time.time()
self.write_queues[dbName][collName].put((t['url'], ReplaceOne({'url': t['url']}, t, upsert=True)))
t_diff = time.time() - t0
info = "%s, %s, %s" % (dbName, collName, t_diff)
self.logger.debug(info)
def _common_insert_data_if_not_exist(self, dbName, collName, data):
"""如果数据不存在则插入,否则pass"""
t0 = time.time()
for t in data:
if not self.mc[dbName][collName].find_one({'url': t['url']}):
t['crawl_time'] = time.time()
self.mc[dbName][collName].insert_one(t)
t_diff = time.time() - t0
info = "%s, %s, %s" % (dbName, collName, t_diff)
self.logger.debug(info)
def _common_update_data(self, dbName, collName, data):
"""更新数据"""
t0 = time.time()
self.init_write_queue(dbName, collName)
for t in data:
t['crawl_time'] = time.time()
self.write_queues[dbName][collName].put((t['url'],
UpdateOne({'url': t['url']}, t, upsert=True)))
t_diff = time.time() - t0
info = "%s, %s, %s" % (dbName, collName, t_diff)
self.logger.debug(info)
    # persist counter statistics
def saveCountData(self, dbName, collName, _type, count):
date = time.strftime("%Y-%m-%d", time.localtime())
        # per-site counter
u1 = UpdateOne({'date': date, 'dbName': dbName, 'collName': collName, "_type": _type},
{'$inc': {'total': count}}, upsert=True)
        # overall counter
u2 = UpdateOne({'date': date, 'dbName': "all", 'collName': "all", "_type": _type}, {'$inc': {'total': count}},
upsert=True)
self.count_queue.put(u1)
self.count_queue.put(u2)
if __name__ == '__main__':
pass
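    # Minimal usage sketch (assumes a txmongo-style async Mongo client and a
    # settings module providing write_queue_size/count_db_name/count_coll_name):
    #   manager = DbManager(logger, mongo_client, {}, {}, Queue())
    #   manager._common_put_data_to_db('news', 'articles',
    #                                  [{'url': 'http://example.com/1'}])
    #   manager.cleanup_handle_queue()  # flushes the queued bulk writes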
| [
"[email protected]"
] | |
21a3244c094f2c6fdba9b385874dde119094b631 | 525690b220962de7f6253dd1dc557717cffc3441 | /openstack/tests/unit/cloud_eye/test_cloudeye_service.py | 35b73816fe5fdd9edf0eaeabe9ed72d39d48f02c | [
"Apache-2.0"
] | permissive | huaweicloudsdk/sdk-python | bb8dc2bc195d0bdaddf13fef484e3f28aeb2681f | 60d75438d71ffb7998f5dc407ffa890cc98d3171 | refs/heads/master | 2021-06-05T00:04:59.030371 | 2018-09-30T09:40:49 | 2018-09-30T09:40:49 | 110,813,153 | 20 | 18 | NOASSERTION | 2020-07-23T17:01:59 | 2017-11-15T09:31:50 | Python | UTF-8 | Python | false | false | 1,102 | py | # Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import testtools
from openstack.cloud_eye import cloud_eye_service
class TestCloudEyeService(testtools.TestCase):
def test_service(self):
sot = cloud_eye_service.CloudEyeService()
self.assertEqual('cloud-eye', sot.service_type)
self.assertEqual('public', sot.interface)
self.assertIsNone(sot.region)
self.assertIsNone(sot.service_name)
self.assertEqual(1, len(sot.valid_versions))
self.assertEqual('v1', sot.valid_versions[0].module)
self.assertEqual('v1', sot.valid_versions[0].path)
| [
"[email protected]"
] | |
0e4635e67e5d0d55f1378f08260a7f06ee2e70cc | c6292c1dd68f0c4dd3389628de0d2b786fa0ee64 | /0x06-python-classes/0-square.py | 4c030f02dd19538abaad2c6f969458caac16080a | [] | no_license | mj31508/holbertonschool-higher_level_programming2 | 835be695b568cd189c1448c54218a0201830005f | 3fa47001c041cd0c74f88c3a19677e126bee37b4 | refs/heads/master | 2021-07-06T22:31:05.040354 | 2017-09-29T05:28:45 | 2017-09-29T05:28:45 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 79 | py | #!/usr/bin/python3
"""
Defines empty class Square
"""
class Square:
pass
| [
"[email protected]"
] | |
9005aa6da759029734d49699d61f6dfb82e382ee | 9743d5fd24822f79c156ad112229e25adb9ed6f6 | /xai/brain/wordbase/adjectives/_subordinating.py | f414e402006d840f8543ad90d26aa0594767e83c | [
"MIT"
] | permissive | cash2one/xai | de7adad1758f50dd6786bf0111e71a903f039b64 | e76f12c9f4dcf3ac1c7c08b0cc8844c0b0a104b6 | refs/heads/master | 2021-01-19T12:33:54.964379 | 2017-01-28T02:00:50 | 2017-01-28T02:00:50 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 285 | py |
from xai.brain.wordbase.adjectives._subordinate import _SUBORDINATE
# class header
class _SUBORDINATING(_SUBORDINATE, ):
def __init__(self,):
_SUBORDINATE.__init__(self)
self.name = "SUBORDINATING"
self.specie = 'adjectives'
self.basic = "subordinate"
self.jsondata = {}
| [
"[email protected]"
] | |
b204c2541423dde521dadee3fceaa2623a7ebe59 | 7a4ed01a40e8d79126b26f5e8fca43c8e61e78fd | /Geeky Shows/Core Python/128.reduce_Function[159].py | de2e40159a3a8f0c556f8d593fb761aa271c252a | [] | no_license | satyam-seth-learnings/python_learning | 5a7f75bb613dcd7fedc31a1567a434039b9417f8 | 7e76c03e94f5c314dcf1bfae6f26b4a8a6e658da | refs/heads/main | 2023-08-25T14:08:11.423875 | 2021-10-09T13:00:49 | 2021-10-09T13:00:49 | 333,840,032 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 113 | py | from functools import reduce
a=[10,20,30,40,50]
result=reduce(lambda n,m:n+m,a)
print(result)
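# reduce folds left-to-right: ((((10 + 20) + 30) + 40) + 50) = 150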
print(type(result)) | [
"[email protected]"
] | |
e4f116f2d1e3e8c08ce2c7c35289d60e7a2455e5 | 3b2940c38412e5216527e35093396470060cca2f | /top/api/rest/SimbaAdgroupsChangedGetRequest.py | 5b87c69b8afb5cee0f9b6ef7edd9ba1abccebb0d | [] | no_license | akingthink/goods | 842eb09daddc2611868b01ebd6e330e5dd7d50be | ffdb5868a8df5c2935fc6142edcdf4c661c84dca | refs/heads/master | 2021-01-10T14:22:54.061570 | 2016-03-04T09:48:24 | 2016-03-04T09:48:24 | 45,093,302 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 398 | py | '''
Created by auto_sdk on 2015-01-20 12:44:31
'''
from top.api.base import RestApi
class SimbaAdgroupsChangedGetRequest(RestApi):
def __init__(self,domain='gw.api.taobao.com',port=80):
RestApi.__init__(self,domain, port)
self.nick = None
self.page_no = None
self.page_size = None
self.start_time = None
def getapiname(self):
return 'taobao.simba.adgroups.changed.get'
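    # Usage sketch (hypothetical values; sending the request is handled by the
    # TOP SDK's RestApi base class):
    #   req = SimbaAdgroupsChangedGetRequest()
    #   req.nick = 'seller_nick'
    #   req.start_time = '2015-01-01 00:00:00'
    #   req.page_no, req.page_size = 1, 200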
| [
"[email protected]"
] | |
5d857613084649081412e62cd5a3dd7998e0f1ec | a66460a46611483dfbdc94c7996893f427e60d97 | /ansible/my_env/lib/python2.7/site-packages/ansible/modules/cloud/amazon/ec2_ami_facts.py | fe8b57e7640e37f06e0f2c3b8a4e942fbe51bad9 | [
"MIT"
] | permissive | otus-devops-2019-02/yyashkin_infra | 06b57807dde26f94f501828c07503d6bf1d70816 | 0cd0c003884155ac922e3e301305ac202de7028c | refs/heads/master | 2020-04-29T02:42:22.056724 | 2019-05-15T16:24:35 | 2019-05-15T16:24:35 | 175,780,718 | 0 | 0 | MIT | 2019-05-15T16:24:36 | 2019-03-15T08:37:35 | HCL | UTF-8 | Python | false | false | 8,737 | py | #!/usr/bin/python
# Copyright: Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ec2_ami_facts
version_added: '2.5'
short_description: Gather facts about ec2 AMIs
description: Gather facts about ec2 AMIs
author:
- Prasad Katti, @prasadkatti
requirements: [ boto3 ]
options:
image_ids:
description: One or more image IDs.
aliases: [image_id]
filters:
description:
- A dict of filters to apply. Each dict item consists of a filter key and a filter value.
- See U(https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) for possible filters.
- Filter names and values are case sensitive.
owners:
description:
- Filter the images by the owner. Valid options are an AWS account ID, self,
- or an AWS owner alias ( amazon | aws-marketplace | microsoft ).
aliases: [owner]
executable_users:
description:
- Filter images by users with explicit launch permissions. Valid options are an AWS account ID, self, or all (public AMIs).
aliases: [executable_user]
describe_image_attributes:
description:
- Describe attributes (like launchPermission) of the images found.
default: no
type: bool
extends_documentation_fragment:
- aws
- ec2
'''
EXAMPLES = '''
# Note: These examples do not set authentication details, see the AWS Guide for details.
- name: gather facts about an AMI using ami-id
ec2_ami_facts:
image_ids: ami-5b488823
- name: gather facts about all AMIs with tag key Name and value webapp
ec2_ami_facts:
filters:
"tag:Name": webapp
- name: gather facts about an AMI with 'AMI Name' equal to foobar
ec2_ami_facts:
filters:
name: foobar
- name: gather facts about Ubuntu 17.04 AMIs published by Canonical (099720109477)
ec2_ami_facts:
owners: 099720109477
filters:
name: "ubuntu/images/ubuntu-zesty-17.04-*"
'''
RETURN = '''
images:
description: a list of images
returned: always
type: complex
contains:
architecture:
description: The architecture of the image
returned: always
type: string
sample: x86_64
block_device_mappings:
description: Any block device mapping entries
returned: always
type: complex
contains:
device_name:
description: The device name exposed to the instance
returned: always
type: string
sample: /dev/sda1
ebs:
description: EBS volumes
returned: always
type: complex
creation_date:
description: The date and time the image was created
returned: always
type: string
sample: '2017-10-16T19:22:13.000Z'
description:
description: The description of the AMI
returned: always
type: string
sample: ''
ena_support:
description: whether enhanced networking with ENA is enabled
returned: always
type: bool
sample: true
hypervisor:
description: The hypervisor type of the image
returned: always
type: string
sample: xen
image_id:
description: The ID of the AMI
returned: always
type: string
sample: ami-5b466623
image_location:
description: The location of the AMI
returned: always
type: string
sample: 408466080000/Webapp
image_type:
description: The type of image
returned: always
type: string
sample: machine
launch_permissions:
description: launch permissions of the ami
returned: when image is owned by calling account and describe_image_attributes is yes
type: complex
sample: [{"group": "all"}, {"user_id": "408466080000"}]
name:
description: The name of the AMI that was provided during image creation
returned: always
type: string
sample: Webapp
owner_id:
description: The AWS account ID of the image owner
returned: always
type: string
sample: '408466080000'
public:
description: whether the image has public launch permissions
returned: always
type: bool
sample: true
root_device_name:
description: The device name of the root device
returned: always
type: string
sample: /dev/sda1
root_device_type:
description: The type of root device used by the AMI
returned: always
type: string
sample: ebs
sriov_net_support:
description: whether enhanced networking is enabled
returned: always
type: string
sample: simple
state:
description: The current state of the AMI
returned: always
type: string
sample: available
tags:
description: Any tags assigned to the image
returned: always
type: complex
virtualization_type:
description: The type of virtualization of the AMI
returned: always
type: string
sample: hvm
'''
try:
from botocore.exceptions import ClientError, BotoCoreError
except ImportError:
pass
from ansible.module_utils.aws.core import AnsibleAWSModule
from ansible.module_utils.ec2 import (boto3_conn, ec2_argument_spec, get_aws_connection_info, ansible_dict_to_boto3_filter_list,
camel_dict_to_snake_dict, boto3_tag_list_to_ansible_dict)
def list_ec2_images(ec2_client, module):
image_ids = module.params.get("image_ids")
owners = module.params.get("owners")
executable_users = module.params.get("executable_users")
filters = module.params.get("filters")
owner_param = []
# describe_images is *very* slow if you pass the `Owners`
# param (unless it's self), for some reason.
# Converting the owners to filters and removing from the
# owners param greatly speeds things up.
# Implementation based on aioue's suggestion in #24886
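    # e.g. owners=['self', '099720109477'] becomes Owners=['self'] plus a
    # filter owner-id=['099720109477'] below (example values hypothetical).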
for owner in owners:
if owner.isdigit():
if 'owner-id' not in filters:
filters['owner-id'] = list()
filters['owner-id'].append(owner)
elif owner == 'self':
# self not a valid owner-alias filter (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html)
owner_param.append(owner)
else:
if 'owner-alias' not in filters:
filters['owner-alias'] = list()
filters['owner-alias'].append(owner)
filters = ansible_dict_to_boto3_filter_list(filters)
try:
images = ec2_client.describe_images(ImageIds=image_ids, Filters=filters, Owners=owner_param, ExecutableUsers=executable_users)
images = [camel_dict_to_snake_dict(image) for image in images["Images"]]
except (ClientError, BotoCoreError) as err:
module.fail_json_aws(err, msg="error describing images")
for image in images:
try:
image['tags'] = boto3_tag_list_to_ansible_dict(image.get('tags', []))
if module.params.get("describe_image_attributes"):
launch_permissions = ec2_client.describe_image_attribute(Attribute='launchPermission', ImageId=image['image_id'])['LaunchPermissions']
image['launch_permissions'] = [camel_dict_to_snake_dict(perm) for perm in launch_permissions]
except (ClientError, BotoCoreError) as err:
# describing launch permissions of images owned by others is not permitted, but shouldn't cause failures
pass
images.sort(key=lambda e: e.get('creation_date', '')) # it may be possible that creation_date does not always exist
module.exit_json(images=images)
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
image_ids=dict(default=[], type='list', aliases=['image_id']),
filters=dict(default={}, type='dict'),
owners=dict(default=[], type='list', aliases=['owner']),
executable_users=dict(default=[], type='list', aliases=['executable_user']),
describe_image_attributes=dict(default=False, type='bool')
)
)
module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)
region, ec2_url, aws_connect_params = get_aws_connection_info(module, boto3=True)
if region:
ec2_client = boto3_conn(module, conn_type='client', resource='ec2', region=region, endpoint=ec2_url, **aws_connect_params)
else:
module.fail_json(msg="region must be specified")
list_ec2_images(ec2_client, module)
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
284b16862546a04753ca39ee352a14563fc28272 | eaf97194e79c31d80f7786b64bbf621581a95dec | /example.py | bba3baa0753070fdc4a03e7eb9cbacab6300db59 | [] | no_license | codesharedot/levolution-price | 333902c32137f9a82bd9d21b26575d646e0f4bb9 | 60c9d52fa42190e4fa929ead32ca611766906005 | refs/heads/master | 2020-08-02T14:26:47.832555 | 2019-09-27T19:27:47 | 2019-09-27T19:27:47 | 211,388,241 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 616 | py | import requests
import json
from forex_python.converter import CurrencyRates
import os
c = CurrencyRates()
rate = c.get_rate('USD', 'EUR')
print(rate)
levolution_api_url = 'https://api.coinmarketcap.com/v1/ticker/levolution/'
response = requests.get(levolution_api_url)
response_json = response.json()
print(response_json)
for coin in response.json():
price = coin.get("price_usd", "U$S Price not provided")
coin_price = float(("{0:.2f}").format(float(price)))
print("$ " + str(coin_price))
coin_price_eur = float(("{0:.2f}").format(float(price)*rate))
print("€ " + str(coin_price_eur))
| [
"[email protected]"
] | |
df3847d46c128ea4255e64467cb577d4e348b21b | 469e3e8de616263bab857df1050d426f40c30d5c | /module3.py | 5d847f0e4e820befd42f51495c91329a0d3b6499 | [
"MIT"
] | permissive | listenzcc/QuickPythonConfig | d487e3c35e906f84503d8992152ee79909d0da30 | ff883c1dd2b7a23a114ec794e3d711fd5d1d15c1 | refs/heads/main | 2023-01-07T20:30:39.060803 | 2020-11-10T08:52:10 | 2020-11-10T08:52:10 | 306,575,328 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 371 | py | # Example of using customized Config object
from Package.defines import Config
config = Config()
config.reload_logger('develop')
config.reload_cfg()
def main():
print('---------------------------------------------------------')
print(config.peek())
config.set('Module 3', 'says', 'It should be a brand new configure')
print(config.peek())
| [
"[email protected]"
] | |
6f45e31cd38fe22467cfb6b9bef6d61c3073ffef | 38c76d29799896a8335bd83b6220acd71d5d8bed | /pyeuler/p053.py | 32ec8ddbb6ba17ace49313419244944a5c2dde50 | [] | no_license | oozk/pyeuler | c010505624bb95043883faa55a776d954c0496dc | 74fd549985722f6d53a1394179d094a106c70689 | refs/heads/master | 2023-04-13T17:50:23.187918 | 2023-04-05T13:00:44 | 2023-04-05T13:00:44 | 261,848,886 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 373 | py | #!/usr/bin/env python3
###
# Problem 53
# https://projecteuler.net/problem=53
###
from math import factorial
def p053(l):
factorials = dict((i, factorial(i)) for i in range(0, l+1))
n_choose_r = lambda n, r: factorials[n] / factorials[r] / factorials[n-r]
return sum(1 for n in range(1, l+1) for r in range(1, n) if n_choose_r(n, r) > 1e6)
print(p053(100))
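# Sanity check: C(23, 10) = 1144066 is the first binomial coefficient above
# one million, and p053(100) evaluates to the published answer 4075.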
| [
"[email protected]"
] | |
d8a83a203ad3e39efa214b7786b7be640e6c5c2d | af192ea16aad4264a92039d594d72acca91d0e33 | /tests/tests.py | d329884aa00948981f710044b18f363c5eea0ca8 | [
"MIT"
] | permissive | TakumiHQ/emoji-unicode | ceed81325829e2c44b6d1b04c4dbc7257cc95c86 | 85e8193f05f822641a58eb539b765481b084f83c | refs/heads/master | 2021-01-18T16:08:52.817116 | 2015-11-20T13:10:44 | 2015-11-20T13:10:44 | 66,449,921 | 1 | 0 | null | 2016-08-24T09:17:27 | 2016-08-24T09:17:27 | null | UTF-8 | Python | false | false | 6,812 | py | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import unittest
import logging
import os
import json
import io
from emoji_unicode import replace, normalize, Emoji
from emoji_unicode.utils import code_point_to_unicode, unicode_to_code_point
from emoji_unicode import data_parser
logging.disable(logging.CRITICAL)
DIR = os.path.dirname(__file__)
FIXTURES = os.path.join(DIR, 'fixtures')
EMOJI_PRETTY_JSON = None
def _get_emoji_pretty():
global EMOJI_PRETTY_JSON
if EMOJI_PRETTY_JSON is not None:
return EMOJI_PRETTY_JSON
with io.open(os.path.join(FIXTURES, 'emoji_pretty.json'), encoding='utf-8') as fh:
EMOJI_PRETTY_JSON = fh.read()
return EMOJI_PRETTY_JSON
def get_emoji_pretty():
return json.loads(_get_emoji_pretty())
def code_points_to_unicode(code_points):
return ''.join(
code_point_to_unicode(p)
for p in code_points.split('-')
)
def get_emojis(include_skin_variations=True, include_variations=True):
# todo: include variations (emoji + emo_variation), android doesn't use them, check iOS
emojis = []
for e in get_emoji_pretty():
emojis.append({
'unicode': code_points_to_unicode(e['unified']),
'code_point': e['unified'],
'short_name': e['short_name']
})
if include_skin_variations:
emojis.extend(
{
'unicode': code_points_to_unicode(point),
'code_point': point,
'short_name': e['short_name']
}
for point in e.get('skin_variations', {}).keys()
)
if include_variations:
emojis.extend(
{
'unicode': code_points_to_unicode(point),
'code_point': point,
'short_name': e['short_name']
}
for point in e.get('variations', [])
)
return emojis
def get_emojis_unicode(**kw):
return [e['unicode'] for e in get_emojis(**kw)]
class MetaTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_code_points_to_unicode(self):
self.assertEqual(
code_points_to_unicode('1F58B-1F58B-1F58B'),
'\U0001f58b\U0001f58b\U0001f58b'
)
def test_get_emojis(self):
self.assertEqual(len(get_emojis()), 1736)
self.assertEqual(len(get_emojis(include_skin_variations=False)), 1416)
self.assertEqual(len(get_emojis(include_variations=False)), 1619)
def test_get_emojis_unicode(self):
self.assertEqual(len(get_emojis_unicode()), 1736)
self.assertEqual(len(get_emojis_unicode(include_skin_variations=False)), 1416)
        self.assertEqual(len(get_emojis_unicode(include_variations=False)), 1619)
class UtilsTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_code_point_to_unicode(self):
self.assertEqual(
code_point_to_unicode('1F58B'),
'\U0001f58b'
)
def test_unicode_to_code_point(self):
self.assertEqual(
unicode_to_code_point('\U0001f58b'),
'1F58B'.lower()
)
class ModelEmojiTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_unicode(self):
emoji = Emoji(unicode='foo')
self.assertEqual(emoji.unicode, 'foo')
def test_code_points(self):
emoji = Emoji(unicode='\U0001f58b\U0001f58b\U0001f58b\uFE0F\u200D')
self.assertEqual(emoji.code_points, '1F58B-1F58B-1F58B'.lower())
def test_as_map(self):
emoji = Emoji(unicode='\U0001f58b\U0001f58b\U0001f58b\uFE0F\u200D')
self.assertEqual(
emoji.as_map(),
[('\U0001f58b', '1f58b'), ('\U0001f58b', '1f58b'), ('\U0001f58b', '1f58b')]
)
class ParserTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_replace(self):
"""
It should replace all emojis
"""
emojis = get_emojis()
# With no spaces will fail due to fitzpatrick tone being a modifier and also a emoji
txt = ' '.join(get_emojis_unicode())
txt_code_points = ' '.join(normalize(e['code_point']) for e in emojis)
res = replace(txt, lambda emoji: emoji.code_points)
self.assertEqual(res, txt_code_points)
def test_replace_with_no_fitz(self):
"""
It should replace no-spaced emojis, excluding fitzpatrick tone emojis
"""
emojis = get_emojis()
txt = ''.join(
e['unicode']
for e in emojis
if 'skin-tone' not in e['short_name']
)
txt_code_points = ''.join(
normalize(e['code_point'])
for e in emojis
if 'skin-tone' not in e['short_name']
)
res = replace(txt, lambda emoji: emoji.code_points)
self.assertEqual(res, txt_code_points)
def test_replace_remove(self):
txt = ''.join(get_emojis_unicode())
res = replace(txt, lambda emoji: '')
self.assertEqual(res, '')
def test_replace_digits(self):
"""
It should not match single digits
"""
txt = '#*0123456789'
res = replace(txt, lambda emoji: '')
self.assertEqual(res, txt)
def test_replace_text_variations(self):
"""
It should not match emojis with text variation
"""
txt = '\u203C\uFE0E'
res = replace(txt, lambda emoji: '')
self.assertEqual(res, txt)
def test_normalize(self):
self.assertEqual(normalize('00A900'), 'a900')
def test_normalize_variations(self):
self.assertEqual(normalize('00A9-FE0F-200D-F00'), 'a9-f00')
def test_normalize_separator(self):
self.assertEqual(normalize('00A9_FE0F_200D_F00', separator='_'), 'a9_f00')
class DataParserTest(unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
pass
def test_parse(self):
res = set(data_parser.parse())
self.assertTrue('\u00A9' in res)
self.assertTrue('\u2194-\u2199' in res) # range
def test_read_template(self):
template = data_parser.read_template()
self.assertTrue('{{code_points}}' in template)
self.assertTrue('RE_PATTERN_TEMPLATE' in template)
def test_render_template(self):
code_points = data_parser.parse()
template = data_parser.read_template()
rendered_template = data_parser.render_template(template, code_points)
self.assertTrue('{{code_points}}' not in rendered_template)
self.assertTrue('RE_PATTERN_TEMPLATE' in rendered_template)
| [
"[email protected]"
] | |
81692c3527f89a21d770bcf0dfe69059814ffe59 | 64f5c0f229e1b1186f12d75b4ba21c07adfcf152 | /index/models.py | fc2791e3989c627bacd03d10534b92d43824f717 | [] | no_license | Emehinola/intouch | 22dd3a81c935956914362604b8fd60d6d7cd2a46 | d370a48c21b93aed797c32a0621c3fa8bda89857 | refs/heads/master | 2023-01-24T22:37:41.862324 | 2020-12-13T08:54:55 | 2020-12-13T08:54:55 | 318,006,792 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 397 | py | from django.db import models
# Create your models here.
# home page view, i.e the surveys views
class HomeViewCount(models.Model):
views = models.IntegerField(default=0) # number of views or visits
time = models.DateTimeField(auto_now_add=True)
class Meta:
ordering = ['-time']
verbose_name_plural = 'Views'
def __str__(self):
return f'{self.views}'
| [
"[email protected]"
] | |
e0a422afafd0c518668c019f26bccbc9e6a9bb01 | 6572f29c4472f1bd131dfb0fba441cb5b641ec83 | /django/mysite_personal_models/blog/urls.py | 6bdac35817b4968ca9cf7207271375c46c4feef1 | [] | no_license | kan-abhulimen/jango-training | 1ccbe04c9f2f481d4482e9fdfd50b1a5b43cc7ae | 734087392cd9635f00596a7955882f4849883930 | refs/heads/master | 2020-03-07T18:37:44.332426 | 2018-01-27T16:11:29 | 2018-01-27T16:11:29 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 545 | py | from django.conf.urls import url, include
from django.views.generic import ListView, DetailView
from blog.models import Post
urlpatterns = [
url(r'^$', ListView.as_view(
queryset=Post.objects.all().order_by("-date")[:25],
template_name="blog/blog.html")),
url(r'^(?P<pk>\d+)$', DetailView.as_view(
model = Post,
template_name="blog/post.html")),
]
| [
"[email protected]"
] | |
c442740a0bcbc288556a64daa57037c8a3f469ab | d0326c87cda35a4c80d1bb137894a33ca3f1bcc9 | /jetracer/nvidia_racecar.py | ec1631454f4df2f3fb9181022e6a30c9bf3caab6 | [
"MIT"
] | permissive | tokk-nv/jetracer | 5a36fcf809348b609331d369d71cca20010c954a | e83f11522f75d5f89486442ce2e36624e20970a7 | refs/heads/master | 2023-07-03T21:58:25.670731 | 2021-08-09T23:33:58 | 2021-08-09T23:33:58 | 321,274,145 | 1 | 0 | MIT | 2021-06-01T20:47:28 | 2020-12-14T07:59:07 | Jupyter Notebook | UTF-8 | Python | false | false | 1,115 | py | from .racecar import Racecar
import traitlets
from adafruit_servokit import ServoKit
class NvidiaRacecar(Racecar):
i2c_address = traitlets.Integer(default_value=0x40)
steering_gain = traitlets.Float(default_value=-0.65)
steering_offset = traitlets.Float(default_value=0)
steering_channel = traitlets.Integer(default_value=0)
throttle_gain = traitlets.Float(default_value=0.8)
throttle_channel = traitlets.Integer(default_value=1)
def __init__(self, *args, **kwargs):
super(NvidiaRacecar, self).__init__(*args, **kwargs)
self.kit = ServoKit(channels=16, address=self.i2c_address)
self.steering_motor = self.kit.continuous_servo[self.steering_channel]
self.throttle_motor = self.kit.continuous_servo[self.throttle_channel]
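    # Usage sketch (assumes a PCA9685 servo board at i2c_address and that the
    # Racecar base class declares the observed `steering`/`throttle` traits):
    #   car = NvidiaRacecar()
    #   car.steering = 0.3   # handled by _on_steering below
    #   car.throttle = 0.5   # handled by _on_throttle below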
@traitlets.observe('steering')
def _on_steering(self, change):
self.steering_motor.throttle = change['new'] * self.steering_gain + self.steering_offset
@traitlets.observe('throttle')
def _on_throttle(self, change):
self.throttle_motor.throttle = change['new'] * self.throttle_gain | [
"[email protected]"
] | |
47955f0cfeb6ad8ef9ded6bffccc8aa706933bee | acb8e84e3b9c987fcab341f799f41d5a5ec4d587 | /langs/5/l5d.py | d8160fd04b64e99d78c07839a70756ef5e92a3dc | [] | no_license | G4te-Keep3r/HowdyHackers | 46bfad63eafe5ac515da363e1c75fa6f4b9bca32 | fb6d391aaecb60ab5c4650d4ae2ddd599fd85db2 | refs/heads/master | 2020-08-01T12:08:10.782018 | 2016-11-13T20:45:50 | 2016-11-13T20:45:50 | 73,624,224 | 0 | 1 | null | null | null | null | UTF-8 | Python | false | false | 486 | py | import sys
def printFunction(lineRemaining):
if lineRemaining[0] == '"' and lineRemaining[-1] == '"':
if len(lineRemaining) > 2:
#data to print
lineRemaining = lineRemaining[1:-1]
print ' '.join(lineRemaining)
else:
print
def main(fileName):
with open(fileName) as f:
for line in f:
data = line.split()
if data[0] == 'l5D':
printFunction(data[1:])
else:
print 'ERROR'
return
if __name__ == '__main__':
main(sys.argv[1]) | [
"[email protected]"
] | |
e6334f2a64f9a1b31d53af7cad6ac3abe5758f7d | 8cce0b5a4be09783016906a36192c52e9daa84aa | /cv_workshops/13-section/7-clazz.py | 4640afbe5beea856e54129e53ebbe39d9916db00 | [
"MIT"
] | permissive | Castrol68/opencv-practice | fcc9495553d3a10fb045c396697391a5d2a06f36 | 83d76132d004ebbc96d99d34a0fd3fc37a044f9f | refs/heads/master | 2023-08-31T07:18:51.497902 | 2020-05-03T17:43:12 | 2020-05-03T17:43:12 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,057 | py | #!/usr/bin/env python3
# -*- coding=utf-8 -*-
import tensorflow as tf
"""
TensorFlow - hello world
Use the installed TensorFlow 2.0 and import it.
"""
def main():
    # load the MNIST dataset; it can be downloaded from http://yann.lecun.com/exdb/mnist/
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
    # convert the integer pixel data to floating point
x_train, x_test = x_train / 255.0, x_test / 255.0
    # build a Sequential model by stacking layers
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
    # train
model.fit(x_train, y_train, epochs=5)
    # evaluate
model.evaluate(x_test, y_test)
if "__main__" == __name__:
main()
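    # With this setup the model typically reaches roughly 98% test accuracy
    # after 5 epochs (ballpark figure for the standard MNIST quickstart).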
| [
"[email protected]"
] | |
6ec621dff6324b2822383c42b374ac54637d859a | dc72e1eb44cfaed330d9477d0c27bee307a81e4a | /Jackpointnew/hand/scripts.py | 47913e46358e99f954ce1218749325896b5b7a09 | [] | no_license | bussiere/JackPointFInal | ba200d85606e17b423535af20a58c04bf5afa550 | c414480fee519e68aece68068e941278fe10cf0a | refs/heads/master | 2021-07-24T14:25:56.982106 | 2013-07-08T11:10:41 | 2013-07-08T11:10:41 | 5,333,141 | 0 | 0 | null | 2021-06-10T17:45:33 | 2012-08-07T20:29:46 | Python | UTF-8 | Python | false | false | 3,808 | py | from django.contrib.auth.models import User
from django.contrib.auth import authenticate
from django.contrib.auth.decorators import login_required
from django.contrib import auth
from skill.models import Skill
from carac.models import Carac
from item.models import Item
from carac.forms import CaracFormChoice
from skill.forms import SkillForm
from item.forms import ItemForm
from jack.forms import JackRegisterForm
from django.forms.formsets import formset_factory
from django.forms.formsets import BaseFormSet
from hand.forms import AskForm
from hand.models import Question,Answer
from jack.models import CaracUser,SkillUser,ItemUser
from tag.models import Tag
from engine.models import ThreadEngine
from engine.script import sendnotification
#TODO
# to refactor: registration of skills, caracs and items
def enregistrementAnswer(request):
user = User.objects.get(id=request.user.id)
reponse = request.POST['Reponse']
tags = request.POST['Tags']
threadengineid = int(request.POST['ThreadEngineId'])
threadengine = ThreadEngine.objects.get(id=threadengineid)
questionid = int(request.POST['QuestionId'])
tags = tags.split("#")
question = Question.objects.get(id=questionid)
answer = Answer.objects.create(user=user,Text=reponse)
answer.Question.add(question)
#TODO
# to refactor
for tag in tags :
tag = tag.strip()
try :
result = Tag.objects.get(Name=tag)
except :
result = Tag.objects.create(Name=tag)
result.save()
answer.Tags.add(result)
answer.save()
threadengine.Answer.add(answer)
threadengine.save()
def enregistrementAsk(request,caracs,skills,items,intitule,description,tags) :
question = Question.objects.create()
question.save()
question.user = User.objects.get(id=request.user.id)
question.Text = description
question.Intitule = intitule
question.save()
#TODO
# refactor and document the tag handling
tags = tags.split('#')
# TODO
# to refactor
for tag in tags :
tag = tag.strip()
try :
result = Tag.objects.get(Name=tag)
except :
result = Tag.objects.create(Name=tag)
result.save()
question.Tags.add(result)
question.save()
for carac in caracs.keys():
caracdb = Carac.objects.get(Nom=carac)
try :
result = CaracUser.objects.get(carac=caracdb,Level=int(caracs[carac][0]))
except :
result = CaracUser.objects.create(Level=0)
result.Carac.add(caracdb)
result.Level = int(caracs[carac][0])
result.save()
question.Caracs.add(result)
for skill in skills.keys():
skilldb = Skill.objects.get(Nom=skill)
print "nomSki"
print skilldb.Nom
private = False
try :
result = SkillUser.objects.get(Skills=skilldb,Level=int(skills[skill][0]))
except :
result = SkillUser.objects.create(Level=0)
result.Skill.add(skilldb)
result.Private = private
result.Level = int(skills[skill][0])
result.save()
question.Skills.add(result)
for item in items.keys():
itemdb = Item.objects.get(Nom=item)
try :
result = ItemUser.objects.get(Item=itemdb)
except :
result = ItemUser.objects.create()
result.Item.add(itemdb)
result.Private = private
result.save()
question.Items.add(result)
question.save()
threadengine = ThreadEngine.objects.create()
threadengine.Question.add(question)
threadengine.save()
sendnotification(question,threadengine)
| [
"[email protected]"
] | |
6a069aad4eae164b7a135b62b70c4a3ad36591d9 | 75d8667735782cd1d0eb4877e52c89da5cd92dde | /nova/tests/unit/virt/vmwareapi/test_configdrive.py | 9462d39ae2ad4d8c8a8ebadc31116bba226df242 | [
"Apache-2.0"
] | permissive | bopopescu/nova-token | ffecfd3ec561936b7d9d7e691bc57383cde05436 | ec98f69dea7b3e2b9013b27fd55a2c1a1ac6bfb2 | refs/heads/master | 2022-11-22T09:53:31.073483 | 2016-05-14T02:47:01 | 2016-05-15T22:02:55 | 282,105,621 | 0 | 0 | Apache-2.0 | 2020-07-24T02:42:19 | 2020-07-24T02:42:18 | null | UTF-8 | Python | false | false | 14,534 | py | # Copyright 2013 IBM Corp.
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures
import mock
from mox3 import mox
from nova import context
from nova.image import glance
from nova import objects
from nova import test
from nova.tests.unit import fake_instance
import nova.tests.unit.image.fake
from nova.tests.unit import utils
from nova.tests.unit.virt.vmwareapi import fake as vmwareapi_fake
from nova.tests.unit.virt.vmwareapi import stubs
from nova.tests import uuidsentinel
from nova.virt import fake
from nova.virt.vmwareapi import driver
from nova.virt.vmwareapi import vm_util
from nova.virt.vmwareapi import vmops
class ConfigDriveTestCase(test.NoDBTestCase):
    REQUIRES_LOCKING = True
    @mock.patch.object(driver.VMwareVCDriver, '_register_openstack_extension')
    def setUp(self, mock_register):
        super(ConfigDriveTestCase, self).setUp()
        vm_util.vm_refs_cache_reset()
        self.context = context.RequestContext('fake', 'fake', is_admin=False)
        self.flags(cluster_name='test_cluster',
                   host_ip='test_url',
                   host_username='test_username',
                   host_password='test_pass',
                   use_linked_clone=False, group='vmware')
        self.flags(enabled=False, group='vnc')
        vmwareapi_fake.reset()
        stubs.set_stubs(self)
        nova.tests.unit.image.fake.stub_out_image_service(self)
        self.conn = driver.VMwareVCDriver(fake.FakeVirtAPI)
        self.network_info = utils.get_test_network_info()
        self.node_name = self.conn._nodename
        image_ref = nova.tests.unit.image.fake.get_valid_image_id()
        instance_values = {
            'vm_state': 'building',
            'project_id': 'fake',
            'user_id': 'fake',
            'name': '1',
            'kernel_id': '1',
            'ramdisk_id': '1',
            'mac_addresses': [{'address': 'de:ad:be:ef:be:ef'}],
            'memory_mb': 8192,
            'flavor': objects.Flavor(vcpus=4, extra_specs={}),
            'instance_type_id': 0,
            'vcpus': 4,
            'root_gb': 80,
            'image_ref': image_ref,
            'host': 'fake_host',
            'task_state': 'scheduling',
            'reservation_id': 'r-3t8muvr0',
            'id': 1,
            'uuid': uuidsentinel.foo,
            'node': self.node_name,
            'metadata': [],
            'expected_attrs': ['system_metadata'],
        }
        self.test_instance = fake_instance.fake_instance_obj(self.context,
                                                             **instance_values)
        self.test_instance.flavor = objects.Flavor(vcpus=4, memory_mb=8192,
                                                   ephemeral_gb=0, swap=0,
                                                   extra_specs={})
        (image_service, image_id) = glance.get_remote_image_service(context,
                                                                    image_ref)
        metadata = image_service.show(context, image_id)
        self.image = objects.ImageMeta.from_dict({
            'id': image_ref,
            'disk_format': 'vmdk',
            'size': int(metadata['size']),
        })
        class FakeInstanceMetadata(object):
            def __init__(self, instance, content=None, extra_md=None,
                         network_info=None):
                pass
            def metadata_for_config_drive(self):
                return []
        self.useFixture(fixtures.MonkeyPatch(
            'nova.api.metadata.base.InstanceMetadata',
            FakeInstanceMetadata))
        def fake_make_drive(_self, _path):
            pass
        # We can't actually make a config drive v2 because ensure_tree has
        # been faked out
        self.stub_out('nova.virt.configdrive.ConfigDriveBuilder.make_drive',
                      fake_make_drive)
        def fake_upload_iso_to_datastore(iso_path, instance, **kwargs):
            pass
        self.stub_out('nova.virt.vmwareapi.images.upload_iso_to_datastore',
                      fake_upload_iso_to_datastore)
    def tearDown(self):
        super(ConfigDriveTestCase, self).tearDown()
        vmwareapi_fake.cleanup()
        nova.tests.unit.image.fake.FakeImageService_reset()
    @mock.patch.object(vmops.VMwareVMOps, '_get_instance_metadata',
                       return_value='fake_metadata')
    def _spawn_vm(self, fake_get_instance_meta,
                  injected_files=None, admin_password=None,
                  block_device_info=None):
        injected_files = injected_files or []
        self.conn.spawn(self.context, self.test_instance, self.image,
                        injected_files=injected_files,
                        admin_password=admin_password,
                        network_info=self.network_info,
                        block_device_info=block_device_info)
    def test_create_vm_with_config_drive_verify_method_invocation(self):
        self.test_instance.config_drive = 'True'
        self.mox.StubOutWithMock(vmops.VMwareVMOps, '_create_config_drive')
        self.mox.StubOutWithMock(vmops.VMwareVMOps, '_attach_cdrom_to_vm')
        self.conn._vmops._create_config_drive(self.test_instance,
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg(),
                                              mox.IgnoreArg()
                                              ).AndReturn('[ds1] fake.iso')
        self.conn._vmops._attach_cdrom_to_vm(mox.IgnoreArg(),
                                             mox.IgnoreArg(),
                                             mox.IgnoreArg
op|'('
op|')'
op|','
nl|'\n'
name|'mox'
op|'.'
name|'IgnoreArg'
op|'('
op|')'
op|')'
newline|'\n'
name|'self'
op|'.'
name|'mox'
op|'.'
name|'ReplayAll'
op|'('
op|')'
newline|'\n'
comment|'# if spawn does not call the _create_config_drive or'
nl|'\n'
comment|'# _attach_cdrom_to_vm call with the correct set of parameters'
nl|'\n'
comment|"# then mox's VerifyAll will throw a Expected methods never called"
nl|'\n'
comment|'# Exception'
nl|'\n'
name|'self'
op|'.'
name|'_spawn_vm'
op|'('
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_create_vm_without_config_drive
dedent|''
name|'def'
name|'test_create_vm_without_config_drive'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'test_instance'
op|'.'
name|'config_drive'
op|'='
name|'None'
newline|'\n'
name|'self'
op|'.'
name|'mox'
op|'.'
name|'StubOutWithMock'
op|'('
name|'vmops'
op|'.'
name|'VMwareVMOps'
op|','
string|"'_create_config_drive'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'mox'
op|'.'
name|'StubOutWithMock'
op|'('
name|'vmops'
op|'.'
name|'VMwareVMOps'
op|','
string|"'_attach_cdrom_to_vm'"
op|')'
newline|'\n'
name|'self'
op|'.'
name|'mox'
op|'.'
name|'ReplayAll'
op|'('
op|')'
newline|'\n'
comment|'# if spawn ends up calling _create_config_drive or'
nl|'\n'
comment|'# _attach_cdrom_to_vm then mox will log a Unexpected method call'
nl|'\n'
comment|'# exception'
nl|'\n'
name|'self'
op|'.'
name|'_spawn_vm'
op|'('
op|')'
newline|'\n'
nl|'\n'
DECL|member|test_create_vm_with_config_drive
dedent|''
name|'def'
name|'test_create_vm_with_config_drive'
op|'('
name|'self'
op|')'
op|':'
newline|'\n'
indent|' '
name|'self'
op|'.'
name|'test_instance'
op|'.'
name|'config_drive'
op|'='
string|"'True'"
newline|'\n'
name|'self'
op|'.'
name|'_spawn_vm'
op|'('
op|')'
newline|'\n'
dedent|''
dedent|''
endmarker|''
end_unit
| [
"[email protected]"
] | |
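The tests above use mox's record-replay-verify cycle, which is long retired; a minimal sketch of the same expectation check with the standard library's unittest.mock, using a hypothetical Greeter stand-in rather than the real vmops classes:

from unittest import mock

class Greeter:
    def greet(self, name):
        return 'hello ' + name

with mock.patch.object(Greeter, 'greet', return_value='hi') as fake_greet:
    Greeter().greet('world')
    # Fails, like mox's VerifyAll, if greet was not called exactly this way.
    fake_greet.assert_called_once_with('world')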
68a6ad72b6ba110b69031ee89e2ee46750bbdae1 | 74be41563cba82ec784aed3c893a53261a428ab1 | /myapp/ocr_api/views.py | db6910d7ae1c32dea6bf6323e19f1305a4a8f71f | [] | no_license | Bakushin10/Django | e06ad485084d917886a50e5f3c38f8b049c85fb1 | 184db43f58e4679c2a556f9603f5e3bec61da1eb | refs/heads/master | 2022-12-12T13:34:24.391273 | 2019-11-05T14:33:29 | 2019-11-05T14:33:29 | 202,837,195 | 0 | 0 | null | 2022-12-08T06:53:31 | 2019-08-17T04:55:27 | Python | UTF-8 | Python | false | false | 5,626 | py | import os, sys, json
from PIL import Image, ImageDraw2
from django.shortcuts import render, HttpResponseRedirect
from django.http import HttpResponse, JsonResponse
from ocr_api.models import OCRInputModel, JsonOCRInputModel
from ocr_api.serializers import OCRInputModelSerializer, JsonOCRInputModelSerializer
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
def post_dummy_data(request):
"""
basic API call for a POST method
it will post dummy data
    based on the OCRInputModel defined in ocr_api.models
"""
    if request.method == "POST":
        data = {"ocrJson": "Two"}
        serializer = OCRInputModelSerializer(data=data)
        if serializer.is_valid(raise_exception=True):
            serializer.save()
        return HttpResponse("success: {} was created".format(data))
    return HttpResponse("POST request only")
@csrf_exempt
def post_ocr_results(request):
"""
    **** Please don't call this API if the data is already stored at the endpoint ****
    Demo API POST call for OCR results.
    Reads JSON from a local file and posts it to the endpoint.
"""
if request.method == "POST":
response = readJson()
print("{} {} {}".format("*"*10, "tatal amount of data : ", len(response["response"])))
print("{} {}".format("*"*10, request))
for json_data in response["response"]:
# print(json_data)
x , y = [], []
for coordinate in json_data["coordinates"]:
x.append(coordinate["y"])
y.append(coordinate["x"])
data = {
"field" : str(json_data["field"]),
"hasField" : json_data["hasField"],
"coordinates" : str(json_data["coordinates"]),
"x_coordinates" : str(x),
"y_coordinates" : str(y),
"text" : json_data["text"]
}
serializer = JsonOCRInputModelSerializer(data = data)
if serializer.is_valid(raise_exception = True):
data_saved = serializer.save()
return HttpResponse("{} {} {}".format("All ", len(response["response"]), " data posted!"))
@csrf_exempt
def get_ocr_results(request):
"""
retrieve fake OCR data from an endpoint
"""
data = JsonOCRInputModel.objects.all()
    if request.method == "GET":
        serializer = JsonOCRInputModelSerializer(data, many=True)
        dataToDisplay = getDataToDisplay(serializer.data)
        return JsonResponse(dataToDisplay, safe=False)
    return HttpResponse("GET request only")
@csrf_exempt
def get_ocr_results_by_id(request, username):
"""
    retrieve fake OCR data rows whose field matches the given value
"""
if request.method != "GET":
return HttpResponse("GET request only")
data = JsonOCRInputModel.objects.filter(field=username)
if len(data) == 0:
return HttpResponse("no data found")
return HttpResponse(data.values())
@csrf_exempt
def get_ocred_image(request):
"""
    draw the stored OCR bounding boxes onto the sample image
"""
data = JsonOCRInputModel.objects.all()
SUCCESS_MESSAGE = {"image successfully ocred": "OK"}
ERROR_MESSAGE = {"image could not be ocred": "ERROR"}
if request.method == "GET":
serializer = JsonOCRInputModelSerializer(data, many = True)
imagePath = "ocr_api/img/"
imageName = "sample.jpg"
try:
drawLinesOnImages(imagePath, imageName, serializer.data)
        except Exception:
return JsonResponse(ERROR_MESSAGE, safe = False)
        return JsonResponse(SUCCESS_MESSAGE, safe=False)
    return HttpResponse("GET request only")
@csrf_exempt
def get_dummy_data(request):
"""
basic API call for a GET method
"""
data = OCRInputModel.objects.all()
if request.method == "GET":
serializer = OCRInputModelSerializer(data, many=True)
dataToDisplay = getDataToDisplay(serializer.data)
return JsonResponse(dataToDisplay, safe=False)
def readJson():
"""
read JSON data from local file
"""
    # path = "ocr_api/json/test.json"  # smaller sample file
    path = "ocr_api/json/ocrReturnValues.json"
    with open(os.path.join(sys.path[0], path)) as f:
        data = json.load(f)
    print("{} {}".format("*" * 10, data["response"]))
    return data
def getDataToDisplay(data):
"""
add "total amount amount of data" for readability purposes
"""
return ["total amount data : " + str(len(data))] + data
def drawLinesOnImages(imagePath, imageName, data):
detectTextOnImage(imagePath, imageName, data)
# detectTextBoxOnImage(imagePath)
def detectTextOnImage(imagePath,imageName, data):
"""
    draw bounding-box lines on the image based on the x and y coordinates from the JSON
"""
im = Image.open(imagePath + imageName)
d = ImageDraw2.Draw(im)
pen = ImageDraw2.Pen(color="red")
for j in data:
x = j["x_coordinates"].replace("[","").replace("]","").split(",")
y = j["y_coordinates"].replace("[","").replace("]","").split(",")
#LB, LT, RT, RB = (c[0]["x"], c[0]["y"]), (c[1]["x"], c[1]["y"]), (c[2]["x"], c[2]["y"]), (c[3]["x"], c[3]["y"])
LB, LT, RT, RB = (int(y[0]), int(x[0])), (int(y[1]), int(x[1])), (int(y[2]), int(x[2])), (int(y[3]), int(x[3]))
d.line([LB, LT, RT, RB, LB], pen) #red line
im.save(imagePath + "ocred_" + imageName)
print("image saved")
def detectTextBoxOnImage():
"""
detect the textbox on a policy
"""
pass | [
"[email protected]"
] | |
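For context, a URLconf sketch that would expose the views above; the route paths and module layout are assumptions, since the project's urls.py is not part of this file:

# hypothetical myapp/ocr_api/urls.py
from django.urls import path
from ocr_api import views

urlpatterns = [
    path('ocr/dummy/', views.post_dummy_data),
    path('ocr/post/', views.post_ocr_results),
    path('ocr/results/', views.get_ocr_results),
    path('ocr/results/<str:username>/', views.get_ocr_results_by_id),
    path('ocr/image/', views.get_ocred_image),
]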
b1f831dc5b99ada2504bfeae16902b81d431db0e | ca7aa979e7059467e158830b76673f5b77a0f5a3 | /Python_codes/p03470/s172972650.py | 57f94dcf85afe314754def063304ba328d8d9803 | [] | no_license | Aasthaengg/IBMdataset | 7abb6cbcc4fb03ef5ca68ac64ba460c4a64f8901 | f33f1c5c3b16d0ea8d1f5a7d479ad288bb3f48d8 | refs/heads/main | 2023-04-22T10:22:44.763102 | 2021-05-13T17:27:22 | 2021-05-13T17:27:22 | 367,112,348 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 140 | py | N = int(input())
d = sorted([int(input()) for i in range(N)])
ans = 1
for i in range(N-1):
if d[i] < d[i+1]:
ans += 1
print(ans) | [
"[email protected]"
] | |
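The loop above counts strict increases in the sorted list, i.e. the number of distinct stick lengths; an equivalent set-based sketch (an alternative, not the submitted solution):

n = int(input())
print(len({int(input()) for _ in range(n)}))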
be170dcc3bdd5793fcc52084d7dd6184d1ea3928 | 51aa2894c317f60726fe9a778999eb7851b6be3e | /120_design_patterns/014_command/_exercises/templates/4-Command Pattern/Assignment/Solution/security_commands.py | dccb3920efff4cf8a244e3bb8df5cf9715871ec1 | [] | no_license | pranaymate/Python_Topics | dd7b288ab0f5bbee71d57080179d6481aae17304 | 33d29e0a5bf4cde104f9c7f0693cf9897f3f2101 | refs/heads/master | 2022-04-25T19:04:31.337737 | 2020-04-26T00:36:03 | 2020-04-26T00:36:03 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 455 | py | # ____ a__.s... ______ S..
# Decoded from the exercise's obfuscated fill-in template. The import paths
# below are assumptions; the template only hints at them ("a__.s...", "c_a..").
from app.security import Security
from abs_command import AbsCommand


class SecurityArmCommand(AbsCommand):
    def __init__(self, security):
        if not isinstance(security, Security):
            raise TypeError
        self.security = security

    def execute(self):
        self.security.arm()

    def undo(self):
        self.security.disarm()


class SecurityDisarmCommand(AbsCommand):
    def __init__(self, security):
        if not isinstance(security, Security):
            raise TypeError
        self.security = security

    def execute(self):
        self.security.disarm()

    def undo(self):
        self.security.arm() | [
"[email protected]"
] | |
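A quick usage sketch of the command pattern above; Security and AbsCommand here are minimal stand-ins, since the exercise's own modules are not included in this file:

class AbsCommand:  # stand-in abstract command
    def execute(self): raise NotImplementedError
    def undo(self): raise NotImplementedError

class Security:  # stand-in receiver
    def arm(self): print('armed')
    def disarm(self): print('disarmed')

class SecurityArmCommand(AbsCommand):  # same shape as the decoded class above
    def __init__(self, security):
        self.security = security
    def execute(self): self.security.arm()
    def undo(self): self.security.disarm()

cmd = SecurityArmCommand(Security())
cmd.execute()  # -> armed
cmd.undo()     # -> disarmed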
76b7b7fd8b8e98790d7b81290bc4cc77bea998c9 | a74a592d3e34c0cb2e19363a92410c520dc0ecda | /backend/course/models.py | 1ed6ffe755403a8dcb6cbf4e3dc54d87f002688f | [] | no_license | crowdbotics-apps/youthbuild-course-a-18675 | 5e4f0231b6127b215576c87593b8a073518200de | 4f17bfa2f588be23f24d862ca86fba569908e90e | refs/heads/master | 2022-11-08T20:55:36.094513 | 2020-07-07T19:33:25 | 2020-07-07T19:33:25 | 277,903,947 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,850 | py | from django.conf import settings
from django.db import models
class Course(models.Model):
"Generated Model"
author = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="course_author",
)
title = models.CharField(null=True, blank=True, max_length=256,)
description = models.TextField(null=True, blank=True,)
categories = models.ManyToManyField(
"course.Category", blank=True, related_name="course_categories",
)
class Lesson(models.Model):
"Generated Model"
module = models.ForeignKey(
"course.Module", on_delete=models.CASCADE, related_name="lesson_module",
)
title = models.CharField(max_length=256,)
description = models.TextField()
media = models.URLField()
class Category(models.Model):
"Generated Model"
name = models.CharField(max_length=256,)
class Enrollment(models.Model):
"Generated Model"
user = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="enrollment_user",
)
course = models.ForeignKey(
"course.Course", on_delete=models.CASCADE, related_name="enrollment_course",
)
class Event(models.Model):
"Generated Model"
name = models.CharField(max_length=256,)
user = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="event_user",
)
date = models.DateTimeField()
class Module(models.Model):
"Generated Model"
course = models.ForeignKey(
"course.Course", on_delete=models.CASCADE, related_name="module_course",
)
title = models.CharField(max_length=256,)
description = models.TextField()
class SubscriptionType(models.Model):
"Generated Model"
name = models.CharField(max_length=256,)
class Recording(models.Model):
"Generated Model"
event = models.ForeignKey(
"course.Event", on_delete=models.CASCADE, related_name="recording_event",
)
media = models.URLField()
user = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="recording_user",
)
published = models.DateTimeField()
class Group(models.Model):
"Generated Model"
name = models.CharField(max_length=256,)
class Subscription(models.Model):
"Generated Model"
subscription_type = models.ForeignKey(
"course.SubscriptionType",
on_delete=models.CASCADE,
related_name="subscription_subscription_type",
)
user = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="subscription_user",
)
class PaymentMethod(models.Model):
"Generated Model"
user = models.ForeignKey(
"users.User", on_delete=models.CASCADE, related_name="paymentmethod_user",
)
primary = models.BooleanField()
token = models.CharField(max_length=256,)
# Create your models here.
| [
"[email protected]"
] | |
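A short ORM sketch exercising the models above; it assumes migrations have been applied and that at least one users.User row exists (intended for manage.py shell):

from course.models import Course, Enrollment
from users.models import User

author = User.objects.first()  # assumption: one user already exists
course = Course.objects.create(author=author, title='Intro', description='...')
Enrollment.objects.create(user=author, course=course)
# Reverse accessor comes from related_name="enrollment_course" above.
print(course.enrollment_course.count())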
a3b0628f021f81f748f640274f9ddface28f23ea | 500b03fa6cb776c1d51db4a3a3aa252ddf5a50e6 | /book_exercise/py_intro/basics/Chapter 4: If statement/num_close.py | d5a70064327dbf0d92c9c632d600f35af5edadad | [] | no_license | carloslvm/learning-python | b3796a0a5b751baae8c551a9f6fe262f98980691 | 07f885454cf21b7d215a58da7fcb907715e546bd | refs/heads/master | 2022-07-27T21:39:11.937801 | 2022-07-09T17:47:56 | 2022-07-09T17:47:56 | 163,447,616 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 364 | py | #!/usr/bin/python3
# Guessing a float number
num = 10.000
user_num = float(input('Try to guess the float number: '))
if user_num == num:
    print('That\'s correct.')
elif user_num == 10.001 or user_num == 9.999:
    print('You were close.')
else:
    print('You were not close.')
| [
"[email protected]"
] | |
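Exact == on floats is fragile once arithmetic is involved; a tolerance-based variant using math.isclose, with abs_tol chosen to match the 0.001 window above (a sketch, not the book's solution):

import math

num = 10.000
user_num = float(input('Try to guess the float number: '))
if math.isclose(user_num, num):
    print("That's correct.")
elif math.isclose(user_num, num, abs_tol=0.001):
    print('You were close.')
else:
    print('You were not close.')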
677767ca7ceb624e8d26a88b9ec1aea211c9eb4c | 29f5b2d3a3582afad36ce03d23ac8e25743c7a1d | /quickstart.py | 4e6bea7265daf75d13f0f31247acd9327aebbe9f | [] | no_license | kylinRao/djangowebbuild | 9b1e1f32ae8b8872e950ff91658296d92113597e | 75a06b8e35d50176d824e3a4e790a79796c28f70 | refs/heads/master | 2021-01-19T04:32:49.411920 | 2016-06-08T01:33:38 | 2016-06-08T01:33:38 | 60,658,778 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,301 | py | from __future__ import print_function
import httplib2
import os
from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
try:
import argparse
flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
except ImportError:
flags = None
# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/gmail-python-quickstart.json
SCOPES = 'https://www.googleapis.com/auth/gmail.readonly'
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Gmail API Python Quickstart'
def get_credentials():
"""Gets valid user credentials from storage.
If nothing has been stored, or if the stored credentials are invalid,
the OAuth2 flow is completed to obtain the new credentials.
Returns:
Credentials, the obtained credential.
"""
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
'gmail-python-quickstart.json')
    store = Storage(credential_path)
credentials = store.get()
if not credentials or credentials.invalid:
flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
flow.user_agent = APPLICATION_NAME
if flags:
credentials = tools.run_flow(flow, store, flags)
else: # Needed only for compatibility with Python 2.6
credentials = tools.run(flow, store)
print('Storing credentials to ' + credential_path)
return credentials
def main():
"""Shows basic usage of the Gmail API.
Creates a Gmail API service object and outputs a list of label names
of the user's Gmail account.
"""
credentials = get_credentials()
http = credentials.authorize(httplib2.Http())
service = discovery.build('gmail', 'v1', http=http)
results = service.users().labels().list(userId='me').execute()
labels = results.get('labels', [])
if not labels:
print('No labels found.')
else:
print('Labels:')
for label in labels:
print(label['name'])
if __name__ == '__main__':
main()
| [
"[email protected]"
] | |
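The same authorized service object supports more than labels().list; a hedged extension using the documented users().messages().list endpoint (works under the same gmail.readonly scope):

def list_message_ids(service, limit=10):
    """Print the ids of the most recent messages."""
    results = service.users().messages().list(
        userId='me', maxResults=limit).execute()
    for msg in results.get('messages', []):
        print(msg['id'])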
0006232e66e9b267c54344acf505f520ca34f480 | 0adf94fc39a02018165b62e93dd83edddd041230 | /.history/configurations/settings_20190226153809.py | 75cf74155e05e00a2fbe0afc31d36aded4447481 | [] | no_license | SabitDeepto/BrJobs | 1e3baa143331cf46b9c70911c6644d1efd4fffd6 | 1a458c8c667f8093a2325d963e5542655467c7aa | refs/heads/master | 2020-04-24T08:02:26.350007 | 2019-03-17T05:53:30 | 2019-03-17T05:53:30 | 171,818,024 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 4,011 | py | """
Django settings for configurations project.
Generated by 'django-admin startproject' using Django 2.1.3.
For more information on this file, see
https://docs.djangoproject.com/en/2.1/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.1/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '+nu9r@lhaog&+yl!%vwmk1a-xed5!2ml&pm=n(t)(!8bed$^ny'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'Test',
'Jobs',
'ckeditor',
'ckeditor_uploader',
'register',
'debug_toolbar',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
# ...
'debug_toolbar.middleware.DebugToolbarMiddleware',
# ...
]
ROOT_URLCONF = 'configurations.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'configurations.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
# AUTH_PASSWORD_VALIDATORS = [
# {
# 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
# },
# {
# 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
# },
# {
# 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
# },
# {
# 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
# },
# ]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Dhaka'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_URL = '/static/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
MEDIA_URL = '/media/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
]
#...
SITE_ID = 1
####################################
## CKEDITOR CONFIGURATION ##
####################################
CKEDITOR_JQUERY_URL = 'https://ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js'
CKEDITOR_UPLOAD_PATH = 'uploads/'
CKEDITOR_IMAGE_BACKEND = "pillow"
CKEDITOR_CONFIGS = {
'default': {
'toolbar': None,
},
}
###################################
# AUTH_USER_MODEL = 'Test.User'
# AUTH_USER_MODEL = 'TestApp.User'
LOGIN_REDIRECT_URL = '/'
LOGOUT_REDIRECT_URL = '/'
# AUTH_USER_MODEL = 'Jobs.User' | [
"[email protected]"
] | |
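A likely gap in the settings above: django-debug-toolbar is installed and its middleware enabled, but the toolbar only renders for addresses listed in INTERNAL_IPS, which this file never defines. A hedged addition for local development:

# Assumed local-development value; required by django-debug-toolbar.
INTERNAL_IPS = [
    '127.0.0.1',
]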
8bbe10ec9f9d538ff2af58beaee2fe77b23096dc | 5785d7ed431b024dd910b642f10a6781df50e4aa | /revise-daily/educative.io/medium-dp/longest-common-subsequence/11_edit_distance.py | 0afe28a66ab6ed5f3dc9fc015d21e758fbb667d4 | [] | no_license | kashyapa/interview-prep | 45d77324446da34d99bf8efedb3544b367b5523e | 7060c090c40602fb9c4778eace2078e1b51e235b | refs/heads/master | 2023-07-28T13:12:49.515299 | 2021-09-06T14:33:25 | 2021-09-06T14:33:25 | 403,706,510 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 2,004 | py | def find_min_operations(s1, s2):
return find_min_operations_recursive(s1, s2, 0, 0)
def find_min_operations_dp(s1, s2):
n1, n2 = len(s1), len(s2)
dp = [[-1 for _ in range(n2 + 1)] for _ in range(n1 + 1)]
# if s2 is empty, we can remove all the characters of s1 to make it empty too
for i1 in range(n1 + 1):
dp[i1][0] = i1
# if s1 is empty, we have to insert all the characters of s2
for i2 in range(n2 + 1):
dp[0][i2] = i2
for i1 in range(1, n1 + 1):
for i2 in range(1, n2 + 1):
# If the strings have a matching character, we can recursively match for the remaining lengths
if s1[i1 - 1] == s2[i2 - 1]:
dp[i1][i2] = dp[i1 - 1][i2 - 1]
else:
dp[i1][i2] = 1 + min(dp[i1 - 1][i2], # delete
min(dp[i1][i2 - 1], # insert
dp[i1 - 1][i2 - 1])) # replace
return dp[n1][n2]
def find_min_operations_recursive(s1, s2, i1, i2):
n1, n2 = len(s1), len(s2)
# if we have reached the end of s1, then we have to insert all the remaining characters of s2
if i1 == n1:
return n2 - i2
# if we have reached the end of s2, then we have to delete all the remaining characters of s1
if i2 == n2:
return n1 - i1
# If the strings have a matching character, we can recursively match for the remaining lengths
if s1[i1] == s2[i2]:
return find_min_operations_recursive(s1, s2, i1 + 1, i2 + 1)
# perform deletion
c1 = 1 + find_min_operations_recursive(s1, s2, i1 + 1, i2)
# perform insertion
c2 = 1 + find_min_operations_recursive(s1, s2, i1, i2 + 1)
# perform replacement
c3 = 1 + find_min_operations_recursive(s1, s2, i1 + 1, i2 + 1)
return min(c1, min(c2, c3))
def main():
print(find_min_operations("bat", "but"))
print(find_min_operations("abdca", "cbda"))
print(find_min_operations("passpot", "ppsspqrt"))
main()
| [
"[email protected]"
] | |
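find_min_operations_recursive above recomputes overlapping subproblems exponentially; a memoized sketch that keeps the same logic at O(len(s1) * len(s2)) using functools.lru_cache:

from functools import lru_cache

def find_min_operations_memo(s1, s2):
    @lru_cache(maxsize=None)
    def solve(i1, i2):
        if i1 == len(s1):
            return len(s2) - i2
        if i2 == len(s2):
            return len(s1) - i1
        if s1[i1] == s2[i2]:
            return solve(i1 + 1, i2 + 1)
        return 1 + min(solve(i1 + 1, i2),      # delete
                       solve(i1, i2 + 1),      # insert
                       solve(i1 + 1, i2 + 1))  # replace
    return solve(0, 0)

print(find_min_operations_memo("passpot", "ppsspqrt"))  # 3, matching main() above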
b532081aaa769fc56e4142beaeafe9cc3c22fc13 | ff4ab3a8aac4d26534d1d980f85f7b4ca3e0905b | /config.py | b909dff713e1be51064f9d8b0ab0d5244ecb1a54 | [] | no_license | pandeynandancse/Named_Entity_Reognition_BERT | be19db084079d035a59356cdf6ede8153f856055 | a6205240d312f6ca02b3ef5c8cc34512126815f0 | refs/heads/master | 2022-11-29T09:33:32.747433 | 2020-08-11T15:54:51 | 2020-08-11T15:54:51 | 286,786,451 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 444 | py | import transformers
MAX_LEN = 128
TRAIN_BATCH_SIZE = 32
VALID_BATCH_SIZE = 8
EPOCHS = 10
BASE_MODEL_PATH = "../input/bert_base_uncased"
MODEL_PATH = "model.bin"
TRAINING_FILE = "../input/ner_dataset.csv"
#also can grab tokenizer from tokenizers library that is also from hugging face
TOKENIZER = transformers.BertTokenizer.from_pretrained(
BASE_MODEL_PATH,
do_lower_case=True # set to true becoz we are using bert uncased
)
| [
"[email protected]"
] | |
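A small downstream usage sketch for the tokenizer configured above (assumes the local BERT files exist at BASE_MODEL_PATH; encode_plus arguments follow the transformers v3+ API):

encoded = TOKENIZER.encode_plus(
    "john lives in london",
    max_length=MAX_LEN,
    padding="max_length",
    truncation=True,
)
print(encoded["input_ids"][:10])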
ffa2ae378f1565ed2e05a6f4d0a4a43f0bf68c78 | c9ddbdb5678ba6e1c5c7e64adf2802ca16df778c | /cases/synthetic/sieve-big-1095.py | 05e4945e404661ab2c9357d6ed718b6d3fb30b48 | [] | no_license | Virtlink/ccbench-chocopy | c3f7f6af6349aff6503196f727ef89f210a1eac8 | c7efae43bf32696ee2b2ee781bdfe4f7730dec3f | refs/heads/main | 2023-04-07T15:07:12.464038 | 2022-02-03T15:42:39 | 2022-02-03T15:42:39 | 451,969,776 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 31,742 | py | # A resizable list of integers
class Vector(object):
items: [int] = None
size: int = 0
def __init__(self:"Vector"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector", idx: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector") -> int:
return self.size
# A resizable list of integers
class Vector2(object):
items: [int] = None
items2: [int] = None
size: int = 0
size2: int = 0
def __init__(self:"Vector2"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector2") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector2") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector2") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector2") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector2", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector2", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector2", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector2", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector2", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector2", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
            self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector2", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector2", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector2") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector2") -> int:
return self.size
# A resizable list of integers
class Vector3(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
def __init__(self:"Vector3"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector3") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector3") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector3") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector3") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector3", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector3", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector3", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector3", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector3", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector3", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector3", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector3", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector3", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector3", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector3", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector3", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector3") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector3") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector3") -> int:
return self.size
# A resizable list of integers
class Vector4(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
items4: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
size4: int = 0
def __init__(self:"Vector4"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector4") -> int:
return len(self.items)
# Returns current capacity
def capacity4(self:"Vector4") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity4(self:"Vector4") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector4", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector4", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector4", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append4(self:"Vector4", item: int, item2: int, item3: int, item4: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector4", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector4", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector4", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all4(self:"Vector4", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector4", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector4", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector4", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at4(self:"Vector4", idx: int, idx2: int, idx3: int, idx4: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector4", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector4", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector4", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get4(self:"Vector4", idx: int, idx2: int, idx3: int, idx4: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector4") -> int:
return self.size
# Retrieves the current size of the vector
def length4(self:"Vector4") -> int:
return self.size
# A resizable list of integers
class Vector5(object):
items: [int] = None
items2: [int] = None
items3: [int] = None
items4: [int] = None
items5: [int] = None
size: int = 0
size2: int = 0
size3: int = 0
size4: int = 0
size5: int = 0
def __init__(self:"Vector5"):
self.items = [0]
# Returns current capacity
def capacity(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity2(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity3(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity4(self:"Vector5") -> int:
return len(self.items)
# Returns current capacity
def capacity5(self:"Vector5") -> int:
return len(self.items)
# Increases capacity of vector by one element
def increase_capacity(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity2(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity3(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity4(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Increases capacity of vector by one element
def increase_capacity5(self:"Vector5") -> int:
self.items = self.items + [0]
return self.capacity()
# Appends one item to end of vector
def append(self:"Vector5", item: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append2(self:"Vector5", item: int, item2: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append3(self:"Vector5", item: int, item2: int, item3: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append4(self:"Vector5", item: int, item2: int, item3: int, item4: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends one item to end of vector
def append5(self:"Vector5", item: int, item2: int, item3: int, item4: int, item5: int) -> object:
if self.size == self.capacity():
self.increase_capacity()
self.items[self.size] = item
self.size = self.size + 1
# Appends many items to end of vector
def append_all(self:"Vector5", new_items: [int]) -> object:
item:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all2(self:"Vector5", new_items: [int], new_items2: [int]) -> object:
item:int = 0
item2:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all3(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all4(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
for item in new_items:
self.append(item)
# Appends many items to end of vector
def append_all5(self:"Vector5", new_items: [int], new_items2: [int], new_items3: [int], new_items4: [int], new_items5: [int]) -> object:
item:int = 0
item2:int = 0
item3:int = 0
item4:int = 0
item5:int = 0
for item in new_items:
self.append(item)
# Removes an item from the middle of vector
def remove_at(self:"Vector5", idx: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at2(self:"Vector5", idx: int, idx2: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at3(self:"Vector5", idx: int, idx2: int, idx3: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at4(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Removes an item from the middle of vector
def remove_at5(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int, idx5: int) -> object:
if idx < 0:
return
while idx < self.size - 1:
self.items[idx] = self.items[idx + 1]
idx = idx + 1
self.size = self.size - 1
# Retrieves an item at a given index
def get(self:"Vector5", idx: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get2(self:"Vector5", idx: int, idx2: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get3(self:"Vector5", idx: int, idx2: int, idx3: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get4(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int) -> int:
return self.items[idx]
# Retrieves an item at a given index
def get5(self:"Vector5", idx: int, idx2: int, idx3: int, idx4: int, idx5: int) -> int:
return self.items[idx]
# Retrieves the current size of the vector
def length(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length2(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length3(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length4(self:"Vector5") -> int:
return self.size
# Retrieves the current size of the vector
def length5(self:"Vector5") -> int:
return self.size
# A faster (but more memory-consuming) implementation of vector
class DoublingVector(Vector):
doubling_limit:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector2(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector2") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector2") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector3(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector3") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector4(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
doubling_limit4:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity4(self:"DoublingVector4") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# A faster (but more memory-consuming) implementation of vector
class DoublingVector5(Vector):
doubling_limit:int = 1000
doubling_limit2:int = 1000
doubling_limit3:int = 1000
doubling_limit4:int = 1000
doubling_limit5:int = 1000
# Overriding to do fewer resizes
def increase_capacity(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity2(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity3(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity4(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Overriding to do fewer resizes
def increase_capacity5(self:"DoublingVector5") -> int:
if (self.capacity() <= self.doubling_limit // 2):
self.items = self.items + self.items
else:
# If doubling limit has been reached, fall back to
# standard capacity increases
self.items = self.items + [0]
return self.capacity()
# Makes a vector in the range [i, j)
def vrange(i:int, j:int) -> Vector:
v:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange2(i:int, j:int, i2:int, j2:int) -> Vector:
v:Vector = None
v2:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange3(i:int, j:int, i2:int, j2:int, i3:int, j3:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange4(i:int, j:int, i2:int, j2:int, i3:int, j3:int, i4:int, j4:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
def vrange5(i:int, j:int, i2:int, j2:int, i3:int, j3:int, i4:int, j4:int, i5:int, j5:int) -> Vector:
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v5:Vector = None
v = DoublingVector()
while i < j:
v.append(i)
i = i + 1
return v
# Sieve of Eratosthenes (not really)
def sieve(v:Vector) -> object:
i:int = 0
j:int = 0
k:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve2(v:Vector, v2:Vector) -> object:
i:int = 0
i2:int = 0
j:int = 0
j2:int = 0
k:int = 0
k2:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve3(v:Vector, v2:Vector, v3:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
j:int = 0
j2:int = 0
j3:int = 0
k:int = 0
k2:int = 0
k3:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve4(v:Vector, v2:Vector, v3:Vector, v4:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
j:int = 0
j2:int = 0
j3:int = 0
j4:int = 0
k:int = 0
k2:int = 0
k3:int = 0
k4:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
def sieve5(v:Vector, v2:Vector, v3:Vector, v4:Vector, v5:Vector) -> object:
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
i5:int = 0
j:int = 0
j2:int = 0
j3:int = 0
j4:int = 0
j5:int = 0
k:int = 0
k2:int = 0
k3:int = 0
k4:int = 0
k5:int = 0
while i < v.length():
k = v.get(i)
j = i + 1
while j < v.length():
if v.get(j) % k == 0:
v.remove_at(j)
else:
j = j + 1
i = i + 1
# Input parameter
n:int = 50
n2:int = 50
n3:int = 50
n4:int = 50
n5:int = 50
# Data
v:Vector = None
v2:Vector = None
v3:Vector = None
v4:Vector = None
v5:Vector = None
i:int = 0
i2:int = 0
i3:int = 0
i4:int = 0
i5:int = 0
# Crunch
v = vrange(2, n)
v2 = vrange(2, n)
v3 = vrange(2, n)
v4 = vrange(2, n)
v5 = vrange(2, n)
sieve(v)
# Print
while i < v.length():
print(v.get(i))
i = i + 1
| [
"[email protected]"
] | |
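The sieve above (as its own comment admits, "not really") removes multiples by repeated list scans; for contrast, a boolean-array Sieve of Eratosthenes in plain Python, shown as a sketch rather than ChocoPy benchmark code:

def primes_up_to(n):
    is_prime = [True] * n
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n, p):
                is_prime[m] = False
    return [i for i in range(n) if is_prime[i]]

print(primes_up_to(50))  # same primes the driver above prints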
60979a8ade62ba28d58cd3e3ed95c34182d3d71e | 63b79c404d83e4980891c488f4d9592558ecda35 | /assets/src/ba_data/python/bastd/ui/coop/__init__.py | 3262255308ce6f45e4b6dc0da8fd42ee5b06cf20 | [
"MIT"
] | permissive | kakekakeka/ballistica | 56e8879cd5b4b990e5e05da3dfd300d7cbb45446 | 3ffeff8ce401a00128363ff08b406471092adaa9 | refs/heads/master | 2022-11-14T08:11:57.160782 | 2020-07-01T05:43:13 | 2020-07-01T05:49:44 | 276,755,445 | 2 | 0 | MIT | 2020-07-02T22:18:37 | 2020-07-02T22:18:36 | null | UTF-8 | Python | false | false | 1,178 | py | # Copyright (c) 2011-2020 Eric Froemling
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# -----------------------------------------------------------------------------
| [
"[email protected]"
] | |
1d92cd4caf466194bc8dd0b7e0808a2f847ca1c2 | 093781f6a182c4988bb72c518e9747b723528e65 | /14_pooling.py | dc3c8a0f6d26c083f847ea87a72901437bc557b4 | [] | no_license | cjy02044027/quincy-pytorch | 889d821685865687853df8c080352e534ac71b0d | c6a226196ec3d7d23121291c3b5696ea57152f57 | refs/heads/master | 2023-01-12T10:46:39.394664 | 2020-02-14T06:23:10 | 2020-02-14T06:23:10 | null | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 1,296 | py | #!usr/bin/env python
# -*- coding: utf-8 -*-
"""
@version:
author:yanqiang
@time: 2019/04/09
@file: 14_pooling.py
@description: max pooling and average pooling
"""
import torch
from torch import nn
print(torch.__version__)
# 2-D max-pooling and average-pooling layers
def pool2d(X, pool_size, mode='max'):
"""
    Pooling layer.
    :param X: input tensor (2-D)
    :param pool_size: (height, width) of the pooling window
    :param mode: 'max' for max pooling, 'avg' for average pooling
    :return: pooled tensor
"""
X = X.float()
p_h, p_w = pool_size
Y = torch.zeros(X.shape[0] - p_h + 1, X.shape[1] - p_w + 1)
for i in range(Y.shape[0]):
for j in range(Y.shape[1]):
if mode == 'max':
Y[i, j] = X[i:i + p_h, j:j + p_w].max()
elif mode == 'avg':
Y[i, j] = X[i:i + p_h, j:j + p_w].mean()
return Y
X = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
print(pool2d(X, (2, 2)))
print(pool2d(X, (2, 2), 'avg'))
# Padding and stride
X = torch.arange(16, dtype=torch.float).view((1, 1, 4, 4))
pool2d = nn.MaxPool2d(3)  # note: rebinds the name pool2d (the function above) to an nn module
print(pool2d(X))
pool2d = nn.MaxPool2d(3, padding=1, stride=2)
print(pool2d(X))
pool2d = nn.MaxPool2d((2, 4), padding=(1, 2), stride=(2, 3))
print(pool2d(X))
# Multiple channels
X = torch.cat((X, X + 1), dim=1)
print(X)
print(X.shape)
pool2d = nn.MaxPool2d(3, padding=1, stride=2)
print(pool2d(X))
| [
"[email protected]"
] | |
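A quick consistency check between the lesson's manual pooling and nn.MaxPool2d; note the manual version slides by 1 while nn.MaxPool2d defaults stride to the kernel size, hence stride=1 below (standalone sketch; the name pool2d is rebound above, so the logic is restated):

import torch
from torch import nn

def manual_max_pool2d(X, k):
    # same logic as the lesson's pool2d in 'max' mode
    h, w = X.shape[0] - k + 1, X.shape[1] - k + 1
    Y = torch.zeros(h, w)
    for i in range(h):
        for j in range(w):
            Y[i, j] = X[i:i + k, j:j + k].max()
    return Y

X = torch.arange(9, dtype=torch.float).view(3, 3)
builtin = nn.MaxPool2d(2, stride=1)(X.view(1, 1, 3, 3)).view(2, 2)
print(torch.equal(manual_max_pool2d(X, 2), builtin))  # True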
f3a07f02268fdfe27b330e612e8b7945659aa549 | 77c7c1bb838fe3c7972e1fd54aab21ce50da0654 | /bhp045/apps/cancer_subject/models/subject_off_study_mixin.py | 5d0ff6fbd5c6fd21124adcf8aafec5f26650affd | [] | no_license | botswana-harvard/cancer | 394124fe4cb8ae5e03ca70842a13e20220201be9 | 410cdd637d1da5b9d5081da02697eb1d03ae0984 | refs/heads/master | 2021-01-21T18:24:58.316116 | 2017-05-22T13:11:05 | 2017-05-22T13:11:05 | 92,045,682 | 1 | 0 | null | null | null | null | UTF-8 | Python | false | false | 236 | py | from edc.subject.off_study.mixins.off_study_mixin import OffStudyMixin
class SubjectOffStudyMixin(OffStudyMixin):
def get_off_study_cls(self):
from .subject_off_study import SubjectOffStudy
return SubjectOffStudy
| [
"[email protected]"
] | |
6abde58c2c7813da0693cc6347cd6916351e7fd8 | 4c2def4621865535d36e6beb605691a6d53628d4 | /ask_weather/action.py | af6d79fd48a1e8ad45ce006213b73d645d5dcce1 | [] | no_license | liaozhihui/work | 4485722c73a796c25896bb083d84d0e4f79e05c5 | 61a11d591875e17818b1b303d3552818441efafc | refs/heads/master | 2020-07-16T17:04:07.788674 | 2019-09-02T10:13:28 | 2019-09-02T10:13:28 | 205,828,908 | 0 | 0 | null | null | null | null | UTF-8 | Python | false | false | 328 | py | from rasa_core_sdk import Action
from rasa_core_sdk.events import SlotSet
class ActionAskWeather(Action):
def name(self):
return 'action_ask_weather'
def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message('您问的天气地点是哪里呢')  # "Which place's weather are you asking about?"
        return [SlotSet('city', '深圳')]  # defaults the city slot to Shenzhen
| [
"[email protected]"
] |
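For reference, rasa_core_sdk serializes events like SlotSet to plain dicts; the sketch below shows the assumed wire shape of this action's return value (field names are an assumption based on the SDK's event helpers):

from rasa_core_sdk.events import SlotSet
print(SlotSet('city', '深圳'))
# assumed shape: {'event': 'slot', 'timestamp': None, 'name': 'city', 'value': '深圳'}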