package | package-description |
---|---|
aggdirect-job-report-utility | No description available on PyPI. |
aggdirect-ocr | No description available on PyPI. |
aggdirect-price-calculator | No description available on PyPI. |
aggdirect-route-estimation-calculator | No description available on PyPI. |
aggdraw | The aggdraw module implements the basic WCK 2D Drawing Interface on
top of the AGG library. This library provides high-quality drawing,
with anti-aliasing and alpha compositing, while being fully compatible
with the WCK renderer. |
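A minimal aggdraw sketch (not taken from the description above): it assumes Pillow is installed and uses aggdraw's commonly documented Draw/Pen/Brush interface, so treat the exact calls as an assumption rather than the package's official example.

```python
# Minimal aggdraw sketch: draw an anti-aliased line and ellipse onto a Pillow image.
from PIL import Image
import aggdraw

img = Image.new("RGB", (200, 200), "white")
d = aggdraw.Draw(img)                      # wrap the PIL image in an AGG drawing context
pen = aggdraw.Pen("black", 2)              # 2-pixel black outline
brush = aggdraw.Brush("lightblue")         # fill colour
d.ellipse((40, 40, 160, 160), pen, brush)  # anti-aliased, alpha-composited circle
d.line((0, 0, 200, 200), pen)              # anti-aliased diagonal line
d.flush()                                  # write the AGG buffer back into the PIL image
img.save("aggdraw_demo.png")
```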
aggexif | Requires Python 3.8 or later, pip, and exiftool. Install: # pip install aggexif. Usage (basic): $ aggexif ~/dir/*.NEF
---- CAMERA LIST ----
NIKON Z 7: 276██████████████████████████████████████████████████████████████████
NIKON Z 6: 69████████████████
---- LENS LIST ----
AF-S VR Zoom-Nikkor 70-300mm f/4.5-5.6G IF-ED: 213██████████████████████████████
NIKKOR Z 14-30mm f/4 S: 69█████████
NIKKOR Z 50mm f/1.8 S: 48██████
AF-S Zoom-Nikkor 80-200mm f/2.8D IF-ED: 13
---- FOCAL LENGTH ----
10-15: 19███████████
15-20: 7████
20-24: 9█████
28-35: 34██████████████████████
45-50: 48████████████████████████████████
60-70: 54████████████████████████████████████
70-85: 30███████████████████
85-105: 13████████
105-135: 11██████
135-200: 18███████████
200-300: 100████████████████████████████████████████████████████████████████████
---- YEAR ----
2021: 345███████████████████████████████████████████████████████████████████████████ Use stdin pipe, -a (use cache), -w (print width), -l (filter lens), --monthly (view monthly graph) and --year (filter year): find ~/picture/ -name "*.NEF" | poetry run aggexif -a -w=100 -l="14-30" --monthly --year=2021
---- CAMERA LIST ----
NIKON Z 6: 4441█████████████████████████████████████████████████████████████████████████████████████
NIKON Z 7: 1183████████████████████
---- LENS LIST ----
NIKKOR Z 14-30mm f/4 S: 5624████████████████████████████████████████████████████████████████████████████████████
---- FOCAL LENGTH ----
10-15: 1301█████████████████████████████████████████████████████
15-20: 946███████████████████████████████████████
20-24: 860███████████████████████████████████
24-28: 428████████████████
28-35: 2088█████████████████████████████████████████████████████████████████████████████████████████
40-45: 1
---- MONTH ----
2021/01: 185███████████
2021/02: 1192███████████████████████████████████████████████████████████████████████████████████████████
2021/03: 491██████████████████████████████████
2021/04: 712███████████████████████████████████████████████████
2021/05: 756██████████████████████████████████████████████████████
2021/06: 523████████████████████████████████████
2021/07: 507███████████████████████████████████
2021/08: 146████████
2021/09: 83████
2021/10: 586█████████████████████████████████████████
2021/11: 227██████████████
2021/12: 216█████████████ Help: usage: Aggregate EXIF [-h] [-w WIDTH] [-l LENS [LENS ...]] [-c CAMERA [CAMERA ...]]
[--year YEAR [YEAR ...]] [--month MONTH [MONTH ...]]
[--day DAY [DAY ...]] [--yearly] [--monthly] [--daily] [-a]
[--ignore-cache]
[paths ...]
positional arguments:
paths images paths
options:
-h, --help show this help message and exit
-w WIDTH, --width WIDTH
print width
-l LENS [LENS ...], --lens LENS [LENS ...]
select lens
-c CAMERA [CAMERA ...], --camera CAMERA [CAMERA ...]
select camera
--year YEAR [YEAR ...]
select year
--month MONTH [MONTH ...]
select month
--day DAY [DAY ...] select day of month
--yearly view yearly graph
--monthly view monthly graph
--daily view daily graph
-a, --cache save exif in cache
--ignore-cache ignore cache. Cache: Aggexif supports local caching. If you want to save the cache, add the --cache option. If you want to disable the cache temporarily, use the --ignore-cache option. The cache is stored in ~/.config/aggexif/exif.db as a SQLite database, so you can delete that file to remove all cached data. Tested cameras: Nikon Z6/Z7 (+FTZ), SONY A7C/A7III, OLYMPUS E-PL10, Panasonic GX7MK3 (GX9), Canon EOS-RP. Development: use poetry.
# run
$ poetry run aggexif -h
# test(doctest)
$ poetry run pytest --doctest-modules
# build
$ poetry build
# local install(after build)
$ pip install dist/aggexif-x.x.x.tar.gz
# publish
$ poetry publish -u ponkotuy -p `password` |
aggiestack | Purpose: This is a demo program towards satisfying the course requirements for CSE - 689-607 SPTP: Cloud Computing Individual Project. Description of the project can be found at: https://tamu.blackboard.com/bbcswebdav/pid-4279770-dt-content-rid-31058440_1/courses/CSCE.689.1811.M1/P1.pdf Usage: pip install aggiestack cli. The first time you use the command you will be asked for the mongodb host URL and database_name in order to connect. Usage: aggiestack server create [--image=<IMAGE> --flavor=<FLAVOR_NAME>] <INSTANCE_NAME>; aggiestack server delete <INSTANCE_NAME>; aggiestack server list; aggiestack admin show hardware; aggiestack admin show instances; aggiestack admin can_host <MACHINE_NAME> <FLAVOR>; aggiestack admin show imagecaches <RACK_NAME>; aggiestack admin evacuate <RACK_NAME>; aggiestack admin remove <MACHINE>; aggiestack admin clear_all; aggiestack admin add -mem MEM -disk NUM_DISKS -vcpus VCPU -ip IP -rack RACK_NAME MACHINE; aggiestack show hardware; aggiestack show images; aggiestack show flavors; aggiestack show all; aggiestack show logs; aggiestack config [--images=<IMAGES_PATH> | --flavors=<FLAVORS_PATH> | --hardware=<HARDWARE_PATH>]; aggiestack config load <CONFIG_PATH>; aggiestack -h | --help; aggiestack --version |
aggify | Aggify is a Python library to generate MongoDB aggregation pipelinesAggifyAggify is a Python library for generating MongoDB aggregation pipelines, designed to work seamlessly with Mongoengine.
This library simplifies the process of constructing complex MongoDB queries and aggregations using an intuitive and
organized interface.FeaturesProgrammatically build MongoDB aggregation pipelines.Filter, project, group, and perform various aggregation operations with ease.Supports querying nested documents and relationships defined using Mongoengine.Encapsulates aggregation stages for a more organized and maintainable codebase.Designed to simplify the process of constructing complex MongoDB queries.TODOInstallationYou can install Aggify using pip:pipinstallaggifySample UsageHere's a code snippet that demonstrates how to use Aggify to construct a MongoDB aggregation pipeline:frommongoengineimportDocument,fieldsclassAccountDocument(Document):username=fields.StringField()display_name=fields.StringField()phone=fields.StringField()is_verified=fields.BooleanField()disabled_at=fields.LongField()deleted_at=fields.LongField()banned_at=fields.LongField()classFollowAccountEdge(Document):start=fields.ReferenceField("AccountDocument")end=fields.ReferenceField("AccountDocument")accepted=fields.BooleanField()meta={"collection":"edge.follow.account",}classBlockEdge(Document):start=fields.ObjectIdField()end=fields.ObjectIdField()meta={"collection":"edge.block",}Aggify query:frommodelsimport*fromaggifyimportAggify,F,QfrombsonimportObjectIdaggify=Aggify(AccountDocument)pipelines=list((aggify.filter(phone__in=[],id__ne=ObjectId(),disabled_at=None,banned_at=None,deleted_at=None,network_id=ObjectId(),).lookup(FollowAccountEdge,let=["id"],query=[Q(start__exact=ObjectId())&Q(end__exact="id")],as_name="followed",).lookup(BlockEdge,let=["id"],as_name="blocked",query=[(Q(start__exact=ObjectId())&Q(end__exact="id"))|(Q(end__exact=ObjectId())&Q(start__exact="id"))],).filter(followed=[],blocked=[]).group("username").annotate(annotate_name="phone",accumulator="first",f=F("phone")+10).redact(value1="phone",condition="==",value2="132",then_value="keep",else_value="prune",).project(username=0)[5:10].out(coll="account")))Mongoengine equivalent query:[{"$match":{"phone":{"$in":[]},"_id":{"$ne":ObjectId("65486eae04cce43c5469e0f1")},"disabled_at":None,"banned_at":None,"deleted_at":None,"network_id":ObjectId("65486eae04cce43c5469e0f2"),}},{"$lookup":{"from":"edge.follow.account","let":{"id":"$_id"},"pipeline":[{"$match":{"$expr":{"$and":[{"$eq":["$start",ObjectId("65486eae04cce43c5469e0f3"),]},{"$eq":["$end","$$id"]},]}}}],"as":"followed",}},{"$lookup":{"from":"edge.block","let":{"id":"$_id"},"pipeline":[{"$match":{"$expr":{"$or":[{"$and":[{"$eq":["$start",ObjectId("65486eae04cce43c5469e0f4"),]},{"$eq":["$end","$$id"]},]},{"$and":[{"$eq":["$end",ObjectId("65486eae04cce43c5469e0f5"),]},{"$eq":["$start","$$id"]},]},]}}}],"as":"blocked",}},{"$match":{"followed":[],"blocked":[]}},{"$group":{"_id":"$username","phone":{"$first":{"$add":["$phone",10]}}}},{"$redact":{"$cond":{"if":{"$eq":["phone","132"]},"then":"$$KEEP","else":"$$PRUNE",}}},{"$project":{"username":0}},{"$skip":5},{"$limit":5},{"$out":"account"},]In the sample usage above, you can see how Aggify simplifies the construction of MongoDB aggregation pipelines by
allowing you to chain filters, lookups, and other operations to build complex queries.
For more details and examples, please refer to the documentation and codebase. |
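A smaller, hedged Aggify sketch using only the calls that appear in the description above (Aggify, filter, project, and wrapping the query in list()); the chosen filter fields reuse the AccountDocument model from that snippet and the exact keyword filters are illustrative assumptions.

```python
# Sketch of a two-stage pipeline with Aggify, based on the API shown above.
from aggify import Aggify
from models import AccountDocument  # the Mongoengine document defined in the snippet above

aggify = Aggify(AccountDocument)
pipeline = list(
    aggify.filter(disabled_at=None, banned_at=None)  # -> $match stage
          .project(username=1, display_name=1)       # -> $project stage
)
print(pipeline)  # plain list of MongoDB aggregation stages, usable with collection.aggregate()
```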
aggin | No description available on PyPI. |
agglo_cc | ################################################################################ Copyright (C) 2025 Plaisic and/or its subsidiary(-ies).## Contact: [email protected]#### This file is part of the Agglo project.#### AGGLO_BEGIN_LICENSE## Commercial License Usage## Licensees holding valid commercial Agglo licenses may use this file in## accordance with the commercial license agreement provided with the## Software or, alternatively, in accordance with the terms contained in## a written agreement between you and Plaisic. For licensing terms and## conditions contact [email protected].#### GNU General Public License Usage## Alternatively, this file may be used under the terms of the GNU## General Public License version 3.0 as published by the Free Software## Foundation and appearing in the file LICENSE.GPL included in the## packaging of this file. Please review the following information to## ensure the GNU General Public License version 3.0 requirements will be## met: http://www.gnu.org/copyleft/gpl.html.#### In addition, the following conditions apply:## * Redistributions in binary form must reproduce the above copyright## notice, this list of conditions and the following disclaimer in## the documentation and/or other materials provided with the## distribution.## * Neither the name of the Agglo project nor the names of its## contributors may be used to endorse or promote products derived## from this software without specific prior written permission.#### THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS## "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT## LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR## A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT## OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,## SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED## TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR## PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF## LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING## NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.#### AGGLO_END_LICENSE##############################################################################agglo_cli_client generates command to a cli in Alexis Royer's cli format. |
agglomerative-py-custom | agglomerative-py: Agglomerative Clustering Implementation in Python. License: MIT |
agglo_tb | ################################################################################ Copyright (C) 2025 Plaisic and/or its subsidiary(-ies).## Contact: [email protected]#### This file is part of the Agglo project.#### AGGLO_BEGIN_LICENSE## Commercial License Usage## Licensees holding valid commercial Agglo licenses may use this file in## accordance with the commercial license agreement provided with the## Software or, alternatively, in accordance with the terms contained in## a written agreement between you and Plaisic. For licensing terms and## conditions contact [email protected].#### GNU General Public License Usage## Alternatively, this file may be used under the terms of the GNU## General Public License version 3.0 as published by the Free Software## Foundation and appearing in the file LICENSE.GPL included in the## packaging of this file. Please review the following information to## ensure the GNU General Public License version 3.0 requirements will be## met: http://www.gnu.org/copyleft/gpl.html.#### In addition, the following conditions apply:## * Redistributions in binary form must reproduce the above copyright## notice, this list of conditions and the following disclaimer in## the documentation and/or other materials provided with the## distribution.## * Neither the name of the Agglo project nor the names of its## contributors may be used to endorse or promote products derived## from this software without specific prior written permission.#### THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS## "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT## LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR## A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT## OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,## SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED## TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR## PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF## LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING## NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.#### AGGLO_END_LICENSE##############################################################################agglo_test_conductor is a library for stearing tests. |
agglo-tk | agglo_tool_kit is a library embedding various utils:
* io management : generic, string, scapy
 * generic data selector. You can install it with
<pre>
pip install agglo_tool_kit
</pre> |
agglo_tt | ################################################################################ Copyright (C) 2025 Plaisic and/or its subsidiary(-ies).## Contact: [email protected]#### This file is part of the Agglo project.#### AGGLO_BEGIN_LICENSE## Commercial License Usage## Licensees holding valid commercial Agglo licenses may use this file in## accordance with the commercial license agreement provided with the## Software or, alternatively, in accordance with the terms contained in## a written agreement between you and Plaisic. For licensing terms and## conditions contact [email protected].#### GNU General Public License Usage## Alternatively, this file may be used under the terms of the GNU## General Public License version 3.0 as published by the Free Software## Foundation and appearing in the file LICENSE.GPL included in the## packaging of this file. Please review the following information to## ensure the GNU General Public License version 3.0 requirements will be## met: http://www.gnu.org/copyleft/gpl.html.#### In addition, the following conditions apply:## * Redistributions in binary form must reproduce the above copyright## notice, this list of conditions and the following disclaimer in## the documentation and/or other materials provided with the## distribution.## * Neither the name of the Agglo project nor the names of its## contributors may be used to endorse or promote products derived## from this software without specific prior written permission.#### THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS## "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT## LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR## A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT## OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,## SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED## TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR## PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF## LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING## NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS## SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.#### AGGLO_END_LICENSE##############################################################################aut is a library for unit tests. |
aggmap | Jigsaw-like AggMap: A Robust and Explainable Omics Deep Learning Tool. Installation: install aggmap by:
# create an aggmap env
conda create -n aggmap python=3.8
conda activate aggmap
pip install --upgrade pip
pipinstallaggmapUsageimportpandasaspdfromsklearn.datasetsimportload_breast_cancerfromaggmapimportAggMap,AggMapNet# Data loadingdata=load_breast_cancer()dfx=pd.DataFrame(data.data,columns=data.feature_names)dfy=pd.get_dummies(pd.Series(data.target))# AggMap object definition, fitting, and savingmp=AggMap(dfx,metric='correlation')mp.fit(cluster_channels=5,emb_method='umap',verbose=0)mp.save('agg.mp')# AggMap visulizations: Hierarchical tree, embeddng scatter and gridmp.plot_tree()mp.plot_scatter(enabled_data_labels=True,radius=5)mp.plot_grid(enabled_data_labels=True)# Transoformation of 1d vectors to 3D Fmaps (-1, w, h, c) by AggMapX=mp.batch_transform(dfx.values,n_jobs=4,scale_method='minmax')y=dfy.values# AggMapNet training, validation, early stopping, and savingclf=AggMapNet.MultiClassEstimator(epochs=50,gpuid=0)clf.fit(X,y,X_valid=None,y_valid=None)clf.save_model('agg.model')# Model explaination by simply-explainer: global, localsimp_explainer=AggMapNet.simply_explainer(clf,mp)global_simp_importance=simp_explainer.global_explain(clf.X_,clf.y_)local_simp_importance=simp_explainer.local_explain(clf.X_[[0]],clf.y_[[0]])# Model explaination by shapley-explainer: global, localshap_explainer=AggMapNet.shapley_explainer(clf,mp)global_shap_importance=shap_explainer.global_explain(clf.X_)local_shap_importance=shap_explainer.local_explain(clf.X_[[0]])How It Works?AggMap flowchart of feature mapping and agglomeration into ordered (spatially correlated) multi-channel feature maps (Fmaps)a, AggMap flowchart of feature mapping and aggregation into ordered (spatially-correlated) channel-split feature maps (Fmaps).b, CNN-based AggMapNet architecture for Fmaps learning.c, proof-of-concept illustration of AggMap restructuring of unordered data (randomized MNIST) into clustered channel-split Fmaps (reconstructed MNIST) for CNN-based learning and important feature analysis.d, typical biomedical applications of AggMap in restructuring omics data into channel-split Fmaps for multi-channel CNN-based diagnosis and biomarker discovery (explanationsaliency-mapof important features).Proof-of-Concepts of reconstruction ability on MNIST DatasetIt can reconstruct to the original image from completely randomly permuted (disrupted) MNIST data:Org1: the original grayscale images (channel = 1),OrgRP1: the randomized images of Org1 (channel = 1),RPAgg1, 5: the reconstructed images ofOrgPR1by AggMap feature restructuring (channel = 1, 5 respectively, each color represents features of one channel).RPAgg5-tkb: the original images with the pixels divided into 5 groups according to the 5-channels ofRPAgg5and colored in the same way asRPAgg5.The effect of the number of channels on model performanceMulti-channel Fmaps can boost the model performance notably:The performance of AggMapNet using different number of channels on theTCGA-T (a)andCOV-D (b). ForTCGA-T, ten-fold cross validation average performance, forCOV-D, a fivefold cross validation was performed and repeat 5 rounds using different random seeds (total 25 training times), their average performances of the validation set were reported.Example for Restructured FmapsThe example on WDBC dataset: clickhereto find out more! |
aggnf | aggnf: Aggregate Nth Field. A small console utility to count/group text data. Free software: MIT license. Documentation: (COMING SOON!) https://aggnf.readthedocs.org. Features: Generates aggregate counts of text data, using a specified field as a key. Fields can be delimited by any string; the default is consecutive whitespace. The key field can be any integer, with negative integers counting backwards; the default is the last field. How-To: The --help option is descriptive: ~$ aggnf --help
Usage: aggnf [OPTIONS] [IN_DATA]
Group text data based on a Nth field, and print the aggregate result.
Works like SQL:
`select field, count(*) from tbl group by field`
Or shell:
`cat file | awk '{print $NF}' | sort | uniq -c`
Arguments:
IN_DATA Input file, if blank, STDIN will be used.
Options:
-d, --sep TEXT Field delimiter. Defaults to whitespace.
-n, --fieldnum INTEGER The field to use as the key, default: last field.
-o, --sort Sort result.
-i, --ignore-err Don't exit if field is specified and out of range.
--help Show this message and exit.Here we generate an example file of 1000 random numbers, and ask aggnf to group it for us, ordering the result by the most common occurrences:~$ seq 1 1000 | while read -r l; do echo -e "line:${l}\t${RANDOM:0:1}"; done > rand.txt
~$ aggnf -o rand.txt
1: 340
2: 336
3: 120
8: 42
6: 37
5: 35
7: 35
4: 33
9: 22 This might look familiar, as it's the same result one might get from something like select field, count(*) as count from table group by field order by count desc, or even by the following bash one-liner: ~$ cat rand.txt | awk '{print $NF}' | sort | uniq -c | sort -nr
340 1
336 2
120 3
42 8
37 6
35 7
35 5
33 4
22 9 To-Do: Output is mangled when using another delimiter, will fix. Add a --sum option, which will key on one field and sum the contents of another. Speed optimizations. Notes: The usefulness of this program is questionable. Its functionality is already covered by existing console commands that are much faster. This project is merely a quick example to learn the basics of packages which are unfamiliar to me, namely: cookiecutter, tox, and click. History: April 4th: Released |
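As a plain-Python illustration (not part of aggnf itself), the core aggregation the tool performs, counting occurrences of the Nth delimited field with the last field as the default key, can be sketched with collections.Counter:

```python
# Plain-Python sketch of what aggnf computes: count occurrences of the Nth field.
import sys
from collections import Counter

def aggregate_nth_field(lines, fieldnum=-1, sep=None):
    counts = Counter()
    for line in lines:
        fields = line.split(sep)  # sep=None means consecutive whitespace, like the default
        if fields:
            counts[fields[fieldnum]] += 1
    return counts

if __name__ == "__main__":
    for key, count in aggregate_nth_field(sys.stdin).most_common():
        print(f"{key}: {count}")
```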
aggr | AggregatorPython package for aggregation learning (automating feature aggregation)The included example requires files which are available at : {link} |
aggravator | ==========Aggravator==========.. image:: https://travis-ci.org/petercb/aggravator.svg?branch=master:target: https://travis-ci.org/petercb/aggravator.. image:: https://coveralls.io/repos/github/petercb/aggravator/badge.svg?branch=master:target: https://coveralls.io/github/petercb/aggravator?branch=masterDynamic inventory script for Ansible that aggregates information from other sourcesInstalling----------.. code:: shvirtualenv aggravatorsource aggravator/bin/activatepip install aggravatorExecuting---------.. code:: shansible-playbook -i aggravator/bin/inventory site.ymlHow does it work----------------It will aggregate other config sources (YAML or JSON format) into a singleconfig stream.The sources can be files or urls (to either file or webservices that produceYAML or JSON) and the key path to merge them under can be specified.Why does it exist-----------------We wanted to maintain our Ansible inventory in GIT as YAML files, and not inthe INI like format that Ansible generally supports for flat file inventory.Additionally we had some legacy config management systems that contained someinformation about our systems that we wanted exported to Ansible so we didn'thave to maintain them in multiple places.So a script that could take YAML files and render them in a JSON format thatAnsible would ingest was needed, as was one that could aggregate many filesand streams.Config format-------------Example (etc/config.yaml):.. code:: yaml---environments:test:include:- path: inventory/test.yaml- path: vars/global.yamlkey: all/vars- path: secrets/test.yamlkey: all/varsBy default the inventory script will look for the root config file as follows:- `../etc/config.yaml` (relative to the `inventory` file)- `/etc/aggravator/config.yaml`- `/usr/local/etc/aggravator/config.yaml`If it can't find it in one of those locations, you will need to use the `--uri`option to specify it (or set the `INVENTORY_URI` env var)It will parse it for a list of environments (test, prod, qa, etc) and for alist of includes. The `include` section should be a list of dictionaries withthe following keys:pathThe path to the data to be ingested, this can be one of:- absolute file path- relative file path (relative to the root config.yaml)- url to a file or service that emits a supported formatkeyThe key where the data should be merged into, if none is specified it isimported into the root of the data structure.formatThe data type of the stream to ingest (ie. `yaml` or `json`) if not specifiedthen the script will attempt to guess it from the file extension*Order* is important as items lower in the list will take precedence over onesspecified earlier in the list.Merging-------Dictionaries will be merged, and lists will be replaced. So if a property atthe same level in two source streams of the same name are dictionaries theircontents will be merged. 
If they are lists, the later one will replace theearlier.If the data type of two properties at the same level are different the laterone will overwrite the earlier.Environment Variables---------------------Setting the following environment variables can influence how the scriptexecutes when it is called by Ansible.`INVENTORY_ENV`Specify the environment name to merge inventory for as defined under the'environments' section in the root config.The environment name can also be guessed from the executable name, so if youcreate a symlink from `prod` to the `inventory` bin, it will assume the envyou want to execute for is called `prod`, unless you override that.`INVENTORY_FORMAT`Format to output in, defaults to YAML in >0.4Previously only output in JSON`INVENTORY_URI`Location to the root config, if not in one of the standard locations`VAULT_PASSWORD_FILE`Location of the vault password file if not in the default location of`~/.vault_pass.txt`, can be set to `/dev/null` to disable decryption ofsecrets.Usage-----`inventory [OPTIONS]`Ansible file based dynamic inventory scriptOptions:--env TEXT specify the platform name to pull inventory for--uri TEXT specify the URI to query for inventory configfile, supports file:// and http(s):// [default:/home/peterb-l/git/petercb/aggravator/venv/etc/config.yaml]--output-format [yaml|json] specify the output format [default: yaml]--vault-password-file PATH vault password file, if set to /dev/null secretdecryption will be disabled [default: ~/.vault_pass.txt]--list Print inventory information as a JSON object--host TEXT Retrieve host variables (not implemented)--createlinks DIRECTORY Create symlinks in DIRECTORY to the script foreach platform name retrieved--show Output a list of upstream environments (or groups if environment was set)--help Show this message and exit. |
aggregate | aggregate: a powerful actuarial modeling libraryPurposeaggregatebuilds approximations to compound (aggregate) probability distributions quickly and accurately.
It can be used to solve insurance, risk management, and actuarial problems using realistic models that reflect
underlying frequency and severity. It delivers the speed and accuracy of parametric distributions to situations
that usually require simulation, making it as easy to work with an aggregate (compound) probability distribution
as the lognormal.aggregateincludes an expressive language called DecL to describe aggregate distributions
and is implemented in Python under an open source BSD-license.White Paper (new July 2023)TheWhite Paperdescribes
the purpose, implementation, and use of the classaggregate.Aggregatethat
handles the creation and manipulation of compound frequency-severity distributions. Documentation: https://aggregate.readthedocs.io/ Where to get it: https://github.com/mynl/aggregate Installation: To install into a new Python>=3.10 virtual environment: python -m venv path/to/your/venv
cd path/to/your/venvfollowed by:\path\to\env\Scripts\activateon Windows, or:source /path/to/env/bin/activateon Linux/Unix or MacOS. Finally, install the package:pip install aggregate[dev]All the code examples have been tested in such a virtual environment and the documentation will build.Version History0.22.0Created version 0.22.0, convolation0.21.4Updated requirement usingpipreqsrecommendationsColor graphics in documentationAddedexpected_shift_reduce = 16 # Set this to the number of expected shift/reduce conflictstoparser.pyto avoid warnings. The conflicts are resolved in the correct way for the grammar to work.Issues: there is a difference betweendfreq[1]and1 claim ... fixed, e.g.,
when using spliced severities. These should not occur.0.21.3Risk progression, defaults to linear allocation.Addedg_insurance_statisticstoextensionsto plot insurance statistics from a distortiong.Addedg_risk_appetitetoextensionsto plot risk appetite from a distortiong(value, loss ratio,
return on capital, VaR and TVaR weights).Corrected Wang distortion derivative.VectorizedDistortion.g_primecalculation for proportional hazardAddedtvar_weightsfunction tospectralto compute the TVaR weights of a distortion. (Work in progress)Updated dependencies in pyproject.toml file.0.21.2Misc documentation updates.Experimental magic functions, allowing, eg. %agg [spec] to create an aggregate object (one-liner).0.21.1 yanked from pypi due to error in pyproject.toml.0.21.0Movedslyinto the project for better control.slyis a Python implementation of lex and yacc parsing tools.
It is written by Dave Beazley. Per the sly repo on github: The SLY project is no longer making package-installable releases. It's fully functional, but if you choose to use it,
you should vendor the code into your application. SLY has zero-dependencies. Although I am semi-retiring the project,
I will respond to bug reports and still may decide to make future changes to it depending on my mood.
I'd like to thank everyone who has contributed to it over the years. - Dave. Experimenting with a line/cell DecL magic interpreter in Jupyter Lab to obviate the
need forbuild.0.20.2risk progression logic adjusted to exclude values with zero probability; graphs
updated to use step drawstyle.0.20.1Bug fix in parser interpretation of arrays with step sizeAdded figures for AAS paper to extensions.ft and extensions.figuresValidation "not unreasonable" flag set to 0Added aggregate_white_paper.pdfColors in risk_progression0.20.0sev_attachment: changed default toNone; in that case gross losses equal
ground-up losses, with no adjustment. But if layer is 10 xs 0 then losses
become conditional on X > 0. That results in a different behaviour, e.g.,
when usingdsev[0:3]. Ripple through effect in Aggregate (change default),
Severity (change default, and change moment calculation; need to track the โattachmentโ
of zero and the fact that it came from None, to track Pr attaching)dsev: check if any elements are < 0 and set to zero before computing moments
in dhistogramsame for dfreq; implemented invalidate_discrete_distributionin distributions moduleDefaultrecommend_p=0.99999set in constsants module.interpreter_test_suiterenamed torun_test_suiteand includes test
to count and report if there are errors.Reason codes for failing validation; Aggregate.qt becomes Aggregte.explain_validation0.19.0Fixed reinsurance description formattingImproved splice parsing to allow explicit entry of lb and ub; needed to
model mixtures of mixtures (Albrecher et al. 2017)0.18.0 (major update)Added ability to specify occ reinsurance after a built in agg; this
allows you to alter a gross aggregate more easily.Underwriter.safe_lookupuses deepcopy rather than copy to avoid
problems array elements.Clean up and improved Parser and grammaratom -> term is much cleaner (removed power, factor; now
managed with precedence and associativity)EXP and EXPONENT are right
associative, division is not associative so 1/2/3 gives an error.Still SR conflict from dfreq [ ] [ ] because it could be the
probabilities clause or the start of a vectorized limit clauseRemaining SR conflicts are from NUMBER, which is used in many
places. This is a problem with the grammar, not the parser.Added more tests to the parser test suiteSeverity weights clause must come after locations (more natural)Added ability for unconditional dsev.Support for splicing (see below)Cleanup ofAggregateclass, concurrent with creating a cheat sheetmany documentation updatesplot_olddeleteddeleteddelbaen_haezendonck_density; not used; not doing anything
that isnโt easy by hand. Includes dh_sev_density and dh_agg_density.deletedfitas alternative name forapproximatedeleted unused fieldsCleanup ofPortfolioclass, concurrent with creating a cheat sheetdeletedfitas alternative name forapproximatedeletedq_old_0_12_0(old quantile),q_temp,tvar_old_0_12_0deletedplot_old,last_a,_(inverse)_tail_var(_2)deleteddef get_stat(self,line='total',stat='EmpMean'):return self.audit_df.loc[line, stat]deletedresample, was an alias for sampleManagement of knowledge inUnderwriterchanged to support loading
a database after creation. Databases not loaded until needed - alas
that includes printing the object. TODO: Consider a change?Frequency mfg renamed to freq_pgf to match other Frequency class methods and
to accurately describe the function as a probability generating function
rather than a moment generating function.Addedintrospectfunction to Utilities. Used to create a cheat sheet
for Aggregate.Added cheat sheets, completed for AggregateSeverity can now be conditional on being in a layer (see splice); managed
adjustments to underlying frozen rv using decorators. No overhead if not
used.Added "splice" option for Severity (see Albrecher et al. ch XX) and Aggregate,
new argumentssev_lbandsev_ub, each lists.Underwriter.builddefaults update argument to None, which uses the object default.pretty printing: now returns a value, no tacit mode; added _html version to
run through pygments, that looks good in Jupyter Lab.0.17.1Adjusted pyproject.tomlpygments lexer tweaksSimplified grammar: % and inf now handled as part of resolving NUMBER; still 16 = 5 * 3 + 1 SR conflictsReading databases on demand in Underwriter, resulting in faster object creationCreating and testing exsitance of subdirectories in Undewriter on demand using propertiesCreating directories moved into Extensions __init__.pylexer and parser as properties for Underwriter object creationDefaultrecommend_pchanged from 0.999 to 0.99999.recommend_bucketnow usesp=max(p,1-1e-8)if severity is unlimited.0.17.0 (July 2023)moreadded as a proper methodFixed debugfile in parser.py which stops installation if not None (need to
ensure the directory exists)Fixed build and MANIFEST to remove build warningparser: semicolon no longer mapped to newline; it is now used to provide hints
notesrecommend_bucketuses p=max(p, 1-1e-8) if limit=inf. Default increased from 0.999
to 0.99999 based on examples; works well for limited severity but not well for unlimited severity.Implemented calculation hints in note strings. Format is k=v; pairs; k
bs, log2, padding, recommend_p, normalize are recognized. If present they are used
if no arguments are passed explicitly tobuild.Addedinterpreter_test_suite()toUnderwriterto run the test suiteAddedtest_suite_filetoUnderwriterto returnPathtotest_suite.agg`fileLayers, attachments, and the reinsurance tower can now be ranges,[s:f:j]syntax0.16.1 (July 2023)IDs can now include dashes: Line-A is a legitimate dateInclude templates and test-cases.agg file in the distributionFixed mixed severity / limit profile interaction. Mixtures now work with
exposure defined by losses and premium (as opposed to just claim count),
correctly account for excess layers (which requires re-weighting the
mixture components). Involves fixing the ground up severity and using it
to adjust weights first. Then, by layer, figure the severity and convert
exposure to claim count if necessary. Cases where there is no loss in the
layer (high layer from low mean / low vol componet) replace by zero. Use
logging level 20 for more details.Addedmorefunction toPortfolio,AggregateandUnderwriterclasses.
Given a regex it returns all methods and attributes matching. It tries to call a method
with no arguments and reports the answer.moreis defined in utilities
and can be applied to any object.Moved work ofqtfrom utilities intoAggregate`(where it belongs).
Retainedqtfor backwards compatibility.Parser: power <- atom ** factor to power <- factor ** factor to allow (1/2)**(3/4)random` module renamed `random_aggto avoid conflict with PythonrandomImplemented exact moments for exponential (special case of gamma) because
MED is a common distribution and computing analytic moments is very time
consuming for large mixtures.Added ZM and ZT examples to test_cases.agg; adjusted Portfolio examples to
be on one line so they run through interpreter_file tests.0.16.0 (June 2023)Implemented ZM and ZT distributions using decorators!Added panjer_ab to Frequency, reports a and b values, p_k = (a + b / k) p_{k-1}. These values can be tested
by computing implied a and b values from r_k = k p_k / p_{k-1} = ak + b; diff r_k = a and b is an easy
computation.Added freq_dist(log2) option to Freq to return the frequency distribution stand-aloneAdded negbin frequency where freq_a equals the variance multiplier0.15.0 (June 2023)Added pygments lexer for decl (called agg, agregate, dec, or decl)Added to the documentationusing pygments style inpprint_exhtml moderemoved old setup scripts and files and stack.md0.14.1 (June 2023)Added scripts.py for entry pointsUpdated .readthedocs.yaml to build from toml not requirements.txtFixes to documentationPortfolio.tvar_thresholdupdated to usescipy.optimize.bisectAddedkaplan_meiertoutilitiesto compute product limit estimator survival
function from censored data. This applies to a loss listing with open (censored)
and closed claims.doc to docs []Enhancedmake_var_tvarfor cases where all probabilities are equal, using linspace rather
than cumsum.0.13.0 (June 4, 2023)UpdatedPortfolio.priceto implementallocation='linear'and
allow a dictionary of distortionsordered='strict'default forPortfolio.calibrate_distortionsPentagon can return a namedtuple and solve does not return a dataframe (it has no return value)Added random.py module to hold random state. Incorporated intoUtilities: Iman Conover (ic_noise permuation) and rearrangement algorithmsPortfoliosampleAggregatesampleSpectralbagged_distortionPortfolioaddedn_unitspropertyPortfoliosimplified__repr__Addedblock_iman_conovertoutilitiles. Note tester code in the documentation. Very Nice! ๐๐๐New VaR, quantile and TVaR functions: 1000x speedup and more accurate. Builder function inutilities.pyproject.toml project specification, updated build process, now creates whl file rather than egg file.0.12.0 (May 2023)add_exa_samplebecomes method ofPortfolioAddedcreate_from_samplemethod toPortfolioAddedbodoffmethod to compute layer capital allocation toPortfolioImproved validation error reportingextensions.samplesmodule deletedAddedspectral.approx_ccocto create a ct approx to the CCoC distortionqdpmoved toutilities(describe plus some quantiles)AddedPentagonclass inextensionsAdded example use of the Pollaczeck-Khinchine formula, reproducing examples from
theactuar`risk vignette to Ch 5 of the documentation.Earlier versionsSee github commit notes.Version numbers follow semantic versioning, MAJOR.MINOR.PATCH:MAJOR version changes with incompatible API changes.MINOR version changes with added functionality in a backwards compatible manner.PATCH version changes with backwards compatible bug fixes.Issues and TodoTreatment of zero lb is not consistent with attachment equals zero.Flag attempts to use fixed frequency with non-integer expected value.Flag attempts to use mixing with inconsistent frequency distribution.Getting startedTo get started, importbuild. It provides easy access to all functionality.Here is a model of the sum of three dice rolls. The DataFramedescribecompares exact mean, CV and skewness with theaggregatecomputation for the frequency, severity, and aggregate components. Common statistical functions like the cdf and quantile function are built-in. The whole probability distribution is available ina.density_df.from aggregate import build, qd
a = build('agg Dice dfreq [3] dsev [1:6]')
qd(a)>>> E[X] Est E[X] Err E[X] CV(X) Est CV(X) Err CV(X) Skew(X) Est Skew(X)
>>> X
>>> Freq 3 0
>>> Sev 3.5 3.5 0 0.48795 0.48795 -3.3307e-16 0 2.8529e-15
>>> Agg 10.5 10.5 -3.3307e-16 0.28172 0.28172 -8.6597e-15 0 -1.5813e-13print(f'\nProbability sum < 12 = {a.cdf(12):.3f}\nMedian = {a.q(0.5):.0f}')>>> Probability sum < 12 = 0.741
>>> Median = 10aggregatecan use anyscipy.statscontinuous random variable as a severity, and
supports all common frequency distributions. Here is a compound-Poisson with lognormal
severity, mean 50 and cv 2.a = build('agg Example 10 claims sev lognorm 50 cv 2 poisson')
qd(a)>>> E[X] Est E[X] Err E[X] CV(X) Est CV(X) Err CV(X) Skew(X) Est Skew(X)
>>> X
>>> Freq 10 0.31623 0.31623
>>> Sev 50 49.888 -0.0022464 2 1.9314 -0.034314 14 9.1099
>>> Agg 500 498.27 -0.0034695 0.70711 0.68235 -0.035007 3.5355 2.2421# cdf and quantiles
print(f'Pr(X<=500)={a.cdf(500):.3f}\n0.99 quantile={a.q(0.99)}')>>> Pr(X<=500)=0.611
>>> 0.99 quantile=1727.125See the documentation for more examples.DependenciesSee requirements.txt.Install from sourcegit clone --no-single-branch --depth 50 https://github.com/mynl/aggregate.git .
git checkout --force origin/master
git clean -d -f -f
python -mvirtualenv ./venv
# ./venv/Scripts on Windows
./venv/bin/python -m pip install --exists-action=w --no-cache-dir -r requirements.txt
# to create help files
./venv/bin/python -m pip install --upgrade --no-cache-dir pip setuptools<58.3.0
./venv/bin/python -m pip install --upgrade --no-cache-dir pillow mock==1.0.1 alabaster>=0.7,<0.8,!=0.7.5 commonmark==0.9.1 recommonmark==0.5.0 sphinx<2 sphinx-rtd-theme<0.5 readthedocs-sphinx-ext<2.3 jinja2<3.1.0Note: options from readthedocs.org script.LicenseBSD 3 licence.Help and contributionsLimited help available. Email me [email protected] contributions, bug reports, bug fixes, documentation improvements,
enhancements and ideas are welcome. Create a pull request on github and/or
email me.Social media:https://www.reddit.com/r/AggregateDistribution/. |
aggregate6 | [](https://travis-ci.org/job/aggregate6)[](https://requires.io/github/job/aggregate6/requirements/?branch=master)[](https://coveralls.io/github/job/aggregate6?branch=master)aggregate6==========aggregate6 will compress an unsorted list of IP prefixes (both IPv4 and IPv6).Description-----------Takes a list of IPv6 prefixes in conventional format on stdin, and performs twooptimisations to attempt to reduce the length of the prefix list. The firstoptimisation is to remove any supplied prefixes which are superfluous becausethey are already included in another supplied prefix. For example,`2001:67c:208c:10::/64` would be removed if `2001:67c:208c::/48` wasalso supplied.The second optimisation identifies adjacent prefixes that can be combined undera single, shorter-length prefix. For example, `2001:67c:208c::/48` and`2001:67c:208d::/48` can be combined into the single prefix`2001:67c:208c::/47`.The above optimalisation steps are often useful in context of compressing firewallrules or BGP prefix-list filters.The following command line options are available:```-4 Only output IPv4 prefixes-6 Only output IPv6 prefixes-h, --help show help message and exit-m N Sets the maximum prefix length for entries read, longer prefixes will be discarded prior to processing-t truncate IP/mask to network/mask-v Display verbose information about the optimisations-V Display aggregate6 version```Installation------------OpenBSD 6.3:`$ doas pkg_add aggregate6`Other platforms:`$ pip3 install aggregate6`CLI Usage---------Either provide the list of IPv4 and IPv6 prefixes on STDIN, or give filenamescontaining lists of IPv4 and IPv6 prefixes as arguments.```$ # via STDIN$ cat file_with_list_of_prefixes | aggregate6... output ...$ # with a filename as argument$ aggregate6 file_with_list_of_prefixes [ ... optional_other_prefix_lists ]... output ...$ # Whitespace separated works too$ echo 2001:67c:208c::/48 2000::/3 | aggregate62000::/3$ # You can combine IPv4 and IPv6$ echo 10.0.0.0/16 10.0.0.0/24 2000::/3 | aggregate610.0.0.0/162000::/3```Library Usage-------------Aggregate6 can be used in your own pyp/python2/python3 project as python module.Currently there is just one simple public function: `aggregate()` which takes alist as parameter.```>>> import from aggregate6 import aggregate>>> aggregate(["10.0.0.0/8", "10.0.0.0/24"])['10.0.0.0/8']>>>```Bugs----Please report bugs at https://github.com/job/aggregate6/issuesAuthor------Job Snijders <[email protected]> |
aggregateGithubCommits | Aggregate Github commit count by author and time. Requirements: Python 3.7+, your own GitHub account. Install: pip install git+https://github.com/rocaz/aggregateGithubCommits Usage: aggregateGithubCommits [-h] -r|--repo REPO [-a|--author AUTHOR] [-s|--since
SINCE] [-u|--until UNTIL] [-p|--period {h,d,m,w}] [-t|--term TERM]
[-f|--format {text,json,csv}] [-v] -h, --help: show this help message and exit -r REPO, --repo REPO: [Required] GitHub owner and repository name.
ex) "github/covid-19-repo-data" -a AUTHOR, --author AUTHOR: GitHub author name, default is all authors.
ex) "github" -s SINCE, --since SINCE: since date in ISO format. ex) "2020-07-12" -u UNTIL, --until UNTIL: until date in ISO format, default is today.
ex) "2020-07-12" -p {h,d,m,w}, --period {h,d,m,w}: Aggregating period, default is "h".
"h": per hour, "d": per day, "m": per month, "w": per day of week -t TERM, --term TERM: Aggregating term from until, default is "3m". "3m"
means "3 months", "100d" means "100 days" -f {text,json,csv}, --format {text,json,csv}: Output format type, default
is "text". -v, --version: show program's version number and exit Example: Specified author. Default term is from now to 3 months ago. python ./aggregateGithubCommits.py -r "github/covid-19-repo-data" -a gregce Output: Repository: git://github.com/github/covid-19-repo-data.git
Total: 15
Author: gregce
Hour 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Count 2 1 2 1 1 0 2 0 1 2 0 1 1 1 0 0 0 0 0 0 0 0 0 0
AuthorTotal: 15 The term is specified from "2020-02-29" to "2020-08-02", and the aggregating
period is "per month". python ./aggregateGithubCommits.py -r "github/covid-19-repo-data" -p m -u '2020-08-02' -s '2020-02-29' Output: Repository: git://github.com/github/covid-19-repo-data.git
Total: 49
Author: gregce
Month 2020-03 2020-04 2020-05 2020-06 2020-07
Count 0 5 4 4 7
AuthorTotal: 20
Author: Ashikpaul
Month 2020-03 2020-04 2020-05 2020-06 2020-07
Count 0 0 0 0 1
AuthorTotal: 1
Author: hamelsmu
Month 2020-03 2020-04 2020-05 2020-06 2020-07
Count 0 22 0 4 0
AuthorTotal: 26
Author: github-actions[bot]
Month 2020-03 2020-04 2020-05 2020-06 2020-07
Count 0 1 0 0 0
AuthorTotal: 1
Author: DJedamski
Month 2020-03 2020-04 2020-05 2020-06 2020-07
Count 1 0 0 0 0
AuthorTotal: 1Output format is setted to JSON.python ./aggregateGithubCommits.py-r"github/covid-19-repo-data"-fjson{"AggregatedCommits": {"gregce": {"00": 2, "01": 1, "02": 2, "03": 1, "04": 1, "06": 2, "08": 1, "09": 2, "11": 1, "12": 1, "13": 1}, "Ashikpaul": {"00": 0, "01": 0, "02": 1, "03": 0, "04": 0, "06": 0, "08": 0, "09": 0, "11": 0, "12": 0, "13": 0}, "hamelsmu": {"00": 0, "01": 0, "02": 4, "03": 0, "04": 0, "06": 0, "08": 0, "09": 0, "11": 0, "12": 0, "13": 0}}, "Period": "h", "CommitCount": 20, "Authors": ["gregce", "Ashikpaul", "hamelsmu"], "Indexes": ["00", "01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23"]}Output format is setted to CSV.python ./aggregateGithubCommits.py-r"github/covid-19-repo-data"-fcsv"","00","01","02","03","04","05","06","07","08","09","10","11","12","13","14","15","16","17","18","19","20","21","22","23"
"gregce","2","1","2","1","1","0","2","0","1","2","0","1","1","1","0","0","0","0","0","0","0","0","0","0"
"Ashikpaul","0","0","1","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0"
"hamelsmu","0","0","4","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0","0"Environment VariableโGITHUBTOKENโPlase set your Github TokenLicenseCC BY-NC-SA 4.0non-commercial use only. |
aggregate-prefixes | aggregate-prefixes: Fast IPv4 and IPv6 prefix aggregator written in Python. Gets a list of unsorted IPv4 or IPv6 prefixes from an argument or STDIN and returns a sorted list of aggregates to STDOUT
Errors go to STDERR.Installgit clone https://github.com/lamehost/aggregate-prefixes.git
cd aggregate_prefixes
poetry build
pip install dist/aggregate_prefixes-0.7.0-py3-none-any.whlCLI Syntax for executableusage: aggregate-prefixes [-h] [--max-length LENGTH] [--strip-host-mask] [--truncate MASK] [--verbose] [--version] [prefixes]
Aggregates IPv4 or IPv6 prefixes from file or STDIN
positional arguments:
prefixes Text file of unsorted list of IPv4 or IPv6 prefixes. Use '-' for STDIN.
options:
-h, --help show this help message and exit
--max-length LENGTH, -m LENGTH
Discard longer prefixes prior to processing
--strip-host-mask, -s
Do not print netmask if prefix is a host route (/32 IPv4, /128 IPv6)
--truncate MASK, -t MASK
Truncate IP/mask to network/mask
--verbose, -v Display verbose information about the optimisations
--version, -V show program's version number and exitUsage as module$ python
Python 3.9.1+ (default, Feb 5 2021, 13:46:56)
[GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> from aggregate_prefixes import aggregate_prefixes
>>> list(aggregate_prefixes(['192.0.2.0/32', '192.0.2.1/32', '192.0.2.2/32']))
['192.0.2.0/31', '192.0.2.2/32']
>>>Python version compatibilityTested with:Python 3.9Python 3.10Python 3.11 |
aggregation_builder | No description available on PyPI. |
aggregationslib | MeansMeans, Aggregation functions...Example 1:# example datadata = [0.2, 0.6, 0.7]# configure function parametersfunc1 = A_amn(p=0.5)# use aggregation funcitonprint(func1(data))# Combine two aggregations - arithmetic mean and minimumfunc2 = Combine2Aggregations(A_ar(), min)# use combination of aggregation funcitonprint(func2(data))Example2:To get information about aggregation function you can use__str__()or 'repr()' methods.func1 = A_amn(p=0.5)print(func1)>>>A_amn(0.5)func2 = Combine2Aggregations(A_ar(), A_md())print(func2)>>>A_armdfunc3 = Combine2Aggregations(A_ar(), A_pw(r=3))print(func3.__repr__()) # function parameters are printed in order: func1, func2>>>A_arpw(r=3)exponential(y, r=1)is given by equation$$
A_6^{(r)}(x_1,\ldots,x_n) = \frac{1}{r}\ln
\Big(\frac{1}{n} \sum\limits_{k=1}^{n} e^{r x_k}\Big), \quad \text{where } r \in \mathbb{R},\ r \neq 0
$$A_ar - Arithmetic meanA_qd - Quadratic meanA_gm - Geometric meanA_hm - Harmonic meanA_pw - Power meanA_ex, A_ex2, A_ex3 - Exponential meanA_lm - Lehmer meanA_amn - Arithmetic minimum meanA_amx - Arithmetic maximum meanA_md - Median - ordered weighted aggregationA_ol - Olimpic aggregationA_oln - Olimpic aggregationWe can specify how many greatest and smallest records removeCombine2Aggregations - Combine aggregation functionsAmn, Amx, Aar , Aex , Amd,
Aow1, Aow1 |
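A short worked check of the exponential mean formula above, written independently of the package's API (the function name and sample data are illustrative only):

```python
# A(x) = (1/r) * ln( (1/n) * sum_k exp(r * x_k) ), with r != 0, per the formula above.
import math

def exponential_mean(xs, r=1.0):
    if r == 0:
        raise ValueError("r must be non-zero")
    n = len(xs)
    return math.log(sum(math.exp(r * x) for x in xs) / n) / r

print(exponential_mean([0.2, 0.6, 0.7], r=3))  # lies between min(xs) and max(xs)
```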
agha | Yes, Agha is another GitHub API library for Python 2.x development. Supports basic CRUD operations through the
official REST API v3:http://developer.github.com/v3/Example:fromaghaimportGitHubApiapi=GitHubApi("myuser","mypass")# Create a repositoryapi.create_repo({'name':'mytestrepo1','description':'Github Test 1','auto_init':True}# Edit the repo descriptionapi.edit_myrepo("mytestrepo1",{'description':'Another description for my repo'})# List my repositoriesforrepoinapi.get_myrepos():print"NAME:%(name)s, DESCRIPTION:%(description)s, URL:%(html_url)s"%repo# Delete the repoapi.delete_myrepo("mytestrepo1")# Show my profile informationprint"USER:%(login)s, NAME:%(name)s, EMAIL:%(email)s"%api.get_myprofile()RequirementsPython 2.6+Requests libraryAboutThis source code is available inhttps://github.com/mrsarm/python-aghaDeveloped by Mariano Ruiz <[email protected]>License: LGPL-3 (C) 2014 |
aghajani-learn-1 | No description available on PyPI. |
aghasher | aghasherAn implementation of the Anchor Graph Hashing algorithm (AGH-1), presented inHashing with Graphs(Liu et al. 2011).Dependenciesaghashersupports Python 2.7 and Python 3, with numpy and scipy. These should be linked with a BLAS implementation
(e.g., OpenBLAS, ATLAS, Intel MKL). Without being linked to BLAS, numpy/scipy will use a fallback that causes
PyAnchorGraphHasher to run over 50x slower.Installationaghasheris available on PyPI, the Python Package Index.$pipinstallaghasherHow To UseTo use aghasher, first import theaghashermodule.import aghasherTraining a ModelAn AnchorGraphHasher is constructed using thetrainmethod, which returns an AnchorGraphHasher and the hash bit
embedding for the training data.agh, H_train = aghasher.AnchorGraphHasher.train(X, anchors, num_bits, nn_anchors, sigma)AnchorGraphHasher.train takes 5 arguments:XAnn-by-dnumpy.ndarray with training data. The rows correspond tonobservations, and the columns
correspond toddimensions.anchorsAnm-by-dnumpy.ndarray with anchors.mis the total number of anchors. Rows correspond to anchors,
and columns correspond to dimensions. The dimensionality of the anchors much match the dimensionality of the training
data.num_bits(optional; defaults to 12) Number of hash bits for the embedding.nn_anchors(optional; defaults to 2) Number of nearest anchors that are used for approximating the neighborhood
structure.sigma(optional; defaults toNone) sigma for the Gaussian radial basis function that is used to determine
similarity between points. When sigma is specified asNone, the code will automatically set a value, depending on
the training data and anchors.Hashing Data with an AnchorGraphHasher ModelWith an AnchorGraphHasher object, which has variable nameaghin the preceding and following examples, hashing
out-of-sample data is done with the object'shashmethod.agh.hash(X)The hash method takes one argument:XAnn-by-dnumpy.ndarray with data. The rows correspond tonobservations, and the columns correspond toddimensions. The dimensionality of the data much match the dimensionality of the training data used to train the
AnchorGraphHasher.Since Python does not have a native bit vector data structure, the hash method returns ann-by-rnumpy.ndarray, wherenis the number of observations indata, andris the number of hash bits specified when the model was trained.
The elements of the returned array are boolean values that correspond to bits.Testing an AnchorGraphHasher ModelTesting is performed with the AnchorGraphHasher.test method.precision = AnchorGraphHasher.test(H_train, H_test, y_train, y_test, radius)AnchorGraphHasher.test takes 5 arguments:H_trainAnn-by-rnumpy.ndarray with the hash bit embedding corresponding to the training data. The rows
correspond to thenobservations, and the columns correspond to therhash bits.H_testAnm-by-rnumpy.ndarray with the hash bit embedding corresponding to the testing data. The rows
correspond to themobservations, and the columns correspond to therhash bits.y_trainAnn-by-1numpy.ndarray with the ground truth labels for the training data.y_testAnm-by-1numpy.ndarray with the ground truth labels for the testing data.radius(optional; defaults to 2) Hamming radius to use for calculating precision.TestsTests are intests/.# Run tests$python3-munittestdiscovertests-vDifferences from the Matlab Reference ImplementationThe code is structured differently than the Matlab reference implementation.The Matlab code implements an additional hashing method, hierarchical hashing (referred to as 2-AGH), an extension of
1-AGH that is not implemented here.There is one functional difference relative to the Matlab code. Ifsigmais specified (as opposed to being
auto-estimated), then for the same value ofsigma, the Matlab and Python code will produce different results. They
will produce the same results when the Matlabsigmais sqrt(2) times bigger than the manually specifiedsigmain the
Python code. This is because in the Gaussian RBF kernel, the Python code uses a 2 in the denominator of the exponent,
and the Matlab code does not. A 2 was included in the denominator of the Python code, as that is the canonical way to
use an RBF kernel.Licenseaghasherhas anMIT License.SeeLICENSE.ReferencesLiu, Wei, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. 2011. "Hashing with Graphs." In Proceedings of the 28th
International Conference on Machine Learning (ICML-11), edited by Lise Getoor and Tobias Scheffer, 1-8. ICML '11. New
York, NY, USA: ACM. |
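An end-to-end sketch that strings together the train/hash calls described in the aghasher entry above; the toy data and the k-means anchor selection (via scikit-learn) are assumptions added for illustration, not part of the aghasher README.

```python
# Train an AnchorGraphHasher and hash out-of-sample data, following the API described above.
import numpy as np
import aghasher
from sklearn.cluster import MiniBatchKMeans

X_train = np.random.rand(1000, 16)  # n-by-d training observations
X_test = np.random.rand(200, 16)    # out-of-sample observations, same dimensionality

# Pick m anchors; k-means centroids of the training data are one common choice.
anchors = MiniBatchKMeans(n_clusters=50, n_init=3).fit(X_train).cluster_centers_

agh, H_train = aghasher.AnchorGraphHasher.train(X_train, anchors, 12, 2)
H_test = agh.hash(X_test)           # boolean n-by-r array of hash bits
print(H_train.shape, H_test.shape)
```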
aghast | aghastAghast is a histogramming library that does not fill histograms and does not plot them. Its role is behind the scenes, to provide better communication between histogramming libraries.Specifically, it is a structured representation ofaggregated,histogram-likestatistics as sharable "ghasts." It has all of the "bells and whistles" often associated with plain histograms, such as number of entries, unbinned mean and standard deviation, bin errors, associated fit functions, profile plots, and even simple ntuples (needed for unbinned fits or machine learning applications).ROOThas all of these features;Numpyhas none of them.The purpose of aghast is to be an intermediate when converting ROOT histograms into Numpy, or vice-versa, or both of these intoBoost.Histogram,Physt,Pandas, etc. Without an intermediate representation, converting betweenNlibraries (to get the advantages of all) would equireN(N โ 1)/2conversion routines; with an intermediate representation, we only needN, and the mapping of feature to feature can be made explicit in terms of a common language.Furthermore, aghast is aFlatbuffersschema, so it can be deciphered inmany languages, withlazy, random-access, and uses asmall amount of memory. A collection of histograms, functions, and ntuples can be shared among processes as shared memory, used in remote procedure calls, processed incrementally in a memory-mapped file, or saved in files with future-proofschema evolution.Installation from packagesInstall aghast like any other Python package:pipinstallaghast# maybe with sudo or --user, or in virtualenv(Not on conda yet.)Manual installationAfter you git-clone this GitHub repository and ensure thatnumpyis installed, somehow:pipinstall"flatbuffers>=1.8.0"# for the flatbuffers runtime (with Numpy)cdpython# only implementation so far is in Pythonpythonsetup.pyinstall# to use it outside of this directoryNow you should be able toimport aghastorfrom aghast import *in Python.If you need to changeflatbuffers/aghast.fbs, you'll need to additionally:Getflatcto generate Python sources fromflatbuffers/aghast.fbs. I useconda install -c conda-forge flatbuffers. (Theflatcexecutable isnotincluded in the pipflatbufferspackage, and the Python runtime isnotincluded in the condaflatbufferspackage. They're disjoint.)In thepythondirectory, run./generate_flatbuffers.py(which callsflatcand does some post-processing).Every time you changeflatbuffers/aghast.fbs, re-run./generate_flatbuffers.py.DocumentationFull specification:IntroductionData typesCollectionHistogramAxisIntegerBinningRegularBinningRealIntervalRealOverflowHexagonalBinningEdgesBinningIrregularBinningCategoryBinningSparseRegularBinningFractionBinningPredicateBinningVariationBinningVariationAssignmentUnweightedCountsWeightedCountsInterpretedInlineBufferInterpretedInlineInt64BufferInterpretedInlineFloat64BufferInterpretedExternalBufferProfileStatisticsMomentsQuantilesModesExtremesStatisticFilterCovarianceParameterizedFunctionParameterEvaluatedFunctionBinnedEvaluatedFunctionNtupleColumnNtupleInstanceChunkColumnChunkPageRawInlineBufferRawExternalBufferMetadataDecorationTutorial examplesRun this tutorial on Binder.ConversionsThe main purpose of aghast is to move aggregated, histogram-like statistics (called "ghasts") from one framework to the next. 
This requires a conversion of high-level domain concepts.Consider the following example: in Numpy, a histogram is simply a 2-tuple of arrays with special meaningโbin contents, then bin edges.importnumpynumpy_hist=numpy.histogram(numpy.random.normal(0,1,int(10e6)),bins=80,range=(-5,5))numpy_hist(array([ 2, 5, 9, 15, 29, 49, 80, 104,
237, 352, 555, 867, 1447, 2046, 3037, 4562,
6805, 9540, 13529, 18584, 25593, 35000, 46024, 59103,
76492, 96441, 119873, 146159, 177533, 210628, 246316, 283292,
321377, 359314, 393857, 426446, 453031, 474806, 489846, 496646,
497922, 490499, 473200, 453527, 425650, 393297, 358537, 321099,
282519, 246469, 211181, 177550, 147417, 120322, 96592, 76665,
59587, 45776, 34459, 25900, 18876, 13576, 9571, 6662,
4629, 3161, 2069, 1334, 878, 581, 332, 220,
135, 65, 39, 26, 19, 15, 4, 4]),
array([-5. , -4.875, -4.75 , -4.625, -4.5 , -4.375, -4.25 , -4.125,
-4. , -3.875, -3.75 , -3.625, -3.5 , -3.375, -3.25 , -3.125,
-3. , -2.875, -2.75 , -2.625, -2.5 , -2.375, -2.25 , -2.125,
-2. , -1.875, -1.75 , -1.625, -1.5 , -1.375, -1.25 , -1.125,
-1. , -0.875, -0.75 , -0.625, -0.5 , -0.375, -0.25 , -0.125,
0. , 0.125, 0.25 , 0.375, 0.5 , 0.625, 0.75 , 0.875,
1. , 1.125, 1.25 , 1.375, 1.5 , 1.625, 1.75 , 1.875,
2. , 2.125, 2.25 , 2.375, 2.5 , 2.625, 2.75 , 2.875,
3. , 3.125, 3.25 , 3.375, 3.5 , 3.625, 3.75 , 3.875,
4. , 4.125, 4.25 , 4.375, 4.5 , 4.625, 4.75 , 4.875,
5. ]))We convert that into the aghast equivalent (a "ghast") with a connector (two functions:from_numpyandto_numpy).importaghastghastly_hist=aghast.from_numpy(numpy_hist)ghastly_hist<Histogram at 0x7f0dc88a9b38>This object is instantiated from a class structure built from simple pieces.ghastly_hist.dump()Histogram(
axis=[
Axis(binning=RegularBinning(num=80, interval=RealInterval(low=-5.0, high=5.0)))
],
counts=
UnweightedCounts(
counts=
InterpretedInlineInt64Buffer(
buffer=
[ 2 5 9 15 29 49 80 104 237 352
555 867 1447 2046 3037 4562 6805 9540 13529 18584
25593 35000 46024 59103 76492 96441 119873 146159 177533 210628
246316 283292 321377 359314 393857 426446 453031 474806 489846 496646
497922 490499 473200 453527 425650 393297 358537 321099 282519 246469
211181 177550 147417 120322 96592 76665 59587 45776 34459 25900
18876 13576 9571 6662 4629 3161 2069 1334 878 581
332 220 135 65 39 26 19 15 4 4])))Now it can be converted to a ROOT histogram with another connector.root_hist=aghast.to_root(ghastly_hist,"root_hist")root_hist<ROOT.TH1D object ("root_hist") at 0x55555e208ef0>importROOTcanvas=ROOT.TCanvas()root_hist.Draw()canvas.Draw()And Pandas with yet another connector.pandas_hist=aghast.to_pandas(ghastly_hist)pandas_histunweighted[-5.0, -4.875)2[-4.875, -4.75)5[-4.75, -4.625)9[-4.625, -4.5)15[-4.5, -4.375)29[-4.375, -4.25)49[-4.25, -4.125)80[-4.125, -4.0)104[-4.0, -3.875)237[-3.875, -3.75)352[-3.75, -3.625)555[-3.625, -3.5)867[-3.5, -3.375)1447[-3.375, -3.25)2046[-3.25, -3.125)3037[-3.125, -3.0)4562[-3.0, -2.875)6805[-2.875, -2.75)9540[-2.75, -2.625)13529[-2.625, -2.5)18584[-2.5, -2.375)25593[-2.375, -2.25)35000[-2.25, -2.125)46024[-2.125, -2.0)59103[-2.0, -1.875)76492[-1.875, -1.75)96441[-1.75, -1.625)119873[-1.625, -1.5)146159[-1.5, -1.375)177533[-1.375, -1.25)210628......[1.25, 1.375)211181[1.375, 1.5)177550[1.5, 1.625)147417[1.625, 1.75)120322[1.75, 1.875)96592[1.875, 2.0)76665[2.0, 2.125)59587[2.125, 2.25)45776[2.25, 2.375)34459[2.375, 2.5)25900[2.5, 2.625)18876[2.625, 2.75)13576[2.75, 2.875)9571[2.875, 3.0)6662[3.0, 3.125)4629[3.125, 3.25)3161[3.25, 3.375)2069[3.375, 3.5)1334[3.5, 3.625)878[3.625, 3.75)581[3.75, 3.875)332[3.875, 4.0)220[4.0, 4.125)135[4.125, 4.25)65[4.25, 4.375)39[4.375, 4.5)26[4.5, 4.625)19[4.625, 4.75)15[4.75, 4.875)4[4.875, 5.0)480 rows ร 1 columnsSerializationA ghast is also aFlatbuffersobject, which has amulti-lingual,random-access,small-footprintserialization:ghastly_hist.tobuffer()bytearray("\x04\x00\x00\x00\x90\xff\xff\xff\x10\x00\x00\x00\x00\x01\n\x00\x10\x00\x0c\x00\x0b\x00\x04
\x00\n\x00\x00\x00`\x00\x00\x00\x00\x00\x00\x01\x04\x00\x00\x00\x01\x00\x00\x00\x0c\x00\x00
\x00\x08\x00\x0c\x00\x0b\x00\x04\x00\x08\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x02\x08\x00
(\x00\x1c\x00\x04\x00\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x14\xc0\x00\x00\x00\x00\x00
\x00\x14@\x01\x00\x00\x00\x00\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x08
\x00\n\x00\t\x00\x04\x00\x08\x00\x00\x00\x0c\x00\x00\x00\x00\x02\x06\x00\x08\x00\x04\x00\x06
\x00\x00\x00\x04\x00\x00\x00\x80\x02\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00
\x00\x00\x00\x00\t\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x1d\x00\x00
\x00\x00\x00\x00\x001\x00\x00\x00\x00\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00h\x00\x00\x00
\x00\x00\x00\x00\xed\x00\x00\x00\x00\x00\x00\x00`\x01\x00\x00\x00\x00\x00\x00+\x02\x00\x00
\x00\x00\x00\x00c\x03\x00\x00\x00\x00\x00\x00\xa7\x05\x00\x00\x00\x00\x00\x00\xfe\x07\x00
\x00\x00\x00\x00\x00\xdd\x0b\x00\x00\x00\x00\x00\x00\xd2\x11\x00\x00\x00\x00\x00\x00\x95\x1a
\x00\x00\x00\x00\x00\x00D%\x00\x00\x00\x00\x00\x00\xd94\x00\x00\x00\x00\x00\x00\x98H\x00\x00
\x00\x00\x00\x00\xf9c\x00\x00\x00\x00\x00\x00\xb8\x88\x00\x00\x00\x00\x00\x00\xc8\xb3\x00\x00
\x00\x00\x00\x00\xdf\xe6\x00\x00\x00\x00\x00\x00\xcc*\x01\x00\x00\x00\x00\x00\xb9x\x01\x00
\x00\x00\x00\x00A\xd4\x01\x00\x00\x00\x00\x00\xef:\x02\x00\x00\x00\x00\x00}\xb5\x02\x00\x00
\x00\x00\x00\xc46\x03\x00\x00\x00\x00\x00,\xc2\x03\x00\x00\x00\x00\x00\x9cR\x04\x00\x00\x00
\x00\x00a\xe7\x04\x00\x00\x00\x00\x00\x92{\x05\x00\x00\x00\x00\x00\x81\x02\x06\x00\x00\x00
\x00\x00\xce\x81\x06\x00\x00\x00\x00\x00\xa7\xe9\x06\x00\x00\x00\x00\x00\xb6>\x07\x00\x00
\x00\x00\x00vy\x07\x00\x00\x00\x00\x00\x06\x94\x07\x00\x00\x00\x00\x00\x02\x99\x07\x00\x00
\x00\x00\x00\x03|\x07\x00\x00\x00\x00\x00p8\x07\x00\x00\x00\x00\x00\x97\xeb\x06\x00\x00\x00
\x00\x00\xb2~\x06\x00\x00\x00\x00\x00Q\x00\x06\x00\x00\x00\x00\x00\x89x\x05\x00\x00\x00\x00
\x00K\xe6\x04\x00\x00\x00\x00\x00\x97O\x04\x00\x00\x00\x00\x00\xc5\xc2\x03\x00\x00\x00\x00
\x00\xed8\x03\x00\x00\x00\x00\x00\x8e\xb5\x02\x00\x00\x00\x00\x00\xd9?\x02\x00\x00\x00\x00
\x00\x02\xd6\x01\x00\x00\x00\x00\x00Py\x01\x00\x00\x00\x00\x00y+\x01\x00\x00\x00\x00\x00\xc3
\xe8\x00\x00\x00\x00\x00\x00\xd0\xb2\x00\x00\x00\x00\x00\x00\x9b\x86\x00\x00\x00\x00\x00\x00
,e\x00\x00\x00\x00\x00\x00\xbcI\x00\x00\x00\x00\x00\x00\x085\x00\x00\x00\x00\x00\x00c%\x00
\x00\x00\x00\x00\x00\x06\x1a\x00\x00\x00\x00\x00\x00\x15\x12\x00\x00\x00\x00\x00\x00Y\x0c
\x00\x00\x00\x00\x00\x00\x15\x08\x00\x00\x00\x00\x00\x006\x05\x00\x00\x00\x00\x00\x00n\x03
\x00\x00\x00\x00\x00\x00E\x02\x00\x00\x00\x00\x00\x00L\x01\x00\x00\x00\x00\x00\x00\xdc\x00
\x00\x00\x00\x00\x00\x00\x87\x00\x00\x00\x00\x00\x00\x00A\x00\x00\x00\x00\x00\x00\x00\'\x00
\x00\x00\x00\x00\x00\x00\x1a\x00\x00\x00\x00\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x0f
\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00")print("Numpy size: ",numpy_hist[0].nbytes+numpy_hist[1].nbytes)tmessage=ROOT.TMessage()tmessage.WriteObject(root_hist)print("ROOT size: ",tmessage.Length())importpickleprint("Pandas size:",len(pickle.dumps(pandas_hist)))print("Aghast size: ",len(ghastly_hist.tobuffer()))Numpy size: 1288
ROOT size: 1962
Pandas size: 2984
Aghast size: 792 Aghast is generally foreseen as a memory format, like Apache Arrow, but for statistical aggregations. Like Arrow, it reduces the need to implement $N(N - 1)/2$ conversion functions among $N$ statistical libraries to just $N$ conversion functions. (See the figure on Arrow's website.) Translation of conventions: Aghast also intends to be as close to zero-copy as possible. This means that it must make graceful translations among conventions. Different histogramming libraries handle overflow bins in different ways: fromroot=aghast.from_root(root_hist) fromroot.axis[0].binning.dump() print("Bin contents length:",len(fromroot.counts.array)) RegularBinning(
num=80,
interval=RealInterval(low=-5.0, high=5.0),
overflow=RealOverflow(loc_underflow=BinLocation.below1, loc_overflow=BinLocation.above1))
Bin contents length: 82ghastly_hist.axis[0].binning.dump()print("Bin contents length:",len(ghastly_hist.counts.array))RegularBinning(num=80, interval=RealInterval(low=-5.0, high=5.0))
Bin contents length: 80And yet we want to be able to manipulate them as though these differences did not exist.sum_hist=fromroot+ghastly_histsum_hist.axis[0].binning.dump()print("Bin contents length:",len(sum_hist.counts.array))RegularBinning(
num=80,
interval=RealInterval(low=-5.0, high=5.0),
overflow=RealOverflow(loc_underflow=BinLocation.above1, loc_overflow=BinLocation.above2))
Bin contents length: 82The binning structure keeps track of the existence of underflow/overflow bins and where they are located.ROOT's convention is to put underflow before the normal bins (below1) and overflow after (above1), so that the normal bins are effectively 1-indexed.Boost.Histogram's convention is to put overflow after the normal bins (above1) and underflow after that (above2), so that underflow is accessed viamyhist[-1]in Numpy.Numpy histograms don't have underflow/overflow bins.Pandas could haveIntervalsthat extend to infinity.Aghast accepts all of these, so that it doesn't have to manipulate the bin contents buffer it receives, but knows how to deal with them if it has to combine histograms that follow different conventions.Binning typesAll the different axis types have an equivalent in aghast (and not all are single-dimensional).aghast.IntegerBinning(5,10).dump()aghast.RegularBinning(100,aghast.RealInterval(-5,5)).dump()aghast.HexagonalBinning(0,100,0,100,aghast.HexagonalBinning.cube_xy).dump()aghast.EdgesBinning([0.01,0.05,0.1,0.5,1,5,10,50,100]).dump()aghast.IrregularBinning([aghast.RealInterval(0,5),aghast.RealInterval(10,100),aghast.RealInterval(-10,10)],overlapping_fill=aghast.IrregularBinning.all).dump()aghast.CategoryBinning(["one","two","three"]).dump()aghast.SparseRegularBinning([5,3,-2,8,-100],10).dump()aghast.FractionBinning(error_method=aghast.FractionBinning.clopper_pearson).dump()aghast.PredicateBinning(["signal region","control region"]).dump()aghast.VariationBinning([aghast.Variation([aghast.Assignment("x","nominal")]),aghast.Variation([aghast.Assignment("x","nominal + sigma")]),aghast.Variation([aghast.Assignment("x","nominal - sigma")])]).dump()IntegerBinning(min=5, max=10)
RegularBinning(num=100, interval=RealInterval(low=-5.0, high=5.0))
HexagonalBinning(qmin=0, qmax=100, rmin=0, rmax=100, coordinates=HexagonalBinning.cube_xy)
EdgesBinning(edges=[0.01 0.05 0.1 0.5 1 5 10 50 100])
IrregularBinning(
intervals=[
RealInterval(low=0.0, high=5.0),
RealInterval(low=10.0, high=100.0),
RealInterval(low=-10.0, high=10.0)
],
overlapping_fill=IrregularBinning.all)
CategoryBinning(categories=['one', 'two', 'three'])
SparseRegularBinning(bins=[5 3 -2 8 -100], bin_width=10.0)
FractionBinning(error_method=FractionBinning.clopper_pearson)
PredicateBinning(predicates=['signal region', 'control region'])
VariationBinning(
variations=[
Variation(assignments=[
Assignment(identifier='x', expression='nominal')
]),
Variation(
assignments=[
Assignment(identifier='x', expression='nominal + sigma')
]),
Variation(
assignments=[
Assignment(identifier='x', expression='nominal - sigma')
])
])The meanings of these binning classes are given inthe specification, but many of them can be converted into one another, and converting toCategoryBinning(strings) often makes the intent clear.aghast.IntegerBinning(5,10).toCategoryBinning().dump()aghast.RegularBinning(10,aghast.RealInterval(-5,5)).toCategoryBinning().dump()aghast.EdgesBinning([0.01,0.05,0.1,0.5,1,5,10,50,100]).toCategoryBinning().dump()aghast.IrregularBinning([aghast.RealInterval(0,5),aghast.RealInterval(10,100),aghast.RealInterval(-10,10)],overlapping_fill=aghast.IrregularBinning.all).toCategoryBinning().dump()aghast.SparseRegularBinning([5,3,-2,8,-100],10).toCategoryBinning().dump()aghast.FractionBinning(error_method=aghast.FractionBinning.clopper_pearson).toCategoryBinning().dump()aghast.PredicateBinning(["signal region","control region"]).toCategoryBinning().dump()aghast.VariationBinning([aghast.Variation([aghast.Assignment("x","nominal")]),aghast.Variation([aghast.Assignment("x","nominal + sigma")]),aghast.Variation([aghast.Assignment("x","nominal - sigma")])]).toCategoryBinning().dump()CategoryBinning(categories=['5', '6', '7', '8', '9', '10'])
CategoryBinning(
categories=['[-5, -4)', '[-4, -3)', '[-3, -2)', '[-2, -1)', '[-1, 0)', '[0, 1)', '[1, 2)', '[2, 3)',
'[3, 4)', '[4, 5)'])
CategoryBinning(
categories=['[0.01, 0.05)', '[0.05, 0.1)', '[0.1, 0.5)', '[0.5, 1)', '[1, 5)', '[5, 10)', '[10, 50)',
'[50, 100)'])
CategoryBinning(categories=['[0, 5)', '[10, 100)', '[-10, 10)'])
CategoryBinning(categories=['[50, 60)', '[30, 40)', '[-20, -10)', '[80, 90)', '[-1000, -990)'])
CategoryBinning(categories=['pass', 'all'])
CategoryBinning(categories=['signal region', 'control region'])
CategoryBinning(categories=['x := nominal', 'x := nominal + sigma', 'x := nominal - sigma'])This technique can also clear up confusion about overflow bins.aghast.RegularBinning(5,aghast.RealInterval(-5,5),aghast.RealOverflow(loc_underflow=aghast.BinLocation.above2,loc_overflow=aghast.BinLocation.above1,loc_nanflow=aghast.BinLocation.below1)).toCategoryBinning().dump()CategoryBinning(
categories=['{nan}', '[-5, -3)', '[-3, -1)', '[-1, 1)', '[1, 3)', '[3, 5)', '[5, +inf]',
'[-inf, -5)'])Fancy binning typesYou might also be wondering aboutFractionBinning,PredicateBinning, andVariationBinning.FractionBinningis an axis of two bins: #passing and #total, #failing and #total, or #passing and #failing. Adding it to another axis effectively makes an "efficiency plot."h=aghast.Histogram([aghast.Axis(aghast.FractionBinning()),aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)))],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.array([[9,25,29,35,54,67,60,84,80,94],[99,119,109,109,95,104,102,106,112,122]]))))df=aghast.to_pandas(h)dfunweightedpass[-5.0, -4.0)9[-4.0, -3.0)25[-3.0, -2.0)29[-2.0, -1.0)35[-1.0, 0.0)54[0.0, 1.0)67[1.0, 2.0)60[2.0, 3.0)84[3.0, 4.0)80[4.0, 5.0)94all[-5.0, -4.0)99[-4.0, -3.0)119[-3.0, -2.0)109[-2.0, -1.0)109[-1.0, 0.0)95[0.0, 1.0)104[1.0, 2.0)102[2.0, 3.0)106[3.0, 4.0)112[4.0, 5.0)122df=df.unstack(level=0)dfunweightedallpass[-5.0, -4.0)999[-4.0, -3.0)11925[-3.0, -2.0)10929[-2.0, -1.0)10935[-1.0, 0.0)9554[0.0, 1.0)10467[1.0, 2.0)10260[2.0, 3.0)10684[3.0, 4.0)11280[4.0, 5.0)12294df["unweighted","pass"]/df["unweighted","all"][-5.0, -4.0) 0.090909
[-4.0, -3.0) 0.210084
[-3.0, -2.0) 0.266055
[-2.0, -1.0) 0.321101
[-1.0, 0.0) 0.568421
[0.0, 1.0) 0.644231
[1.0, 2.0) 0.588235
[2.0, 3.0) 0.792453
[3.0, 4.0) 0.714286
[4.0, 5.0) 0.770492
dtype: float64PredicateBinningmeans that each bin represents a predicate (if-then rule) in the filling procedure. Aghast doesn'thavea filling procedure, but filling-libraries can use this to encode relationships among histograms that a fitting-library can take advantage of, for combined signal-control region fits, for instance. It's possible for those regions to overlap: an input datum might satisfy more than one predicate, andoverlapping_filldetermines which bin(s) were chosen:first,last, orall.VariationBinningmeans that each bin represents a variation of one of the paramters used to calculate the fill-variables. This is used to determine sensitivity to systematic effects, by varying them and re-filling. In this kind of binning, the same input datum enters every bin.xdata=numpy.random.normal(0,1,int(1e6))sigma=numpy.random.uniform(-0.1,0.8,int(1e6))h=aghast.Histogram([aghast.Axis(aghast.VariationBinning([aghast.Variation([aghast.Assignment("x","nominal")]),aghast.Variation([aghast.Assignment("x","nominal + sigma")])])),aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)))],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.concatenate([numpy.histogram(xdata,bins=10,range=(-5,5))[0],numpy.histogram(xdata+sigma,bins=10,range=(-5,5))[0]]))))df=aghast.to_pandas(h)dfunweightedx := nominal[-5.0, -4.0)31[-4.0, -3.0)1309[-3.0, -2.0)21624[-2.0, -1.0)135279[-1.0, 0.0)341683[0.0, 1.0)341761[1.0, 2.0)135675[2.0, 3.0)21334[3.0, 4.0)1273[4.0, 5.0)31x := nominal + sigma[-5.0, -4.0)14[-4.0, -3.0)559[-3.0, -2.0)10814[-2.0, -1.0)84176[-1.0, 0.0)271999[0.0, 1.0)367950[1.0, 2.0)209479[2.0, 3.0)49997[3.0, 4.0)4815[4.0, 5.0)193df.unstack(level=0)unweightedx := nominalx := nominal + sigma[-5.0, -4.0)3114[-4.0, -3.0)1309559[-3.0, -2.0)2162410814[-2.0, -1.0)13527984176[-1.0, 0.0)341683271999[0.0, 1.0)341761367950[1.0, 2.0)135675209479[2.0, 3.0)2133449997[3.0, 4.0)12734815[4.0, 5.0)31193CollectionsYou can gather many objects (histograms, functions, ntuples) into aCollection, partly for convenience of encapsulating all of them in one object.aghast.Collection({"one":fromroot,"two":ghastly_hist}).dump()Collection(
objects={
'one': Histogram(
axis=[
Axis(
binning=
RegularBinning(
num=80,
interval=RealInterval(low=-5.0, high=5.0),
overflow=RealOverflow(loc_underflow=BinLocation.below1, loc_overflow=BinLocation.above1)),
statistics=[
Statistics(
moments=[
Moments(sumwxn=InterpretedInlineInt64Buffer(buffer=[1e+07]), n=0),
Moments(sumwxn=InterpretedInlineFloat64Buffer(buffer=[1e+07]), n=0, weightpower=1),
Moments(sumwxn=InterpretedInlineFloat64Buffer(buffer=[1e+07]), n=0, weightpower=2),
Moments(sumwxn=InterpretedInlineFloat64Buffer(buffer=[2468.31]), n=1, weightpower=1),
Moments(
sumwxn=InterpretedInlineFloat64Buffer(buffer=[1.00118e+07]),
n=2,
weightpower=1)
])
])
],
counts=
UnweightedCounts(
counts=
InterpretedInlineFloat64Buffer(
buffer=
[0.00000e+00 2.00000e+00 5.00000e+00 9.00000e+00 1.50000e+01 2.90000e+01
4.90000e+01 8.00000e+01 1.04000e+02 2.37000e+02 3.52000e+02 5.55000e+02
8.67000e+02 1.44700e+03 2.04600e+03 3.03700e+03 4.56200e+03 6.80500e+03
9.54000e+03 1.35290e+04 1.85840e+04 2.55930e+04 3.50000e+04 4.60240e+04
5.91030e+04 7.64920e+04 9.64410e+04 1.19873e+05 1.46159e+05 1.77533e+05
2.10628e+05 2.46316e+05 2.83292e+05 3.21377e+05 3.59314e+05 3.93857e+05
4.26446e+05 4.53031e+05 4.74806e+05 4.89846e+05 4.96646e+05 4.97922e+05
4.90499e+05 4.73200e+05 4.53527e+05 4.25650e+05 3.93297e+05 3.58537e+05
3.21099e+05 2.82519e+05 2.46469e+05 2.11181e+05 1.77550e+05 1.47417e+05
1.20322e+05 9.65920e+04 7.66650e+04 5.95870e+04 4.57760e+04 3.44590e+04
2.59000e+04 1.88760e+04 1.35760e+04 9.57100e+03 6.66200e+03 4.62900e+03
3.16100e+03 2.06900e+03 1.33400e+03 8.78000e+02 5.81000e+02 3.32000e+02
2.20000e+02 1.35000e+02 6.50000e+01 3.90000e+01 2.60000e+01 1.90000e+01
1.50000e+01 4.00000e+00 4.00000e+00 0.00000e+00]))),
'two': Histogram(
axis=[
Axis(binning=RegularBinning(num=80, interval=RealInterval(low=-5.0, high=5.0)))
],
counts=
UnweightedCounts(
counts=
InterpretedInlineInt64Buffer(
buffer=
[ 2 5 9 15 29 49 80 104 237 352
555 867 1447 2046 3037 4562 6805 9540 13529 18584
25593 35000 46024 59103 76492 96441 119873 146159 177533 210628
246316 283292 321377 359314 393857 426446 453031 474806 489846 496646
497922 490499 473200 453527 425650 393297 358537 321099 282519 246469
211181 177550 147417 120322 96592 76665 59587 45776 34459 25900
18876 13576 9571 6662 4629 3161 2069 1334 878 581
332 220 135 65 39 26 19 15 4 4])))
})Not only for convenience:you can also defineanAxisin theCollectionto subdivide all contents by thatAxis. For instance, you can make a collection of qualitatively different histograms all have a signal and control region withPredicateBinning, or all have systematic variations withVariationBinning.It is not necessary to rely on naming conventions to communicate this information from filler to fitter.Histogram โ histogram conversionsI said in the introduction that aghast does not fill histograms and does not plot histogramsโthe two things data analysts are expecting to do. These would be done by user-facing libraries.Aghast does, however, transform histograms into other histograms, and not just among formats. You can combine histograms with+. In addition to adding histogram counts, it combines auxiliary statistics appropriately (if possible).h1=aghast.Histogram([aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)),statistics=[aghast.Statistics(moments=[aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([10])),n=1),aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([20])),n=2)],quantiles=[aghast.Quantiles(aghast.InterpretedInlineBuffer.fromarray(numpy.array([30])),p=0.5)],mode=aghast.Modes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([40]))),min=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([50]))),max=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([60]))))])],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.arange(10))))h2=aghast.Histogram([aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)),statistics=[aghast.Statistics(moments=[aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([100])),n=1),aghast.Moments(aghast.InterpretedInlineBuffer.fromarray(numpy.array([200])),n=2)],quantiles=[aghast.Quantiles(aghast.InterpretedInlineBuffer.fromarray(numpy.array([300])),p=0.5)],mode=aghast.Modes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([400]))),min=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([500]))),max=aghast.Extremes(aghast.InterpretedInlineBuffer.fromarray(numpy.array([600]))))])],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.arange(100,200,10))))(h1+h2).dump()Histogram(
axis=[
Axis(
binning=RegularBinning(num=10, interval=RealInterval(low=-5.0, high=5.0)),
statistics=[
Statistics(
moments=[
Moments(sumwxn=InterpretedInlineInt64Buffer(buffer=[110]), n=1),
Moments(sumwxn=InterpretedInlineInt64Buffer(buffer=[220]), n=2)
],
min=Extremes(values=InterpretedInlineInt64Buffer(buffer=[50])),
max=Extremes(values=InterpretedInlineInt64Buffer(buffer=[600])))
])
],
counts=
UnweightedCounts(
counts=InterpretedInlineInt64Buffer(buffer=[100 111 122 133 144 155 166 177 188 199])))The corresponding moments ofh1andh2were matched and added, quantiles and modes were dropped (no way to combine them), and the correct minimum and maximum were picked; the histogram contents were added as well.Another important histogram โ histogram conversion is axis-reduction, which can take three forms:slicing an axis, either dropping the eliminated bins or adding them to underflow/overflow (if possible, depends on binning type);rebinning by combining neighboring bins;projecting out an axis, removing it entirely, summing over all existing bins.All of these operations use a Pandas-inspiredloc/ilocsyntax.h=aghast.Histogram([aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)))],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.array([0,10,20,30,40,50,60,70,80,90]))))locslices in the data's coordinate system.1.5rounds up to bin index6. The first five bins get combined into an overflow bin:150 = 10 + 20 + 30 + 40 + 50.h.loc[1.5:].dump()Histogram(
axis=[
Axis(
binning=
RegularBinning(
num=4,
interval=RealInterval(low=1.0, high=5.0),
overflow=
RealOverflow(
loc_underflow=BinLocation.above1,
minf_mapping=RealOverflow.missing,
pinf_mapping=RealOverflow.missing,
nan_mapping=RealOverflow.missing)))
],
counts=UnweightedCounts(counts=InterpretedInlineInt64Buffer(buffer=[60 70 80 90 150])))ilocslices by bin index number.h.iloc[6:].dump()Histogram(
axis=[
Axis(
binning=
RegularBinning(
num=4,
interval=RealInterval(low=1.0, high=5.0),
overflow=
RealOverflow(
loc_underflow=BinLocation.above1,
minf_mapping=RealOverflow.missing,
pinf_mapping=RealOverflow.missing,
nan_mapping=RealOverflow.missing)))
],
counts=UnweightedCounts(counts=InterpretedInlineInt64Buffer(buffer=[60 70 80 90 150])))Slices have astart,stop, andstep(start:stop:step). Thestepparameter rebins:h.iloc[::2].dump()Histogram(
axis=[
Axis(binning=RegularBinning(num=5, interval=RealInterval(low=-5.0, high=5.0)))
],
counts=UnweightedCounts(counts=InterpretedInlineInt64Buffer(buffer=[10 50 90 130 170])))Thus, you can slice and rebin as part of the same operation.Projecting uses the same mechanism, except thatNonepassed as an axis's slice projects it.h2=aghast.Histogram([aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5))),aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5)))],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(numpy.arange(100))))h2.iloc[:,None].dump()Histogram(
axis=[
Axis(binning=RegularBinning(num=10, interval=RealInterval(low=-5.0, high=5.0)))
],
counts=
UnweightedCounts(
counts=InterpretedInlineInt64Buffer(buffer=[45 145 245 345 445 545 645 745 845 945])))Thus, all three axis reduction operations can be performed in a single syntax.In general, an n-dimensional ghastly histogram can be sliced like an n-dimensional Numpy array. This includes integer and boolean indexing (though that necessarily changes the binning toIrregularBinning).h.iloc[[4,3,6,7,1]].dump()Histogram(
axis=[
Axis(
binning=
IrregularBinning(
intervals=[
RealInterval(low=-1.0, high=0.0),
RealInterval(low=-2.0, high=-1.0),
RealInterval(low=1.0, high=2.0),
RealInterval(low=2.0, high=3.0),
RealInterval(low=-4.0, high=-3.0)
]))
],
counts=UnweightedCounts(counts=InterpretedInlineInt64Buffer(buffer=[40 30 60 70 10])))h.iloc[[True,False,True,False,True,False,True,False,True,False]].dump()Histogram(
axis=[
Axis(
binning=
IrregularBinning(
intervals=[
RealInterval(low=-5.0, high=-4.0),
RealInterval(low=-3.0, high=-2.0),
RealInterval(low=-1.0, high=0.0),
RealInterval(low=1.0, high=2.0),
RealInterval(low=3.0, high=4.0)
]))
],
counts=UnweightedCounts(counts=InterpretedInlineInt64Buffer(buffer=[0 20 40 60 80])))locfor numerical binnings acceptsa real numbera real-valued sliceNonefor projectionellipsis (...)locfor categorical binnings acceptsa stringan iterable of stringsanemptysliceNonefor projectionellipsis (...)ilocacceptsan integeran integer-valued sliceNonefor projectioninteger-valued array-likeboolean-valued array-likeellipsis (...)Bin counts โ NumpyFrequently, one wants to extract bin counts from a histogram. Theloc/ilocsyntax above createshistogramsfromhistograms, not bin counts.A histogram'scountsproperty has a slice syntax.allcounts=numpy.arange(12)*numpy.arange(12)[:,None]# multiplication tableallcounts[10,:]=-999# underflowsallcounts[11,:]=999# overflowsallcounts[:,0]=-999# underflowsallcounts[:,1]=999# overflowsprint(allcounts)[[-999 999 0 0 0 0 0 0 0 0 0 0]
[-999 999 2 3 4 5 6 7 8 9 10 11]
[-999 999 4 6 8 10 12 14 16 18 20 22]
[-999 999 6 9 12 15 18 21 24 27 30 33]
[-999 999 8 12 16 20 24 28 32 36 40 44]
[-999 999 10 15 20 25 30 35 40 45 50 55]
[-999 999 12 18 24 30 36 42 48 54 60 66]
[-999 999 14 21 28 35 42 49 56 63 70 77]
[-999 999 16 24 32 40 48 56 64 72 80 88]
[-999 999 18 27 36 45 54 63 72 81 90 99]
[-999 999 -999 -999 -999 -999 -999 -999 -999 -999 -999 -999]
[-999 999 999 999 999 999 999 999 999 999 999 999]]h2=aghast.Histogram([aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5),aghast.RealOverflow(loc_underflow=aghast.RealOverflow.above1,loc_overflow=aghast.RealOverflow.above2))),aghast.Axis(aghast.RegularBinning(10,aghast.RealInterval(-5,5),aghast.RealOverflow(loc_underflow=aghast.RealOverflow.below2,loc_overflow=aghast.RealOverflow.below1)))],aghast.UnweightedCounts(aghast.InterpretedInlineBuffer.fromarray(allcounts)))print(h2.counts[:,:])[[ 0 0 0 0 0 0 0 0 0 0]
[ 2 3 4 5 6 7 8 9 10 11]
[ 4 6 8 10 12 14 16 18 20 22]
[ 6 9 12 15 18 21 24 27 30 33]
[ 8 12 16 20 24 28 32 36 40 44]
[10 15 20 25 30 35 40 45 50 55]
[12 18 24 30 36 42 48 54 60 66]
[14 21 28 35 42 49 56 63 70 77]
[16 24 32 40 48 56 64 72 80 88]
[18 27 36 45 54 63 72 81 90 99]]To get the underflows and overflows, set the slice extremes to-infand+inf.print(h2.counts[-numpy.inf:numpy.inf,:])[[-999 -999 -999 -999 -999 -999 -999 -999 -999 -999]
[ 0 0 0 0 0 0 0 0 0 0]
[ 2 3 4 5 6 7 8 9 10 11]
[ 4 6 8 10 12 14 16 18 20 22]
[ 6 9 12 15 18 21 24 27 30 33]
[ 8 12 16 20 24 28 32 36 40 44]
[ 10 15 20 25 30 35 40 45 50 55]
[ 12 18 24 30 36 42 48 54 60 66]
[ 14 21 28 35 42 49 56 63 70 77]
[ 16 24 32 40 48 56 64 72 80 88]
[ 18 27 36 45 54 63 72 81 90 99]
[ 999 999 999 999 999 999 999 999 999 999]]print(h2.counts[:,-numpy.inf:numpy.inf])[[-999 0 0 0 0 0 0 0 0 0 0 999]
[-999 2 3 4 5 6 7 8 9 10 11 999]
[-999 4 6 8 10 12 14 16 18 20 22 999]
[-999 6 9 12 15 18 21 24 27 30 33 999]
[-999 8 12 16 20 24 28 32 36 40 44 999]
[-999 10 15 20 25 30 35 40 45 50 55 999]
[-999 12 18 24 30 36 42 48 54 60 66 999]
[-999 14 21 28 35 42 49 56 63 70 77 999]
[-999 16 24 32 40 48 56 64 72 80 88 999]
[-999 18 27 36 45 54 63 72 81 90 99 999]]Also note that the underflows are now all below the normal bins and overflows are now all above the normal bins, regardless of how they were arranged in the ghast. This allows analysis code to be independent of histogram source.Other typesAghast can attach fit functions to histograms, can store standalone functions, such as lookup tables, and can store ntuples for unweighted fits or machine learning.AcknowledgementsSupport for this work was provided by NSF cooperative agreement OAC-1836650 (IRIS-HEP), grant OAC-1450377 (DIANA/HEP) and PHY-1520942 (US-CMS LHC Ops).Thanks especially to the gracious help ofaghast contributors! |
agh-distributions | No description available on PyPI. |
ag-helper | ...SafeExecutor: The SafeExecutor decorator allows you to quickly execute a function or method within a try/except block.
The default parameter allows you to override the return value in case of an error.
The logger parameter takes a logger object and calls its logger.error(error) method if an error occurs. (A minimal sketch of how such a wrapper can be built follows the examples below.) Examples: from aghelper.utils import SafeExecutor
from logging import getLogger, StreamHandler, Formatter
logger = getLogger(__name__)
formatter = Formatter("%(asctime)s - %(levelname)s - %(message)s")
sh = StreamHandler()
sh.setFormatter(formatter)
logger.addHandler(sh)
@SafeExecutor
def foo():
return 1/0
class FooClass:
@SafeExecutor
def foo_method(self):
return 1/0
@classmethod
@SafeExecutor
def foo_classmethod(cls):
return 1/0
@staticmethod
@SafeExecutor
def foo_staticmethod():
return 1/0
@SafeExecutor(default=0)
def change_return(self):
return 1/0
@SafeExecutor(logger=logger)
def write_to_log(self):
return 1/0
print(f"1. Func result: {foo()}")
print(f"2. Method foo result: {FooClass().foo_method()}")
print(f"3. Class method foo result: {FooClass.foo_classmethod()}")
print(f"5. Static method foo result: {FooClass.foo_staticmethod()}")
print(f"6. Set default return: {FooClass().change_return()}")
print(f"7. Write error to log: {FooClass().write_to_log()}")
>>> 1. Func result: None
>>> 2. Method foo result: None
>>> 3. Class method foo result: None
>>> 5. Static method foo result: None
>>> 6. Set default return: 0
>>> 7. Write error to log: None
>>> 2023-01-13 17:06:55,437 - ERROR - Execute error: division by zero |
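A minimal sketch of how a wrapper like this can be built, purely for illustration: the decorator below is hypothetical (safe_executor is not aghelper's actual source), but it shows the try/except pattern plus the default and logger parameters described above.

```python
import functools


def safe_executor(func=None, *, default=None, logger=None):
    """Illustrative stand-in for a SafeExecutor-style decorator."""
    def decorate(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as error:
                if logger is not None:
                    logger.error(error)  # mirrors the logger.error(error) call described above
                return default  # overridable return value on error
        return wrapper
    # Support both bare @safe_executor and @safe_executor(default=..., logger=...)
    return decorate(func) if func is not None else decorate


@safe_executor(default=0)
def divide(a, b):
    return a / b


print(divide(1, 0))  # -> 0 instead of raising ZeroDivisionError
```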
aghplctools | No description available on PyPI. |
agh-vqis | agh-vqisA Python package computing a set of image quality indicators (IQIs) for a given input video.Following IQIs are included in the package:A set of 15 Video Quality Indicators (VQIs) developed by the team from AGH. See the following website for more information:https://qoe.agh.edu.pl/indicators/.Our Python reimplementation of the Colourfulness IQI. Seethispaper for more information regarding this IQI.Blur Amount IQI. Seepackage's Python Package Index web pagefor more information.UGC IQI (User-generated content).Authors: Jakub Nawaลa <[email protected]>, Filip Korus <[email protected]>Requirementsffmpeg - version >= 4.4.2Python - version >= 3.9Installationpipinstallagh_vqisUsageSingle multimedia file:fromagh_vqisimportprocess_single_mm_file,VQIsfrompathlibimportPathif__name__=='__main__':process_single_mm_file(Path('/path/to/single/video.mp4'))Folder with multimedia files:fromagh_vqisimportprocess_folder_w_mm_files,VQIsfrompathlibimportPathif__name__=='__main__':process_folder_w_mm_files(Path('/path/to/multimedia/folder/'))Options parameter - in eitherprocess_single_mm_fileandprocess_folder_w_mm_filesfunction options could be provided as an optional argument. It is being passed to function as a dictionary like below.process_single_mm_file(Path('/path/to/single/video.mp4'),options={VQIs.contrast:False,# disable contrast indicatorVQIs.colourfulness:False,# disable colourfulness indicator})How to disable/enable indicators to count? Every indicator isenabled by default exceptblue_amountdue to long computing time. To disable one of following indicators(blockiness, SA, letterbox, pillarbox, blockloss, blur, TA, blackout, freezing, exposure, contrast, interlace, noise, slice, flickering, colourfulness, ugc)passVQIs.indicator_name:Falseto options dictionary. Whereas toenableblur_amountindicator passTruevalue.agh-vqis package could be used from command line interface as well. For example:python3-magh_vqis/path/to/single/movie.mp4# will run VQIS for single fileorpython3-magh_vqis/path/to/multimedia/folder/# will run VQIS for folderWhereas this command will display help:$python3-magh_vqis-hSupported multimedia files:mp4,y4m,mov,mkv,avi,ts,webm,jpg,jpeg,png,gif,bmp.First row of the output CSV file contains header with image quality indicators (IQIs) names, whereas each row below the header represents single video frame from the input video file. Each column provides a numerical result as returned by a given IQI when applied to the respective frame.Cast chosen indicators for different resolutions (experimental). 
For example: to cast Blur to 1440p and Blockiness to 2160p you should pass two additional lines inoptionsdictionary like below.fromagh_vqisimportprocess_single_mm_file,CastVQI,DestResolutionfrompathlibimportPathif__name__=='__main__':process_single_mm_file(Path('/path/to/single/video.mp4'),options={CastVQI.blur:DestResolution.p1440,CastVQI.blockiness:DestResolution.p2160})Available casts (with percentage of correctness)Blockiness:source resolutiondestination resolutioncorrectness1080p1440p99.80%1080p2160p99.72%1440p2160p99.68%2160p1440p99.81%Blur:source resolutiondestination resolutioncorrectness240p360p93.91%240p480p87.78%360p240p94.28%360p480p98.08%360p720p92.51%480p240p87.68%480p360p98.05%480p720p97.35%720p360p92.69%720p480p97.31%720p1080p80.95%1080p1440p99.12%1080p2160p93.41%1440p1080p99.07%1440p2160p96.24%2160p1080p93.59%2160p1440p96.39%Exposure(bri):source resolutiondestination resolutioncorrectness240p360p97.75%240p480p94.89%240p720p89.71%360p240p97.77%360p480p98.68%360p720p94.83%360p1080p80.37%480p240p94.82%480p360p98.68%480p720p97.80%480p1080p82.90%480p1440p80.40%720p240p89.32%720p360p94.87%720p480p97.85%720p1080p86.99%720p1440p84.66%720p2160p82.34%1080p360p80.71%1080p480p83.41%1080p720p86.88%1080p1440p98.78%1080p2160p96.91%1440p480p80.90%1440p720p84.85%1440p1080p98.83%1440p2160p99.05%2160p720p82.89%2160p1080p96.93%2160p1440p98.98%Contrast:source resolutiondestination resolutioncorrectness240p360p99.83%240p480p99.69%240p720p99.55%240p1080p87.94%240p1440p87.94%240p2160p87.91%360p240p99.85%360p480p99.97%360p720p99.93%360p1080p88.36%360p1440p88.34%360p2160p88.36%480p240p99.73%480p360p99.97%480p720p99.98%480p1080p88.61%480p1440p88.67%480p2160p88.67%720p240p99.61%720p360p99.93%720p480p99.98%720p1080p88.45%720p1440p88.46%720p2160p88.46%1080p240p87.12%1080p360p87.71%1080p480p87.82%1080p720p87.91%1080p1440p99.99%1080p2160p99.99%1440p240p87.13%1440p360p87.72%1440p480p87.85%1440p720p87.92%1440p1080p99.99%1440p2160p100.0%2160p240p87.11%2160p360p87.72%2160p480p87.85%2160p720p87.91%2160p1080p99.99%2160p1440p100.0%Interlace:source resolutiondestination resolutioncorrectness240p360p86.93%360p240p87.19%360p480p88.32%480p360p87.22%480p720p90.80%720p480p91.21%720p1080p82.65%1080p1440p81.35%Noise:source resolutiondestination resolutioncorrectness240p360p88.85%360p240p88.08%360p480p86.33%480p360p85.94%480p720p88.71%720p480p88.12%1080p1440p94.84%1080p2160p80.34%1440p1080p92.28%1440p2160p87.24%2160p1440p88.54%LicenseTheagh-vqisPython package is provided via theEvaluation License Agreement. |
agi | AGIApparently AGI will be the biggest breakthrough in like forever.Suppose it is true that AGI can be used to solve everything. Then GDP will tend towards infinity if AGI is achieved. AGI will probably be written in Python because network effects and lock-in (sorry Julia and Swift). Therefore the PyPi namespace for AGI will be infinitely valuable.If you wish to acquire this namespace, please send me infinite $$$ (bitcoin not accepted), and we'll see what we can do!Cheers. |
agic | Failed to fetch description. HTTP Status Code: 404 |
agify | Asynchronous Python wrapper for (,,)A simple API for predicting the age, gender, and country of a person by their name.The API is free for up to 1000 names/day. No sign up or API key needed. So go ahead and try it out.Instalationpip install agifyUsage example:async version:fromagifyimportAsyncNameAPIg=AsyncNameAPI(["Igor","Alex"],mode="*")print(asyncio.run(g.get_names_info()))# ->{'Alex':{'age':45,'count':1114390,'country':[{'country_id':'CZ','probability':0.082},{'country_id':'UA','probability':0.045},{'country_id':'RO','probability':0.033},{'country_id':'RU','probability':0.031},{'country_id':'IL','probability':0.028}],'gender':'male','probability':0.96},'Igor':{'age':49,'count':168019,'country':[{'country_id':'UA','probability':0.169},{'country_id':'RS','probability':0.113},{'country_id':'RU','probability':0.093},{'country_id':'HR','probability':0.084},{'country_id':'SK','probability':0.062}],'gender':'male','probability':1.0}}a=AsyncNameAPI(["Ivan"],"gender")print(asyncio.run(a.get_names_info()))# ->{'Ivan':{'count':425630,'gender':'male','probability':1.0}}a=AsyncNameAPI()print(asyncio.run(a.get_limit_remaining()))# ->987usual version:fromagifyimportNameAPIg=NameAPI(["Igor","Alex"],mode="*")print(g.get_names_info())# ->{'Alex':{'age':45,'count':1114390,'country':[{'country_id':'CZ','probability':0.082},{'country_id':'UA','probability':0.045},{'country_id':'RO','probability':0.033},{'country_id':'RU','probability':0.031},{'country_id':'IL','probability':0.028}],'gender':'male','probability':0.96},'Igor':{'age':49,'count':168019,'country':[{'country_id':'UA','probability':0.169},{'country_id':'RS','probability':0.113},{'country_id':'RU','probability':0.093},{'country_id':'HR','probability':0.084},{'country_id':'SK','probability':0.062}],'gender':'male','probability':1.0}}a=NameAPI(["Ivan"],"gender")print(a.get_names_info())# ->{'Ivan':{'count':425630,'gender':'male','probability':1.0}}a=NameAPI()print(a.get_limit_remaining())# ->987 |
agile | Meta-package for Python with tools for an agile development workflow. Add it to your project: pip install agile. What is in it? mock: The mock library is the
easiest and most expressive way to mock in python.example: mocking I/O calls:# cool-git-project/my_cool_library/core.pyimportioimportjsonclassJSONDatabase(object):def__init__(self,filename=None,data={}):self.filename=filenameself.data=datadefstate_to_json(self):returnjson.dumps(self.data)defsave(self):# open filefd=io.open(self.filename,'wb')fd.write(self.state_to_json())fd.close()# cool-git-project/tests/unit/test_core.pyfrommockimportpatchfrommy_cool_library.coreimportJSONDatabase@patch('my_cool_library.core.io')@patch('my_cool_library.core.JSONDatabase.state_to_json')deftest_json_database_save(state_to_json,io):("JSONDatabase.save() should open the database file, ""and write the latest json state of the data")# Given that the call to io.open returns a mockmocked_fd=io.open.return_value# And that I create an instance of JSONDatabase with some datajdb=JSONDatabase('my-database.json',data={'foo':'bar'})# When I call .save()jdb.save()# Then the file descriptor should have been opened in write mode,# and pointing to the right fileio.open.assert_called_once_with('my-database.json','wb')# And the returned file descriptor should have been used# to write the return value from state_to_jsonmocked_fd.write.assert_called_once_with(state_to_json.return_value)# And then the file descriptor should have been closedmocked_fd.close.assert_called_once_with()The mock documentation can be foundheresureSure modifies the all the python objects in memory, adding a special
property should, which allows you to test aspects of the given
object. Let's see it in practice. Still considering the project from the mock example above, now let's
test that state_to_json returns a json string. def test_json_database_state_to_json(): ("JSONDatabase.state_to_json() should return a valid json string") # Given that I have an instance of the database containing some data jdb = JSONDatabase(data={'name': 'Foo Bar'}) # When I call .state_to_json result = jdb.state_to_json() # Then it should return a valid JSON result.should.equal('{"name": "Foo Bar"}') The sure documentation is available here. nose + coverage + rednose: nosetests -vsx --rednose --with-coverage --cover-package=my_cool_library tests/unit # or nosetests -vsx --rednose --with-coverage --cover-package=my_cool_library tests/functional Nose is a great test runner that recursively scans for files that start with test_ and end with .py. It supports plugins, and agile installs
two cool plugins: coverage: coverage is a module that collects test coverage data so that nose can
show a summary of which lines of Python code don't have test coverage. rednose: Rednose is a plugin that prints a prettier output when running the
tests, and shows bad things in red, which highlights problems and makes
it easier to see where the problem is, pretty awesome. Moreover, as long as you write single-line docstrings to describe
your tests, rednose will show the whole sentence, pretty and with no
chops. JSONDatabase.save() should open the database file, and write the latest json state of the data ... passed JSONDatabase.state_to_json() should return a valid json string ... passed ----------------------------------------------------------------------------- 2 tests run in 0.0 seconds (2 tests passed) ps.: nose actually matches files that contain test in the name and
can also find TestCase classes, but I recommend using function-based
tests, for clarity, expressiveness and to enforce simplicity. We
developers tend to add too much logic to setup and teardown functions
when writing class-based tests. Gists: creating a basic python test infrastructure: mkdir -p tests/{unit,functional} touch tests/{unit,functional,}/__init__.py printf 'import sure\nsure\n' > tests/unit/__init__.py printf 'import sure\nsure\n' > tests/functional/__init__.py Now go ahead and add a unit test file; try to name your test file such
that it resembles the module being tested. For example, let's say you are
testing my_cool_library/engine.py, you could create a test file like
this: printf "# -*- coding: utf-8 -*-\n\n" > tests/unit/test_engine.py
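Assuming a hypothetical my_cool_library/engine.py with a parse() helper (both names are made up here, only to illustrate the naming convention above), a first function-based test in tests/unit/test_engine.py could look like this, using the same sure-style assertions shown earlier:

```python
# tests/unit/test_engine.py
# engine.parse() is a made-up function used only to illustrate the convention;
# the `import sure` in tests/__init__.py (see the gist above) activates .should
from my_cool_library import engine


def test_engine_parse_key_value_line():
    ("engine.parse() should turn a 'key=value' line into a dict")

    result = engine.parse("name=Foo Bar")

    result.should.equal({"name": "Foo Bar"})
```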
agile-analytics | # jira-agile-extractor[](https://travis-ci.org/cmheisel/agile-analytics)[](https://coveralls.io/github/cmheisel/agile-analytics?branch=master)[](http://waffle.io/cmheisel/agile-analytics)Extract data about items from JIRA, output raw data and interesting reports## ArchitectureThe agile extractor is composed of several different components. Fetchers gather data, which is fed to one or more analyzer/reporters/writer combos.### Components#### FetchersFetchers are responsible for getting raw data about tickets from agile data sources (JIRA, Trello, etc.)They depend on fetcher specific configuration including things like API end points, credentials and search criteria.They produce a set of AgileTicket objects with a known interface.#### AnalyzersAnalyzers take in a set of AgileTicket objects and an analysis configuration and return a set of AnalyzedTicket objects that contain the original AgileTicket as well as additional data calculated in light of the analysis context.For example, a CycleTime analyzer would look for a start_state and an end_state in the configuration, and calculate the days between those and store it as cycle_time on the AnalyzedTicket object.#### ReportersReporters take in a set of AnalyzedTicket objects and a report configuration and return a Report object. It has two standard attributes:* Table - A representation of the report as a 2 dimensional table, provides column headers, row labels, and values for each row/column combo* Summary - A key/value store of report specific data#### WritersWriters take in a Report and a WriterConfig can write it out a particular source. Examples:* CSV to standout* CSV to a file* Google spreadsheet* Plotly### Diagram```+-----------> Reporter: Distribution| title=Cycle Time Distribution| start_date=1/1/2015| end_date=3/31/2015| field=cycle_time|+-----------> Reporter: Throughput| title=Weekly Throughput| start_date=1/1/2015| end_date=3/31/2015| period=weekly|||+-----------------> Analyzer: Cycle Time +| start_state=Backlog| end_state=Deployed| issue_types=Story|Fetcher | +-----------> Reporter: Throughputsource=JIRA +----------------> Analyzer: Defect + title=Escaped Defectsfilter=1111 | defect_types=Bug,Incident start_date=1/1/2015auth=user,pass | end_date=3/31/2015||+----------------> Analyzer: Cycle Time +-----------> Reporter: Throughputstart_state=Analysis title=Weekly Analysis Throughputend_state=Dev start_date=1/1/2015end_date=3/31/2015period=weekly``` |
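The README above names the component contract (Fetchers produce AgileTicket objects, Analyzers enrich them, Reporters summarize, Writers output) but shows no code. The sketch below is only a rough illustration of that data flow under assumed names; it is not agile-analytics' real API.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative stand-ins, not the project's actual classes.

@dataclass
class AgileTicket:
    key: str
    started: datetime
    finished: datetime

@dataclass
class AnalyzedTicket:
    ticket: AgileTicket
    cycle_time: int  # days between start_state and end_state

def cycle_time_analyzer(tickets):
    """Analyzer: enrich raw tickets with a cycle_time measurement."""
    return [AnalyzedTicket(t, (t.finished - t.started).days) for t in tickets]

def distribution_reporter(analyzed):
    """Reporter: summarize cycle times as a value -> count table."""
    table = {}
    for a in analyzed:
        table[a.cycle_time] = table.get(a.cycle_time, 0) + 1
    return table

def csv_writer(report):
    """Writer: dump the report as CSV to stdout."""
    print("cycle_time,count")
    for value, count in sorted(report.items()):
        print(f"{value},{count}")

# A Fetcher would normally pull these from JIRA; here they are faked.
tickets = [
    AgileTicket("PROJ-1", datetime(2015, 1, 1), datetime(2015, 1, 5)),
    AgileTicket("PROJ-2", datetime(2015, 1, 2), datetime(2015, 1, 6)),
]
csv_writer(distribution_reporter(cycle_time_analyzer(tickets)))
```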
agilearning | UNKNOWN |
agilecoder | No description available on PyPI. |
agilecode-test-tools | No description available on PyPI. |
agile_conf | ## Agile Conf Document (WIP)[agile_conf](https://github.com/tly1980/agile_conf) - A config files (in [YAML](http://yaml.org) format) and template engine ([Jinja2](http://jinja.pocoo.org)) based configuration compile / management tool to make DevOp tasks (or maybe 1,000 other things) easier.### MotivationA lot of work of DevOps is about configs / deployment script generation and management.One can easily implement script using ["sed"](http://en.wikipedia.org/wiki/Sed) to generate the configs / deployment scripts.However, ["sed"](http://en.wikipedia.org/wiki/Sed) is far away from a perfect tool when you want to do some slightly complicated find / replace.From my expierence, modern [Templating processor](http://en.wikipedia.org/wiki/Template_processor) does a much better job in:**translating the variables into any forms of text-based outputs** (HTML, XML, JSON, YAML, INI, ..., etc.).Powered by ([Jinja2](http://jinja.pocoo.org)), [agile_conf](https://github.com/tly1980/agile_conf) supports all the features that is built-in with ([Jinja2](http://jinja.pocoo.org)) templating, such as:template inhertitance, include, .etc.On top of that, [agile_conf](https://github.com/tly1980/agile_conf) comes with some useful filters for DevOps purpose:1. jsonify2. aws_userdata(it can translate [AWS userdata](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) into [cloudformation stack template](http://aws.amazon.com/cloudformation/aws-cloudformation-templates/))3. yamlifyOther than that, I believe that we should be serious about config / deployment scripts. Rather than doing the scripts / config generation with the execution altogether at run time, I prefer that we can have the compiled scripts / configuration by hands **before** executing it.So that we can review the deployment scripts / config before running them, to gain clear idea on what is going to happen and avoid the surpise you don't want.I also believe we should manage those compiled deployment scripts / configurations in [git](http://git-scm.com) or any other SCM, so that we can treat them seriously just like we usually does with our code.And because they're managed by SCM, we will have full history, diff between changes, and permissions control.### Basic workfolw0. Create a project, and use [git](http://git-scm.com) or other SCM to manage it.1. Define your variable and templates.2. Compile artifacts: any text-based config / scripts.3. Commit your changes (variable, templates and the compiled scripts and configs) to the [git](http://git-scm.com) or other SCM repository.4. Use Bamboo or Jenkins to check out the compiled scripts and configs and execute them.Or, you can always run the scripts locally as long as you have the permissions.5. Retired your compiled scripts and configs if you don't need them (You should destroy the resources accordingly, using the pre-compiled destroy scripts.)### InstallUse [PIP](https://pip.pypa.io/en/latest/quickstart.html) to install it.```pip install agile-conf```If you don't have [PIP](https://pip.pypa.io/en/latest/quickstart.html), please [install](https://pip.pypa.io/en/latest/installing.html) it first.After the installation, you can run ```agc --help``` to verify.### Getting started```agc``` is the main commandline tool of [agile_conf](https://github.com/tly1980/agile_conf).0. 
Clone the boilplate locally.```git clone https://github.com/tly1980/agile_conf_boilplate.git ~/workspace/agile_conf_boilplate```You don't have to use this boilplate repository, you can create your own boilplate repository by using same directory structure.#### 1. Create a new project by using the boilplate.```$ agc create single_ec2 my_ec2 --bo_repo=~/workspace/agile_conf_boilplate/creating project: my_ec2using boilerplate: /Users/minddriven/workspace/agile_conf_boilplate/single_ec2```Notes: You can specify the boilplate_repo with ```--bo_repo```, or set it in enviornment variable: ```AGC_BOIL```.#### 2. Walk thorugh the project.```my_ec2``` project (build from single_ec2 boilplate) comes with following structure.Please read through the comments.```my_ec2/_script/cfnmodule.yaml # the variables specifically for cfn moduleec2.josn.tpl # template of cloudformation script0_userdata.sh.tpl # template of the userdata.# Rendering happended alphabatically# '0_' prefix makes it the first one to be render.conf_perf.yaml # config for 'perf' performance test builds.conf_prod.yaml # config for 'prod' production builds.conf_uat.yaml # config for 'uat' user user acceptance builds.Makefileproject.yaml # the common variables# that can use across# diffeent modulesREADME.md```In project folder, any sub-folders do **NOT** has "_" prefix is a module. Each module can have its own templates.Inside the module, any file that has ".tpl" postfix in the name would be rendered.The order of rendering is alphabetical. This is a simple way to avoid circulating dependencies.Common template variables are defined in ```project.yaml```, ```conf.yaml```.Module specific variables are defined in ```module.yaml```.Variables defined in ```conf.yaml```, can be referenced in ```projects.yaml``` and ```module.yaml``` and templates.In the single_ec2 projects, it has mupltiple conf.yaml for different enviornments.```conf_uat.yaml```, ```conf_prod.yaml``` and ```conf_prod.yaml```. When you run the command, you should run it with ```--conf``` options.With a ```conf_uat.yaml``````yamlname: uatnetenv: uat # will deploy to uat subnetsnumber: 1```Following is a line in ```project.yaml``````yamlproduct_fullname: hello-ec2-{{ conf.name }}-{{ conf.number }}```would be rendered into```yamlproduct_fullname: hello-ec2-uat-1```Variables defined in ```conf.yaml``` and ```project.yaml``` can be use in ```${MODULE}/module.yaml``` and templates.If you want to see the exact value used in the templates:USE ```inspect``` command.```$ agc inspect --conf conf_uat.yaml```Output would be:```with [conf=conf_uat.yaml][conf]name: uatnetenv: uatnumber: 1[project]instance_type:perf: m3.largeprod: m3.mediumuat: t2.microproduct_fullname: hello-ec2-uat-1[cfn]image_id: ami-d50773efinstance_type: t2.microkey_name: my-keynetenv: uatsubnet_id:prod: subnet-prodsubnetuat: subnet-uatsubnetsubnet_security_groups:prod:- sg-prod1- sg-prod2uat:- sg-uat1- sg-uat2subnet_sg_group: fronttags:- Key: NameValue: hello-ec2-uat-1- Key: EnvironmentValue: uat```### 3. 
Create a config build.Run follow command will generate a build.You must provide the conf file with ```--conf```, so that command tool knows which conf file to use.```agc build --conf conf_uat.yaml```It will generate a new folder in ```_builds/{conf.name}/{conf.number}```.If the content inside ```conf_uat.yaml``` is following:```yamlname: uatnetenv: uat # will deploy to uat subnetsnumber: 1```You would have a folder ```_builds/uat/1``` with following layout:```cfn/ # all are from cfn/*.tpl0_userdata.shec2.jsonmodule.yamlcreate_stack.sh # compiled from _script/create_stack.sh.tplkill_stack.sh # compiled from _script/kill_stack.sh.tpl```### 4. filters[agile_conf](https://github.com/tly1980/agile_conf) built-in jinja2 filters.Here is the example of ```aws_userdata``` filter from the ```single_ec2``` boilplate project.```bashecho "hello world"echo "This is [{{conf.name}}-{{conf.number}}] for project: {{project.product_fullname}}"```It would be rendered into:```bashecho "hello world"echo "This is [uat-1] for project: hello-ec2-uat-1"```In ```ec2.json.tpl``` we have a following code.```"UserData": {{ [_BUILD_DST_FOLDER, "0_userdata.sh"] |aws_userdata }},```It is using a ```aws_userdata``` filter to turn ```0_userdata.sh``` into following code.```_BUILD_DST_FOLDER``` is the output destination folder of the module, exactly where the ```0_userdata.sh``` located.And you can see the shell script is rendred into cloudformation json structure:```"UserData": {"Fn::Base64": {"Fn::Join": ["",["echo \"hello world\"\n","echo \"This is [uat-1] for project: hello-ec2-uat-1\"\n"]]}},```Another filter is ```jsonify```.In ```cfn/module.yaml```, tags are defined in following value:```yamltags:- Key: NameValue: {{ project.product_fullname }}- Key: EnvironmentValue: {{ conf.netenv }}```In ```cfn/ec2.json.tpl```, it is how ```tags``` being used:```"Tags": {{ tags|jsonify }}```It would be rendered into following:```"Tags": [{"Key": "Name", "Value": "hello-ec2-uat-1"},{"Key": "Environment", "Value": "uat"}]```### Commands**Command: build**Compile the variables into```agc build --conf conf_xxx.yaml```**Command: inspect**Print out all the variables, would be very useful for debugging```agc inspect --conf conf_xxx.yaml```**Command: inspect**```agc inspect --conf conf_xxx.yaml```**Shortcut: using**If you put following shell script in your BASH rc file,```bashusing() {envcmd="env AGC_CONF=conf_$1.yaml"shiftactual_cmd="$@"$envcmd $actual_cmd}```you will have a very convinient short cut to switch different conf_xxx.yaml.```using uat agc inspect``````using uat agc build```It is particular useful to do it with Makefile.Supposed you have following a Makefile.```Makefilebuild_uat:agc build --conf conf_uat.yamlbuild_prod:agc build --conf conf_prod.yamlbuild_perf:agc build --conf conf_prod.yaml```With shortcut ```using```, you could have a Makefile like following:```Makefilebuild_uat:agc build```So you can switch between different conf_xxx.yaml by:1. ```using uat make build```2. ```using prod make build```3. ```using perf make build```**PS**: ```using``` can work with all the command with ```--conf``` options.**Command: create**To create a project from boilerplate repository.```agc create ${bo_name} ${project}```Before you run this command, you should set enviornment variable ```AGC_BOIL```,or use it with ```--bo_repo``` with it.> --bo_repo or AGC_BOIL can only be set to point to a local path.> You cannot put it GIT/HTTP URL to it, yet ... 
:)**Command: id**```agc id --conf conf_uat.yaml``` or ```using uat agc id```Will output:```{conf_name}/{conf_number}```**Command: where**```agc id --conf conf_uat.yaml``` or ```using uat agc where```Will output the exact location where the build is gonna to be.e.g.:```$ using uat agc where/Users/minddriven/workspace/play_agile_conf/my_ec2/_builds/uat/1``` |
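The jsonify and aws_userdata filters described above are ordinary Jinja2 custom filters. As a rough illustration of the mechanism only (this is not agile_conf's actual implementation), a jsonify-style filter can be registered on a Jinja2 environment like this:

```python
import json
from jinja2 import Environment

env = Environment()
env.filters["jsonify"] = json.dumps  # register a custom filter named "jsonify"

template = env.from_string('"Tags": {{ tags | jsonify }}')
print(template.render(tags=[{"Key": "Name", "Value": "hello-ec2-uat-1"}]))
# -> "Tags": [{"Key": "Name", "Value": "hello-ec2-uat-1"}]
```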
agilecrm | Python library for Agile CRM based on the rest-api documentation. Status: We use this in production for Screenly, and it works fine. Still a
bit rough around the corners, but it does indeed work. Installation: Clone the repo as a sub-module inside your project. Install the Python requirements. $ pip install agilecrm Configuration: In order to use the module, you need to set the following environment
variables: AGILECRM_APIKEY, AGILECRM_EMAIL, AGILECRM_DOMAIN (a minimal Python example of supplying these appears at the end of this description). Usage: First, you need to import the module. This may vary depending on your
paths etc, but something like: import agilecrm. Creating a user: Simply create a new user. Despite what is claimed in the documentation,
all variables appear to be optional.agilecrm.create_contact(
first_name='John',
last_name='Doe',
email='[email protected]',
tags=['signed_up'],
company='Foobar Inc')You can also use custom fields (must be created in Agile CRM first):agilecrm.create_contact(
first_name='John',
custom = {
'SomeField': 'Foobar'
}Update a contactUpdate a user object.agilecrm.update_contact(
first_name='Steve',
last_name='Smith',
email='[email protected]',
tags=['name_updated'],
company='Foobar2 Inc')Get a user (by email)This will get the user by email and return the user object as JSON.agilecrm.get_contact_by_email('[email protected]')Get a user (by UUID)This will get the user by UUID and return the user object as JSON.agilecrm.get_contact_by_uuid(1234)Add a tagThis will add the tag โawesome_userโ to the user โ[email protected]โ. Both
variables are required.agilecrm.add_tag('[email protected]', 'awesome_user') |
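A minimal usage sketch tying the Configuration and Usage sections of this entry together. The credential values and the example email are placeholders only; the lookup call is the one documented above:

```python
import os

# The library reads its credentials from these environment variables
# (see the Configuration section above); the values are placeholders.
os.environ["AGILECRM_APIKEY"] = "<your-api-key>"
os.environ["AGILECRM_EMAIL"] = "<your-account-email>"
os.environ["AGILECRM_DOMAIN"] = "<your-domain>"

import agilecrm

# Fetch a contact by email and print the returned JSON object.
print(agilecrm.get_contact_by_email("[email protected]"))
```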
agile-crm-python | No description available on PyPI. |
agilecrm-python | agilecrm-pythonagilecrm is an API wrapper for Agile CRM written in PythonInstallingpip install agilecrm-pythonUsagefrom agilecrm.client import Client
client = Client('API_KEY', 'EMAIL', 'DOMAIN')Create contactcontact_data = {
"star_value": "4",
"lead_score": "92",
"tags": [
"Lead",
"Likely Buyer"
],
"properties": [
{
"type": "SYSTEM",
"name": "first_name",
"value": "Los "
},
{
"type": "SYSTEM",
"name": "last_name",
"value": "Bruikheilmer"
},
{
"type": "SYSTEM",
"name": "company",
"value": "steady.inc"
},
{
"type": "SYSTEM",
"name": "title",
"value": "VP Sales"
},
{
"type": "SYSTEM",
"name": "email",
"subtype": "work",
"value": "[email protected]"
},
{
"type": "SYSTEM",
"name": "address",
"value": "{\"address\":\"225 George Street\",\"city\":\"NSW\",\"state\":\"Sydney\",\"zip\":\"2000\",\"country\":\"Australia\"}"
},
{
"type": "CUSTOM",
"name": "My Custom Field",
"value": "Custom value"
}
]
}
response = client.create_contact(contact_data)Get contact by idresponse = client.get_contact_by_id('5685265389584384')Get contact by emailresponse = client.get_contact_by_email('[email protected]')Update contactupdate_contact_data = {
"id": "5685265389584384",
"properties": [
{
"type": "SYSTEM",
"name": "last_name",
"value": "Chan"
},
{
"type": "CUSTOM",
"name": "My Custom Field",
"value": "Custom value chane"
}
]
}
response = client.update_contact(update_contact_data)Search contactsimport json
myjson = {
"rules": [{"LHS": "created_time", "CONDITION": "BETWEEN", "RHS": 1510152185.779954, "RHS_NEW": 1510238585.779877}],
"contact_type": "PERSON"}
response = client.search(
{'page_size': 25,
'global_sort_key': '-created_time',
'filterJson': json.dumps(myjson)
})RequirementsrequestsTestspython tests/test_client.py |
agileetc | agileetcExample python 3.9+ project where we can develop best practices and provide teams with a useful template with the following features:Poetry packaged python project with example CLI entry point.Linux and Windows compatible project.Example read/write YML files.Example Unit Tests.Example flake8 linter configuration.Example command line operation via click API allowing the project to be run from the command line or from CICD pipelines.Example use of Fabric API to execute external commands.Example use of Texttable for pretty table output.Example GoCD pipeline.Example GitHub actions.Python package publishing to PyPI.Docker image publishing to docker hub.Example usage of python package.Example usage of docker image.PrerequisitesThis project uses Poetry, a tool for dependency management and packaging in Python. It allows you to declare the
libraries your project depends on, and it will manage (install/update) them for you.Use the installer rather than pipinstalling-with-the-official-installer.poetryselfaddpoetry-bumpversionpoetry-V
Poetry(version1.2.0)Getting StartedpoetryupdatepoetryinstallRunpoetryrunagileetcLintpoetryrunflake8TestpoetryrunpytestPublishBy default we are usingPYPI packages.Create yourself an access token for PYPI and then follow the instructions.exportPYPI_USERNAME=__token__exportPYPI_PASSWORD=<YourAPIToken>
poetrypublish--build--username$PYPI_USERNAME--password$PYPI_PASSWORDVersioningWe useSemVerfor versioning. For the versions available, see thetags on this repository.ReleasingWe are usingpoetry-bumpversionto manage release versions.poetryversionpatchDependencyOnce the release has been created it is now available for you to use in other python projects via:pipinstallagileetcAnd also for poetry projects via:poetryaddaigleetcContinuous IntegrationThe objects of each of the example CI pipelines here are to:Lint the python code.Run Unit TestsBuild package.Release package, bumping the version.Publishing in new package version.GitHub ActionsThe example GitHub actions CI for this project is located in file .github/workflows/python-ci.yml and is available from
the GitHub dashboard. This CI is setup torun via a local runnerwhich should be configured.JenkinsGoCDContributingPlease readCONTRIBUTING.mdfor details on our code of conduct, and the process for submitting pull requests to us.LicenseThis project is licensed under the Apache License, Version 2.0 - see theLICENSEfile for details |
agilegeo | The agilegeo module contains several common geophysics functionsused for modelling and post-processing seismic reflection data.PrerequisitesRequires scipy and numpy.ContributorsEvan BiancoBen BougherMatt HallWes Hamlyn, and Sean Ross-RossLinksAgile GeoscienceHomepageIssue TrackerPyPiGithub |
agilego | No description available on PyPI. |
agileid | UNKNOWN |
agile_item_master | agile item master |
agile-north-build-essentials | InstallationRun pip install agile_north_build_essentials |
agilentaspparser | Agilent IR spectrum file parsersIn our lab we have an Agilent Cary 660 ATR-IR spectrometer. The ASCII files produced by this device are a nuisance to work with because they don't feature a wavenumber column. This 'library' intends to make work with said files a bit easier by parsing spectra and adding missing information. |
agilentaspparser-egonik | Agilent IR spectrum file parsersIn our lab we have an Agilent Cary 660 ATR-IR spectrometer. The ASCII files produced by this device are a nuisance to work with because they don't feature a wavenumber column. This 'library' intends to make work with said files a bit easier by parsing spectra and adding missing information. |
agilentlightwave | No description available on PyPI. |
agilent-lightwave | No description available on PyPI. |
agilent-visa-control | agilent_visa_controlA set of files aimed at facilitating the transfer of data from Agilent Spectrum Analyzers through the VISA InterfacePrerequisites:You will need PyVisa (and PyVisa's dependencies too obviously).Installation:No need to install, just clone this repository to your working workspace folder and code inside of it.For a good example on how to use the code check Agilent_save_trace.pyFrequency.py is a helper class to help you deal with frequencies and their conversion. Using this class you can simply add or subtract frequencies and the units will be taken care of by the script. You can then convert to whichever frequency unit you need. Check Frequency.py for examples on usage (afterif __name__ == __main__:)Usage of Agilent class:Before using this class you will need to know the VISA identifier of the Agilent Spectrum Analyzer you'll want to use.
The identifier is usually found on the IO Librairies Suite of KeySight (https://www.keysight.com/en/pd-1985909/io-libraries-suite) (This suite is usually needed for communication with the Spectrum Analyzer) or in the instrument panel of your VISA package.It is something likeidentifier="GPIB0::8::INSTR"Once you know your identifier you will need to create the Agilent class object:agilent=Agilent(identifier)Then you need to open your connection:agilent.open()Then you could set the mode of the Analyzer, for example you could choose the spectrum analyzer mode. As of today the code only supports setting this mode remotely. You could always set in some other mode using the Frontal Interface on the instrument and then carry on extracting the data with this library.agilent.set_sa()Then you create the frequencies at which you want to center and the span of the analyzer.center_freq = Frequency(80.1, FreqUnit(FreqUnit.MHz))
span = Frequency(50, FreqUnit(FreqUnit.kHz))And we set the x Axis:agilent.set_x(center_freq, span)We set the Y axis now:agilent.set_y(3,10)#in dBm (first argument is the reference Level and the second one is the scale in dBm per Div.You can also set markers:agilent.set_marker(1, center_freq)And in the end we extract the values:values=agilent.get_trace(1)#treat valuesAnd in the end we close the connection.#close connection once you are done with the agilent Spectrum Analyzer
agilent.close() |
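As a sketch of the "#treat values" step in the walkthrough above, the trace returned by get_trace() could simply be written out to a CSV file. This assumes the returned values behave like a flat sequence of readings, which may differ from the actual format:

```python
import csv

values = agilent.get_trace(1)  # as in the walkthrough above

# Assumption: `values` is a flat sequence of amplitude readings.
with open("trace.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["index", "amplitude"])
    for i, v in enumerate(values):
        writer.writerow([i, v])
```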
agileplanner | Agile PlannerFor some projects, as an Engineering Leader responsible for delivery, you can often get by with just enough planning and you leave the door open for adjusting as you execute the plan. Adjustments are in the form of flexing either or both the schedule and scope. For other projects, you might be faced with strict deadlines forcing you to fix the dates. This at least gives you some wiggle room to flex the scope as you execute the plan.The most challenging types of projects are where the dates are fixed, and the scope has been reduced to the bare bones. This leaves no margin for error and therefore exposes the project to significant risk. This is what is known as theโIron Triangleโ. What might you do in such a situation? One option is to run. Another option is to fight it. If you do nothing, you run the real risk of having to rely on heroics to meet the dates leading to team burnout and attrition.There is another option. As a leader, you can take this environment as a constraint and work with it. To mitigate the risks, forces you to improve planning discipline.These tools were conceived out of necessity when the author was faced with such a situation.The tools enable capacity to be determined for a team or organization, for a specific period. This is usually, but not always a quarter.In addition, scheduling tools are provided that consider, the type of work (epic) matching the skillset of the team. The start date and end date for an epic (if the work fits) is calculated and can be subsequently plugged into your tool of choice (Jira is a popular choice). As the project progresses, you can easily determine if the project is off-track if epics are not starting or finishing on time. These indicators do not replace any metrics you might have in place at the sprint level. They can be used in addition to get a sense of whether the project is on-track or not.Be sure to checkout thecookbookto learn how to use Agile Planner.Capacity estimation toolsHow much QE capacity do we have for a given time period? (e.g. remaining for the quarter, Q2, Q3)How much front end capacity do we have for a given time period for a specific team?How much total back end capacity do we have for the entire org for Q2?What is the estimated capacity for a team's sprint?All it requires is that we have details of each team and person on that team to be captured in a YAML.team:name:DevOps teampersons:-name:Billy Tedstart_date:'2023-09-13'end_date:'2030-12-31'front_end:Falseback_end:Falseqe:Falsedevops:Truedocumentation:Falsereserve_capacity:0.35location:Ukraineout_of_office_dates:-'2023-10-03'-'2023-10-04'-'2023-10-05'-'2023-10-06'-'2023-10-09'-'2023-10-10'Given this, we can generate a capacity spreadsheet (CSV) broken down by person and by day for each person on the team for any time period.You can easily combine teams together enabling capacity to be calculated for an entire organization.A pandas DataFrame can easily be created from the team capacity, enabling querying and exploring of the available capacity. For example, you might want to query how much capacity you have for QE or Documentation.Epic scheduling toolsOnce we have capacity calculated, it opens up the possibility to perform basic epic scheduling.Sample use cases:Given capacity and epic rough sizes, what is the estimated epic start date? end date? - so that we can determine if we are on-track or notWill the epics allocated to a team fit in the specified time period (e.g. 
Q2)?To support this, simple details about the epics are required in YAML as follows:features:-key:CSESC-52epics:-key:CSESC-1022estimated_size:171epic_type:FRONTEND-key:CSESC-1023estimated_size:50epic_type:BACKENDEach epic can be of one of the following types:FRONTENDBACKENDQEDEVOPSDOCUMENTATIONEpics that are of mixed types are not supported at this time. I need to figure out what that might look like.Here's an example of basic scheduling. We have a short time-period of 4 days. Notice how 10/22 is a weekend and so there is zero capacity. There are no US holidays detected in the time-period and the people on the team do not have any planned PTO. The team of 3 is available for the entire time-period. We use a classic ideal hours calculation of 6 hours per day (6/8 = 0.75). This allows time for the team ceremonies, PRs etc.We load the team with 4 epics, each with a size of 2 points. The scheduler forecasts the start date and the end date for each epic. Notice how the
scheduler notifies that the final epic CSESC-1974 is not forecast to complete within the time period.Team Person Location Start Date End Date Front End Back End QE DevOps Reserve Capacity 2023-10-22 2023-10-23 2023-10-24 2023-10-25 Total
0 Provisioners Alice US 2023-01-01 2030-12-31 T F F F 0.25 0 0.75 0.75 0.75 2.25
1 Provisioners Sue US 2023-01-01 2030-12-31 T F F F 0.25 0 0.75 0.75 0.75 2.25
2 Provisioners Bob US 2023-01-01 2030-12-31 T F F F 0.25 0 0.75 0.75 0.75 2.25
3 Provisioners Total - - - - - - - - 0 2.25 2.25 2.25 6.75
CSESC-1971 sized 2 starts on 2023-10-23 and is scheduled to complete on 2023-10-23 with 0 points remaining
CSESC-1972 sized 2 starts on 2023-10-23 and is scheduled to complete on 2023-10-24 with 0 points remaining
CSESC-1973 sized 2 starts on 2023-10-24 and is scheduled to complete on 2023-10-25 with 0 points remaining
CSESC-1974 sized 2 starts on 2023-10-25 and is scheduled to complete on WILL NOT COMPLETE IN TIME with 1.25 points remaining |
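The capacity spreadsheet (CSV) mentioned earlier lends itself to quick exploration with pandas. This is only a sketch under the assumption that the generated file is named capacity.csv and has one row per person with the per-day and Total columns shown in the sample output above:

```python
import pandas as pd

# Assumption: 'capacity.csv' is the capacity spreadsheet generated per team,
# one row per person plus per-day capacity columns and a 'Total' column.
df = pd.read_csv("capacity.csv")

# Drop the aggregate row if present (the sample output above has a 'Total' person row).
df = df[df["Person"] != "Total"]

# Total capacity available to the team over the period.
print("Team total:", df["Total"].sum())

# Capacity per person, largest first.
print(df.groupby("Person")["Total"].sum().sort_values(ascending=False))
```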
agilepoint | TODO: Create script to regenerate codeExamplesBasic Usage:from agilepoint import AgilePoint
ap = AgilePoint(host, path, username, password)
db_info = ap.admin.get_database_info()
# Responses in json usually have a primary key indicating what AgilePoint class the response has.
for key, value in db_info['GetDatabaseInfoResult'].items():
print('{}: {}'.format(key,value))Register Users:users = {'First Last': '[email protected]'}
for name, email in users.items():
r = ap.admin.register_user(UserName=email, FullName=name)
print(r)Note: It's not well defined which arguments are required and which are optional. I've made logical conclusions. If you notice that the required/optional arguments are incorrect, please submit a PR. |
agile-py | AgilePyWhat is it?Agile-py is a Python package to quickly kick off a python project.Main FeaturesInitializeproject withpyproject.toml,.pre-commit-config.yaml,.gitignore, and vscode setting.Where to get itAgile-py can be installed from PyPI usingpip:pipinstallagile-pyInterfaceInitialize python projectapyinit |
agilerl | AgileRLReinforcement learning streamlined.Easier and faster reinforcement learning with RLOps. Visit ourwebsite. Viewdocumentation.Join theDiscord Serverto collaborate.NEW: AgileRL now introduces evolvableContextual Multi-armed Bandit Algorithms!This is a Deep Reinforcement Learning library focused on improving development by introducing RLOps - MLOps for reinforcement learning.This library is initially focused on reducing the time taken for training models and hyperparameter optimization (HPO) by pioneering evolutionary HPO techniques for reinforcement learning.Evolutionary HPO has been shown to drastically reduce overall training times by automatically converging on optimal hyperparameters, without requiring numerous training runs.We are constantly adding more algorithms and features. AgileRL already includes state-of-the-art evolvable on-policy, off-policy, offline, multi-agent and contextual multi-armed bandit reinforcement learning algorithms with distributed training.AgileRL offers 10x faster hyperparameter optimization than SOTA.Global steps is the sum of every step taken by any agent in the environment, including across an entire population, during the entire hyperparameter optimization process.Table of ContentsBenchmarksGet StartedTutorialsAlgorithms implementedTrain an agentCiting AgileRLBenchmarksReinforcement learning algorithms and libraries are usually benchmarked once the optimal hyperparameters for training are known, but it often takes hundreds or thousands of experiments to discover these. This is unrealistic and does not reflect the true, total time taken for training. What if we could remove the need to conduct all these prior experiments?In the charts below, a single AgileRL run, which automatically tunes hyperparameters, is benchmarked against Optuna's multiple training runs traditionally required for hyperparameter optimization, demonstrating the real time savings possible. Global steps is the sum of every step taken by any agent in the environment, including across an entire population.AgileRL offers an order of magnitude speed up in hyperparameter optimization vs popular reinforcement learning training frameworks combined with Optuna. Remove the need for multiple training runs and save yourself hours.AgileRL also supports multi-agent reinforcement learning using the Petting Zoo-style (parallel API). The charts below highlight the performance of our MADDPG and MATD3 algorithms with evolutionary hyper-parameter optimisation (HPO), benchmarked against epymarl's MADDPG algorithm with grid-search HPO for the simple speaker listener and simple spread environments.Get StartedInstall as a package with pip:pipinstallagilerlOr install in development mode:gitclonehttps://github.com/AgileRL/AgileRL.git&&cdAgileRL
pipinstall-e.Demo:cddemos
pythondemo_online.pyor to demo distributed training:cddemos
acceleratelaunch--config_fileconfigs/accelerate/accelerate.yamldemos/demo_online_distributed.pyTutorialsWe are in the process of creating tutorials on how to use AgileRL and train agents on a variety of tasks.Currently, we havetutorials for single-agent tasksthat will guide you through the process of training both on and off-policy agents to beat a variety of Gymnasium environments. Additionally, we havemulti-agent tutorialsthat make use of PettingZoo environments such as training DQN to play Connect Four with curriculum learning and self-play, and also for multi-agent tasks in MPE environments. We also have atutorial on using hierarchical curriculum learningto teach agents Skills. We also have files for a tutorial on training a language model with reinforcement learning using ILQL on Wordle intutorials/Language. If using ILQL on Wordle, download and unzip data.ziphere.Our demo files indemosalso provide examples on how to train agents using AgileRL, and more information can be found in ourdocumentation.Evolvable algorithms implemented (more coming soon!)DQNRainbow DQNDDPGTD3PPOCQLILQLMADDPGMATD3NeuralUCBNeuralTSTrain an agent to beat a Gym environmentBefore starting training, there are some meta-hyperparameters and settings that must be set. These are defined inINIT_HP, for general parameters, andMUTATION_PARAMS, which define the evolutionary probabilities, andNET_CONFIG, which defines the network architecture. For example:INIT_HP={'ENV_NAME':'LunarLander-v2',# Gym environment name'ALGO':'DQN',# Algorithm'DOUBLE':True,# Use double Q-learning'CHANNELS_LAST':False,# Swap image channels dimension from last to first [H, W, C] -> [C, H, W]'BATCH_SIZE':256,# Batch size'LR':1e-3,# Learning rate'EPISODES':2000,# Max no. episodes'TARGET_SCORE':200.,# Early training stop at avg score of last 100 episodes'GAMMA':0.99,# Discount factor'MEMORY_SIZE':10000,# Max memory buffer size'LEARN_STEP':1,# Learning frequency'TAU':1e-3,# For soft update of target parameters'TOURN_SIZE':2,# Tournament size'ELITISM':True,# Elitism in tournament selection'POP_SIZE':6,# Population size'EVO_EPOCHS':20,# Evolution frequency'POLICY_FREQ':2,# Policy network update frequency'WANDB':True# Log with Weights and Biases}MUTATION_PARAMS={# Relative probabilities'NO_MUT':0.4,# No mutation'ARCH_MUT':0.2,# Architecture mutation'NEW_LAYER':0.2,# New layer mutation'PARAMS_MUT':0.2,# Network parameters mutation'ACT_MUT':0,# Activation layer mutation'RL_HP_MUT':0.2,# Learning HP mutation'RL_HP_SELECTION':['lr','batch_size'],# Learning HPs to choose from'MUT_SD':0.1,# Mutation strength'RAND_SEED':1,# Random seed}NET_CONFIG={'arch':'mlp',# Network architecture'h_size':[32,32],# Actor hidden size}First, useutils.utils.initialPopulationto create a list of agents - our population that will evolve and mutate to the optimal hyperparameters.fromagilerl.utils.utilsimportmakeVectEnvs,initialPopulationimporttorchdevice=torch.device("cuda"iftorch.cuda.is_available()else"cpu")env=makeVectEnvs(env_name=INIT_HP['ENV_NAME'],num_envs=16)try:state_dim=env.single_observation_space.n# Discrete observation spaceone_hot=True# Requires one-hot encodingexceptException:state_dim=env.single_observation_space.shape# Continuous observation spaceone_hot=False# Does not require one-hot encodingtry:action_dim=env.single_action_space.n# Discrete action spaceexceptException:action_dim=env.single_action_space.shape[0]# Continuous action spaceifINIT_HP['CHANNELS_LAST']:state_dim=(state_dim[2],state_dim[0],state_dim[1])agent_pop=initialPopulation(algo=INIT_HP['ALGO'],# 
Algorithmstate_dim=state_dim,# State dimensionaction_dim=action_dim,# Action dimensionone_hot=one_hot,# One-hot encodingnet_config=NET_CONFIG,# Network configurationINIT_HP=INIT_HP,# Initial hyperparameterspopulation_size=INIT_HP['POP_SIZE'],# Population sizedevice=device)Next, create the tournament, mutations and experience replay buffer objects that allow agents to share memory and efficiently perform evolutionary HPO.fromagilerl.components.replay_bufferimportReplayBufferfromagilerl.hpo.tournamentimportTournamentSelectionfromagilerl.hpo.mutationimportMutationsfield_names=["state","action","reward","next_state","done"]memory=ReplayBuffer(action_dim=action_dim,# Number of agent actionsmemory_size=INIT_HP['MEMORY_SIZE'],# Max replay buffer sizefield_names=field_names,# Field names to store in memorydevice=device)tournament=TournamentSelection(tournament_size=INIT_HP['TOURN_SIZE'],# Tournament selection sizeelitism=INIT_HP['ELITISM'],# Elitism in tournament selectionpopulation_size=INIT_HP['POP_SIZE'],# Population sizeevo_step=INIT_HP['EVO_EPOCHS'])# Evaluate using last N fitness scoresmutations=Mutations(algo=INIT_HP['ALGO'],# Algorithmno_mutation=MUTATION_PARAMS['NO_MUT'],# No mutationarchitecture=MUTATION_PARAMS['ARCH_MUT'],# Architecture mutationnew_layer_prob=MUTATION_PARAMS['NEW_LAYER'],# New layer mutationparameters=MUTATION_PARAMS['PARAMS_MUT'],# Network parameters mutationactivation=MUTATION_PARAMS['ACT_MUT'],# Activation layer mutationrl_hp=MUTATION_PARAMS['RL_HP_MUT'],# Learning HP mutationrl_hp_selection=MUTATION_PARAMS['RL_HP_SELECTION'],# Learning HPs to choose frommutation_sd=MUTATION_PARAMS['MUT_SD'],# Mutation strengtharch=NET_CONFIG['arch'],# Network architecturerand_seed=MUTATION_PARAMS['RAND_SEED'],# Random seeddevice=device)The easiest training loop implementation is to use ourtrain_off_policy()function. It requires theagenthave functionsgetAction()andlearn().fromagilerl.training.train_off_policyimporttrain_off_policytrained_pop,pop_fitnesses=train_off_policy(env=env,# Gym-style environmentenv_name=INIT_HP['ENV_NAME'],# Environment namealgo=INIT_HP['ALGO'],# Algorithmpop=agent_pop,# Population of agentsmemory=memory,# Replay bufferswap_channels=INIT_HP['CHANNELS_LAST'],# Swap image channel from last to firstn_episodes=INIT_HP['EPISODES'],# Max number of training episodesevo_epochs=INIT_HP['EVO_EPOCHS'],# Evolution frequencyevo_loop=1,# Number of evaluation episodes per agenttarget=INIT_HP['TARGET_SCORE'],# Target score for early stoppingtournament=tournament,# Tournament selection objectmutation=mutations,# Mutations objectwb=INIT_HP['WANDB'])# Weights and Biases trackingCiting AgileRLIf you use AgileRL in your work, please cite the repository:@software{Ustaran-Anderegg_AgileRL,author={Ustaran-Anderegg, Nicholas and Pratt, Michael},license={Apache-2.0},title={{AgileRL}},url={https://github.com/AgileRL/AgileRL}} |
agiletixapi | UNKNOWN |
agile-toolkit | Agile toolkit |
agileupbom | agileupbomPython 3.8+ project to manage AgileUP pipeline bill-of-materials with the following features:Linux and Windows compatible project.Defines BOM model.Manages bom yaml in S3 compatible storage.PrerequisitesThis project uses Poetry, a tool for dependency management and packaging in Python. It allows you to declare the
libraries your project depends on, and it will manage (install/update) them for you.Use the installer rather than pipinstalling-with-the-official-installer.poetryselfaddpoetry-bumpversionpoetry-V
Poetry(version1.2.0)Windows PathInstall poetry from powershell in admin mode.(Invoke-WebRequest-Urihttps://install.python-poetry.org-UseBasicParsing).Content|py-The path will beC:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exewhich will will need to add to your system path.Windows GitBashWhen using gitbash you can setup an alias for the poetry command:aliaspoetry="\"C:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exe\""Getting StartedpoetryupdatepoetryinstallRunpoetryrunagileupbomLintpoetryrunflake8TestpoetryrunpytestPublishBy default we are usingPYPI packages.Create yourself an access token for PYPI and then follow the instructions.exportPYPI_USERNAME=__token__exportPYPI_PASSWORD=<YourAPIToken>
poetrypublish--build--username$PYPI_USERNAME--password$PYPI_PASSWORDVersioningWe useSemVerfor versioning. For the versions available, see thetags on this repository.ReleasingWe are usingpoetry-bumpversionto manage release versions.poetryversionpatchDependencyOnce the release has been created it is now available for you to use in other python projects via:pipinstallagileupbomAnd also for poetry projects via:poetryaddagileupbomContributingPlease readCONTRIBUTING.mdfor details on our code of conduct, and the process for submitting pull requests to us.LicenseThis project is licensed under the Apache License, Version 2.0 - see theLICENSEfile for details |
agileupipc | agileupipcPython 3.8+ project to manage AgileUP Informatica PowerCenter with the following features:Linux and Windows compatible project.PrerequisitesThis project uses Poetry, a tool for dependency management and packaging in Python. It allows you to declare the
libraries your project depends on, and it will manage (install/update) them for you.Use the installer rather than pipinstalling-with-the-official-installer.poetryselfaddpoetry-bumpversionpoetry-V
Poetry(version1.2.0)Windows PathInstall poetry from powershell in admin mode.(Invoke-WebRequest-Urihttps://install.python-poetry.org-UseBasicParsing).Content|py-The path will beC:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exewhich will will need to add to your system path.Windows GitBashWhen using gitbash you can setup an alias for the poetry command:aliaspoetry="\"C:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exe\""Getting StartedpoetryupdatepoetryinstallRunpoetryrunagileupipcLintpoetryrunflake8TestpoetryrunpytestPublishBy default we are usingPYPI packages.Create yourself an access token for PYPI and then follow the instructions.exportPYPI_USERNAME=__token__exportPYPI_PASSWORD=<YourAPIToken>
poetrypublish--build--username$PYPI_USERNAME--password$PYPI_PASSWORDVersioningWe useSemVerfor versioning. For the versions available, see thetags on this repository.ReleasingWe are usingpoetry-bumpversionto manage release versions.poetryversionpatchDependencyOnce the release has been created it is now available for you to use in other python projects via:pipinstallagileupipcAnd also for poetry projects via:poetryaddagileupipcContributingPlease readCONTRIBUTING.mdfor details on our code of conduct, and the process for submitting pull requests to us.LicenseThis project is licensed under the Apache License, Version 2.0 - see theLICENSEfile for details |
agileupisa | agileupisaPython 3.8+ project to manage AgileUP Informatica Secure Agents with the following features:Linux and Windows compatible project.PrerequisitesThis project uses Poetry, a tool for dependency management and packaging in Python. It allows you to declare the
libraries your project depends on, and it will manage (install/update) them for you.Use the installer rather than pipinstalling-with-the-official-installer.poetryselfaddpoetry-bumpversionpoetry-V
Poetry(version1.2.0)Windows PathInstall poetry from powershell in admin mode.(Invoke-WebRequest-Urihttps://install.python-poetry.org-UseBasicParsing).Content|py-The path will beC:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exewhich will will need to add to your system path.Windows GitBashWhen using gitbash you can setup an alias for the poetry command:aliaspoetry="\"C:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exe\""Getting StartedpoetryupdatepoetryinstallRunpoetryrunagileupisaLintpoetryrunflake8TestpoetryrunpytestPublishBy default we are usingPYPI packages.Create yourself an access token for PYPI and then follow the instructions.exportPYPI_USERNAME=__token__exportPYPI_PASSWORD=<YourAPIToken>
poetrypublish--build--username$PYPI_USERNAME--password$PYPI_PASSWORDVersioningWe useSemVerfor versioning. For the versions available, see thetags on this repository.ReleasingWe are usingpoetry-bumpversionto manage release versions.poetryversionpatchDependencyOnce the release has been created it is now available for you to use in other python projects via:pipinstallagileupisaAnd also for poetry projects via:poetryaddagileupisaContributingPlease readCONTRIBUTING.mdfor details on our code of conduct, and the process for submitting pull requests to us.LicenseThis project is licensed under the Apache License, Version 2.0 - see theLICENSEfile for details |
agileupstate | AgileUp StatePython 3.8+ project to manage AgileUP pipeline states with the following features:Linux and Windows compatible project.Defines state model.Saves and fetches states from vault.Exports private key for Linux SSH connections.Exports client PKI data for Windows WinRM connections.Creates cloud init zip file for mTLS connection data to Windows WinRM hosts.Exports ansible inventories for both Linux(SSH) and Windows(WinRM) connections.Provides simple connectivity tests.PrerequisitesThis project uses Poetry, a tool for dependency management and packaging in Python. It allows you to declare the
libraries your project depends on, and it will manage (install/update) them for you.Use the installer rather than pipinstalling-with-the-official-installer.poetryselfaddpoetry-bumpversionpoetry-V
Poetry(version1.2.0)Windows PathInstall poetry from powershell in admin mode.(Invoke-WebRequest-Urihttps://install.python-poetry.org-UseBasicParsing).Content|py-The path will beC:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exewhich you will need to add to your system path.Windows GitBashWhen using gitbash you can setup an alias for the poetry command:aliaspoetry="\"C:\Users\<YOURUSER>\AppData\Roaming\Python\Scripts\poetry.exe\""Getting StartedpoetryupdatepoetryinstallDevelopmentThis project uses thehvacpython module and to develop locally you can run vault
as a docker service as detailed herelocal docker vault. For local development vault
setup follow theVAULTguide for information.Check your connection with the following command, note in development mode vault should not be sealed.exportVAULT_ADDR='http://localhost:8200'exportVAULT_TOKEN='8d02106e-b1cd-4fa5-911b-5b4e669ad07a'poetryrunagileupstatecheckRequired Environment State VariablesVariableDescriptionSIAB_IDUnique environment ID.SIAB_CLOUDCloud vendor API mnemonic.SIAB_LOCATION1Cloud vendor defined cloud location, uksouth, etc.SIAB_LOCATION2Cloud vendor defined cloud location, UK South, etc.SIAB_CONTEXTEnvironment context, e.g. dev. test, prod.SIAB_VALUES_PATHVault path to common environment values to be exported.SIAB_DOMAINOptional public domain that overrides cloud provided DNS names.SIAB_LOCATION: Azure has a different location string between "accounts" and "resources" and onlyuksouthis useful
to the automation, but we must also provideUK Southfor resource groups.FIXME: Needs to be verified.SIAB_VALUES_PATH: Rather than load variables into the delivery platform environment, there can be many, a better option
is to define a YML file that contains all the required common variables for a specific environment and have the user upload
that to vault. This application can then download the YML data file and convert it into an exports file that can be sourced
by the pipelines. These environment values that are exported can then be used by this project and other utilities such as
terraform, ansible and powershell.SIAB_DOMAIN: Cloud DNS services might in some cases provide a DNS domain that is not the same as the public internet
domain required by the project, for example server1.uksouth.cloudapp.azure.com might optionally be server1.meltingturret.io.username/password: These values are common across the environment, for example Ubuntu Azure images use ausername=azureuser,
and so it simplifies configuration if the same credentials are used for Linux and Windows environments running in Azure for
administration access, client administration access as well as PFX certificate files used on Windows for WinRM certificate
authentication. For AWSubuntuis the username for Ubuntu images the same approach can be taken there.Required environment inputs:These values should be setup in your CD platforms environment variables.exportSIAB_ID=001exportSIAB_CLOUD=armexportSIAB_LOCATION1=uksouthexportSIAB_LOCATION2="UK South"exportSIAB_CONTEXT=devexportSIAB_VALUES_PATH=siab-state/001-arm-uksouth-dev/siab-values/siab-values.ymlRequired values inputs (stored in vault pathSIAB_VALUES_PATH):connection:url:https://server1.meltingturret.io:5986username:azureuserpassword:mypasswordca_trust_path:siab-client/chain.meltingturret.io.pemcert_pem:siab-client/[email protected]_key_pem:siab-client/[email protected]:group_owner:Paul Gilligangroup_department:DevOpsgroup_location:uksouthRequired Supporting DataSome data is generated only once and thus can be uploaded to vault manually.Uploading values file:base64./siab-values.yml|vaultkvputsecret/siab-state/001-arm-uksouth-dev/siab-values/siab-values.ymlfile=-Uploading pfx files:base64./server1.meltingturret.io.pfx|vaultkvputsecret/siab-pfx/server1.meltingturret.io.pfxfile=-
base64./[email protected]|vaultkvputsecret/siab-pfx/[email protected]=-Uploading pki files:base64./chain.meltingturret.io.pem|vaultkvputsecret/siab-client/chain.meltingturret.io.pemfile=-
base64./[email protected]|vaultkvputsecret/siab-client/[email protected]=-
base64./[email protected]|vaultkvputsecret/siab-client/[email protected]=-Provision Use CaseExample steps required for the Windows terraform provision use case shown below.agileupstatecloud-init--server-path=siab-pfx/ags-w-arm1.meltingturret.io.pfx--client-path=siab-pfx/[email protected]
agileupstatecreatesource./siab-state-export.shterraforminit
terraformapply-auto-approve
agileupstatesaveExample steps required for the Linux terraform provision use case shown below.agileupstatecreatesource./siab-state-export.shterraforminit
terraformapply-auto-approve
agileupstatesaveDestroy Use CaseExample steps required for recovering system state use case shown below which might be for example to destroy an environment.agileupstateloadsource./siab-state-export.shterraforminit
terraformdestroy-auto-approveAnsible Use CaseExample steps required for the Windows ansible use case shown below.agileupstateloadsource./siab-state-export.shagileupstateinventory-windows--ca-trust-path=siab-client/chain.meltingturret.io.pem--cert-pem=siab-client/[email protected]=siab-client/[email protected]
ansible-inventory-iinventory.ini--list
ansible-iinventory.ini"${TF_VAR_siab_name_underscore}"-mwin_pingExample steps required for the Linux ansible use case shown below.agileupstateloadsource./siab-state-export.shagileupstateinventory-linux
ansible-inventory-iinventory.ini--listANSIBLE_HOST_KEY_CHECKING=Trueansible-iinventory.ini--user"${TF_VAR_admin_username}""${TF_VAR_siab_name_underscore}"-mpingExports Use CaseTheymlfile fromSIAB_VALUES_PATHis exported to the filesiab-state-export.shwith the contents as shown in the
example below which can then be used by downstream utilities.exportSIAB_URL=https://server1.meltingturret.io:5986exportSIAB_USERNAME=azureuserexportSIAB_PASSWORD=mypasswordexportSIAB_CA_TRUST_PATH=siab-client/chain.meltingturret.io.pemexportSIAB_CERT_PEM=siab-client/[email protected]_CERT_KEY_PEM=siab-client/[email protected]_VAR_group_owner=PaulGilliganexportTF_VAR_group_department=DevOpsexportTF_VAR_group_location=UKSouthexportTF_VAR_admin_username=azureuserexportTF_VAR_admin_password=mypasswordexportTF_VAR_siab_name=001-arm-uksouth-devexportTF_VAR_siab_name_underscore=001_arm_uksouth_devsource./siab-state-export.shCloud Init Data Use CaseExample cloud init command that generates the zip file that is loaded onto Windows machines for WimRM certificate authentication.poetryrunagileupstatecloud-init--server-path=siab-pfx/ags-w-arm1.meltingturret.io.pfx--client-path=siab-pfx/[email protected] Windows Inventory Use Caseinventory.iniis generated with the target(s) and configuration information for a successful SSH connection from Ansible.Whenexport SIAB_DOMAIN=meltingturret.io:[001_arm_uksouth_dev]ags-w-arm1.meltingturret.io[001_arm_uksouth_dev:vars]ansible_user=azureuseransible_password=heTgDg!J4buAv5kcansible_connection=winrmansible_port=5986ansible_winrm_ca_trust_path=chain.meltingturret.io.pemansible_winrm_cert_pem=azureuser@meltingturret.io.pemansible_winrm_cert_key_pem=azureuser@meltingturret.io.keyansible_winrm_transport=certificateAnsible Linux Inventory Use Caseinventory.iniis generated with the target(s) and configuration information for a successful SSH connection from Ansible.Whenexport SIAB_DOMAIN=meltingturret.io:[001_arm_uksouth_dev]ags-w-arm1.meltingturret.io ansible_ssh_private_key_file=vm-rsa-private-key.pemRunpoetryrunagileupstateLintpoetryrunflake8TestpoetryrunpytestPublishBy default we are usingPYPI packages.Create yourself an access token for PYPI and then follow the instructions.exportPYPI_USERNAME=__token__exportPYPI_PASSWORD=<YourAPIToken>
poetrypublish--build--username$PYPI_USERNAME--password$PYPI_PASSWORDVersioningWe useSemVerfor versioning. For the versions available, see thetags on this repository.ReleasingWe are usingpoetry-bumpversionto manage release versions.poetryversionpatchDependencyOnce the release has been created it is now available for you to use in other python projects via:pipinstallagileupstateAnd also for poetry projects via:poetryaddagileupstateContributingPlease readCONTRIBUTING.mdfor details on our code of conduct, and the process for submitting pull requests to us.LicenseThis project is licensed under the Apache License, Version 2.0 - see theLICENSEfile for details |
agileutil | No description available on PyPI. |
agilicus | Agilicus SDK (Python)TheAgilicus PlatformAPI
is defined usingOpenAPI 3.0,
and may be used from any language. This allows configuration of our Zero-Trust Network Access cloud native platform
using REST. You can see the API specificationonline.This package provides a Python SDK, class library interfaces for use in
accessing individual collections. In addition it provides a command-line-interface (CLI)
for interactive use.Read the class-library documentationonlineSamplesshows various examples of this code in use.Generally you may install this frompypias:pip install --upgrade agilicusYou may wish to add bash completion by adding this to your ~/.bashrc:eval "$(_AGILICUS_CLI_COMPLETE=source agilicus-cli)"Example: List usersThe below python code will show the same output as the CLI command:agilicus-cli --issuer https://auth.dbt.agilicus.cloud list-usersimport agilicus
import argparse
import sys
scopes = agilicus.scopes.DEFAULT_SCOPES
parser = argparse.ArgumentParser(description="update-user")
parser.add_argument("--auth-doc", type=str)
parser.add_argument("--issuer", type=str)
parser.add_argument("--email", type=str)
parser.add_argument("--disable-user", type=bool, default=None)
args = parser.parse_args()
if not args.auth_doc and not args.issuer:
print("error: specify either an --auth-doc or --issuer")
sys.exit(1)
if not args.email:
print("error: specify an email to search for a user")
sys.exit(1)
api = agilicus.GetClient(
agilicus_scopes=scopes, issuer=args.issuer, authentication_document=args.auth_doc
)
users = api.users.list_users(org_id=api.default_org_id, email=args.email)
if len(users.users) != 1:
print(f"error: failed to find user with email: {args.email}")
sys.exit(1)
user = users.users[0]
if args.disable_user is not None:
user.enabled = args.disable_user
result = api.users.replace_user(
user.id, user=user, _check_input_type=False, _host_index=0
)
print(result) |
agility | AgilityAtomisticGrain Boundary andInterface Utility.InstallationThere are different ways to installagility. Choose what works best with your workflow.From sourceTo build from source, usepip install -r requirements.txt
python setup.py build
python setup.py installUsingpippip install agilityUsingcondaconda skeleton pypi agility
conda build agility
conda install --use-local agilityContributingAny contributions or even questions about the code are welcome - please use theIssue TrackerorPull Requests.DevelopmentThe development takes place on thedevelopmentbranch. Python 3.9 is the minimum requirement. Some backends (like ovito) currently do not support Python 3.10.If you use VSCode, you might editsettings.jsonas follows:"python.linting.flake8Enabled":true,"python.linting.flake8Args":["--max-line-length=100","--ignore=F841"],"python.linting.enabled":true,"python.linting.pylintEnabled":false,"python.linting.mypyEnabled":true,"python.linting.pycodestyleEnabled":false,"python.linting.pydocstyleEnabled":true,"python.formatting.provider":"black","python.formatting.blackArgs":["--line-length=100"],"python.sortImports.args":["--profile","black"],"[python]":{"editor.codeActionsOnSave":{"source.organizeImports":true},}DocumentationThe user documentation will be written in python sphinx. The source files should be
stored in thedocdirectory.Run testsAfter installation, in the home directory, use%pytest |
agility_automation_summary | UNKNOWN |
agility-cms | Agility CMS Python SDKThis package is a Python library for calling theAgility CMS Rest API.InstallationUse the package managerpipto install agility-cms.pipinstallagility-cmsInitializationimportClientfromagility_cmsclient=Client('<your guid>','<your api key>',locale="en-us",preview=True,url="api.aglty.io")Required Argumentsguid (str)api_key (str)Optional Argumentslocale (str)preview (bool)url (str)Galleryclient.gallery('<gallery id>')Required Argumentsgallery_id (int)Itemclient.item('<item id>',content_link_depth=1,expand_all_content_links=False)Required Argumentsitem_id (int)Optional Argumentscontent_link_depth (int)expand_all_content_links (bool)Listclient.list('<reference name>',fields="",take=10,skip=0,filter_="",sort="",direction='asc',content_link_depth=1,expand_all_content_links=False)Required Argumentsreference_name (str)Optional Argumentsfields (str)take (int)skip (int)filter (str)sort (str)direction (str)content_link_depth (int)expand_all_content_links (bool)Pageclient.page('<page id>',content_link_depth=2,expand_all_content_links=False)Required Argumentspage_id (int)Optional Argumentscontent_link_depth (int)expand_all_content_links (bool)Sitemapclient.sitemap('<channel name>',nested=False)Required Argumentschannel_name (str)Optional Argumentsnested (bool)Sync Itemsclient.sync_items(sync_token=0,page_size=500)Optional Argumentssync_token (int)page_size (int)Sync Pagesclient.sync_pages(sync_token=0,page_size=500)Optional Argumentssync_token (int)page_size (int)Url Redirectionsclient.url_redirections(last_access_date="")Optional Argumentslast_access_date (str) |
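Putting the calls documented in this entry together, a small end-to-end sketch might look like the following. It assumes the client class is importable as `from agility_cms import Client` (the Initialization section above flattens the import statement), and the list reference name and channel name are purely hypothetical:

```python
from agility_cms import Client

# Guid and API key are placeholders (see Initialization above).
client = Client('<your guid>', '<your api key>', locale="en-us", preview=True)

# 'posts' is a hypothetical content list reference name.
items = client.list('posts', take=10, skip=0)
print(items)

# Sitemap for a hypothetical channel name, returned as a nested structure.
print(client.sitemap('website', nested=True))
```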
agilkia | Agilkia: A Python Toolkit to Support AI-for-TestingThis toolkit is intended to make it easier to build testing tools that learn
from traces of customer behaviors, analyze those traces for common patterns
and unusual behaviors (e.g. using clustering techniques), learn machine learning (ML)
models of typical behaviors, and use those models to generate smart tests that
imitate customer behaviors.Agilkia is intended to provide a storage and interchange format that makes it easy to
built 'smart' tools on top of this toolkit, often with just a few lines of code.
The main focus of this toolkit is saving and loading traces in a standard *.JSON
format, and transforming those traces to and from lots of other useful formats,
including:Pandas DataFrames (for data analysis and machine learning);ARFF files (for connection to Weka and the StackedTrees tools);SciPy Linkage matrices (for hierarchical clustering and drawing Dendrograms);CSV files in application-specific formats (requires writing some Python code).The key datastructures supported by this library are:TraceSet = a sequence of Trace objectsTrace = a sequence of Event objectsEvent = one interaction with a web service/site, with an action name, inputs, outputs.In addition, note that the TraceSet can store 'clustering' information about the
traces (flat clusters and optional hierarchical clustering) and all three of the
above objects include various kinds of 'meta-data'. For example, each Event
object can contain a timestamp, and each TraceSet contains an 'event_chars' dictionary
that maps each kind of event to a single character to enable concise visualization of traces.This 'agilkia' library is part of the Philae research project:http://projects.femto-st.fr/philae/enIt is open source software under the MIT license.
See LICENSE.txtKey Features:Manage sets of traces (load/save to JSON, etc.).Split traces into smaller traces (sessions).Cluster traces on various criteria, with support for flat and hierarchical clustering.Visualise clusters of tests, to see common / rare behaviours.Convert traces to Pandas DataFrame for data analysis / machine learning.Generate random tests, or 'smart' tests from a machine learning (ML) model.Automated testing of SOAP web services with WSDL descriptions.About the NameThe name 'Agilkia' was chosen for this library because it is
closely associated with the name 'Philae', and the Agilkia toolkit
has been developed as part of the Philae research project.Agilkia is an island in the reservoir of the Aswan Low Dam,
downstream of the Aswan Dam and Lake Nasser, Egypt.It is the current location of the ancient temple of Isis, which was
moved there from the islands of Philae after dam water levels rose.Agilkia was also the name given to the first landing place of the
Philae landing craft on the comet 67P/Churyumov–Gerasimenko,
during the Rosetta space mission.PeopleArchitect and developer: AProf. Mark UttingProject leader: Prof. Bruno LegeardExample UsagesAgilkia requires Python 3.7 or higher.
Here is how to install this toolkit using conda:conda install -c mark.utting agilkiaHere is a simple example of generating some simple random tests of an imaginary
web service running on the URLhttp://localhost/cash:import agilkia
# sample input values for named parameters
input_values = {
"username" : ["TestUser"],
"password" : ["<GOOD_PASSWORD>"] * 9 + ["bad-pass"], # wrong 10% of time
"version" : ["2.7"] * 9 + ["2.6"], # old version 10% of time
"account" : ["acc100", "acc103"], # test these two accounts equally
"deposit" : [i*100 for i in range(8)], # a range of deposit amounts
}
def first_tests():
tester = agilkia.RandomTester("http://localhost/cash",
parameters=input_values)
tester.set_username("TestUser") # will prompt for password
tests = agilkia.TraceSet([])
for i in range(10):
tr = tester.generate_trace(length=30)
print(f"========== trace {i}:\n {tr}")
tests.append(tr)
return tests
first_tests().save_to_json(Path("tests1.json"))And here is an example of loading a file containing a single long trace, splitting it into
customer sessions based on a 'sessionID' input field, using SciPy to cluster those sessions
using hierarchical clustering, visualizing them as a dendrogram tree, and saving the results.from pathlib import Path
import scipy.cluster.hierarchy as hier
import matplotlib.pyplot as plt
import agilkia
traces = agilkia.TraceSet.load_from_json(Path("trace.json"))
sessions = traces.with_traces_grouped_by("sessionID")
data = sessions.get_trace_data(method="action_counts")
tree = hier.linkage(data)
hier.dendrogram(tree, 10, truncate_mode="level") # view top 10 levels of tree
plt.show()
cuts = hier.cut_tree(tree, [3]) # cut the tree to get 3 clusters
sessions.set_clusters(cuts[:,0], tree)
sessions.save_to_json(Path("sessions_clustered.json"))For more complete examples, see the *.py scripts in theexamples/scannerdirectory in the
Agilkia source code distribution (https://github.com/utting/agilkia) and the README there. |
agiltron-selfalign | Agiltron-SelfAlignPython interface for theAgiltron SelfAlignfiber switch.Installpip install git+https://github.com/ograsdijk/Agiltron-SelfAlign.gitCode Examplefromagiltron_selfalignimportAgiltronSelfAlignresource_name="COM8"switch=AgiltronSelfAlign(resource_name,number_of_ports=16)# change port to port 14switch.set_fiber_port(14)# home switch to port 1switch.home() |
agine | agineagineis a Python package which has functionalities related to points in an n-dimensional space (which is defined by itsx, y, ...zcoordinates), or an actual position on the Earth (given by itslatitude, longitude). Considering two points (say P, Q), apart from many other purposes, this library can also detect if the two have a clear line of sight or not.Basic Usageagine hasthreemain functionalities: (1) Calculation of Distances, using different metrics, which is defined undercommons, (2) Functions to Find the Nearest Neighbor and (3) Function to Find if two Geographic Points have aLine-of-Sightor not. All of this can be done using the following:gitclonehttps://github.com/ZenithClown/agine.gitcdagine# as agine is currently not indexed in PyPipipinstallagine# Installing agine with pipimportagine>>Settingupagine-Environment...>>DetectedOS:"<os-name-with-version>">>scikit-learnOptions:"<is-scikit-learn-available>">>"etc. which Defines the Core-Capability"agine has a hard dependency on onlynumpyso that some of its functionalities can be used somewhere else. For options (2) and (3) it has different requirements, which can be accessed using:agine.OSOptions._point_funcandagine.OSOptions._line_of_strepectively. |
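To illustrate the kind of geographic distance calculation agine deals with, here is a generic sketch of the haversine great-circle formula between two latitude/longitude points; this is not agine's own API, only an illustration of the underlying idea:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius

# Distance between two geographic points P and Q (London -> Paris, roughly 343 km).
print(round(haversine_km(51.5074, -0.1278, 48.8566, 2.3522)))
```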
aginet | aginet-pythonPython SDK for AGInet |
ag-initiatives | No description available on PyPI. |
agiocli | Autograder.io CLIAutograder.io CLI (agio) is a command line interface toautograder.io.Quick start$pipinstallagiocli$agioContributingContributions from the community are welcome! Check out theguide for contributing.AcknowledgmentsAutograder.io CLI is written by Andrew [email protected]. Justin Applefield removed bugs and contributed features.It is based on work by James Perretta (Autograder.io Contrib) and Amir Kamil (Autograder Tools). |
agios | ## What is agios?agios is an open source, Python 3 library for playing with genetic algorithms. Main functionality includes:* Generic API allowing to easily implement custom tasks* Highly customizable algorithm execution cycle* Multithreading support* Built-in support for images processing* Multidimensional data processingTODO list includes:* Support for PyCUDA and processing on GPU## How to install it?```pip install agios```## Where is an example code?```pythonfrom agios import evolutionfrom agios import extrasblueprint = extras.load_normalized_image('input/mona_lisa.jpg', extras.Greyscale)evolution_problem_solver = evolution.SimpleSolver(population_size=100,best_samples_to_take=2,blueprint=evolution.NumpyArraySample(blueprint),mutator=evolution.SimplePaintbrushMatrixMutator((10, 15), (10, 50)),crosser=evolution.MeanValueMatrixCrosser(),loss_calculator=evolution.SquaredMeanMatrixLossCalculator(),initial_sample_state_generator=evolution.RandomMatrixGenerator(blueprint.shape))for _ in range(10000):evolution_problem_solver.step()```Live examples can be found in examples/ directory.## How to contribute?Report observed issues or provide working pull request. Pull request must be verified before merging and it must include the following:* Unit tests* Public API marked with static typing annotations (typing module)* Public classes must include brief documentation |
agi-pack | ๐ฆ agi-packA Dockerfile builder for Machine Learning developers.๐ฆagi-packallows you to define your Dockerfiles using a simple YAML format, and then generate images from them trivially usingJinja2templates andPydantic-based validation. It's a simple tool that aims to simplify the process of building Docker images for machine learning (ML).Goals ๐ฏ๐Simplicity: Make it easy to define and build docker images for ML.๐ฆBest-practices: Bring best-practices to building docker images for ML -- good base images, multi-stage builds, minimal image sizes, etc.โก๏ธFast: Make it lightning-fast to build and re-build docker images with out-of-the-box caching for apt, conda and pip packages.๐งฉModular, Re-usable, Composable: Definebase,devandprodtargets with multi-stage builds, and re-use them wherever possible.๐ฉโ๐ปExtensible: Make the YAML / DSL easily hackable and extensible to support the ML ecosystem, as more libraries, drivers, HW vendors, come into the market.โ๏ธVendor-agnostic:agi-packis not intended to be built for any specific vendor -- I need this tool for internal purposes, but I decided to build it in the open and keep it simple.Installation ๐ฆpipinstallagi-packFor shell completion, you can install them via:agi-pack--install-completion<bash|zsh|fish|powershell|pwsh>Go through theexamplesand the correspondingexamples/generateddirectory to see a few examples of whatagi-packcan do. If you're interested in checking out a CUDA / CUDNN example, check outexamples/agibuild.base-cu118.yaml.Quickstart ๐ Create a simple YAML configuration file calledagibuild.yaml. You can useagi-pack initto generate a sample configuration file.agi-packinitEditagibuild.yamlto define your custom system and python packages.images:sklearn-base:base:debian:buster-slimsystem:-wget-build-essentialpython:"3.8.10"pip:-loguru-typer-scikit-learnLet's break this down:sklearn-base: name of the target you want to build. Usually, these could be variants like*-base,*-dev,*-prod,*-testetc.base: base image to build from.system: system packages to install viaapt-get install.python: specific python version to install viaminiconda.pip: python packages to install viapip install.Generate the Dockerfile usingagi-pack generateagi-packgenerate-cagibuild.yamlYou should see the following output:$agi-packgenerate-cagibuild.yaml
๐ฆsklearn-base
โโโ๐SuccessfullygeneratedDockerfile(target=sklearn-base,filename=Dockerfile).โโโ`dockerbuild-fDockerfile--targetsklearn-base.`That's it! Here's the generatedDockerfile-- use it to rundocker buildand build the image directly.Rationale ๐คDocker has become the standard for building and managing isolated environments for ML. However, any one who has gone down this rabbit-hole knows how broken ML development is, especially when you need to experiment and re-configure your environments constantly. Production is another nightmare -- large docker images (10GB+), bloated docker images with model weights that are~5-10GBin size, 10+ minute long docker build times, sloppy package management to name just a few.What makes Dockerfiles painful?If you've ever tried to roll your own Dockerfiles with all the best-practices while fully understanding their internals, you'll still find yourself building, and re-building, and re-building these images across a whole host of use-cases. Having to build Dockerfile(s) fordev,prod, andtestall turn out to be a nightmare when you add the complexity of hardware targets (CPUs, GPUs, TPUs etc), drivers, python, virtual environments, build and runtime dependencies.agi-packaims to simplify this by allowing developers to define Dockerfiles in a concise YAML format and then generate them based on your environment needs (i.e. python version, system packages, conda/pip dependencies, GPU drivers etc).For example, you should be able to easily configure yourdevenvironment for local development, and have a separateprodenvironment where you'll only need the runtime dependencies avoiding any bloat.agi-packhopes to also standardize the base images, so that we can really build on top of giants.More Complex Example ๐Now imagine you want to build a more complex image that has multiple stages, and you want to build abaseimage that has all the basic dependencies, and adevimage that has additional build-time dependencies.images:base-cpu:name:agibase:debian:buster-slimsystem:-wgetpython:"3.8.10"pip:-scikit-learnrun:-echo "Hello, world!"dev-cpu:base:base-cpusystem:-build-essentialOnce you've defined thisagibuild.yaml, runningagi-pack generatewill generate the following output:$agi-packgenerate-cagibuild.yaml
๐ฆbase-cpu
โโโ๐SuccessfullygeneratedDockerfile(target=base-cpu,filename=Dockerfile).โโโ`dockerbuild-fDockerfile--targetbase-cpu.`๐ฆdev-cpu
โโโ๐SuccessfullygeneratedDockerfile(target=dev-cpu,filename=Dockerfile).โโโ`dockerbuild-fDockerfile--targetdev-cpu.`As you can see,agi-packwill generate asingleDockerfile for each of the targets defined in the YAML file. You can then build the individual images from the same Dockerfile using docker targets:docker build -f Dockerfile --target <target> .where<target>is the name of the image target you want to build.Here's the correspondingDockerfilethat was generated.Why the name? ๐คทโโ๏ธagi-packis very much intended to be tongue-in-cheek -- we are soon going to be living in a world full of quasi-AGI agents orchestrated via ML containers. At the very least,agi-packshould provide the building blocks for us to build a more modular, re-usable, and distribution-friendly container format for "AGI".Inspiration and Attribution ๐TL;DRagi-packwas inspired by a combination ofReplicate'scog,Baseten'struss,skaffold, andDocker Compose Services. I wanted a standalone project without any added cruft/dependencies of vendors and services.๐ฆagi-packis simply a weekend project I hacked together, that started with a conversation withChatGPT / GPT-4.ChatGPT PromptPrompt:I'm building a Dockerfile generator and builder to simplify machine learning infrastructure. I'd like for the Dockerfile to be dynamically generated (using Jinja templates) with the following parametrizations:# Sample YAML fileimages:base-gpu:base:nvidia/cuda:11.8.0-base-ubuntu22.04system:-gnupg2-build-essential-gitpython:"3.8.10"pip:-torch==2.0.1I'd like for this yaml file to generate a Dockerfile viaagi-pack generate -c <name>.yaml. You are an expert in Docker and Python programming, how would I implement this builder in Python. Use Jinja2 templating and miniconda python environments wherever possible. I'd like an elegant and concise implementation that I can share on PyPI.Contributing ๐คContributions are welcome! Please read theCONTRIBUTINGguide for more information.License ๐This project is licensed under the MIT License. See theLICENSEfile for details. |
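Below is a minimal sketch (not part of the original README) showing one way to drive the documented CLI from Python: it writes the quickstart agibuild.yaml shown above and then runs the documented "agi-pack generate -c agibuild.yaml" command via subprocess. It assumes agi-pack is installed and available on PATH.
import subprocess

# The quickstart configuration from the README, written out programmatically.
config = """images:
  sklearn-base:
    base: debian:buster-slim
    system:
      - wget
      - build-essential
    python: "3.8.10"
    pip:
      - loguru
      - typer
      - scikit-learn
"""

with open("agibuild.yaml", "w") as f:
    f.write(config)

# Equivalent to running "agi-pack generate -c agibuild.yaml" in a shell.
subprocess.run(["agi-pack", "generate", "-c", "agibuild.yaml"], check=True)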
agi-probability | No description available on PyPI. |
agipy | /ษหdสษชหpaษช/, portmanteau of AgI (chemical formula of silver iodide) and Pythonagipyenables you to destroy cloud infrastructure in a hurry. Whether you want to destroy a demo
environment you no longer need or want to purge artifacts a colleague left behind,agipyis here
to help you dissolve your cloud problems.About the nameagipyis a portmanteau of the chemical formula AgI for silver iodide and the word Python.Silver iodide is a chemical used forcloud seeding.
Cloud seeding is a weather modification that changes the amount or type of precipitation that falls from clouds;
in my opinion that is an appropriate metaphor for deleting cloud infrastructure, especially compared to
alternatives that include โnukesโ or โEMPsโ.Pythonis the languageagipyis implemented in. It is a great language for
everything concerning cloud infrastructure and automation, and it happens to be the language I feel most
comfortable with.How to useagipyagipyis available onPyPI, so you can install it usingpip:pipinstall--useragipyIn addition you can clone the repository and then install it from source via:python3setup.pyinstallTo find out how the CLI works, use the built-in help function:agipy--helpagipyis based on the concept ofproviders: For each public cloud provider, aagipy-specific
provider module has to be implemented. At the moment, the following provider modules exist:Microsoft AzureBecause each provider has its own semantics, please refer to the respective subsection.How to use theazureproviderAs of yet, theazureprovider is capable of deleting all resource groups with a specific prefix.
For using the provider, you have to set the following environment variables:AZURE_CLIENT_ID, the service principal IDAZURE_CLIENT_SECRET, the service principal secretAZURE_SUBSCRIPTION_IDAZURE_TENANT_IDAlternatively you can provide any of these values via command line arguments, see below.You can call the provider via:agipyazure--prefix=${RG_PREFIX}[--client-id=${AZURE_CLIENT_ID}][--client-secret=${AZURE_CLIENT_SECRET}][--subscription-id=${AZURE_SUBSCRIPTION_ID}][--tenant-id=${AZURE_TENANT_ID}]How to TestThe test suite is based on the great pytest (https://docs.pytest.org/en/latest/) library. Once installed, the automatic test
discovery can be used viapytestagipyIn order to keep the tests clean and run smoothly, I try to use subtests and patches where possible.How to ContributeThis project is released under the GPL license, so I am happy for contributions, forks, and feedback!
This can be done in many ways:Open PRs with improvements,add issues if you find bugs,add issues for new features,useagipyand spread the word.PRs have to comply with: black (https://black.readthedocs.io/en/stable/), mypy (http://mypy-lang.org), and pylint (https://www.pylint.org). New features must be covered by proper tests (ideally unit tests and functional tests) using the pytest (https://docs.pytest.org/en/latest/) library.Contributorsshimst3rLicenseagipy enables you to destroy cloud infrastructure in a hurry.
Copyright (C) 2019 Nils Mรผ[email protected] program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.You should have received a copy of the GNU General Public License
along with this program. If not, seehttps://www.gnu.org/licenses/ |
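As a small illustration (not from the original README), here is one way to script the azure provider described above from Python, using the documented environment variables and CLI flags. The credential values are placeholders, and agipy is assumed to be installed on PATH.
import os
import subprocess

# Credentials read by the azure provider (variable names taken from the README above).
os.environ["AZURE_CLIENT_ID"] = "<service-principal-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<service-principal-secret>"
os.environ["AZURE_SUBSCRIPTION_ID"] = "<subscription-id>"
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"

# Delete every resource group whose name starts with the given prefix.
subprocess.run(["agipy", "azure", "--prefix=demo-"], check=True)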
agi-suggester | Failed to fetch description. HTTP Status Code: 404 |
agisys | No description available on PyPI. |
agit | tools for agi |
agithub | The Agnostic GitHub APIIt doesn't know, and you don't care!agithubis a REST API client with transparent syntax which facilitates
rapid prototyping โ onanyREST API!Originally tailored to the GitHub REST API, AGitHub has grown up to
support many other REST APIs:DigitalOceanFacebookGitHubOpenWeatherMapSalesForceAdditionally, you can addfull supportfor another REST API with very
little new code! To see how, check out theFacebook client, which has
about 30 lines of code.This works because AGithub knows everything it needs to about protocol
(REST, HTTP, TCP), but assumes nothing about your upstream API.UseThe most striking quality of AGitHub is how closely its syntax emulates
HTTP. In fact, you might find it even more convenient than HTTP, and
almost as general (as far as REST APIs go, anyway). The examples below
tend to use the GitHub API as a reference point, but it is no less easy to
useagithubwith, say, the Facebook Graph.Create a clientfromagithub.GitHubimportGitHubclient=GitHub()GETHere's how to do aGETrequest, with properly-encoded url parameters:client.issues.get(filter='subscribed')That is equivalent to the following:GET /issues/?filter=subscribedPOSTHere's how to send a request body along with your request:some_object={'foo':'bar'}client.video.upload.post(body=some_object,tags="social devcon")This will send the following request, withsome_objectserialized as
the request body:*POST /video/upload?tags=social+devcon{"foo": "bar"}Thebodyparameter is reserved and is used to define the request body to be
POSTed.tagsis an example query parameter, showing that you can pass both
an object to send as the request body as well as query parameters.*For now, the request body is limited to JSON data; but
we plan to add support for other types as wellParametersheadersPass custom http headers in your request with the reserved parameterheaders.fromagithub.GitHubimportGitHubg=GitHub()headers={'Accept':'application/vnd.github.symmetra-preview+json'}status,data=g.search.labels.get(headers=headers,repository_id=401025,q='¯\_(ツ)_/¯')print(data['items'][0]){u'default': False, u'name': u'\xaf\\_(\u30c4)_/\xaf', u'url': u'https://api.github.com/repos/github/hub/labels/%C2%AF%5C_(%E3%83%84)_/%C2%AF', u'color': u'008672', u'node_id': u'MDU6TGFiZWwxMTcwNjYzNTM=', u'score': 43.937515, u'id': 117066353, u'description': u''}bodyIf you're usingPOST,PUT, orPATCH(post(),put(), orpatch()),
then you should include the body as thebody=argument. The body is
serialized to JSON before sending it out on the wire.fromagithub.GitHubimportGitHubg=GitHub()# This Content-Type header is only required in this example due to a GitHub# requirement for this specific markdown.raw API endpointheaders={'Content-Type':'text/plain'}body='# This should be my header'status,data=g.markdown.raw.post(body=body,headers=headers)print(data)<h1>
<a id="user-content-this-should-be-my-header" class="anchor" href="#this-should-be-my-header" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>This should be my header</h1>Example AppFirst, instantiate aGitHubobject.fromagithub.GitHubimportGitHubg=GitHub()When you make a request, the status and response body are passed back
as a tuple.status,data=g.users.octocat.get()print(data['name'])print(status)The Octocat
200If you forget the request method,agithubwill complain that you
haven't provided enough information to complete the request.g.users<class 'agithub.github.IncompleteRequest'>: /usersSometimes, it is inconvenient (or impossible) to refer to a URL as a
chain of attributes, so indexing syntax is provided as well. It
behaves exactly the same. In these examples we use indexing syntax because
you can't have a python function namestarting with a digit :1containing a dash (-) character :Spoon-Knifeg.repos.github.hub.issues[1].get()g.repos.octocat['Spoon-Knife'].branches.get()(200, { 'id': '#blah', ... })
(200, [ list, of, branches ])You can also pass query parameter to the API as function parameters to the
method function (e.g.get).status,data=g.repos.octocat['Spoon-Knife'].issues.get(state='all',creator='octocat')print(data[0].keys())print(status)[u'labels', u'number', โฆ , u'assignees']
200Notice the syntax here:<API-object>.<URL-path>.<request-method>(<query-parameters>)API-object:gURL-path:repos.octocat['Spoon-Knife'].issuesrequest-method:getquery-parameters:state='all', creator='octocat'As a weird quirk of the implementation, you may build a partial call
to the upstream API, and use it later.deffollowing(self,user):returnself.user.following[user].getmyCall=following(g,'octocat')if204==myCall()[0]:print'You are following octocat'You are following octocatYou may find this useful โ or not.Finally,agithubknows nothing at all about the GitHub API, and it
won't second-guess you.g.funny.I.donna.remember.that.one.head()(404, {'message': 'Not Found'})The error message you get is directly from GitHub's API. This gives
you all of the information you need to survey the situation.If you need more information, the response headers of the previous
request are available via thegetheaders()method.g.getheaders()[('status', '404 Not Found'),
('x-ratelimit-remaining', '54'),
โฆ
('server', 'GitHub.com')]Note that the headers are standardized to all lower case. So though, in this
example, GitHub returns a header ofX-RateLimit-Remainingthe header is
returned fromgetheadersasx-ratelimit-remainingError handlingErrors are handled in the most transparent way possible: they are passed
on to you for further scrutiny. There are two kinds of errors that can
crop up:Networking Exceptions (from thehttplibrary). Catch these withtry .. catchblocks, as you otherwise would.GitHub API errors. These mean you're doing something wrong with the
API, and they are always evident in the response's status. The API
considerately returns a helpful error message in the JSON body.Specific REST APIsagithubincludes a handful of implementations for specific REST APIs. The
example above uses the GitHub API but only for demonstration purposes. It
doesn't include any GitHub specific functionality (for example, authentication).Here is a summary of additional functionality available for each distinct REST
API with support included inagithub. Keep in mind,agithubis designed
to be extended to any REST API and these are just an initial collection of APIs.GitHub :agithub/GitHub.pyGitHub AuthenticationTo initiate an authenticatedGitHubobject, pass it your username and password
or atoken.fromagithub.GitHubimportGitHubg=GitHub('user','pass')fromagithub.GitHubimportGitHubg=GitHub(token='token')GitHub PaginationWhen calling the GitHub API with a query that returns many results, GitHub willpaginatethe response, requiring
you to request each page of results with separate API calls. If you'd like to
automatically fetch all pages, you can enable pagination in theGitHubobject
by settingpaginatetoTrue.fromagithub.GitHubimportGitHubg=GitHub(paginate=True)status,data=g.repos.octocat['Spoon-Knife'].issues.get()status,data=g.users.octocat.repos.get(per_page=1)print(len(data))8(added in v2.2.0)GitHub Rate LimitingBy default, if GitHub returns a response indicating that a request was refused
due torate limiting, agithub
will wait until the point in time when the rate limit is lifted and attempt
the call again.If you'd like to disable this behavior and instead just return the error
response from GitHub setsleep_on_ratelimittoFalse.fromagithub.GitHubimportGitHubg=GitHub(sleep_on_ratelimit=False)status,data=g.repos.octocat['Spoon-Knife'].issues.get()print(status)print(data['message'])403
API rate limit exceeded for 203.0.113.2. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)(added in v2.2.0)GitHub LoggingTo see log messages related to GitHub specific features like pagination and
rate limiting, you can use a root logger from the Python logging module.importlogginglogging.basicConfig()logger=logging.getLogger()# The root loggerlogger.setLevel(logging.DEBUG)fromagithub.GitHubimportGitHubg=GitHub(paginate=True)status,data=g.repos.octocat['Spoon-Knife'].issues.get()DEBUG:agithub.GitHub:No GitHub ratelimit remaining. Sleeping for 676 seconds until 14:22:43 before trying API call again.
DEBUG:agithub.GitHub:Fetching an additional paginated GitHub response page at https://api.github.com/repositories/1300192/issues?page=2
DEBUG:agithub.GitHub:Fetching an additional paginated GitHub response page at https://api.github.com/repositories/1300192/issues?page=3
โฆSemanticsHere's howagithubworks, under the hood:It translates a sequence of attribute look-ups into a URL; The
Python method you call at the end of the chain determines the
HTTP method to use for the request.The Python method also receivesname=valuearguments, which it
interprets as follows:headers=You can include custom headers as a dictionary supplied to theheaders=argument. Some headers are provided by default (such as
User-Agent). If these occur in the supplied dictionary, the default
value will be overridden.headers={'Accept':'application/vnd.github.loki-preview+json'}body=If you're usingPOST,PUT, orPATCH(post(),put(), andpatch()), then you should include the body as thebody=argument.
The body is serialized to JSON before sending it out on the wire.GET ParametersAny other arguments to the Python method become GET parameters, and are
tacked onto the end of the URL. They are, of course, url-encoded for
you.When the response is received,agithublooks at its content
type to determine how to handle it, possibly decoding it from the
given char-set to Python's Unicode representation, then converting to
an appropriate form, then passed to you along with the response
status code. (A JSON object is de-serialized into a Python object.)Extensibilityagithubhas been written in an extensible way. You can easily:Add new HTTP methods by extending theClientclass with
new Python methods of the same name (and adding them to thehttp_methodslist).Add new default headers to the_default_headersdictionary.
Just make sure that the header names are lower case.Add a new media-type (a.k.a. content-type a.k.a mime-type) by
inserting a new method into theResponseBodyclass, replacing'-'and'/'with'_'in the method name. That method will then be
responsible for converting the response body to a usable
form โ and for callingdecode_bodyto do char-set
conversion, if required. For example to create a handler for the content-typeapplication/xmlyou'd extendResponseBodyand create a new method like thisimportxml.etree.ElementTreeasETclassCustomResponseBody(ResponseBody):def__init__(self):super(ChildB,self).__init__()defapplication_xml(self):# Handles Content-Type of "application/xml"returnET.fromstring(self.body)And if all else fails, you can strap in, and take 15 minutes to read and
become an expert on the code. From there, anything's possible.LicenseCopyright 2012โ2016 Jonathan Paugh and contributors
SeeCOPYINGfor license details |
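A short recap example (not from the original README) combining the pieces documented above: token authentication, automatic pagination, indexing syntax, and status checking. The token value is a placeholder.
from agithub.GitHub import GitHub

g = GitHub(token="your-token", paginate=True)

# GET /repos/octocat/Spoon-Knife/issues?state=all
status, data = g.repos.octocat['Spoon-Knife'].issues.get(state='all')
if status == 200:
    print(len(data), "issues found")
else:
    # API errors come back as a JSON body containing a helpful message.
    print("error:", data.get('message'))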
agixt | AGiXTAGiXT is a dynamic Artificial Intelligence Automation Platform engineered to orchestrate efficient AI instruction management and task execution across a multitude of providers. Our solution infuses adaptive memory handling with a broad spectrum of commands to enhance AI's understanding and responsiveness, leading to improved task completion. The platform's smart features, like Smart Instruct and Smart Chat, seamlessly integrate web search, planning strategies, and conversation continuity, transforming the interaction between users and AI. By leveraging a powerful plugin system that includes web browsing and command execution, AGiXT stands as a versatile bridge between AI models and users. With an expanding roster of AI providers, code evaluation capabilities, comprehensive chain management, and platform interoperability, AGiXT is consistently evolving to drive a multitude of applications, affirming its place at the forefront of AI technology.Embracing the spirit of extremity in every facet of life, we introduce AGiXT. This advanced AI Automation Platform is our bold step towards the realization of Artificial General Intelligence (AGI). Seamlessly orchestrating instruction management and executing complex tasks across diverse AI providers, AGiXT combines adaptive memory, smart features, and a versatile plugin system to maximize AI potential. With our unwavering commitment to innovation, we're dedicated to pushing the boundaries of AI and bringing AGI closer to reality.AGiXT Setup and Usage VideoTable of Contents ๐AGiXTTable of Contents ๐โ ๏ธ DisclaimersMonitor Your UsageKey Features ๐๏ธGetting Started with Local Models and AGiXT VideoQuick Start GuideWindows and Mac PrerequisitesLinux PrerequisitesDownload and InstallRunning and Updating AGiXTConfigurationDocumentationOther RepositoriesContributingDonations and SponsorshipsOur Team ๐งโ๐ปJosh (@Josh-XT)James (@JamesonRGrieve)Historyโ ๏ธ DisclaimersMonitor Your UsagePlease note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We'reNOTresponsible for your usage under any circumstances.Key Features ๐๏ธContext and Token Management: Adaptive handling of long-term and short-term memory for an optimized AI performance, allowing the software to process information more efficiently and accurately.Smart Instruct: An advanced feature enabling AI to comprehend, plan, and execute tasks effectively. The system leverages web search, planning strategies, and executes instructions while ensuring output accuracy.Interactive Chat & Smart Chat: User-friendly chat interface for dynamic conversational tasks. The Smart Chat feature integrates AI with web research to deliver accurate and contextually relevant responses.Task Execution & Smart Task Management: Efficient management and execution of complex tasks broken down into sub-tasks. 
The Smart Task feature employs AI-driven agents to dynamically handle tasks, optimizing efficiency and avoiding redundancy.Chain Management: Sophisticated handling of chains or a series of linked commands, enabling the automation of complex workflows and processes.Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience, opening a wide range of possibilities for AI assistance.Multi-Provider Compatibility: Seamless integration with leading AI providers such as OpenAI GPT series, Hugging Face Huggingchat, GPT4All, GPT4Free, Oobabooga Text Generation Web UI, Kobold, llama.cpp, FastChat, Google Bard, Bing, and more.Versatile Plugin System & Code Evaluation: Extensible command support for various AI models along with robust support for code evaluation, providing assistance in programming tasks.Docker Deployment: Simplified setup and maintenance through Docker deployment.Audio-to-Text & Text-to-Speech Options: Integration with Hugging Face for seamless audio-to-text transcription, and multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.Platform Interoperability & AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings along with easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.Custom Prompts & Command Control: Granular control over agent abilities through enabling or disabling specific commands, and easy creation, editing, and deletion of custom prompts to standardize user inputs.RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services.Expanding AI Support: Continually updated to include new AI providers and services, ensuring the software stays at the forefront of AI technology.Getting Started with Local Models and AGiXT VideoThis is a video that walks through the process of setting up and using AGiXT to interact with locally hosted language models. This is a great way to get started with AGiXT and see how it works.Quick Start GuideWindows and Mac PrerequisitesGitDocker DesktopPowerShell 7.XLinux PrerequisitesGitDockerDocker ComposePowerShell 7.X(Optional if you want to use the launcher script, but not required)NVIDIA Container Toolkit(if using local models on GPU)Download and InstallOpen a PowerShell terminal and run the following to download and install AGiXT:Windows and Mac:gitclonehttps://github.com/Josh-XT/AGiXTcdAGiXT
./AGiXT.ps1Linux:gitclonehttps://github.com/Josh-XT/AGiXTcdAGiXT
sudopwsh./AGiXT.ps1When you run theAGiXT.ps1script for the first time, it will create a.envfile automatically. There are a few questions asked on first run to help you get started. The default options are recommended for most users.For advanced environment variable setup, see theEnvironment Variable Setupdocumentation for guidance on setup.(AGiXT ASCII-art banner)
-------------------------------
Visit our documentation at https://AGiXT.com
Welcome to the AGiXT Environment Setup!
Would you like AGiXT to auto update? (y/n - default: y):
Would you like to set an API Key for AGiXT? Enter it if so, otherwise press enter to proceed. (default is blank):
Enter the number of AGiXT workers to run (default: 10):After the environment setup is complete, you will have the following options:
1. Run AGiXT (Stable - Recommended!)
2. Run AGiXT (Development)
3. Run Backend Only (Development)
4. Exit
Enteryourchoice:Choose Option 1 to run AGiXT with the latest stable release. This is the recommended option for most users. If you're not actively developing AGiXT, this is the option you should choose.Running and Updating AGiXTAny time you want to run or update AGiXT, run the following commands from yourAGiXTdirectory:./AGiXT.ps1Access the web interface athttp://localhost:8501Access the AGiXT API documentation athttp://localhost:7437ConfigurationEach AGiXT Agent has its own settings for interfacing with AI providers, and other configuration options. These settings can be set and modified through the web interface.DocumentationNeed more information? Check out thedocumentationfor more details to get a better understanding of the concepts and features of AGiXT.Other RepositoriesCheck out the other AGiXT repositories athttps://github.com/orgs/AGiXT/repositories- these include the AGiXT Streamlit Web UI, AGiXT Python SDK, AGiXT TypeScript SDK, and more!ContributingWe welcome contributions to AGiXT! If you're interested in contributing, please check out ourcontributions guidetheopen issues on the backend,open issues on the frontendandpull requests, submit apull request, orsuggest new features. To stay updated on the project's progress,and. Also feel free to join our.Donations and SponsorshipsWe appreciate any support for AGiXT's development, including donations, sponsorships, and any other kind of assistance. If you would like to support us, please use one of the various methods listed at the top of the repository or contact us through ouror.We're always looking for ways to improve AGiXT and make it more useful for our users. Your support will help us continue to develop and enhance the application. Thank you for considering to support us!Our Team ๐งโ๐ปJosh (@Josh-XT)James (@JamesonRGrieve)History |
agixtsdk | AGiXT SDK for PythonThis repository is for theAGiXTSDK for Python.InstallationpipinstallagixtsdkUsagefromagixtsdkimportAGiXTSDKbase_uri="http://localhost:7437"api_key="your_agixt_api_key"ApiClient=AGiXTSDK(base_uri=base_uri,api_key=api_key)Check out the AGiXTExamples and Tests Notebookfor examples of how to use the AGiXT SDK for Python.More DocumentationWant to know more about AGiXT? Check out ourdocumentationorGitHubpage. |
agjax | Agjax -- jax wrapper for autograd-differentiable functions.v0.3.0Agjax allows existing code built with autograd to be used with the jax framework.In particular,agjax.wrap_for_jaxallows arbitrary autograd functions ot be differentiated usingjax.grad. Several other function transformations (e.g. compilation viajax.jit) are not supported.Meanwhile,agjax.experimental.wrap_for_jaxsupportsgrad,jit,vmap, andjacrev. However, it depends on certain under-the-hood behavior by jax, which is not guaranteed to remain unchanged. It also is more restrictive in terms of the valid function signatures of functions to be wrapped: all arguments and outputs must be convertible to valid jax types. (agjax.wrap_for_jaxalso supports non-jax inputs and outputs, e.g. strings.)Installationpip install agjaxUsageBasic usage is as follows:@agjax.wrap_for_jaxdeffn(x,y):returnx*npa.cos(y)jax.grad(fn,argnums=(0,1))(1.0,0.0)# (Array(1., dtype=float32), Array(0., dtype=float32))The experimental wrapper is similar, but requires that the function outputs and datatypes be specified, simiilar tojax.pure_callback.wrapped_fn=agjax.experimental.wrap_for_jax(lambdax,y:x*npa.cos(y),result_shape_dtypes=jnp.ones((5,)),)jax.jacrev(wrapped_fn,argnums=0)(jnp.arange(5,dtype=float),jnp.arange(5,10,dtype=float))# [[ 0.28366217 0. 0. 0. 0. ]# [ 0. 0.96017027 0. 0. 0. ]# [ 0. 0. 0.75390226 0. 0. ]# [ 0. 0. 0. -0.14550003 0. ]# [ 0. 0. 0. 0. -0.91113025]]Agjax wrappers are intended to be quite general, and can support functions with multiple inputs and outputs as well as functions that have nondifferentiable outputs or arguments that cannot be differentiated with respect to. These should be specified usingnondiff_argnumsandnondiff_outputnumsarguments. In the experimental wrapper, these must still be jax-convertible types, while in the standard wrapper they may have arbitrary [email protected](agjax.wrap_for_jax,nondiff_argnums=(2,),nondiff_outputnums=(1,))deffn(x,y,string_arg):returnx*npa.cos(y),string_arg*2(value,aux),grad=jax.value_and_grad(fn,argnums=(0,1),has_aux=True)(1.0,0.0,"test")print(f"value ={value}")print(f" aux ={aux}")print(f" grad ={grad}")value = 1.0
aux = testtest
grad = (Array(1., dtype=float32), Array(0., dtype=float32)) |
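As an additional minimal sketch (not from the original README, assuming autograd and jax are installed): wrapping a one-argument autograd function and differentiating it with jax.grad, mirroring the basic usage shown above.
import autograd.numpy as npa
import jax
import agjax

@agjax.wrap_for_jax
def softplus(x):
    # Implemented with autograd.numpy, but differentiable from jax after wrapping.
    return npa.log(1.0 + npa.exp(x))

print(jax.grad(softplus)(0.5))  # expected to be close to sigmoid(0.5) ~ 0.62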
aglar | No description available on PyPI. |
agl-base-db | agl-base-dbDatabase Backend |
agldata | No description available on PyPI. |
aglfn | aglfnPython utilities for AGLFN (Adobe Glyph List For New Fonts)pip installaglfn๐พ Installโข๐ฎ Quick startโข๐ Testingโข๐ Acknowledgementsโข๐ค Contributingโข๐ผ Licenseaglfnis a small utility to accessAGLFNnames easily in Python.
Many software tools for new typefaces often refer to glyphs by these names.
Some typography tools tend to hardcode the aglfn.txt file and parse it, so this
is an attempt to use the AGLFN repository as a submodule without embedding
those files each time in our repos.AGLFN is a subset of the AGL list intended to provide a base list of glyphs for a
new Latin font. More detailed information could be found on therepoand on theAGL specification.๐ฉ Table of Contents(click to expand)InstallQuick startTestingAcknowledgementsContributingLicense๐พ Installpip install aglfnor if you want to install it locally for development clone this repo and thencdaglfn
pipinstall-e.๐ฎ Quick startnamesget the list of all the AGLFN namesimportaglfnprint(aglfn.names)glyphsget the list of all glyphs with a corresponding AGLFN nameimportaglfnprint(aglfn.glyphs)name()get the corresponding AGLFN name by passing a glyphimportaglfnname=aglfn.name('โฌ')assert'Euro'==nameto_glyph()get the corresponding glyph by passing an AGLFN nameimportaglfnglyph=aglfn.to_glyph('Euro')assert'โฌ'==glyph๐ TestingTest are executed with travis, in case you want to run them locally just:cdaglfn
pythonsetup.pytest๐ AcknowledgementsCopyright ๐ฏ 2020 Puria Nafisi Azizi, ItalyDesigned, written and maintained by Puria Nafisi Azizi.Logo, dictionary by Smalllike from the Noun Project.๐ค Contributing๐FORK ITCreate your feature branchgit checkout -b feature/branchCommit your changesgit commit -am 'Add some foobar'Push to the branchgit push origin feature/branchCreate a new Pull Request๐ Thank you๐ผ Licenseaglfn - Python utilities for Adobe Glyph List For New Fonts
Copyright ๐ฏ 2020 Puria Nafisi Azizi, Italy
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. |
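A tiny round-trip example (not part of the original README) using the helpers documented above:
import aglfn

name = aglfn.name('€')        # -> 'Euro'
glyph = aglfn.to_glyph(name)  # -> '€'
assert glyph == '€'

# The full name and glyph lists are also available.
print(len(aglfn.names), len(aglfn.glyphs))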
agl-frame2vid | AGL-Frame2Vid: Convert Extracted Frames to VideoDescriptionAGL-Frame2Vid is a Python package that enables users to create videos from a sequence of extracted frames. This tool is especially designed to work in tandem with theagl-frame-extractorpackage, utilizing its metadata to determine video attributes like framerate.The package aims to facilitate research in the field of gastrointestinal endoscopy by providing an easy and standardized way to create videos for both training and clinical practice.InstallationRequirementsPython 3.11 or higherOpenCV-Python 4.5 or higherFFmpeg-Python 0.2.0 or higherUsing PoetryTo install the package, navigate to the package directory and run:poetryinstallUsageBasic ExampleHere's a basic example that shows how to useFrame2Vid:fromagl_frame2vidimportFrame2Vid# Initialize the converterconverter=Frame2Vid("input_frames","output.mp4")# Generate the videoconverter.generate_video()DocumentationFor complete documentation, refer to theofficial documentation.TestingTests can be run usingpytest:pytestContributingContributions are welcomed to improve the package. Please read thecontribution guidelinesfor more information.AuthorsThomas J. [email protected] project is licensed under the MIT License - see theLICENSE.mdfile for details. |
agl-frame-extractor | agl-frame-extractorDescriptionagl_frame_extractoris a Python package designed to extract individual frames and metadata from .MOV video files. This package is highly useful for researchers and clinicians who require frame-by-frame analysis of video data. With applications ranging from medical research to training simulations, the package aims to improve standards and classifications in gastrointestinal endoscopy by enhancing objectivity and reproducibility.FeaturesExtracts individual frames from .MOV files and saves them as PNG images.Gathers video metadata including total number of frames, frames per second, and video duration.Offers optional multithreading support for faster frame extraction.Generates a log file to record the extraction process.InstallationTo install this package, clone the repository and run the following command in the repository root:pipinstall-e.UsageBasic Usagefromvideo_frame_extractor.extractorimportVideoFrameExtractorinput_folder="input_videos"output_folder="output_frames_metadata"extractor=VideoFrameExtractor(input_folder,output_folder)extractor.extract_frames_and_metadata()# If you want to extract png files instead of jpgs:input_folder="input_videos"output_folder="output_frames_metadata"extractor=VideoFrameExtractor(input_folder,output_folder,image_format='png')extractor.extract_frames_and_metadata()Multithreaded UsageTo enable multithreading for faster frame extraction:fromvideo_frame_extractor.extractorimportVideoFrameExtractorinput_folder="input_videos"output_folder="output_frames_metadata"extractor=VideoFrameExtractor(input_folder,output_folder,use_multithreading=True)extractor.extract_frames_and_metadata()DependenciesOpenCVtqdmLoggingThe package generates a log filevideo_frame_extraction.login the directory where it is executed. This log file contains detailed information about the extraction process.ContributingWe welcome contributions to improve this package. Please follow the standard GitHub pull request process. Make sure to document your code thoroughly, keeping in mind that the package targets an academic audience focused on research. |
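Since this package is designed to work together with agl-frame2vid (listed above), here is a hedged sketch, not taken from either README, chaining the two: frames are extracted from .MOV files and then reassembled into a video. The folder names are placeholders, the import path follows the extractor README, and whether Frame2Vid reads the extractor's output layout directly is an assumption.
from video_frame_extractor.extractor import VideoFrameExtractor  # import path as shown in the extractor README
from agl_frame2vid import Frame2Vid

# Extract PNG frames and metadata from every .MOV file in input_videos/.
extractor = VideoFrameExtractor("input_videos", "output_frames_metadata",
                                image_format='png', use_multithreading=True)
extractor.extract_frames_and_metadata()

# Rebuild a video from the extracted frames (assumes the folder layout is compatible).
converter = Frame2Vid("output_frames_metadata", "output.mp4")
converter.generate_video()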
aglink | aglinkGenerator LinksInstallingInstall and update usingpip_:.. code-block:: text$ pip install aglinkaglink supports Python 2 and newer, Python 3 and newer, and PyPy... _pip:https://pip.pypa.io/en/stable/quickstart/ExampleWhat does it look like? Here is an example of a simple generate link:.. code-block:: batch$ agl.py -d --pcloud -p /root/Downloads "https://www110.zippyshare.com/v/0CtTucxG/file.html"And what it looks like when run:.. code-block:: batch$ GENERATE : "http://srv-8.directserver.us/?file=473600c431"
NAME : "Game Of Thrones S01E01.mp4"
DOWNLOAD NAME : "Game Of Thrones S01E01.mp4"You can use it in the Python interpreter.. code-block:: python>>> from aglink.agl import autogeneratelink
>>> c = autogeneratelink()
>>>
>>> c.generate("https://www110.zippyshare.com/v/0CtTucxG/file.html", direct_download=True, download_path=".", pcloud = True, pcloud_username = "[email protected]", pcloud_password = "tester123", wget = True, auto=True)
>>> GENERATE : "http://srv-8.directserver.us/?file=473600c431"
NAME : "Game Of Thrones S01E01.mp4"
DOWNLOAD NAME : "Game Of Thrones S01E01.mp4"For more options use '-h' or '--help'.. code-block:: python$ aglink --help
or
$ agl --helpSupportDirect Upload to PcloudDownload With 'wget' (linux) or 'Internet Download Manager (IDM) (Windows) (pip install idm)'Python 2.7 + (only)Windows, LinuxLinksLicense:BSD <https://bitbucket.org/licface/aglink/src/default/LICENSE.rst>_Code:https://bitbucket.org/licface/aglinkIssue tracker:https://bitbucket.org/licface/aglink/issues |
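A cleaned-up version of the interpreter example above, written as a script; the URL and keyword options are the ones shown in the README (no new functionality is assumed).
from aglink.agl import autogeneratelink

c = autogeneratelink()
# Generate a direct link and download it with wget, as in the example above.
c.generate(
    "https://www110.zippyshare.com/v/0CtTucxG/file.html",
    direct_download=True,
    download_path=".",
    wget=True,
    auto=True,
)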
aglio | Another Geodata Library for Input/Output (aglio :garlic:)agliofocuses on data aggregation and regularization from a variety of sources, with an emphasis on geophysical applications but may be useful beyond.For now, check out the examplesherefor an idea of what you can do!InstallationFor a base installation:pipinstallaglioTo install optional dependencies:pipinstallaglio[full]To install extra dependencies that are not used byagliobutareused in
some of the examples:pipinstallaglio[extra]Development installationafter forking and cloning:pipinstall-e.[dev] |
aglioolio | Contains some data |
aglite-test | AutoML for Image, Text, Time Series, and Tabular DataInstall Instructions| Documentation (Stable|Latest)AutoGluon automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.Example# First install package from terminal:# pip install -U pip# pip install -U setuptools wheel# pip install autogluon # autogluon==0.7.0fromautogluon.tabularimportTabularDataset,TabularPredictortrain_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')test_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')predictor=TabularPredictor(label='class').fit(train_data,time_limit=120)# Fit models for 120sleaderboard=predictor.leaderboard(test_data)AutoGluon TaskQuickstartAPITabularPredictorMultiModalPredictorTimeSeriesPredictorResourcesSee theAutoGluon Websitefordocumentationand instructions on:Installing AutoGluonLearning with tabular dataTips to maximize accuracy(ifbenchmarking, make sure to runfit()with argumentpresets='best_quality').Learning with multimodal data (image, text, etc.)Learning with time series dataRefer to theAutoGluon Roadmapfor details on upcoming features and releases.Scientific PublicationsAutoGluon-Tabular: Robust and Accurate AutoML for Structured Data(Arxiv, 2020)Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation(NeurIPS, 2020)Multimodal AutoML on Structured Tables with Text Fields(ICML AutoML Workshop, 2021)ArticlesAutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions(AWS Open Source Blog, Mar 2020)Accurate image classification in 3 lines of code with AutoGluon(Medium, Feb 2020)AutoGluon overview & example applications(Towards Data Science, Dec 2019)Hands-on TutorialsPractical Automated Machine Learning with Tabular, Text, and Image Data (KDD 2020)Train/Deploy AutoGluon in the CloudAutoGluon-Tabular on AWS MarketplaceAutoGluon-Tabular on Amazon SageMakerAutoGluon Deep Learning ContainersContributing to AutoGluonWe are actively accepting code contributions to the AutoGluon project. 
If you are interested in contributing to AutoGluon, please read theContributing Guideto get started.Citing AutoGluonIf you use AutoGluon in a scientific publication, please cite the following paper:Erickson, Nick, et al."AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data."arXiv preprint arXiv:2003.06505 (2020).BibTeX entry:@article{agtabular,title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},journal={arXiv preprint arXiv:2003.06505},year={2020}}If you are using AutoGluon Tabular's model distillation functionality, please cite the following paper:Fakoor, Rasool, et al."Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation."Advances in Neural Information Processing Systems 33 (2020).BibTeX entry:@article{agtabulardistill,title={Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation},author={Fakoor, Rasool and Mueller, Jonas W and Erickson, Nick and Chaudhari, Pratik and Smola, Alexander J},journal={Advances in Neural Information Processing Systems},volume={33},year={2020}}If you use AutoGluon's multimodal text+tabular functionality in a scientific publication, please cite the following paper:Shi, Xingjian, et al."Multimodal AutoML on Structured Tables with Text Fields."8th ICML Workshop on Automated Machine Learning (AutoML). 2021.BibTeX entry:@inproceedings{agmultimodaltext,title={Multimodal AutoML on Structured Tables with Text Fields},author={Shi, Xingjian and Mueller, Jonas and Erickson, Nick and Li, Mu and Smola, Alex},booktitle={8th ICML Workshop on Automated Machine Learning (AutoML)},year={2021}}AutoGluon for Hyperparameter OptimizationAutoGluon's state-of-the-art tools for hyperparameter optimization, such as ASHA, Hyperband, Bayesian Optimization and BOHB have moved to the stand-alone packagesyne-tune.To learn more, checkout our paper"Model-based Asynchronous Hyperparameter and Neural Architecture Search"arXiv preprint arXiv:2003.10865 (2020).@article{abohb,title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},journal={arXiv preprint arXiv:2003.10865},year={2020}}LicenseThis library is licensed under the Apache 2.0 License. |
aglite-test.common | AutoML for Image, Text, Time Series, and Tabular DataInstall Instructions| Documentation (Stable|Latest)AutoGluon automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.Example# First install package from terminal:# pip install -U pip# pip install -U setuptools wheel# pip install autogluon # autogluon==0.7.0fromautogluon.tabularimportTabularDataset,TabularPredictortrain_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')test_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')predictor=TabularPredictor(label='class').fit(train_data,time_limit=120)# Fit models for 120sleaderboard=predictor.leaderboard(test_data)AutoGluon TaskQuickstartAPITabularPredictorMultiModalPredictorTimeSeriesPredictorResourcesSee theAutoGluon Websitefordocumentationand instructions on:Installing AutoGluonLearning with tabular dataTips to maximize accuracy(ifbenchmarking, make sure to runfit()with argumentpresets='best_quality').Learning with multimodal data (image, text, etc.)Learning with time series dataRefer to theAutoGluon Roadmapfor details on upcoming features and releases.Scientific PublicationsAutoGluon-Tabular: Robust and Accurate AutoML for Structured Data(Arxiv, 2020)Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation(NeurIPS, 2020)Multimodal AutoML on Structured Tables with Text Fields(ICML AutoML Workshop, 2021)ArticlesAutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions(AWS Open Source Blog, Mar 2020)Accurate image classification in 3 lines of code with AutoGluon(Medium, Feb 2020)AutoGluon overview & example applications(Towards Data Science, Dec 2019)Hands-on TutorialsPractical Automated Machine Learning with Tabular, Text, and Image Data (KDD 2020)Train/Deploy AutoGluon in the CloudAutoGluon-Tabular on AWS MarketplaceAutoGluon-Tabular on Amazon SageMakerAutoGluon Deep Learning ContainersContributing to AutoGluonWe are actively accepting code contributions to the AutoGluon project. 
If you are interested in contributing to AutoGluon, please read theContributing Guideto get started.Citing AutoGluonIf you use AutoGluon in a scientific publication, please cite the following paper:Erickson, Nick, et al."AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data."arXiv preprint arXiv:2003.06505 (2020).BibTeX entry:@article{agtabular,title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},journal={arXiv preprint arXiv:2003.06505},year={2020}}If you are using AutoGluon Tabular's model distillation functionality, please cite the following paper:Fakoor, Rasool, et al."Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation."Advances in Neural Information Processing Systems 33 (2020).BibTeX entry:@article{agtabulardistill,title={Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation},author={Fakoor, Rasool and Mueller, Jonas W and Erickson, Nick and Chaudhari, Pratik and Smola, Alexander J},journal={Advances in Neural Information Processing Systems},volume={33},year={2020}}If you use AutoGluon's multimodal text+tabular functionality in a scientific publication, please cite the following paper:Shi, Xingjian, et al."Multimodal AutoML on Structured Tables with Text Fields."8th ICML Workshop on Automated Machine Learning (AutoML). 2021.BibTeX entry:@inproceedings{agmultimodaltext,title={Multimodal AutoML on Structured Tables with Text Fields},author={Shi, Xingjian and Mueller, Jonas and Erickson, Nick and Li, Mu and Smola, Alex},booktitle={8th ICML Workshop on Automated Machine Learning (AutoML)},year={2021}}AutoGluon for Hyperparameter OptimizationAutoGluon's state-of-the-art tools for hyperparameter optimization, such as ASHA, Hyperband, Bayesian Optimization and BOHB have moved to the stand-alone packagesyne-tune.To learn more, checkout our paper"Model-based Asynchronous Hyperparameter and Neural Architecture Search"arXiv preprint arXiv:2003.10865 (2020).@article{abohb,title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},journal={arXiv preprint arXiv:2003.10865},year={2020}}LicenseThis library is licensed under the Apache 2.0 License. |
aglite-test.core | AutoML for Image, Text, Time Series, and Tabular DataInstall Instructions| Documentation (Stable|Latest)AutoGluon automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.Example# First install package from terminal:# pip install -U pip# pip install -U setuptools wheel# pip install autogluon # autogluon==0.7.0fromautogluon.tabularimportTabularDataset,TabularPredictortrain_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')test_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')predictor=TabularPredictor(label='class').fit(train_data,time_limit=120)# Fit models for 120sleaderboard=predictor.leaderboard(test_data)AutoGluon TaskQuickstartAPITabularPredictorMultiModalPredictorTimeSeriesPredictorResourcesSee theAutoGluon Websitefordocumentationand instructions on:Installing AutoGluonLearning with tabular dataTips to maximize accuracy(ifbenchmarking, make sure to runfit()with argumentpresets='best_quality').Learning with multimodal data (image, text, etc.)Learning with time series dataRefer to theAutoGluon Roadmapfor details on upcoming features and releases.Scientific PublicationsAutoGluon-Tabular: Robust and Accurate AutoML for Structured Data(Arxiv, 2020)Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation(NeurIPS, 2020)Multimodal AutoML on Structured Tables with Text Fields(ICML AutoML Workshop, 2021)ArticlesAutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions(AWS Open Source Blog, Mar 2020)Accurate image classification in 3 lines of code with AutoGluon(Medium, Feb 2020)AutoGluon overview & example applications(Towards Data Science, Dec 2019)Hands-on TutorialsPractical Automated Machine Learning with Tabular, Text, and Image Data (KDD 2020)Train/Deploy AutoGluon in the CloudAutoGluon-Tabular on AWS MarketplaceAutoGluon-Tabular on Amazon SageMakerAutoGluon Deep Learning ContainersContributing to AutoGluonWe are actively accepting code contributions to the AutoGluon project. 
If you are interested in contributing to AutoGluon, please read theContributing Guideto get started.Citing AutoGluonIf you use AutoGluon in a scientific publication, please cite the following paper:Erickson, Nick, et al."AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data."arXiv preprint arXiv:2003.06505 (2020).BibTeX entry:@article{agtabular,title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},journal={arXiv preprint arXiv:2003.06505},year={2020}}If you are using AutoGluon Tabular's model distillation functionality, please cite the following paper:Fakoor, Rasool, et al."Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation."Advances in Neural Information Processing Systems 33 (2020).BibTeX entry:@article{agtabulardistill,title={Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation},author={Fakoor, Rasool and Mueller, Jonas W and Erickson, Nick and Chaudhari, Pratik and Smola, Alexander J},journal={Advances in Neural Information Processing Systems},volume={33},year={2020}}If you use AutoGluon's multimodal text+tabular functionality in a scientific publication, please cite the following paper:Shi, Xingjian, et al."Multimodal AutoML on Structured Tables with Text Fields."8th ICML Workshop on Automated Machine Learning (AutoML). 2021.BibTeX entry:@inproceedings{agmultimodaltext,title={Multimodal AutoML on Structured Tables with Text Fields},author={Shi, Xingjian and Mueller, Jonas and Erickson, Nick and Li, Mu and Smola, Alex},booktitle={8th ICML Workshop on Automated Machine Learning (AutoML)},year={2021}}AutoGluon for Hyperparameter OptimizationAutoGluon's state-of-the-art tools for hyperparameter optimization, such as ASHA, Hyperband, Bayesian Optimization and BOHB have moved to the stand-alone packagesyne-tune.To learn more, checkout our paper"Model-based Asynchronous Hyperparameter and Neural Architecture Search"arXiv preprint arXiv:2003.10865 (2020).@article{abohb,title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},journal={arXiv preprint arXiv:2003.10865},year={2020}}LicenseThis library is licensed under the Apache 2.0 License. |
aglite-test.features | AutoML for Image, Text, Time Series, and Tabular DataInstall Instructions| Documentation (Stable|Latest)AutoGluon automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy machine learning and deep learning models on image, text, time series, and tabular data.Example# First install package from terminal:# pip install -U pip# pip install -U setuptools wheel# pip install autogluon # autogluon==0.7.0fromautogluon.tabularimportTabularDataset,TabularPredictortrain_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/train.csv')test_data=TabularDataset('https://autogluon.s3.amazonaws.com/datasets/Inc/test.csv')predictor=TabularPredictor(label='class').fit(train_data,time_limit=120)# Fit models for 120sleaderboard=predictor.leaderboard(test_data)AutoGluon TaskQuickstartAPITabularPredictorMultiModalPredictorTimeSeriesPredictorResourcesSee theAutoGluon Websitefordocumentationand instructions on:Installing AutoGluonLearning with tabular dataTips to maximize accuracy(ifbenchmarking, make sure to runfit()with argumentpresets='best_quality').Learning with multimodal data (image, text, etc.)Learning with time series dataRefer to theAutoGluon Roadmapfor details on upcoming features and releases.Scientific PublicationsAutoGluon-Tabular: Robust and Accurate AutoML for Structured Data(Arxiv, 2020)Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation(NeurIPS, 2020)Multimodal AutoML on Structured Tables with Text Fields(ICML AutoML Workshop, 2021)ArticlesAutoGluon for tabular data: 3 lines of code to achieve top 1% in Kaggle competitions(AWS Open Source Blog, Mar 2020)Accurate image classification in 3 lines of code with AutoGluon(Medium, Feb 2020)AutoGluon overview & example applications(Towards Data Science, Dec 2019)Hands-on TutorialsPractical Automated Machine Learning with Tabular, Text, and Image Data (KDD 2020)Train/Deploy AutoGluon in the CloudAutoGluon-Tabular on AWS MarketplaceAutoGluon-Tabular on Amazon SageMakerAutoGluon Deep Learning ContainersContributing to AutoGluonWe are actively accepting code contributions to the AutoGluon project. 
If you are interested in contributing to AutoGluon, please read theContributing Guideto get started.Citing AutoGluonIf you use AutoGluon in a scientific publication, please cite the following paper:Erickson, Nick, et al."AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data."arXiv preprint arXiv:2003.06505 (2020).BibTeX entry:@article{agtabular,title={AutoGluon-Tabular: Robust and Accurate AutoML for Structured Data},author={Erickson, Nick and Mueller, Jonas and Shirkov, Alexander and Zhang, Hang and Larroy, Pedro and Li, Mu and Smola, Alexander},journal={arXiv preprint arXiv:2003.06505},year={2020}}If you are using AutoGluon Tabular's model distillation functionality, please cite the following paper:Fakoor, Rasool, et al."Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation."Advances in Neural Information Processing Systems 33 (2020).BibTeX entry:@article{agtabulardistill,title={Fast, Accurate, and Simple Models for Tabular Data via Augmented Distillation},author={Fakoor, Rasool and Mueller, Jonas W and Erickson, Nick and Chaudhari, Pratik and Smola, Alexander J},journal={Advances in Neural Information Processing Systems},volume={33},year={2020}}If you use AutoGluon's multimodal text+tabular functionality in a scientific publication, please cite the following paper:Shi, Xingjian, et al."Multimodal AutoML on Structured Tables with Text Fields."8th ICML Workshop on Automated Machine Learning (AutoML). 2021.BibTeX entry:@inproceedings{agmultimodaltext,title={Multimodal AutoML on Structured Tables with Text Fields},author={Shi, Xingjian and Mueller, Jonas and Erickson, Nick and Li, Mu and Smola, Alex},booktitle={8th ICML Workshop on Automated Machine Learning (AutoML)},year={2021}}AutoGluon for Hyperparameter OptimizationAutoGluon's state-of-the-art tools for hyperparameter optimization, such as ASHA, Hyperband, Bayesian Optimization and BOHB have moved to the stand-alone packagesyne-tune.To learn more, checkout our paper"Model-based Asynchronous Hyperparameter and Neural Architecture Search"arXiv preprint arXiv:2003.10865 (2020).@article{abohb,title={Model-based Asynchronous Hyperparameter and Neural Architecture Search},author={Klein, Aaron and Tiao, Louis and Lienart, Thibaut and Archambeau, Cedric and Seeger, Matthias},journal={arXiv preprint arXiv:2003.10865},year={2020}}LicenseThis library is licensed under the Apache 2.0 License. |