package
|
package-description
|
---|---|
analyzer
|
UNKNOWN
|
analyzerdam
|
Python Boilerplate contains all the boilerplate you need to create a Python package. Free software: BSD license. Documentation: https://ultra-finance-dam.readthedocs.org. Features: TODO. History: 0.1.0 (2015-01-11): First release on PyPI.
|
analyzere
|
This is a Python wrapper for the Analyze Re REST API. It allows you to easily
utilize the PRIME Re platform in your applications.
Installation: pip install analyzere
Usage: Please see http://docs.analyzere.net/?python for the most up-to-date documentation.
Testing: We currently commit to being compatible with Python 2.7, 3.4, 3.5, and 3.6. In
order to run tests against each environment we use tox and py.test. You'll
need an interpreter installed for each of the versions of Python we test.
You can find these via your system's package manager or on the Python site.
To start, install tox: pip install tox
Then, run the full test suite: tox
To run tests for a specific module, test case, or single test, you can pass
arguments to py.test through tox with --. E.g.: tox -- tests/test_base_resources.py::TestReferences::test_known_resource
See tox --help and py.test --help for more information.
Tagging: Install twine and wheel: pip install twine wheel
Increment the version number in setup.py according to PEP 440.
Increment the version number in the user_agent variable in analyzere/__init__.py.
Commit your change to setup.py and create a tag for it with the version
number. e.g.:git tag 0.5.1
git push origin 0.5.1
.pypirc file: Create a .pypirc file with your production and test server accounts in your HOME directory. This file should look as follows:
[distutils]
index-servers=
pypi
testpypi
[testpypi]
repository = https://test.pypi.org/legacy/
username = <username>
password = <password>
[pypi]
repository = https://upload.pypi.org/legacy/
username = <username>
password = <password>
Note that testpypi and pypi require separate registration.
Testing Publication:
1. Ensure you have tagged the master repository according to the tagging instructions above.
2. Package source and wheel distributions: python setup.py sdist bdist_wheel
3. Check format: twine check dist/*
4. Upload to PyPI with twine: twine upload dist/* -r testpypi
5. Test that you can install the package from testpypi: pip install -i https://testpypi.python.org/pypi analyzere
Publishing:
1. Ensure you have tagged the master repository according to the tagging instructions above and tested publication before publishing.
2. Package source and wheel distributions: python setup.py sdist bdist_wheel
3. Upload to PyPI with twine: twine upload dist/* -r pypi
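As a quick orientation before the full docs, a minimal connection sketch; the configuration attributes are the same ones used in the analyzere-extras examples further down this page, and the server URL, credentials, and LayerView id are placeholders:
import analyzere
analyzere.base_url = '<your server url>'
analyzere.username = '<your userid>'
analyzere.password = '<your password>'
# Resources can then be retrieved through the wrapper, e.g. a LayerView by id:
lv = analyzere.LayerView.retrieve('<layer view id>')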
|
analyzere-extras
|
An extension to the analyzere python library that facilitates “extras”
including visualizations of Analyze Re LayerView objects.
Installation: pip install analyzere_extras
Graphing Options: This graphing utility provides some methods of controlling the style and format of the rendered image.
rankdir='XX': option that controls the orientation of the graph. Options include: 'BT' bottom to top (default), 'TB' top to bottom, 'LR' left to right, 'RL' right to left.
compact=True|False: controls if duplicate nodes should be omitted (default=True). This option tends to produce smaller graphs, which should be easier to read.
with_terms=True|False: specify that a Layer's terms are included in each node of the graph (default=True).
warnings=True|False: highlight nodes with suspicious terms by coloring the node red. Warning nodes are generated when any of the following conditions are true: participation = 0.0; invert = true and filters = []; attachment or aggregate_attachment = unlimited.
max_depth=0: the maximum depth of the graph to process. For very deeply nested structures this can reduce the size (default=0 == all levels).
max_sources=0: the maximum number of Loss sources to graph in detail for a single node (default=0 == all sources).
colors=[1-12]: the number of colors to be used when coloring nodes and edges (default=1 == black, max=12).
color_mode=['breadth'|'depth']: the mode to use when applying colors. Options include: ['breadth', 'depth'], default: 'breadth'.
Sample LayerView Images: [table of example renderings for combinations of compact, with_terms, warnings, and rankdir='BT'/'LR']
Colorization: [table of example renderings for combinations of compact, colors, and color_mode]
Usage: In order to make use of the tools in the analyzere_extras module you will need to import the analyzere module. You will need to define your connection information:
import analyzere
analyzere.base_url = '<your server url>'
analyzere.username = '<your userid>'
analyzere.password = '<your password>'
Visualization: To make use of the visualization tool, you will need to query a LayerView
that you would like to graph:
from analyzere import LayerView
lv = analyzere.LayerView.retrieve('011785b1-203b-696e-424e-7da9b0ec779a')
Now you can generate a graph of your LayerView:
from analyzere_extras.visualizations import LayerViewDigraph
g = LayerViewDigraph(lv) # defaults: with_terms=True, compact=True, rankdir='TB', warnings=True
g = LayerViewDigraph(lv, with_terms=False) # omit Layer terms from nodes
g = LayerViewDigraph(lv, compact=False) # graph duplicate nodes
g = LayerViewDigraph(lv, rankdir='LR') # render the graph from Left to Right
g = LayerViewDigraph(lv, warnings=False)   # disable error node highlighting
Then to render your graph:
g.render()                    # defaults: filename=None, view=True, format=None, rankdir=None
g.render(filename='mygraph') # write graph to 'mygraph'
g.render(view=True) # attempt to auto display the graph
g.render(format='pdf') # change the output format 'pdf'
g.render(rankdir='LR')        # render the graph from Left to Right
Shortcut: generate a graph for a given LayerView Id:
graph = LayerViewDigraph.from_id('011785b1-203b-696e-424e-7da9b0ec779a')
ELT Combination: To make use of the ELT combiner tool, you will need to define the list of
uuids representing the resources with ELTs that you would like to combine:
uuid_list = ['26a8f73b-0fbb-46c7-8dcf-f4de1e222994', 'cd67ba03-302b-45e5-9341-a4267875c1f8']
You will need to indicate which catalog these ELTs correspond to:
catalog_uuid = '61378251-ce85-4b6e-a63c-f5d67c4e4877'
Then to combine the ELTs into a single ELT:
from analyzere_extras.combine_elts import ELTCombiner
elt_combiner = ELTCombiner()
combined_elt = elt_combiner.combine_elts_from_resources(
uuid_list,
catalog_uuid,
uuid_type='all',
description='My Combined ELT'
)
uuid_type specifies the type of resources in uuid_list. Valid values for uuid_type are: 'Portfolio', 'PortfolioView', 'Layer', 'LayerView', 'LossSet', 'all'.
If uuid_type='all' is set, then the resources in uuid_list can be a mix of Portfolios, PortfolioViews, Layers, LayerViews, and LossSets. The default value of uuid_type is 'all'.
description defines the description for the uploaded combined ELT. If not set, the default is 'analyzere-python-extras:Combined ELT'.
Testing: We currently commit to being compatible with Python 2.7 and Python 3.4 to 3.7.
In order to run tests against each environment we use tox and py.test. You'll
need an interpreter installed for each of the versions of Python we test.
You can find these via your system's package manager or on the Python site.
To start, install tox: pip install tox
Then, run the full test suite: tox
To run tests for a specific module, test case, or single test, you can pass
arguments to py.test through tox with --. E.g.: tox -- tests/test_base_resources.py::TestReferences::test_known_resource
See tox --help and py.test --help for more information.
Publishing: Install twine and wheel: pip install twine wheel
Increment the version number in setup.py according to PEP 440.
Commit your change to setup.py and create a tag for it with the version
number. e.g.:git tag 0.1.0
git push origin 0.1.0
Package source and wheel distributions: python setup.py sdist bdist_wheel
Upload to PyPI with twine: twine upload dist/*
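Putting the pieces above together, a minimal end-to-end sketch (the LayerView id is the example id used above; the output filename is hypothetical):
from analyzere_extras.visualizations import LayerViewDigraph
graph = LayerViewDigraph.from_id('011785b1-203b-696e-424e-7da9b0ec779a')
graph.render(filename='my_layer_view', format='pdf', view=False)   # write the graph to a PDF without auto-displaying it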
|
analyzerePythonTools
|
An extension to the analyzere python library that facilitates “extras”
including visualizations of Analyze Re LayerView objects.
Installation: pip install analyzerePythonTools
Graphing Options: This graphing utility provides some methods of controlling the style and format of the rendered image.
rankdir='XX': option that controls the orientation of the graph. Options include: 'BT' bottom to top (default), 'TB' top to bottom, 'LR' left to right, 'RL' right to left.
compact=True|False: controls if duplicate nodes should be omitted (default=True). This option tends to produce smaller graphs, which should be easier to read.
with_terms=True|False: specify that a Layer's terms are included in each node of the graph (default=True).
warnings=True|False: highlight nodes with suspicious terms by coloring the node red. Warning nodes are generated when any of the following conditions are true: participation = 0.0; invert = true and filters = []; attachment or aggregate_attachment = unlimited.
max_depth=0: the maximum depth of the graph to process. For very deeply nested structures this can reduce the size (default=0 == all levels).
max_sources=0: the maximum number of Loss sources to graph in detail for a single node (default=0 == all sources).
colors=[1-12]: the number of colors to be used when coloring nodes and edges (default=1 == black, max=12).
color_mode=['breadth'|'depth']: the mode to use when applying colors. Options include: ['breadth', 'depth'], default: 'breadth'.
Sample LayerView Images: [table of example renderings for combinations of compact, with_terms, warnings, and rankdir='BT'/'LR']
Colorization: [table of example renderings for combinations of compact, colors, and color_mode]
Usage: In order to make use of the tools in the analyzerePythonTools module you will need to import the analyzere module. You will need to define your connection information:
import analyzere
analyzere.base_url = '<your server url>'
analyzere.username = '<your userid>'
analyzere.password = '<your password>'
Visualization: To make use of the visualization tool, you will need to query a LayerView
that you would like to graph:
from analyzere import LayerView
lv = analyzere.LayerView.retrieve('011785b1-203b-696e-424e-7da9b0ec779a')
Now you can generate a graph of your LayerView:
from analyzerePythonTools.visualizations import LayerViewDigraph
g = LayerViewDigraph(lv) # defaults: with_terms=True, compact=True, rankdir='TB', warnings=True
g = LayerViewDigraph(lv, with_terms=False) # omit Layer terms from nodes
g = LayerViewDigraph(lv, compact=False) # graph duplicate nodes
g = LayerViewDigraph(lv, rankdir='LR') # render the graph from Left to Right
g = LayerViewDigraph(lv, warnings=False)   # disable error node highlighting
Then to render your graph:
g.render()                    # defaults: filename=None, view=True, format=None, rankdir=None
g.render(filename='mygraph') # write graph to 'mygraph'
g.render(view=True) # attempt to auto display the graph
g.render(format='pdf') # change the output format 'pdf'
g.render(rankdir='LR')        # render the graph from Left to Right
Shortcut: generate a graph for a given LayerView Id:
graph = LayerViewDigraph.from_id('011785b1-203b-696e-424e-7da9b0ec779a')
ELT Combination: To make use of the ELT combiner tool, you will need to define the list of
uuids representing the resources with ELTs that you would like to combine:
uuid_list = ['26a8f73b-0fbb-46c7-8dcf-f4de1e222994', 'cd67ba03-302b-45e5-9341-a4267875c1f8']
You will need to indicate which catalog these ELTs correspond to:
catalog_uuid = '61378251-ce85-4b6e-a63c-f5d67c4e4877'
Then to combine the ELTs into a single ELT:
from analyzerePythonTools.combine_elts import ELTCombiner
elt_combiner = ELTCombiner()
combined_elt = elt_combiner.combine_elts_from_resources(
uuid_list,
catalog_uuid,
uuid_type='all',
description='My Combined ELT'
)
uuid_type specifies the type of resources in uuid_list. Valid values for uuid_type are: 'Portfolio', 'PortfolioView', 'Layer', 'LayerView', 'LossSet', 'all'.
If uuid_type='all' is set, then the resources in uuid_list can be a mix of Portfolios, PortfolioViews, Layers, LayerViews, and LossSets. The default value of uuid_type is 'all'.
description defines the description for the uploaded combined ELT. If not set, the default is 'analyzerePythonTools: Combined ELT'.
Testing: We currently commit to being compatible with Python 2.7 and Python 3.4 to 3.7.
In order to run tests against each environment we use tox and py.test. You'll
need an interpreter installed for each of the versions of Python we test.
You can find these via your system's package manager or on the Python site.
To start, install tox: pip install tox
Then, run the full test suite: tox
To run tests for a specific module, test case, or single test, you can pass
arguments to py.test through tox with --. E.g.: tox -- tests/test_base_resources.py::TestReferences::test_known_resource
See tox --help and py.test --help for more information.
Publishing: Install twine and wheel: pip install twine wheel
Increment the version number in setup.py according to PEP 440.
Commit your change to setup.py and create a tag for it with the version
number. e.g.:git tag 0.1.0
git push origin 0.1.0
Package source and wheel distributions: python setup.py sdist bdist_wheel
Upload to PyPI with twine: twine upload dist/*
|
analyzers
|
UNKNOWN
|
analyzerstrategies
|
Analyzer Strategies. Features: TODO. History: 0.1.0 (2015-02-30): First release on PyPI.
|
analyze_site
|
analyze_site is a python application to crawl a site and return a count of the keywords, provided in a file, found in the web pages of the site. The application will also return counts of the most used verbs, nouns, adverbs and adjectives.
analyze_site requires Python version 3 and the following libraries: nltk - Natural Language Toolkit with maxent_treebank_pos_tagger
usage: analyze_site.py [-h] [-d DEPTH] [-r PATH_REGEX] [--verbose] keywords_file url
positional arguments:
keywords_file    Path to keywords file
url              URL to crawl
optional arguments:
-h, --help                                show this help message and exit
-d DEPTH, --depth DEPTH                   Depth to crawl
-r PATH_REGEX, --path_regex PATH_REGEX    Regular expression to match URL
--verbose                                 Increase logging level
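An illustrative invocation (keyword file, depth, and URL are hypothetical):
python3 analyze_site.py -d 2 --verbose keywords.txt https://example.com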
|
analyze-text
|
Failed to fetch description. HTTP Status Code: 404
|
analyze-the-shit-out-of-me
|
Thank you for downloading the file
|
analyzethis
|
Failed to fetch description. HTTP Status Code: 404
|
analyzr-sdk-python
|
Python SDK for the Analyzr API
Overview: This Python client will give you access to the Analyzr API. See files in the examples folder
for examples showing how to use the client. Note that a client_id should always be provided when querying the API; it is used for reporting purposes.
For general information please see https://analyzr.ai. For help and support see https://support.analyzr.ai. For SDK reference documentation see https://analyzr-sdk-python.readthedocs.io.
Installation instructions: Getting the client set up will require the following:
Install the latest version of the client on your local machine: pip install analyzr-sdk-python
Get an API username and password from your Analyzr admin (you may need SSO credentials from your local admin instead).
Confirm you are able to connect to the API, and check the API version
as follows from a Python session:>>> from analyzrclient import Analyzer
>>> analyzer = Analyzer(host="<your host>")
>>> analyzer.login()
Login successful
>>> Analyzer().version()
{'status': 200, 'response': {'version': 'x.x.xxx', 'tenant': <your tenant name>, 'copyright': '2023 (c) Go2Market Insights Inc. All rights reserved.'}}
Testing instructions: If you are developing the SDK and would like to test the repo, clone it locally using git then
run the following from the root directory:python -m unittest tests.test_all -v # all tests
python -m unittest tests.test_quick -v   # quick tests
Make sure you update the config.json file first to include the name of your API tenant.
To run a single test case do:
python -m unittest tests.test_all.PropensityTest.test_logistic_regression_classifier -v
|
anamator
|
anamator: a project created for the purpose of animating basic concepts of real analysis.
|
anamic
|
No description available on PyPI.
|
anaml-client
|
The Anaml Python SDK makes it easy to interact with the Anaml feature
engineering platform from Python. The SDK provides datatypes and methods to
interact with the Anaml REST API and to load Anaml feature data into Pandas
and/or Spark data frames.
|
anaml-helper
|
Overview: A helper library for working with features and featuresets programmatically leveraging the existing anaml_client SDK.
Example Usage:
from anaml_client import Anaml  ## import Anaml as normal
from anaml_helper import AnamlHelper ##import AnamlHelper
anaml_client = Anaml(url=,apikey=,secret=,ref=) ##create Anaml class
anaml_helper = AnamlHelper(anaml_client)  ## pass Anaml to AnamlHelper
This will provide access to the helper methods for interacting with Features and FeatureSets via the anaml sdk. N.B. - The library is currently limited to working only with Features and FeatureSet objects. The below contains some examples and a list of available methods.
Feature Methods: create_feature, update_feature_aggregate, update_feature_attributes, update_feature_description, update_feature_entityRestrictions, update_feature_filter, update_feature_labels, update_feature_name, update_feature_postAggregateExpr, update_feature_select, update_feature_table, update_feature_template, update_feature_to_DayWindow, update_feature_to_OpenWindow, update_feature_to_RowWindow
#Creating EventFeature
anaml_helper.create_feature("feature_name",
attributes=[{"key": "OWNER", "value": "C11_HH360"},
{"key": "CREATOR", "value": "[email protected]"}],
labels=["label1", "label2"],
select="accnt_key",
description="feature description",
template=None,
table=383,
filter="accnt_key is not null",
aggregate="count",
post_aggregate=None,
entity_ids=[4])
#Updating feature labels
anaml_helper.update_feature_labels(feature_id=3467,labels=['label1','label2','label3'])
#Updating feature filter
anaml_helper.update_feature_filter(feature_id=3467, filter="accnt_id is not null")
#Updating feature name
anaml_helper.update_feature_name(feature_id=3457, name="modified_feature_name")
#Updating feature window to DayWindow
anaml_helper.update_feature_to_DayWindow(feature_id=3467, days=30)
#Updating feature window to RowWindow
anaml_helper.update_feature_to_RowWindow(feature_id=3467, rows=10_000)
FeatureSet Methods: create_featureset, update_featureset_attributes, update_featureset_description, update_featureset_entity, update_featureset_features, update_featureset_labels, update_featureset_name
#Creating FeatureSet
anaml_helper.create_featureset(
name="featureset_name",
entity=4,
description="featureset description.",
labels=["featureset_label1","featureset_label2"],
attributes=[{"key": "OWNER", "value": "C11_HH360"}],
features=[3467, 3459, 3462]
)
#Updating featureset name
anaml_helper.update_featureset_name(featureset_id=440,name="new_featureset_name")
#Updating featureset description
anaml_helper.update_featureset_description(featureset_id=440,description="new featureset description")
#Updating featureset entity
anaml_helper.update_featureset_entity(featureset_id=333, entity_id=3)
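The one FeatureSet method listed above but not demonstrated is update_featureset_features; a sketch assuming it follows the same calling pattern as the other update methods (the argument names are inferred, not confirmed):
#Updating featureset features (signature assumed by analogy with the other update_featureset_* methods)
anaml_helper.update_featureset_features(featureset_id=440, features=[3467, 3459, 3462])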
|
anamnesis
|
anamnesis: This repository contains the anamnesis python module. This is a python module
which enables easy serialisation of python objects to/from HDF5 files as well
as between machines using the Message Passing Interface (MPI) via mpi4py.anamnesis was originally part of the Neuroimaging Analysis Framework (NAF)
which is available fromhttps://vcs.ynic.york.ac.uk/naf. It was split out as a
separate module in order to allow for its use in other modules (such as the
replacement for NAF, YNE: https://github.com/sails-dev/yne).
Authors: Mark [email protected]
This project is currently licensed under the GNU General Public Licence 2.0
or higher. For alternative license arrangements, please contact the authors.
Dependencies: See the requirements.txt file. Some of these aren't strict dependencies, but are instead what we develop
against (i.e. we don't guarantee not to use features which only exist from that
release onwards).
|
anamod
|
anamodOverviewanamodis a python library that implements model-agnostic algorithms for the feature importance analysis of trained black-box models.
It is designed to serve the larger goal of interpretable machine learning by using different abstractions over features to interpret
models. At a high level,anamodimplements the following algorithms:Given a learned model and a hierarchy over features, (i) it tests feature groups, in addition to base features, and tries to determine
the level of resolution at which important features can be determined, (ii) uses hypothesis testing to rigorously assess the effect of
each feature on the model’s loss, (iii) employs a hierarchical approach to control the false discovery rate when testing feature groups
and individual base features for importance, and (iv) uses hypothesis testing to identify important interactions among features and feature
groups. More details may be found in the following paper:Lee, Kyubin, Akshay Sood, and Mark Craven. 2019. “Understanding Learned Models by
Identifying Important Features at the Right Resolution.”
In Proceedings of the AAAI Conference on Artificial Intelligence, 33:4155–63.
https://doi.org/10.1609/aaai.v33i01.33014155.Given a learned temporal or sequence model, it identifies its important features, windows as well as its dependence on temporal ordering.
More details may be found in the following paper:Sood, Akshay, and Mark Craven. “Feature Importance Explanations for Temporal
Black-Box Models.” ArXiv:2102.11934 [Cs, Stat], February 23, 2021.
http://arxiv.org/abs/2102.11934.
anamod supersedes the library mihifepe, based on the first paper
(https://github.com/Craven-Biostat-Lab/mihifepe). mihifepe is maintained for legacy reasons but will not receive further updates.
anamod uses the library synmod to generate synthetic data, including time-series data, to test and validate the algorithms
(https://github.com/cloudbopper/synmod).
Usage: See detailed API documentation here. Here are some examples of how the package may be used:
Analyzing a scikit-learn binary classification model:
# Train a model
from sklearn.linear_model import LogisticRegression
from sklearn import datasets
model = LogisticRegression()
dataset = datasets.load_breast_cancer()
X, y, feature_names = (dataset.data, dataset.target, dataset.feature_names)
model.fit(X, y)
# Analyze the model
import anamod
output_dir = "example_sklearn_classifier"
model.predict = lambda X: model.predict_proba(X)[:, 1] # To return a vector of probabilities when model.predict is called
analyzer = anamod.ModelAnalyzer(model, X, y, feature_names=feature_names, output_dir=output_dir)
features = analyzer.analyze()
# Show list of important features sorted in decreasing order of importance score, along with importance score and model coefficient
from pprint import pprint
important_features = sorted([feature for feature in features if feature.important], key=lambda feature: feature.importance_score, reverse=True)
pprint([(feature.name, feature.importance_score, model.coef_[0][feature.idx[0]]) for feature in important_features])
Analyzing a scikit-learn regression model:
# Train a model
from sklearn.linear_model import Ridge
from sklearn import datasets
model = Ridge(alpha=1e-2)
dataset = datasets.load_diabetes()
X, y, feature_names = (dataset.data, dataset.target, dataset.feature_names)
model.fit(X, y)
# Analyze the model
import anamod
output_dir = "example_sklearn_regressor"
analyzer = anamod.ModelAnalyzer(model, X, y, feature_names=feature_names, output_dir=output_dir)
features = analyzer.analyze()
# Show list of important features sorted in decreasing order of importance score, along with importance score and model coefficient
from pprint import pprint
important_features = sorted([feature for feature in features if feature.important], key=lambda feature: feature.importance_score, reverse=True)
pprint([(feature.name, feature.importance_score, model.coef_[feature.idx[0]]) for feature in important_features])
The outputs can be visualized in other ways as well. To show a table indicating feature importance:
import subprocess
subprocess.run(["open", f"{output_dir}/feature_importance.csv"], check=True)
To visualize the feature importance hierarchy (since no hierarchy is provided in this case, a flat hierarchy is automatically created):
subprocess.run(["open", f"{output_dir}/feature_importance_hierarchy.png"], check=True)
Analyzing a synthetic model with a hierarchy generated using hierarchical clustering:
# Generate synthetic data and model
import synmod
output_dir = "example_synthetic_non_temporal"
num_features = 10
synthesized_features, X, model = synmod.synthesize(output_dir=output_dir, num_instances=100, seed=100,
num_features=num_features, fraction_relevant_features=0.5,
synthesis_type="static", model_type="regressor")
y = model.predict(X, labels=True)
# Generate hierarchy using hierarchical clustering
from types import SimpleNamespace
from anamod.simulation import simulation
args = SimpleNamespace(hierarchy_type="cluster_from_data", contiguous_node_names=True, num_features=num_features)
feature_hierarchy, _ = simulation.gen_hierarchy(args, X)
# Analyze the model
from anamod import ModelAnalyzer
analyzer = ModelAnalyzer(model, X, y, feature_hierarchy=feature_hierarchy, output_dir=output_dir)
features = analyzer.analyze()
# Visualize feature importance hierarchy
import subprocess
subprocess.run(["open", f"{output_dir}/feature_importance_hierarchy.png"], check=True)
Analyzing a synthetic temporal model:
# Generate synthetic data and model
import synmod
output_dir = "example_synthetic_temporal"
num_features = 10
synthesized_features, X, model = synmod.synthesize(output_dir=output_dir, num_instances=100, seed=100,
num_features=num_features, fraction_relevant_features=0.5,
synthesis_type="temporal", sequence_length=20, model_type="regressor")
y = model.predict(X, labels=True)
# Analyze the model
from anamod import TemporalModelAnalyzer
analyzer = TemporalModelAnalyzer(model, X, y, output_dir=output_dir)
features = analyzer.analyze()
# Visualize feature importance for temporal windows
import subprocess
subprocess.run(["open", f"{output_dir}/feature_importance_windows.png"], check=True)
The package supports parallelization using HTCondor, which can significantly improve running time for large models.
If HTCondor is available on your system, you can enable it by providing the "condor" keyword argument. The python
package htcondor must be installed (see Installation). Additional condor options may be viewed in the API documentation:
analyzer = anamod.ModelAnalyzer(model, X, y, condor=True)
Installation: The recommended installation method is via virtual environments and pip.
In addition, you also need graphviz installed on your system to visualize feature importance hierarchies.
To install the latest stable release: pip install anamod
Or to install the latest development version from GitHub: pip install git+https://github.com/cloudbopper/anamod.git@master#egg=anamod
If HTCondor is available on your platform, install the htcondor PyPI package using pip. To enable it, see Usage: pip install htcondor
Development: Collaborations and contributions are welcome. If you are interested in helping with development,
please take a look at https://anamod.readthedocs.io/en/latest/contributing.html.
License: anamod is free, open source software, released under the MIT license. See LICENSE for details.
Contact: Akshay Sood
Changelog
|
anananananas
|
No description available on PyPI.
|
ananas
|
AnanasWhat is Ananas?Ananas allows you to write simple (or complicated!) mastodon bots without having
to rewrite config file loading, interval-based posting, scheduled posting,
auto-replying, and so on.Some bots are as simple as a configuration file:[bepis]
class = tracery.TraceryBot
access_token = ....
grammar_file = "bepis.json"But it's easy to write one with customized behavior:class MyBot(ananas.PineappleBot):
def start(self):
with open('trivia.txt', 'r') as trivia_file:
self.trivia = trivia_file.readlines()
@hourly(minute=17)
def post_trivia(self):
self.mastodon.toot(random.choice(self.trivia))
@reply
def respond_trivia(self, status, user):
self.mastodon.toot("@{}: {}".format(user["acct"], random.choice(self.trivia)))Run multiple bots on multiple instances out of a single config file:[jorts]
class = custom.JortsBot
domain = botsin.space
access_token = ....
line = 632
[roll]
class = roll.DiceBot
domain = cybre.space
access_token = ....And use the DEFAULT section to share common configuration options between them:[DEFAULT]
domain = cybre.space
client_id = ....
client_secret = ....Getting startedpip install ananasTheananaspip package comes with a script to help you manage your bots.Simply give it a config file and it'll load your bots and close them safely
when it receives a keyboard interrupt, SIGINT, SIGTERM, or SIGKILL.ananas config.cfgIf you haven't specified a client id/secret or access token, the script will
exit unless you run it with the--interactiveflag, which allows it to
prompt you for the instance login information. (The only part of the input
you enter here that's stored in the config file is the instance name -- the
email and password are only used to generate the access token).
Configuration: The following fields are interpreted by the PineappleBot base class and will
work for every bot:class: the fully-specified python class that the runner script should
instantiate to start your bot. e.g. "ananas.default.TraceryBot"domain¹: the domain of the instance to run the bot on. Must support https
connections. Only include the domain, no protocol or slashes. e.g. "mastodon.social"client_id¹,client_secret¹: the tokens that the instance uses to identify
what client this bot is posting from/as. Will be used to determine what's
displayed underneath all the posts made by this bot.access_token¹: the access token used to authenticate API requests with the
instance. Make sure this is secret, don't distribute config files with this
field filled out or people will be able to post under the account this token was
created with.admin: the full username (without leading @) of the user to DM error reports to.
Can be left unspecified, but is useful for keeping an eye on the health of the
bot without constantly monitoring the script logs. [email protected]¹: Filled out automatically if the bot is run in interactive mode.Additional fields are specific to the type of bot, refer to the documentation
for the bot's class for more information about the fields it expects.Writing BotsCustom bot classes should be subclasses ofananas.PineappleBot. If you
override__init__, be sure to call the base class's__init__.DecoratorsIn order for the bot to do anything, you should add a method decorated with at
least one of the following decorators:@ananas.reply: Calls the decorated function when the bot is mentioned by any
other user. Decorator takes no parameters, but should only be called on
functions matching this signature: def reply_fn(self, mention, user). mention will be the dictionary corresponding to the status containing the
mention (as returned by the mastodon API), user will be the dictionary corresponding to the user that mentioned the bot
(again, according to the API).
@ananas.interval(secs): Calls the decorated function every secs seconds,
starting when the bot is initialized. For intervals longer than ~an hour, you
may want to use @schedule instead. e.g. @ananas.interval(60)
@ananas.schedule(**kwargs): Allows you to schedule, cron-style, the
decorated function. Accepted keywords are "second", "minute", "hour",
"day_of_week" or "day_of_month" (but not both), "month", and "year". If any of
these keywords are not specified, they will be treated like cron treats an *,
that is, as long as the time matches the other values, any value will be
accepted. Speaking of which, the cron-like syntax "*" as well as "*/3" are
both accepted, and will expand to the expected thing: for example, schedule(hour="*/2", minute="*/10") will post every 10 minutes during hours
which are multiples of 2.
@ananas.hourly(minute=0), @ananas.daily(hour=0, minute=0): Shortcuts for @ananas.schedule() that call the decorated function once an hour at the
specified minute or once a day at the specified hour and minute. If parameters
are omitted they'll post at the top of the hour or midnight (UTC).
@ananas.error_reporter: specifies custom behavior for reporting errors. The
decorated function should match this signature:def err(self, error)whereerroris a string representation of the error.Overrideable FunctionsYou can also define the following functions and they will be called at the
relevant points in the bot's lifecycle:init(self): called before the configuration file has been loaded, so
that you can set default values for config fields in case the config file
doesn't specify them.start(self): called after all of the internal PineappleBot initialization is
complete and the mastodon API is ready to use. A good place to load files
specified in the config, post a startup notice, or otherwise do bot-specific
setup.stop(self): called when the bot has received a shutdown signal and needs to
stop. The config file will be saved after this, so if you need to make any last
minute changes to the config, do that here.Configuration FieldsAll of the configuration fields for the current bot are available through theself.configobject, which exposes them with both field-accessor syntax and
dictionary-accessor syntax, for example:foo = self.config.foo
bar = self.config["bar"]These can be read (to get the user's configuration data) or written to (to
affect the config file on next save) or deleted (to remove that field from the
config file).You can callself.config.load()to get the latest values from the config
file.loadtakes an optional parametername, which is the name of the
section to load in the config file in case you want to load a different one than
the bot was started with.You can also callself.config.save()to write any changes made since the last
load back to the config file.Note that if you callself.config.load()during bot operation, without first
callingself.config.save(), you will discard any changes made to the
configuration since the last load.Distributing BotsYou can distribute bots however you want; as long as the class is available in
some module in python'ssys.pathor a module accessible from the current
directory, the runner script will be able to load it.If you think your bot might be generally useful to other people, feel free to
create a pull request on this repository to get it added to the collection of
default bots.Questions? Ping me on Mastodon at@[email protected] shoot me an email [email protected] I'll answer as best I can!
|
ananas-doc
|
No description available on PyPI.
|
anandcal
|
This is for testing purposes.
This lib contains add, sub, mul and div.
Easy to use: just give commands like add(2,3)…
|
anand-calculator
|
This is a very simple calculator which performs calculator functions. Change log: 0.0.1 (7/12/2020): first release
|
anandpdf
|
This project is going to be published later; some bugs are being fixed. Don't worry about the package.
|
ananimlib
|
AnAnimlib was inspired by ManimLib by Grant
Sanderson of 3Blue1Brown. The aim of AnAnimlib is to facilitate the creation
of mathematically precise animations through an intuitive and extensible API.
As a simple example, the following code spins a square as it moves across the canvas.
import ananimlib as al

rect = al.Rectangle([1,1])

al.Animate(
    al.AddAnObject(rect),
    al.MoveTo(rect, [-3.0, 0.0]),
    al.RunParallel(
        al.Move(rect, [6,0], duration=1.0),
        al.Rotate(rect, 2*3.1415, duration=1.0),
    ),
    al.Wait(1.0)
)
al.play_movie()
Installation instructions: > pip install ananimlib
Documentation available at Read The Docs
|
ananke
|
No description available on PyPI.
|
ananke-causal
|
Ananke: Visit the website to find out more. Ananke, named for the Greek
primordial goddess of necessity and causality, is a python package for
causal inference using the language of graphical models.
Contributors: Rohit Bhattacharya, Jaron Lee, Razieh Nabi, Preethi Prakash, Ranjani Srinivasan. Interested contributors should check out the CONTRIBUTING.md for further details.
Installation: If graph visualization is not required then install via pip: pip install ananke-causal
Alternatively, the package may be installed from gitlab by cloning and cd into the directory. Then, poetry (see https://python-poetry.org) can be used to install: poetry install
Install with graph visualization: If graphing support is required, it is necessary to install graphviz.
Non M1 Mac instructions:
Ubuntu: sudo apt install graphviz libgraphviz-dev pkg-config
Mac (Homebrew): brew install graphviz
Fedora: sudo yum install graphviz
Once graphviz has been installed, then:
pip install ananke-causal[viz]   # if pip is preferred
poetry install --extras viz      # if poetry is preferred
M1 Mac specific instructions: If on M1 see this issue. The fix is to run the following before installing:
brew install graphviz
python -m pip install \
    --global-option=build_ext \
    --global-option="-I$(brew --prefix graphviz)/include/" \
    --global-option="-L$(brew --prefix graphviz)/lib/" \
    pygraphviz
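This description does not include a code example; as a minimal sketch of building a causal graph with the package, based on the package's documentation (not shown here, so check the website for the current API):
from ananke.graphs import ADMG

# Directed edges encode causal influence; the bidirected edge represents
# unmeasured confounding between A and Y.
vertices = ["A", "M", "Y"]
di_edges = [("A", "M"), ("M", "Y")]
bi_edges = [("A", "Y")]
graph = ADMG(vertices, di_edges=di_edges, bi_edges=bi_edges)
print(graph.vertices)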
|
anankeesol
|
No description available on PyPI.
|
ananke_sdk
|
No description available on PyPI.
|
ananother
|
No description available on PyPI.
|
ananother1
|
No description available on PyPI.
|
ananother2
|
No description available on PyPI.
|
ananotherone
|
No description available on PyPI.
|
ananse
|
AnanseThis project is a collaboration between Dr. Effah Antwi,Research Scientist, Natural Resources Canada, Canadian Forest Service
and Dr Wiafe Owusu-Banahene, Department of Computer Engineering, School of Engineering Sciences, University of Ghana, Legon, Accra, Ghana.
The Ananse package is a python package designed to partially automate search term selection and writing search strategies for systematic reviews. Read the documentation at baasare.github.io/ananse and ananse.readthedocs.io/
Setup: Ananse requires python 3.7 or higher.
Using pip: pip install ananse
Directly from the repository:
git clone https://github.com/baasare/ananse.git
python ananse/setup.py install
Quick start: Writing your own script:
from ananse import Ananse

min_len = 1  # minimum keyword length
max_len = 4  # maximum keyword length

# Create an instance of the package
test_run = Ananse()

# Import your naive search results from the current working directory
imports = test_run.import_naive_results(path="./")

# Columns to deduplicate imported search results
columns = ['title', 'abstract']

# de-duplicate the imported search results
data = test_run.deduplicate_dataframe(imports, columns)

# extract keywords from article title and abstract as well as author and database tagged keywords
all_terms = test_run.extract_terms(data, min_len=min_len, max_len=max_len)

# create Document-Term Matrix, with columns as terms and rows as articles
dtm, term_column = test_run.create_dtm(data.text, keywords=all_terms, min_len=max_len, max_len=max_len)

# create co-occurrence network using Document-Term Matrix
graph_network = test_run.create_network(dtm, all_terms)

# plot histogram and node strength of the network
test_run.plot_degree_histogram(graph_network)
test_run.plot_degree_distribution(graph_network)

# Determine cutoff for the relevant keywords
cutoff_strengths = test_run.find_cutoff(graph_network, "spline", "degree", degrees=3, knot_num=1, percent=0.879956, diagnostics=True)

# get suggested keywords and save to a csv file
suggested_keywords = test_run.get_keywords(graph_network, "degree", cutoff_strengths, save_keywords=True)

# Print suggested keywords
for word in suggested_keywords:
    print(word)
Using Ananse Test Script: python tests/ananse_test
References: This is a python implementation of the R package as mentioned in the paper "An automated approach to identifying search terms for systematic reviews using keyword co-occurrence networks" by Eliza M. Grames, Andrew N. Stillman, Morgan W. Tingley and Chris S. Elphick
|
anansescanpy
|
AnanseScanpypackage: implementation of scANANSE for Scanpy objects in PythonInstallationThe most straightforward way to install the most recent version of AnanseScanpy is via conda using PyPI.Install package through CondaIf you have not used Bioconda before, first set up the necessary channels (in this order!).
You only have to do this once.$ conda config --add channels defaults
$ conda config --add channels bioconda
$ conda config --add channels conda-forgeThen install AnanseScanpy with:$ conda install anansescanpyInstall package through PyPI$ pip install anansescanpyInstall package through GitHub$ git clone https://github.com/Arts-of-coding/AnanseScanpy.git
$ cd AnanseScanpy
$ conda env create -f requirements.yaml
$ conda activate AnanseScanpy
$ pip install -e .Install Jupyter Notebook$ pip install jupyterStart using the packageRun the package either in the console$ python3Or run the package in jupyter notebook$ jupyter notebookFor extended documentation see our ipynb vignette with PBMC sample dataOf which the sample data can be downloaded$ wget https://zenodo.org/record/7446267/files/rna_PBMC.h5ad -O scANANSE/rna_PBMC.h5ad
$ wget https://zenodo.org/record/7446267/files/atac_PBMC.h5ad -O scANANSE/atac_PBMC.h5ad
Installing and running anansnake: Follow the instructions on its respective github page, https://github.com/vanheeringen-lab/anansnake
Next, automatically use the generated files to run GRN analysis using your single cell cluster data:
snakemake --use-conda --conda-frontend mamba \
--configfile scANANSE/analysis/config.yaml \
--snakefile scANANSE/anansnake/Snakefile \
--resources mem_mb=48_000 --cores 12
Thanks to: Jos Smits and his Seurat equivalent of this package, https://github.com/JGASmits/AnanseSeurat; Siebren Frohlich and his anansnake implementation, https://github.com/vanheeringen-lab/anansnake
|
anansi
|
No description available on PyPI.
|
anansi-market-data-handler
|
Anansi Market Data Handler: A python package whose goal is to serve as an abstraction for the
acquisition, storage, reading and presentation of market data, aiming
to be agnostic about the data source[^1].
The API of this package provides an interface to the data objects
whose methods seek to reflect the needs of those who usually deal
with this kind of data - the traders - abstracting away, for
example, the mathematical complexity of implementing market
indicators such as moving averages, bollinger bands and the like
over a set of candlesticks, for example.
[^1]: Brokerages, exchanges and brokers are common names for
these sources.
|
anansi-md
|
Failed to fetch description. HTTP Status Code: 404
|
anansi-toolkit
|
Anansi
Dependencies: Python, Pip, Poetry. To install poetry, on
osx, linux or bash on windows terminals, type:
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
Alternatively, poetry could be installed by pip (supposing you have
python and pip already installed): pip install poetry
Consuming on Jupyter notebook: That is only a suggestion, you could run anansi on any python
terminal. Only tested on linux. Perform the commands:
poetry install
poetry run python -m ipykernel install --user --name=$(basename $(pwd))
poetry run jupyter notebook > jupyterlog 2>&1 &
Straight to the point: Running Default Back Testing Operation
Importing Dependencies:
from anansi.tradingbot.models import *
from anansi.tradingbot import traders
from anansi.tradingbot.views import create_user, create_default_operation
Add a new user:
my_user_first_name = "John"
create_user(first_name=my_user_first_name,
            last_name="Doe",
            email="{}@email.com".format(my_user_first_name.lower()))
Creating a default operation:
my_user = User[1]
create_default_operation(user=my_user)
Instantiating a trader:
my_op = Operation.get(id=1)
my_trader = traders.DefaultTrader(operation=my_op)
Run the trader:
my_trader.run()
Playing with the database models
Getting all users:
users = select(user for user in User)
users.show()
id|first_name|last_name|login_displayed_name|email
--+----------+---------+--------------------+--------------
1 |John      |Doe      |                    |[email protected]
my_user.first_name
'John'
Some operation attribute:
my_op.stop_loss.name
'StopTrailing3T'
Some trader attribute:
my_trader.Classifier.parameters.time_frame
'6h'
Updating some attributes:
before_update = my_trader.operation.position.side, my_trader.operation.position.exit_reference_price
my_trader.operation.position.update(side="Long", exit_reference_price=1020.94)
after_update = my_trader.operation.position.side, my_trader.operation.position.exit_reference_price
before_update, after_update
(('Zeroed', None), ('Long', 1020.94))
Requesting klines
Klines treated and ready for use, including market indicators methods. The example below uses the 'KlinesFromBroker' class from the 'handlers'
module ('marketdata' package), which works as an abstraction over the
data brokers, not only serializing requests (in order to respect
brokers' limits), but also conforming the klines like a pandas
dataframe, extended with market indicator methods.
from anansi.marketdata.handlers import KlinesFromBroker

BinanceKlines = KlinesFromBroker(broker_name="binance", ticker_symbol="BTCUSDT", time_frame="1h")
newest_klines = BinanceKlines.newest(2167)
newest_klines
[DataFrame preview: Open_time, Open, High, Low, Close, Volume for 2020-06-17 11:00 through 2020-09-15 17:00; 2167 rows × 6 columns]
Applying simple moving average indicators:
indicator = newest_klines.apply_indicator.trend.simple_moving_average(number_of_candles=35)
indicator.name, indicator.last(), indicator.serie
('sma_ohlc4_35',
10669.49407142858,
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
...
2162 10619.190500
2163 10632.213571
2164 10644.682643
2165 10657.128857
2166 10669.494071
Length: 2167, dtype: float64)
newest_klines
[DataFrame preview: Open_time, Open, High, Low, Close, Volume; 2167 rows × 6 columns]
Same as above, but showing indicator column:
indicator = newest_klines.apply_indicator.trend.simple_moving_average(number_of_candles=35, indicator_column="SMA_OHLC4_n35")
newest_klines
[DataFrame preview: Open_time, Open, High, Low, Close, Volume, SMA_OHLC4_n35; 2167 rows × 7 columns]
Raw klines, using the low level abstraction module "data_brokers"
DISCLAIMER: Requests here are not queued! There is a risk of banning
the IP or even blocking API keys if some limits are exceeded. Use with
caution.
from anansi.marketdata import data_brokers

BinanceBroker = data_brokers.BinanceDataBroker()
my_klines = BinanceBroker.get_klines(ticker_symbol="BTCUSDT", time_frame="1m")
my_klines
[DataFrame preview: Open_time, Open, High, Low, Close, Volume; 499 rows × 6 columns]
Same as above, but returning all information obtained from the data broker:
my_klines = BinanceBroker.get_klines(ticker_symbol="BTCUSDT", time_frame="1m", show_only_desired_info=False)
my_klines
[DataFrame preview: Open_time, Open, High, Low, Close, Volume, Close_time, Quote_asset_volume, Number_of_trades, Taker_buy_base_asset_volume, Taker_buy_quote_asset_volume, Ignore; 499 rows × 12 columns]
|
anantmishra
|
# Anant Mishra's Module
A simple package for doing calculations.
## Installation
To install Anant Mishra's Module, use pip:
pip install anantmishra
## API
### calculate(x, y, operation)
Perform a calculation based on the value of operation. operation can be one of the following strings: "add", "subtract", "multiply", "divide", "exponentiate", "modulo", or "square root". Returns the result of the calculation.
### about()
Prints out some ASCII art and a description of the package.
## Credits
Anant Mishra
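A usage sketch, assuming calculate and about are importable from the package top level (the import path is not stated in this description):
from anantmishra import calculate, about

print(calculate(2, 3, "add"))        # expected to return 5
print(calculate(10, 3, "modulo"))    # expected to return 1
about()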
|
anaouder
|
Anaouder mouezh e brezhoneg gant VoskVersion françaisePetra eo ?Un anaouder mouezh emgefre, graet gant ar meziantoùKaldihaVosk.Gantañ e c'heller adskrivañ komzoù e brezhoneg (Son -> Skrid) en un doare emgefre, dre ur mikro e amzer real pe diouzh restroù son.Pleustret eo bet gant un dek eurvezh bennak a deulioù son ha skrid linennet.Un nebeut perzhioù dedennus :Skañv. Pouezh ar model a zo dindan 100 Mo ha treiñ a ra war ur bern mekanikoù : urzhiataerioùhep GPU, RaspberryPi, hezoug Android...Prim. Gallout a reer adskrivañ ar son eamzer real, memes gant un urzhiataer kozh, pe primoc'h c'hoazh gant dafar nevesoc'h.Lec'hel. Ezhomm ebet eus an Internet. Ho mouezh hagho data a chomo war ho penveg, ha tretet e vint gant ho penveg nemetken. Kudenn surentez ebet liammet d'an treuzkas dre rouedad ha gwelloc'h a-fed ekologel.Digoust ha dieub. Gellout a reoc'h azasaat ar meziant d'hoc'h ezhommoù pe enframmañ anezhañ e meziantoù all.Dalc'hoù 'zo siwazh :Poentadur ebet.Kizidig d'an trouzioù endro.Fall war ur bern pouezioù-mouezh c'hoazh.Ret eo komz sklaer ha goustadik.Emichañs e vo gwellaet efedusted an anaouder tamm-ha-tamm, gant ma vo kavet roadennoù mouezh adskrivet.Ul lisañs dieub (doareCreative Commons) a aotrefe eskemm ar roadennoù-se en un doare aes.Sikour ar raktres gant un donezon :StaliañGoude bezañ bet stalietPython3e c'heller staliañ an anaouder dre an terminal :pipinstallanaouderUr wech staliet ha pa vo kinniget modeloù efedusoc'h, e c'hellit nevesaat ar meziant gant :pipinstall--upgradeanaouderAdskrivañ ur restr sonGant an urzhadskrivanen un terminal, e vo adskrivet ar pezh e vez komprenet gant an anaouder diouzh ur restr son. Ar wech kentañ ma vo peurgaset an urzh-se e vo ret deoc'h gortoz ur pennadig ma vefe pellkarget ha staliet ar modulstatic_ffmpeg(evit amdreiñ restroù son ha video).adskrivanRESTR_SON_PE_VIDEODre ziouer, adskrivet e vo pep tra e diabarzh an terminal. Gallout a rit ivez implij an opsion-oevit resisaat anv ur restr, e lec'h ma vo skrivet an titouroù. Tu zo implij an option-se gant an holl urzhioù eus ar meziant.adskrivanRESTR_SON_PE_VIDEO-oDISOC'H.txtEvit kaout listennad an opsionoù, implijit an opsion-h.Implijout gant ur mikroDre an an urzhmikroe c'heller implij an anaouder gant ho vouezh e amzer real.Ma n'ez eus skrid ebet o tont, klaskit niverenn an etrefas son gant :mikro-lHa gant an niverenn-se :mikro-dNIVERENN_ETREFASLinennañ ur teul skrid gant un teul sonM'ho peus un teul skrid adskrivet dre dorn (e stumm.txt) e c'heller linennañ ar skrid gant ar son, evit krouiñ ur restr istitloù (e stummsrt).linennanRESTR_SON_PE_VIDEORESTR_SKRIDAdskrivañ istitloù evit ur videoGallout a rit adskrivañ istitloù diouzh teuliadoù son pe video, e stummsrt(Subrip).istitlanRESTR_SON_PE_VIDEO-oistitloù.srtAn oberiadur-se a gemero kalzig a amzer (hervez padelezh an teuliad son). Klaskit gant un teul film berr da gentañ !https://user-images.githubusercontent.com/10166907/213805292-63becbe2-ffb5-492f-9bac-1330c4b2d07d.mp4Setu disoc'h an istitloù emgefre, hep cheñch netra. 
Kollet eo buan pa vez sonnerez...Implijout gant meziantoù allN'eo ket aliet dre ma vez kollet un nebeut perzhioù e-keñver ar pezh vez graet gant ar modulanaouder: adlakaat ar varennigoù-stag hag amdreiñ an niverennoù da skouer.Ar model noazh a c'hellit kavout en dosseranaouder/modelspe dre al liammreleases.AudapolisM'ho peus c'hoant implijout ar model gant ur etrefas grafikel e c'hellit mont da sellet ar raktresAudapolis.KdenliveGant ar meziant frammañ videoioùKdenlivee c'heller adskrivañ istitloù en un doare emgefre ivez.Ar mod-implij a c'heller kavoutamañ.TrugarezAr meziant-se zo bet diorroet o kemer harp war meziantoù dieub all : Kaldi, Vosk ha difazierHunspellan Drouizig (evit naetaat an testennoù a-raok ar pleustr).Lakaat da bleustriñ ar model a zo bet posubl a-drugarez d'an danvez prizius, krouet ha rannet gant ur bern tud all : ar raktres Mozilla Common Voice, enrolladennoù Dizale, Brezhoweb, RKB, Kaouen.net, Ya!, Becedia, abadennoù France3 ha Dastum.Trugarez da Elen Cariou, Jean-Mari Ollivier, Karen Treguier, Mélanie Jouitteau ha Pêr Morvan evit o sikour hag o souten.
|
anaparser
|
anaparser: Here is a first Anaplan parser.
|
anapass-python
|
Example PackageThis is a simple example package. You can useGithub-flavored Markdownto write your content.
|
anapass-python2
|
Example PackageAnapass2TModule packageGithub-flavored Markdown
|
anapioficeandfire
|
anapioficeandfire-python: A Python helper library for anapioficeandfire.com - The world's greatest API for quantified and structured data from the universe of Ice and Fire. Free software: BSD license. Documentation: https://anapioficeandfire-python.readthedocs.org. Features: TODO. History: 0.1.0 (2016-3-3): First release on PyPI. 0.1.1 (2016-3-3): Fixed a few things. 0.1.2 (2016-3-3): Importing submodules in init.
|
anaplan-api
|
anaplan-api: Anaplan-API is a Python library wrapper for the Anaplan Bulk API and Anaplan Authentication API.
Installation: Use the package manager pip to install Anaplan-API.
pip3 install anaplan_api
Usage:
import logging
from anaplan_api import anaplan
from anaplan_api.AnaplanConnection import AnaplanConnection
from anaplan_api.KeystoreManager import KeystoreManager

logging.basicConfig(format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S', level=logging.INFO)
logger = logging.getLogger(__name__)

if __name__ == '__main__':
    keys = KeystoreManager(path='/keystore.jks', passphrase='', alias='', key_pass='')

    auth = anaplan.generate_authorization(auth_type='Certificate', cert=keys.get_cert(), private_key=keys.get_key())
    conn = AnaplanConnection(authorization=auth, workspace_id='', model_id='')

    anaplan.file_upload(conn=conn, file_id="", chunk_size=5, data='/Users.csv')

    results = anaplan.execute_action(conn=conn, action_id="", retry_count=3)

    for result in results:
        if result:  # Boolean check of ParserResponse object, true if failure dump is available
            print(result.get_error_dump())
Known Issues: This library currently uses the PyJKS library for handling Java Keystore files. This project does not appear to be actively developed, and there is a known error installing pycryptodomex and twofish - both dependencies for PyJKS. The core files required from this library are: jks.py, rfc2898.py, sun_crypto.py, util.py.
PyJKS Requirements: javaobj-py3, pyasn1, pyasn1_modules.
You can simply download, remove the unnecessary files, and drop the jks folder in your site-package directory to work around the error.
Contributing: Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
License: BSD
|
anaplanapi2
|
No description available on PyPI.
|
anaplanConnector
|
Simple Anaplan Connector PackageIntroductionThis is a simple Anaplan connector intended to be used as a quick and easy way to mainly integrate with Anaplan using Python. This package does not include all API options. It uses the main calls to push data to anaplan via files, call a process, and export data.Anaplan Integration OverviewThe method of pushing data to Anaplan is common in the data warehousing space. Instead of pushing data in a transaction api (i.e. record by record), Anaplan utilizes a bulk data API which includes pushing delimitted files to a file location, and then copying the file into an Anaplan database. This is similar to Postgres and Snowflake'sCOPY INTOcommand.Command SummaryImport anaplan connectorfrom anaplanConnector import ConnectionIntialize the connectionBasic authenticationanaplan = Connection(authType='basic',email='[email protected]',password='SecurePassword',workspaceId='anaplanWorkspaceID',modelId='AnaplanModelID')Certificate authenticationanaplan = Connection(authType='certificate', privateCertPath='./AnaplanPrivateKey.pem', publicCertPath='./AnaplanPublicKey.pem', workspaceId='anaplanWorkspaceID', modelId='AnaplanModelID')There are two auth types: "basic" and "certificate". If basic is supplied, then the fields "email" and "password" are required. If "certificate" is supplied, then the fields "privateCertPath" and "publicCertPath" are required.Multiple workspaceIds and modelIds can be used by doing one of the following:Change the ids directly:anaplan.workspaceId = 'NewWorkspaceId'anpalan.modelId = 'NewModelId'Make new initialization of the connector:anaplanModel1 = Connection(email='[email protected]',password='SecurePassword',workspaceId='anaplanWorkspaceID',modelId='AnaplanModelID')
anaplanModel2 = Connection(email='[email protected]',password='SecurePassword',workspaceId='anaplanWorkspaceID2',modelId='AnaplanModelID2')Get a list of Workspacesworkspaces = anaplan.getWorkspaces()Get a list of Modelsmodels = anaplan.getModels()Get a list of filesfiles = anaplan.getFiles()Get the fileId from a filenamefileId = anaplan.getFileIdByFilename(filename)Load a fileanaplan.loadFile(filepath,fileId)filepath = The local location and filename of the file to load (e.g. '/home/fileToLoad.csv')fileId = The Anaplan file ID which can be found by running one of the above commandsGet a list of processesprocesses = anaplan.getProcesses()Get a processId from a process nameprocessId = anaplan.getProcessIdByName(processName)Run a processanaplan.runProcess(processId)Get a list of exportsexports = anaplan.getExports()Get an exportId from an export nameexportId = anaplan.getExportIdByName(exportName)Export dataanaplan.export(exportId, filepath)exportId = is Anaplan's Export ID that can be found with the above commandsfilepath = is the location and filename of where you want to save the file (e.g. '/home/export.csv')encoding (optional) = is the character encoding of the export file (default is utf-8)Process ExamplesLoad data into Anaplanfrom anaplanConnector import Connection
anaplan = Connection(authType='basic',email='[email protected]',password='SecurePassword',workspaceId='anaplanWorkspaceID',modelId='AnaplanModelID')
filepath = '/tmp/dataToLoad.csv'
anaplan.loadFile(filepath,anaplan.getFileIdByFilename('ExampleFile.csv'))
anaplan.runProcess(anaplan.getProcessIdByName('Import Data'))Export data from Anaplanfrom anaplanConnector import Connection
anaplan = Connection(authType='basic',email='[email protected]',password='SecurePassword',workspaceId='anaplanWorkspaceID',modelId='AnaplanModelID')
filepath = '/tmp/LocalExportedData.csv'
anaplan.export(anaplan.getExportIdByName('ExportedData.csv'), filepath)List of features that are currently being developedScript to create the public and private pem keys from the .p12 file.
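If you don't yet know the workspace, model, file, process, or export IDs used in the examples above, a rough sketch along these lines can help you find them. It only uses the discovery helpers from the command summary (getWorkspaces, getModels, getFiles); the credential values are placeholders, and the exact shape of the returned objects may differ.

from anaplanConnector import Connection

# Placeholder credentials; substitute real values.
anaplan = Connection(authType='basic', email='[email protected]', password='SecurePassword',
                     workspaceId='anaplanWorkspaceID', modelId='AnaplanModelID')

# Discovery helpers documented in the command summary above.
workspaces = anaplan.getWorkspaces()
models = anaplan.getModels()
files = anaplan.getFiles()

# Print whatever comes back so the IDs can be copied into loadFile()/runProcess() calls.
print(workspaces)
print(models)
print(files)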
|
ana-py
|
No description available on PyPI.
|
anarcho
|
Android artifact hosting service
|
anarchy
|
Overviewanarchyis a package for managed & beautiful chaos, designed with
incremental procedural content generation in mind. It includes
incremental & reversible random numbers, a selection of distribution
functions for random values, and cohort operations that can be applied
incrementally to groups along an indefinite 1-dimensional space.The goal is to give a high level of control to designers of PCG systems
while maintaining incremental operation.Coming soon: fractal coordinates.VersionsThere are implementations of at least the core functionality available in
C, C#, Javascript, and Python; this documentation applies most closely to
the Python implementation, and it is drawn from that code. Each different
language implementation has its own idiosyncrasies, but the higher-level
things, like number and meaning of parameters, are the same for the core
functions.TODO: Links to versions...Note: the anarchy Python package uses 64-bit integers, for compatibility
with the C version of the library and for a larger output space.
Technical limitations with JavaScript mean that the JS version of the
library uses 32-bit integers and will therefore give different results.DependenciesThe python version requires Python 3; tests usepytestand require
Python >=3.6.Example ApplicationThe incremental shuffling algorithm can be used as a replacement for a
standard random number generator in cases where you want to guarantee a
global distribution of items and are currently using independent random
number checks to control item distribution. For example, if you have code
that looks like this...

def roll_item():
    r = random.random()
    if r < 0.01:  # 1% chance for Rare item
        return "Rare"
    elif r < 0.06:  # 5% chance for Uncommon item
        return "Uncommon"
    else:
        return "Common"

...you have no guarantees about exactly how often rare/uncommon items
will be, and some players will get lucky or unlucky. Instead, even if you
don't know the number ofroll_itemcalls, withanarchyyou can do
this:

N = 0
seed = 472389223

def roll_item():
    global N, seed
    r = anarchy.cohort_shuffle(N, 100, seed + N // 100)
    N += 1
    if r < 1:
        return "Rare"
    elif r < 6:
        return "Uncommon"
    else:
        return "Common"

In this code there are two extra variables that have to be managed in
some way, but the benefit is that for every 100 calls to the function,
"Rare" will be returned exactly once, and "Uncommon" will be returned
exactly 5 times. Each group of 100 calls will still have a different
ordering of the items, because it uses a different seed.Here's an image illustrating the differences between these two
approaches: in the top half, results are generated using independent
random numbers, while the bottom half uses anarchy's cohort shuffling to
guarantee one red cell and 5 blue cells per 10×10 white square.
There are many other potential applications of reversible/incremental
shuffling; this is one of the most direct.
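A quick way to convince yourself of the per-cohort guarantee is the following minimal sketch (not from the library's own docs); it reuses the cohort_shuffle call exactly as in the snippet above, mirroring its first 100 calls (N = 0..99, so seed + N//100 == seed).

import anarchy

seed = 472389223
cohort_size = 100

# Mirror the first 100 calls of roll_item() above.
draws = [anarchy.cohort_shuffle(N, cohort_size, seed) for N in range(cohort_size)]

# Each group of 100 calls hits every slot 0..99 exactly once, which is why the
# "Rare" (r < 1) and "Uncommon" (r < 6) quotas are met exactly per cohort.
assert sorted(draws) == list(range(cohort_size))
print(draws[:10])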
|
anarchy-bot
|
anarchy_bot — a Telegram bot that can promote any user to admin.
install: pip install -U anarchy_bot
run: python -m anarchy_bot
translate to your language: fork the repo, go to anarchy_bot/lang, duplicate the en.yml file, rename it to something like ru.yml, edit the file, and make a pull request. The bot automatically detects the language set in the user's Telegram client and will search that directory for a file with the translation.
licence: GNU AGPL 3 - gnu.org/licenses/agpl-3.0.en.html
|
anarchychess
|
AnarchyChess
|
anarchy-sphinx
|
ContentsInstallationVia packageVia git or downloadChangelog0.3.1:0.3.0:0.2.0:Swift auto documentation extractorManual documentation for Swift typesanarchysphinxcommand line toolGenerate Dash docsets with sphinxThis is a simplistic theme used for the AnarchyTools Swift documentation.InstallationVia packageDownload the package or add it to yourrequirements.txtfile:$pipinstallanarchy-sphinxIn yourconf.pyfile:# documentation extractor and swift specific commandsextensions=["swift_domain"]# anarchy themeimportanarchy_themehtml_theme="anarchy_theme"html_theme_path=[anarchy_theme.get_html_theme_path()]Via git or downloadSymlink or subtree theanarchy_sphinx/anarchy_themerepository into your documentation atdocs/_themes/anarchy_themeandanarchy_sphinx/swift_domaintodocs/_extensions/swift_domainthen add the following two settings to your Sphinx conf.py file:# documentation extractor and swift specific commandsimportosimportsyssys.path.insert(0,os.path.abspath('_extensions'))extensions=["swift_domain"]# anarchy themehtml_theme="anarchy_theme"html_theme_path=["_themes",]Changelog0.3.1:Fix layout when no sidebar enabledExperimental: Generate anchors likedoc2dashexpects them. Tell me if something breaks!0.3.0:Fix table rendering in themeMake code boxes that overflow scrollableSwitch to bold style for active toc itemsBugfix: right aligned images were left alignedAdd bullets in front of nav items on top-bar to distinguish them0.2.0:Addanarchysphinxcommand line tool to bootstrap documentationSwift auto documentation extractorIf you want to use the doc-string extractor for Swift you’ll need to inform Sphinx about
where you keep your*.swiftfiles.swift_search_path=["../src"]If you’ve set that up you can use.. autoswift:: <symbol>to let the documenter search
for a Swift symbol and import the documentation in place.You may set some flags to configure documentation behaviour::noindex:do not add to index:noindex-members:do not index members:members:document members, optional: list of members to include:recursive-members:recursively document members (enums nested in classes, etc.):undoc-members:include members without docstring:nodocstring:do not show the docstring:file-location:add a paragraph with the file location:exclude-members:exclude these members:private-members:show private membersManual documentation for Swift typesThe Swift Domain contains the following directives, if the directive declares what you
document you can skip the corresponding Swift keyword (Example:.. swift:class:: Classname).. swift:function::toplevel functions.. swift:class::class definitions.. swift:struct::struct definitions.. swift:enum::enum definitions.. swift:protocol::protocol definitions.. swift:extension::extensions and default implementations for protocols.. swift:method::func signatures.. swift:class_method::class functions.. swift:static_method::static methods in structs or protocols.. swift:init::initializers.. swift:enum_case::enum cases.. swift:let::let constants.. swift:var::variables.. swift:static_let::static let constants.. swift:static_var::static variablesall of those have a:noindex:parameter to keep it out of the index.anarchysphinxcommand line toolusage: anarchysphinx [-h] [--private] [--overwrite] [--undoc-members]
[--no-members] [--file-location] [--no-index]
[--no-index-members] [--exclude-list file]
[--use-autodocumenter]
source_path documentation_path
Bootstrap ReStructured Text documentation for Swift code.
positional arguments:
source_path Path to Swift files
documentation_path Path to generate the documentation in
optional arguments:
-h, --help show this help message and exit
--private Include private and internal members
--overwrite Overwrite existing documentation
--undoc-members Include members without documentation block
--no-members Do not include member documentation
--file-location Add a paragraph with file location where the member
was defined
--no-index Do not add anything to the index
--no-index-members Do not add members to the index, just the toplevel
items
--exclude-list file File with exclusion list for members
--use-autodocumenter Do not dump actual documentation but rely on the auto
documenter, may duplicate documentation in case you
have defined extensions in multiple filesGenerate Dash docsets with sphinxAdd the following to your sphinxMakefile. You will need the pip packagedoc2dashinstalled for this to work.On top in the variable declaration section:PROJECT_NAME=myprojectIn the helptext section:@echo " dashdoc to make Dash docset"Below thehtmltarget:.PHONY: dashdoc
dashdoc:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) -D 'html_sidebars.**=""' $(BUILDDIR)/dashdoc
doc2dash -v -n $(PROJECT_NAME) -d $(BUILDDIR)/ -f -I index.html -j $(BUILDDIR)/dashdoc
@echo
@echo "Build finished. The Docset is in $(BUILDDIR)/$(PROJECT_NAME).docset."and run the build withmake dashdoc
|
anarcute
|
My toolbox for dynamic programming#to be documented
# Chapter: tf-idf
from anarcute import *
import requests, json

sentence = "Eat more of those french fries and drink cola"
alice = requests.get("https://gist.githubusercontent.com/phillipj/4944029/raw/75ba2243dd5ec2875f629bf5d79f6c1e4b5a8b46/alice_in_wonderland.txt").text
print(tf_idf(sentence, alice))
>> {'eat': 168.7962962962963, 'more': 62.006802721088434, 'of': 5.9111543450064845, 'those': 303.8333333333333, 'french': 759.5833333333333, 'and': 3.4843272171253816, 'drink': 434.047619047619}

# If the text is too big, its frequencies can be pre-cached.
filename = "alice.json"
vector = vectorize(alice)
open(filename, "w+").write(json.dumps(vector))
vector = json.load(open(filename, "r+"))
print(tf_idf(sentence, vector))
>> {'eat': 168.7962962962902, 'more': 62.00680272108618, 'of': 5.91115434500627, 'those': 303.8333333333223, 'french': 759.5833333333056, 'and': 3.484327217125255, 'drink': 434.0476190476033}

# we can sort by value
print(sort_by_value(tf_idf(sentence, vector)))
>> {'french': 759.5833333332979, 'drink': 434.04761904759886, 'those': 303.8333333333192, 'eat': 168.7962962962885, 'more': 62.006802721085556, 'of': 5.911154345006209, 'and': 3.4843272171252204}

# Chapter: Google
# We have Google Translate and Google Custom Search Engine now
key = "MY_GOOGLE_KEY"
gt = GT(key)
gt.translate("pl", "en", "Jeszcze Polska nie zginęła, Kiedy my żyjemy. Co nam obca przemoc wzięła, Szablą odbierzemy.")
>> {'data': {'translations': [{'translatedText': 'Poland is not dead yet, When we live. What foreign violence has taken from us, we will take away the Saber.'}]}}

cx = "MY_CUSTOM_SEARCH_ENGINE_KEY"
gs = GS(cx, key)
gs.search("krakauer sausage recipe")
>> dict with search result, up to 10 items
gs.items("krakauer sausage recipe")
>> array of results, up to 100 items

# Chapter: Multithreading
# based on the multithreading_on_dill library
# let's reverse every string of Alice in Wonderland
url = "https://gist.githubusercontent.com/phillipj/4944029/raw/75ba2243dd5ec2875f629bf5d79f6c1e4b5a8b46/alice_in_wonderland.txt"
alice = requests.get(url).text
alice_reversed = mapp(lambda s: str(s[::-1]), alice.split('\n'))
# as you see we have no problem with lambda

# by default the number of processes equals the cpu count, but you can make it bigger for highly async tasks or smaller to prevent overload
alice_reversed = mapp(lambda s: str(s[::-1]), alice.split('\n'), processes=2)

# the decorator @timeit is also included in the library
@timeit
def test(p=None):
    r = mapp(lambda s: math.factorial(150*len(s)), alice.split('\n'), processes=p)
    return None
test()
>> 'test' 2563.11 ms
test(1)
>> 'test' 5287.27 ms

# multithreading filter
alice_special = filterp(lambda s: "alice" in s.lower(), alice.split('\n'))

# run one async function
run(print, ["A B C"])

# you can wait for its result when you need to catch up
p = run(lambda x: requests.get(x).text, url)
some_other_stuff()
p.join()

# apply - a function that executes functions. Used to run a few different functions in one multithreading process
r = mapp(apply, [lambda: requests.get("https://gist.githubusercontent.com/phillipj/4944029/raw/75ba2243dd5ec2875f629bf5d79f6c1e4b5a8b46/alice_in_wonderland.txt").text, lambda: math.factorial(9000)])

# Chapter: predicates
# in_or(a, b) - returns whether at least one element of array a is in array/string b
a = ["Some", "important", "array"]
b = ["Another", "array"]
in_or(a, b)
>> True
c = ["Something", "Else"]
in_or(a, c)
>> False
d = "Some string"
in_or(a, d)
>> True
|
anarion
|
This is a stock analysis library. It has various types of technical and fundamental indicators.
Change Log
0.0.1 (31/7/2021) - First Release
|
anarpy
|
AnarPy - ANalysis And Replication in PYthonAnarPy is a Python package to facilitate the simulation, analysis, and replication of several experiments using computational whole brain models.
For more details, installation instructions, documentation, tutorials, forums, videos and more, please visit:https://anarpy.readthedocs.io
This package is developed and maintained by the Valparaíso Neural Dynamics Laboratory at Universidad de Valparaíso (https://vandal-uv.github.io/)
LicenseThe source code for the site is licensed under the MIT license, which you can find in the LICENSE file.
Citation
Contact
Code Maintainer: Javier Palma-Espinosa ([email protected])
Principal Investigator: Patricio Orio ([email protected])
|
anaryotext
|
Failed to fetch description. HTTP Status Code: 404
|
anas
|
No description available on PyPI.
|
ana-sales
|
No description available on PyPI.
|
ana-sdk
|
ANA SDKThe ANA SDK project provides an interface for interacting with the services
related to ANA's business automation, data and customer environment.
It lets you log in, execute commands and set the current tenant, client
and company so as to get automatic access to ANA Data.
Installation
Use the pip package manager to install the ANA SDK.
pip install ana-sdk
Usage example
All public methods are documented, with their parameters, return values and mapped exceptions. Class attributes are documented as well.

from ana_sdk import ANA

ana = ANA(rpa_api_base_url, dashboard_api_base_url, oauth_token_full_url)
ana.login("[email protected]", "senha")

clientes = ana.api.get_clientes()
print(clientes)

ana.set_cliente(clientes[0]["id"])

lotacoes = ana.data.get_lotacoes(empresa=UUID_da_empresa)
lotacoes = ",".join(lotacao["id"] for lotacao in lotacoes)

ana.execute("668.279", {"mes": "02", "ano": "2023", "lotacoes": lotacoes})

In addition, you can use the clients to find out more about the methods available on each one:

from ana_sdk import ANA

ana = ANA(rpa_api_base_url, dashboard_api_base_url, oauth_token_full_url)
ana.login("[email protected]", "senha")

ana.api.  # Will show you everything you need to know about the RPA API, for example
ana.rpa.
ana.data.  # Only available once a company, client or tenant has been selected

Contributing
Contributions are welcome! To contribute to the project, follow the guidelines below:
Fork the project
Create a new branch named after your contribution: git checkout -b minha-contribuicao
Make the desired changes and add the modified files: git add .
Commit your changes: git commit -m "Descrição da minha contribuição"
Push to the remote branch: git push origin minha-contribuicao
Open a pull request describing your changes
|
anasim
|
No description available on PyPI.
|
anaspdf
|
This is a demo.
Please don't download.
|
anastasia
|
# Anastasia
Jenkins build: <a href='http://vps110163.vps.ovh.ca:8080/job/Anastasia/'><img src='http://vps110163.vps.ovh.ca:8080/buildStatus/icon?job=Anastasia'></a>
## English section
## Telegram
For those who are interested in developing Anastasia, join us on our Telegram chat:https://telegram.me/joinchat/CBKa1ggoTRMEkyZTJpsTxg
## Advised environment
To make things easier, we recommend using PyCharm. The project was built with this IDE, and obviously everything works with PyCharm! Every student can claim a free license here:https://www.jetbrains.com.
## Deployment
All pushes on master are automatically deployed to the live version of Anastasia; you can reach her at @anaimag_bot. Pushes on master are not directly allowed.
All pushes on dev are automatically deployed to the beta version of Anastasia; you can reach her at @anaimagbeta_bot.
## Advice
Add your dependencies to the requirements files.
Join our Telegram chat!
Don't implement useless functions or spam functions.
Enjoy! :D
## Used libraries
To link Anastasia to Telegram, we're using this lib:https://github.com/python-telegram-bot/python-telegram-bot
## Using git with SSH
RTFM !
———
## French section
## Telegram
For those interested in developing Anastasia, you can join the associated Telegram discussion:https://telegram.me/joinchat/CBKa1ggoTRMEkyZTJpsTxg
## Advised environment
To make things easier, using PyCharm is recommended. The project is already set up to be used with this IDE. It is available for free to any student on this site:https://www.jetbrains.com.
## Deployment
All pushes on master are automatically deployed to Anastasia, reachable on Telegram under the handle @anaimag_bot. Pushes on master are not allowed.
All pushes on dev are automatically deployed to Anastasia Beta, reachable on Telegram under the handle @anaimagbeta_bot.
## Advice
Don't forget to add your dependencies to the requirements file.
Join the Telegram conversation so we can talk among ourselves.
Don't implement overly useless functions or ones that might spam the discussions.
Have fun :D
## Used libraries
For the link with Telegram, here is the project used:https://github.com/python-telegram-bot/python-telegram-bot
## Using git with SSH
RTFM !
|
anastasia-logging
|
Anastasia Logging
This repository holds a logging implementation wrapper for Python scripts with standardized code notifications and additional features.
Summary
About the logging python library
Enhancements and additional functionalities
Log Output Standardization
Predefined Parameters for AnastasiaLoggers from Environment Variables
Code Tags Standardization
Print Functionality
Versioning
1. About the logging Python Library
This repository is based on the Python base log management library called logging. Commonly, for basic usage, a simple logging import is recommended in order to avoid additional settings in our internal scripts.
import logging
logging.warning("I am a very usefull warning")
OUTPUT => WARNING:root:I am a very usefull warning
But for more complex repositories, it is recommended to manage different loggers according to their needs. For customized logging behaviour, a Logger class must be instantiated from logging.
import logging
logger = logging.Logger("custom_logger")
logger.warning("I am a very usefull warning")
OUTPUT => I am a very usefull warning
Have you noticed that in the first import logging example the word root appears in the console output? This is because when you use import logging directly, a default Logger class is instantiated with the name root, along with default configurations.
This repository contains a class called AnastasiaLogger, which inherits from the Logger class, standardizes logging definitions and brings some improvements to the debugging (.debug()), information (.info()), warning (.warning()), error (.error()), critical (.critical()) and fatal (.fatal()) methods.
from anastasia_logging import AnastasiaLogger
logger = AnastasiaLogger()
logger.warning("I am a very usefull warning")
OUTPUT => 2023-05-08 10:39:17 UTC/GMT-0400 | [ANASTASIA-JOB] WARNING: I am a very usefull warningIf a script already has a logging usage, it is possible to replaceimport loggingwithimport anastasia_logging as loggingand no modifications are required from the script side to work!import anastasia_logging as logging
logging.warning("I am a very usefull warning")
OUTPUT => 2023-05-08 10:39:17 UTC/GMT-0400 | [ANASTASIA-JOB] WARNING: I am a very usefull warning
2. Enhancements and Additional Functionalities
2.1 Log Output Standardization
AnastasiaLogger uses a common structure to show information, which is the following:
YYYY-MM-DD HR:MN:SS UTC/GMT(+-)HHMN | [TAG] LEVEL: message
TAG is defined by default as ANASTASIA-JOB (it can be changed during class instantiation), and LEVEL is defined according to the level method called. TAG is intended to differentiate responsibilities across scripts in other repositories.
2.2 Predefined Parameters for AnastasiaLoggers from Environment Variables
For faster logging behaviour across an entire repository, some variables that AnastasiaLogger can receive can be predefined as environment variables:
ANASTASIA_LOG_NAME: name identification of the logger instance (default="anastasia-log")
ANASTASIA_LOG_TAG: tag identification for the job type for StreamHandlers (console) and FileHandlers (file) (default="ANASTASIA-JOB")
ANASTASIA_LOG_LEVEL: defines the severity hierarchy for log generation (default="INFO")
ANASTASIA_LOG_SAVELOG: generate a .log file containing the generated logs (default="0" (deactivated))
ANASTASIA_LOG_PATH: define a custom name for the log file (default="anastasia-log.log")
If these are not set, AnastasiaLogger will instantiate with default parameters.
2.3 Code Tags Standardization
In order to have a common identification for upper-level implementations, AnastasiaLogger holds standardized code definitions for different topics. The coding structure is the following:
Code | Topic
0    | Unidentified
1XX  | Data related
2XX  | Mathematical related
3XX  | AI related
4XX  | Resources related
5XX  | Operative System related
6XX  | API related
7XX  | AWS related
The methods debug(), info(), warning(), error(), critical() and fatal() can be called with a code as parameter in order to extend the log with a code description.
import anastasia_logging as logging
logging.warning("I am a dataset warning", 100)
OUTPUT => 2023-05-08 11:55:27 UTC/GMT-0400 | [ANASTASIA-JOB] WARNING: <W100> I am a dataset warning
If a code is already predefined and no message is given, a default message will appear according to the declared code.
2.4 Print Functionality
For easy visualization in the console without interacting with an AnastasiaLogger, you can use print like the Python built-in print call.
import anastasia_logging as logging
logging.print("Some prints to show")
OUTPUT => 2023-05-08 11:55:27 UTC/GMT-0400 | [ANASTASIA-JOB] PRINT: Some prints to show3. Versioningv1.3.4Fixes:Fixed edge cases inANASTASIA_LOG_LEVELandANASTASIA_LOG_PATHassiganation valuesv1.3.3Fixes:Fixed edge case ofANASTASIA_LOG_SAVELOGenvironment variable detection if value entered is not a digitv1.3.2Fixes:Fixed parameter assignation forANASTASIA_LOG_SAVELOGenvironment variablev1.3.1Fixes:Default unindentifieddebug,criticalandfatalstandard codes fixedv1.3.0Features:Functionscritical,fatalanddebugincorporatedv1.2.0Features:FunctionprintincorporatedFunctionsinfo,warning,errorandprintcan return the formatted message for further usagev1.1.1Fixes:FunctionbasicConfigwas skipping predefined structures with tags and log streamhandlersv1.1.0Features:FunctionbasicConfigequivalent from logging implemented for root AnastasiaLoggerv1.0.0Features:AnastasiaLoggerClass abstractionStandar code description definitions forINFO,WARNINGandERRORPredefinedAnastasiaLoggerparameters loaded from environment variables
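To illustrate section 2.2 above, here is a hypothetical sketch that predefines the documented environment variables before importing the package. The variable names and defaults come from that section; the chosen values, and the assumption that they are read at import time, are illustrative and should be checked against your installed version.

import os

# Environment variables documented in section 2.2; values here are illustrative.
# They are set before the import on the assumption that defaults are picked up then.
os.environ["ANASTASIA_LOG_NAME"] = "etl-job-log"
os.environ["ANASTASIA_LOG_TAG"] = "ETL-JOB"
os.environ["ANASTASIA_LOG_LEVEL"] = "WARNING"
os.environ["ANASTASIA_LOG_SAVELOG"] = "1"        # also write a .log file
os.environ["ANASTASIA_LOG_PATH"] = "etl-job.log"

import anastasia_logging as logging

logging.warning("I am a very usefull warning")   # expected to be tagged [ETL-JOB]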
|
anastruct
|
anaStruct 2D Frames and TrussesAnalyse 2D Frames and trusses for slender structures. Determine the bending moments, shear forces, axial forces and displacements.InstallationFor the actively developed version:$ pip install git+https://github.com/ritchie46/anaStruct.gitOr for a release:$ pip install anastructRead the docs!DocumentationQuestionsGot a question? Please ask ongitter.Includestrusses :heavy_check_mark:beams :heavy_check_mark:moment lines :heavy_check_mark:axial force lines :heavy_check_mark:shear force lines :heavy_check_mark:displacement lines :heavy_check_mark:hinged supports :heavy_check_mark:fixed supports :heavy_check_mark:spring supports :heavy_check_mark:q-load in elements direction :heavy_check_mark:point loads in global x, y directions on nodes :heavy_check_mark:dead load :heavy_check_mark:q-loads in global y direction :heavy_check_mark:hinged elements :heavy_check_mark:rotational springs :heavy_check_mark:non-linear nodes :heavy_check_mark:geometrical non linearity :heavy_check_mark:load cases and load combinations :heavy_check_mark:generic type of section - rectangle and circle :heavy_check_mark:EU, US, UK steel section database :heavy_check_mark:ExamplesfromanastructimportSystemElementsimportnumpyasnpss=SystemElements()element_type='truss'# Create 2 towerswidth=6span=30k=5e3# create trianglesy=np.arange(1,10)*np.pix=np.cos(y)*width*0.5x-=x.min()forlengthin[0,span]:x_left_column=np.ones(y[::2].shape)*x.min()+lengthx_right_column=np.ones(y[::2].shape[0]+1)*x.max()+length# add trianglesss.add_element_grid(x+length,y,element_type=element_type)# add vertical elementsss.add_element_grid(x_left_column,y[::2],element_type=element_type)ss.add_element_grid(x_right_column,np.r_[y[0],y[1::2],y[-1]],element_type=element_type)ss.add_support_spring(node_id=ss.find_node_id(vertex=[x_left_column[0],y[0]]),translation=2,k=k)ss.add_support_spring(node_id=ss.find_node_id(vertex=[x_right_column[0],y[0]]),translation=2,k=k)# add top girderss.add_element_grid([0,width,span,span+width],np.ones(4)*y.max(),EI=10e3)# Add stability elements at the bottom.ss.add_truss_element([[0,y.min()],[width,y.min()]])ss.add_truss_element([[span,y.min()],[span+width,y.min()]])forelinss.element_map.values():# apply wind load on elements that are verticalifnp.isclose(np.sin(el.ai),1):ss.q_load(q=1,element_id=el.id,direction='x')ss.show_structure()ss.solve()ss.show_displacement(factor=2)ss.show_bending_moment()fromanastructimportSystemElementsss=SystemElements(EA=15000,EI=5000)# Add beams to the system.ss.add_element(location=[0,5])ss.add_element(location=[[0,5],[5,5]])ss.add_element(location=[[5,5],[5,0]])# Add a fixed support at node 1.ss.add_support_fixed(node_id=1)# Add a rotational spring support at node 4.ss.add_support_spring(node_id=4,translation=3,k=4000)# Add loads.ss.point_load(Fx=30,node_id=2)ss.q_load(q=-10,element_id=2)# Solvess.solve()# Get visual results.ss.show_structure()ss.show_reaction_force()ss.show_axial_force()ss.show_shear_force()ss.show_bending_moment()ss.show_displacement()Real world use case.Non linear water accumulation analysis
|
ana_survey
|
UNKNOWN
|
anasymod
|
anasymodanasymodis a tool for running FPGA emulations of mixed-signal systems. It supports digital blocks described with Verilog or VHDL and synthesizable analog models created usingmsdslandsvreal.InstallationFrom PyPI>pipinstallanasymodIf you get a permissions error when running one of thepipcommands, you can try adding the--userflag to thepipcommand. This will causepipto install packages in your user directory rather than to a system-wide location.From GitHubIf you are a developer ofanasymod, it is more convenient to clone and install the GitHub repository:>gitclonehttps://github.com/sgherbst/anasymod.git
>cdanasymod>pipinstall-e.Testing the InstallationCheck to see if theanasymodcommand-line script is accessible by running:>anasymod-hIf theanasymodscript isn't found, then you'll have to add the directory containing it to the path. On Windows, a typical location isC:\Python3*\Scripts, while on Linux or macOS you might want to check~/.local/bin(particularly if you used the--userflag).Prerequites to run the examplesThe examples included withanasymoduseIcarus Verilogfor running simulations,Xilinx Vivadofor running synthesis and place-and-route, andGTKWavefor viewing the simulation and emulation results. The instructions for setting up these tools are included below for various platforms.WindowsInstall Xilinx Vivado by going to thedownloads page. Scroll to the latest version of the "Full Product Installation", and download the Windows self-extracting web installer. Launch the installer and follow the instructions. You'll need a Xilinx account (free), and will have to select a license, although the free WebPACK license option is fine you're just planning to work with small FPGAs like the one on the Pynq-Z1 board.GTKwave and Icarus Verilog can be installed at the same time using the latest Icarus binaryhere.LinuxInstall Xilinx Vivado by going to thedownloads page. Scroll to the latest version of the "Full Product Installation", and download the Linux self-extracting web installer. Then, in a terminal:>sudo./Xilinx_Vivado_SDK_Web_*.binA GUI will pop up and guide you through the rest of the installation. Note that you'll need a Xilinx account (free), and that you can select the free WebPACK license option if you're planning to work with relatively small FPGAs like the one on the Pynq-Z1 board.Next, the Xilinx cable drivers must be installed (AR #66440):>cd<YOUR_XILINX_INSTALL>/data/xicom/cable_drivers/lin(32|64)/install_script/install_drivers
> sudo ./install_drivers
Finally, some permissions cleanup is required (AR #62240)
> cd ~/.Xilinx/Vivado
> sudo chown -R $USER *
> sudo chmod -R 777 *
> sudo chgrp -R $USER *
Installing GTKWave and Icarus Verilog is much simpler; just run the following in a terminal:
> sudo apt-get install gtkwave iverilog
macOS
The simulation part of this example should work if you install Icarus Verilog and GTKWave:
> brew install icarus-verilog
>brewinstall--caskgtkwaveUnfortunately Xilinx Vivado does not run natively on macOS. However, running Vivado on a Windows or Linux virtual machine on macOS does seem to work.Running a SimulationFrom within the folderunittests, run>anasymod-ibuck--models--sim--viewThis will generate a synthesizable model for a buck converter, run a simulation, and display the results.Command-line optionsHere's a breakdown of what the options mean:The-i buckoption indicates that the folderbuckcontains the emulation files--modelsmeans thatanasymodshould look for a file calledgen.pyand run it. In this case thegen.pyscript usesmsdslto generate a synthesizable model for the buck converter.--simmeans that a simulation should be run on the computer, rather than building the emulator bitstream. This is helpful for debugging. You can also pass the option--simulator_name NAMEto specify which simulator should be used. Currently supported simulators areicarus,vivado, andxrun(Cadence Xcelium),--viewmeans that the resuls should be displayed after running the simulation. The viewer can be specified with the--viewer_name NAMEoption. Onlygtkwaveis supported at the moment. When a file calledview.gtkwis in the folder with emulator sources,anasymodwill load it to configure the GTKWave display. You can generate your ownview.gtkwfile usingFile→Write Save Filein GTKWave.Source filesLooking into thebuckfolder, you'll notice there are a bunch of files. Some have special meanings for theanasymodtool (i.e., if you have a file with one of these names, itanasymodwill treat it in a certain way)gen.py: Generates synthesizable emulation models.anasymodexpects to run this as a command-line script that has arguments-o(output directory) and-dt(fixed timestep).anasymodfills those arguments based on user settings when it runsgen.py.prj.yaml: YAML file containing settings for the project, like the FPGA board to use, build options, emulator control infrastructure to use, etc. In this case, one of the key options isdt, which indicates that each emulator cycle corresponds to a fixed timestep of 50 ns.simctrl.yaml: YAML file containing signals to probe. The signals indicated will be probed for both simulation and emulation (in the latter case, using a Xilinx Integrated Logic Analyzer instance). This file can also specify signals to be written and read in interactive tests.tb.sv: Top-level file for simulation and synthesis, representing a synthesizable testbench.anasymodhas other special files (e.g., ``clks.yaml` for controlling clock generation), but they are not used in this example.Running an EmulationAt this point, we have run a simulation of the emulator design, but haven't built a bitstream of programmed the FPGA. This section shows how to run those tasks.For this test to work as-is, you'll need aPynq-Z1board. However, if you uncomment theboard_nameoption inprj.yaml, you can specify a different board; currently supported options arePYNQ_Z1,ARTY_A7,VC707,ZC702,ZC706,ZCU102, andZCU106. We are always interested to add support for new boards, so please let us know if your board isn't listed here (feel free to file a GitHub Issue).Go to the folderunittestsand run the following command. It will take about 11 minutes to build the bitstream.>anasymod-ibuck--models--buildIf you are using the Pynq-Z1 board, then please make sure it is set up correctly:Jumper JP4 should be set for "JTAG"Jumper "JP5" should be set for "USB"
Then, plug the Pynq board into your computer using a micro USB cable. Move the Pynq board power switch to "ON".Run the emulation and view the results with the following command:>anasymod-ibuck--emulate--viewFor now, there is a separate*.gtkwfile used when viewing emulation results, which isview_fpga.gtkw.Timestep ManagementThe example considered so far used a fixed timestep, but for optimal emulator performance, some systems benefit from an event-driven approach, where various blocks make timestep requests, resulting in a variable timestep emulator.Withanasymod, an emulation model can make a timestep request (dt_req), which is passed to an auto-generated time manager. The time manager takes the minimum of all timestep requests, and passes the result (emu_dt) back to models that need that information.This system is configured through a file calledclks.yaml. For an example, seeunittests/multi_clock, which represents a system where two independent oscillators are running at different frequencies, and each is making its own timestep requests.If you look in the top-level file (tb.sv), you'll see that there are two oscillator instances,osc_0andosc_1. These are referenced inclks.yaml, which says thatanasymodshould wire up the timestep request from each (dt_req) as well as the emulator timestep (emu_dt). It also connects up the emulator clock (emu_clk) and reset signals (emu_rst).This brings us to a point about postprocessing: when you run a simulation or emulation, the raw data produced is dumped to a folder calledraw_results, and includes a timestamp at each emulation cycle. To make the results easier to visualize,anasymodpost-processes the raw data, applying timestamps to all signals. The post-processed result is placed in a folder calledvcd; that waveform is what is displayed when you invoke the--viewoption. However, for debugging timestep issues, it can be useful to examine theraw_resultsfolder, too.Clock GenerationThe sameclks.yamlfile also handles the generation of new clock signals.anasymodautomatically generates an emulator clock signal, but typically there are one or more "real" clocks in the design that need to be generated as well.This takes some care to avoid timing issues, and the strategy taken byanasymodis to make sure the rising and falling edges of all generated clocks are aligned to a rising edge of the emulator clock. For a block that wants to generate a new clock signal, it produces a clock request (gated_clk_req) in the preceding emulator cycle, like a clock enable signal, andanasymodpasses the generated clock signal (gated_clk) back to the block. The generated clock is properly aligned and routed through FPGA clock infrastructure to ensure good performance.You can see an example of this configuration by examing theclk_0andclk_1entries inclks.yamlof themulti_clockexample: the oscillator models each produce a clock request, and each clock request is used to generate one clock signal.FirmwareRather than writing tests entirely in RTL, or controlling tests entirely from a host computer, it is often a good compromise to use firmware running on the FPGA to receive commands from the host computer and implement them at a lower level when interacting with the code intb.sv.For FPGAs that contain a Processing System (PS),anasymodautomates much of this process by automatically instantiating the PS and generating firmware to interact with it. 
An example can be seen inunittests/custom_firmware, where user code, written inmain.cinvokesGETandSETcommands from the auto-generatedgpio_funcs.hheader.The signals to be set up for reading and writing are specified insimctrl.yaml, just as they would be for VIO-based control. However, thefpga_sim_ctrlsetting inprj.yamlindicates that VIO control should not be used, and that the FPGA should instead using PS firmware to interact with the DUT.By default,anasymodwill generate amain.cfile, but if you want to use your own, as in this example, setcustom_zynq_firmwaretoTrueinprj.yaml, and then specify the location of themain.cfile insource.yaml(underfirmware_files).Interactive TestsIt's often important to be able to interact with the emulator from Python while it is running, is order to steer the high-level direction of the tests. This is supported through theAnalysisobject provided by theanasymodPython package, which provides a programmatic way to access all of the features of the command-lineanasymodtool.As an example, consider the example inunittests/rc, which is an RC filter whose input and output are accessible for interactive testing (as specifed in itssimctrl.yamlfile).With theanasymodprogrammatic interface, it's possible to build a bitstream, program the FPGA, and interact with the emulator with a fairly small number of lines of code:fromanasymod.analysisimportAnalysisana=Analysis('path/to/rc')ana.set_target('fpga')ana.build()# build bitstreamctrl=ana.launch()# program FPGActrl.stall_emu()ctrl.set_param(name='v_in',value=1.0)ctrl.set_reset(1)ctrl.set_reset(0)for_inrange(25):ctrl.refresh_param('vio_0_i')v_out=ctrl.get_param('v_out')t=ctrl.get_emu_time()print(f't:{t}, v_out:{v_out}')ctrl.sleep_emu(0.1e-6)In real-world use, it's unlikely that you would want to rebuild the FPGA bitstream before every emulation run, but we have have included the command here just for reference. As long as the FPGA bitstream is built, you could comment out that line, and everything should still work.This example illustrates howanasymodprovides commands for interacting with emulator time (stall_emu,get_emu_time,sleep), as well as reading/writing emulator values (set_param,get_param). Emulator I/O works for both digital values and analog values; in the analog case, it automatically converts real numbers to the format being used by the emulator.ContributingTo improve the quality of the software, users are encouraged to share modifications, enhancements or bug fixes with Infineon Technologies AG [email protected].
|
anasyspythontools
|
NOTE
Please note that this package is currently beta. Things may break unexpectedly. Most functions writing to Analysis Studio format have been disabled, so I don't see how this package could break your existing data anymore.
Anasys Python Tools
Anasys Python Tools is a python package for working with files generated by Anasys Instruments' (now Bruker) Analysis Studio software. Anasys Python Tools was originally developed by Cody Schindler from Anasys.
Basic Usage

import anasyspythontools as anasys

# Read your Analysis Studio file
f = anasys.read("afmdata.axz")

# Grab all the height map data from the file
heightmaps = f.HeightMaps

# Show off your beautiful images
heightmaps['Height 1'].show()

# Unsure what the data looks like? Try:
dir(f)  # Displays user-accessible data
dir(f.HeightMaps)

Features
Read files with .axz, .axd file extensions
Extract AFM spectral and height map images as numpy arrays
Quickly display and save your data
Use your data with popular Python data libraries and applications (Pandas, Orange3, Jupyter, etc.)
Work with your data when you're away from your instruments
Evaluate your data in a transparent and flexible way
Installation
From pip
pip install anasyspythontools
From github
Install the latest version:
pip install git+https://github.com/GeorgRamer/anasys-python-tools.git
or a specific branch:
pip install git+https://github.com/GeorgRamer/anasys-python-tools.git@<branchname>
Contribute
Feel free to fork and hack away! If you have a feature you'd like to see, please open an Issue.
Support
This section previously said:
If you are having issues, please let us know.
Email Cody directly [email protected] Cody appears to have stopped working on this, please raise issues in this fork or email Georg [email protected] project is licensed under the MIT license.Each .py file in this document has a header stating:# Copyright 2017 Cody Schindler <[email protected]>## This program is the property of Anasys Instruments, and may not be# redistributed or modified without explict permission of the author.To my understanding the MIT license constitutes an “explicit permission” to redistribute and modify. To be on the safe side, I (GeorgRamer) have repeatedly, over a span of several years tried to get confirmation on that from Bruker. I never got a definite answer.
|
ana-tanase-own-package
|
No description available on PyPI.
|
anaties
|
anatiesAnaties (contraction of 'analysis utilities'). A place for common operations like signal smoothing that are useful across all my data analysis projects.Installation and usageInstall with pip:pip install anatiesWhen a new release is made, upgrade with:pip install anaties --upgradeUsage is simple. In your code:import anaties as ana
ana.function_name()You can test it out with:import anaties as ana
print(ana.datetime_string())
plt.plot([0, 1], [0,1], color='k', linewidth=0.6)
ana.rect_highlight([0.25, 0.5])All other functions are listed below.Brief summary of all utilitiessignals.py (for 1d data arrays, or arrays of such arrays)
- smooth: smooth a signal with a window (gaussian, etc)
- smooth_rows: smooth each row of a 2d array using smooth()
- power_spec: get the power spectral density or power spectrum
- spectrogram: calculate/plot spectrogram of a signal
- notch_filter: notch filter to attenuate specific frequency (e.g. 60hz)
- bandpass_filter: allow through frequencies within low- and high-cutoff
plots.py (basic plotting)
- error_shade: plot line with shaded error region
- freqhist: calculate/plot a relative frequency histogram
- paired_bar: bar plot for paired data
- plot_with_events: plot with vertical lines to indicate events
- rect_highlight: overlay rectangular highlight to figure
- vlines: add vertical lines to figure
stats (basic statistical things)
- collective_correlation: collective correlation coefficient
- med_semed: median and std error of median of an array
- mean_sem: mean and std error of the mean of an array
- mean_std: mean and standard deviation of an array
- se_mean: std err of mean of array
- se_median: std error of median of array
- cramers_v: cramers v for effect size for chi-square test
helpers.py (generic utility functions for use everywhere)
- datetime_string : return date_time string to use for naming files etc
- file_exists: check to see if file exists
- get_bins: get bin edges and centers, given limits and bin width
- get_offdiag_vals: get lower off-diagonal values of a symmetric matrix
- ind_limits: return indices that contain array data within range
- is_symmetric: check if 2d array is symmetric
- rand_rgb: returns random array of rgb valuesAcknowledgmentsSongbird wav is open source from:https://freesound.org/people/Sonic-ranger/sounds/243677/Developed with the support from NIH Bioinformatics and the Neurobehavioral Core at NIEHS.To do: More importantfinish adding tests.plots.rect_highlight should just use axvspan/axhspan!use median instead of mean in spectrogramadd proper documentation and tests to stats module.integrate vlines into pypi and version up (maybe good test for ci)add ax return for all plot functions, when possible.finish plots.twinx and make sure it worksadd test for plots.error_shade.Add return object for plots.rect_highlight()consider adding directory_exists to helperspaired_bar and mean_sem/std need to handle one point better (throws warning)Add a proper suptitle fix in aplots it is a pita to add manually/remember:
f.suptitle(..., fontsize=16)
f.tight_layout()
f.subplots_adjust(top=0.9)For freqhist should I guarantee it sums to 1 even when bin widths don't match data limits? Probably not. Something to think about though.In smoother, consider switching from filtfilt() to sosfiltfilt() for reasons laid out here:https://dsp.stackexchange.com/a/17255/51564Convert notch filter to sos?For spectral density estimation consider adding multitaper option. Good discussions:https://github.com/cokelaer/spectrumhttps://pyspectrum.readthedocs.io/en/latest/https://mark-kramer.github.io/Case-Studies-Python/04.htmladd ability to control event colors in spectrogram.ind_limits: add checks for data, data_limits, clarify description and docsAdd numerical tests with random seed set not just graphical eyeball tests.To do: longer termAdd audio playback of signals (see notes in audio_playback_workspace), incorporate this into some tests of filtering, etc.. simpleaudio package is too simple I think.autodocs (sphinx?)CI/CD with github actionsconsider adding wavelets.Add 3d array support for stat functions like mn_semUseful sourcesSmoothinghttps://scipy-cookbook.readthedocs.io/items/FiltFilt.htmlhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.filtfilt.htmlWhat about wavelets?I may add wavelets at some point, but it isn't plug-and-play enough for this repo. If you want to get started with wavelets in Python, I recommendhttp://ataspinar.com/2018/12/21/a-guide-for-using-the-wavelet-transform-in-machine-learning/Tolerance valuesFor a discussion of the difference between relative and absolute tolerance values when testing floats for equality (for instance as used inhelpers.is_symmetric()) see:https://stackoverflow.com/questions/65909842/what-is-rtol-for-in-numpys-allclose-functionSuggestions?If there is something you'd like to see, please open an issue.
|
anatolygusev-djet
|
Django Extended Testsis a set of helpers for easy testing of Django apps.Main features:easy unit testing of Django views (ViewTestCase)useful assertions provides as mixin classes:response status codes (StatusCodeAssertionsMixin)emails (EmailAssertionsMixin)messages (MessagesAssertionsMixin)model instances (InstanceAssertionsMixin)handy helpers for testing file-related code (InMemoryStorageMixinand others)smooth integration with Django REST Framework authentication mechanism (APIViewTestCase)Full documentation available onread the docs.Developed bySUNSCRAPERSwith passion & patience.RequirementsPython: 2.7 (Only Django 1.11), 3.4+Django: 1.11, 2.0+(optional)Django REST Framework: 3.7+InstallationSimply install usingpip:$pipinstalldjetDocumentationFull documentation is available to study atread the docsand indocsdirectory.
|
anatomy
|
IntroductionThis package implements the $\text{oShapley-VI}_p$ (out-of-sample Shapley-based variable importance), $\text{PBSV}_p$ (performance-based Shapley value), MAS (model accordance score) and MAS hypothesis testing, proposed in the "The Anatomy of Out-of-Sample Forecasting Accuracy" paper by Daniel Borup, Philippe Goulet Coulombe, David E. Rapach, Erik Christian Montes Schütte, and Sander Schwenk-Nebbe, which is available to download for free at SSRN:https://ssrn.com/abstract=4278745.The $\text{PBSV}_p$ is a Shapley-based decomposition that measures the contributions of an individual predictor $p$ in fitted models to the out-of-sample loss. While a performance metric like the RMSE focuses solely on out-of-sample performance, MAS evaluates whether a model's out-of-sample success mirrors what it has learned from the in-sample data by comparing $\text{iShapley-VI}_p$ (or $\text{oShapley-VI}_p$) to $\text{PBSV}_p$. The MAS paired with a performance metric such as the RMSE provides insight into the model's "intentional success" (seebelow example).The interpretation of PBSVs is straightforward: if $\text{PBSV}_p$ is negative (positive), predictor $p$ reduces (increases) the loss and is thus beneficial for (detrimental to) forecasting accuracy in the out-of-sample period. Taking the sum of the individual contributions (including the contribution of the empty set) yields the decomposed loss exactly (due to the efficiency property of Shapley values; seebelow example).Please cite our paper if you find the package useful:Borup, Daniel and Coulombe, Philippe Goulet and Rapach, David E. and Montes Schütte, Erik Christian and Schwenk-Nebbe, Sander (2022). “The Anatomy of Out-of-Sample Forecasting Accuracy”. Federal Reserve Bank of Atlanta Working Paper 2022-16. https://doi.org/10.29338/wp2022-16.QuickstartIf you haven't already, install the package viapip install anatomy, preferably in a new environment with Python 3.9.The anatomy package uses a simple workflow. AnAnatomyobject is initially estimated on your forecasting setup (using your data and your models), is then stored to disk, and can then be loaded at any future time without requiring re-estimation.After initial estimation, anAnatomycan anatomize:forecasts produced by any combination of your modelsyour original forecastsany loss or gain function applied to your forecastsan arbitrary subset of your forecastsall of which requiresno additional computational time.General structureYou may already have trained your models before you create theAnatomy, and the aggregate of all your models at all periods may be too large to fit into your RAM. During estimation, theAnatomywill therefore ask you for the specific model and dataset it needs at a given iteration by calling your mapping function:from anatomy import *
def my_map(key: AnatomyModelProvider.PeriodKey) -> \
AnatomyModelProvider.PeriodValue:
train, test, model = ... # load from somewhere or generate here
return AnatomyModelProvider.PeriodValue(train, test, model)You wrap the mapping function in anAnatomyModelProvideralongside information about the forecasting application:my_provider = AnatomyModelProvider(
n_periods=..., n_features=..., model_names=[...],
y_name=..., provider_fn=my_map
)and finally create theAnatomy:my_anatomy = Anatomy(provider=my_provider, n_iterations=...).precompute(
n_jobs=16, save_path="my_anatomy.bin"
)After running the above, theAnatomyis estimated and stored in your working directory asmy_anatomy.bin.Example:For convenience, the examples below are contained in a single Python script availablehere.To get started, we need a forecasting application. We use a linear DGP to generate our dataset consisting of 500 observations of the three predictorsx_{0,1,2}and our targety:# set random seed for reproducibility:
np.random.seed(1338)
xy = pd.DataFrame(np.random.normal(0, 1, (500, 3)), columns=["x_0", "x_1", "x_2"])
xy["y"] = xy.sum(axis=1) + np.random.normal(0, 1, 500)
# set a unique and monotonically increasing index (default index would suffice):
xy.index = pd.date_range("2021-04-19", "2022-08-31").map(lambda x: x.date())For convenience, theAnatomySubsetsincludes a generator that splits your dataset into training and test sets according to your forecasting scheme. Here, we include 100 periods in our first training set, forecast the target of the next period with no gap between the training set and the forecast, extend our training set by one period, and repeat until we reach the end of our data:subsets = AnatomySubsets.generate(
index=xy.index,
initial_window=100,
estimation_type=AnatomySubsets.EstimationType.EXPANDING,
periods=1,
gap=0
)In this example, we have not yet trained our models. We thus do so directly in our mapping function:def mapper(key: AnatomyModelProvider.PeriodKey) -> \
AnatomyModelProvider.PeriodValue:
train = xy.iloc[subsets.get_train_subset(key.period)]
test = xy.iloc[subsets.get_test_subset(key.period)]
if key.model_name == "ols":
model = train_ols(train.drop("y", axis=1), train["y"])
elif key.model_name == "rf":
model = train_rf(train.drop("y", axis=1), train["y"])
return AnatomyModelProvider.PeriodValue(train, test, model)usingtrain_olsandtrain_rf, which train a model and yield its prediction function wrapped in anAnatomyModel:from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
def train_ols(x_train: pd.DataFrame, y_train: pd.Series) -> AnatomyModel:
ols_model = LinearRegression().fit(x_train, y_train)
def pred_fn_ols(xs: np.ndarray) -> np.ndarray:
xs_df = pd.DataFrame(xs, columns=x_train.columns)
return np.array(ols_model.predict(xs_df)).flatten()
return AnatomyModel(pred_fn_ols)
def train_rf(x_train: pd.DataFrame, y_train: pd.Series) -> AnatomyModel:
rf_model = RandomForestRegressor(random_state=1338).fit(x_train, y_train)
def pred_fn_rf(xs: np.ndarray) -> np.ndarray:
xs_df = pd.DataFrame(xs, columns=x_train.columns)
return np.array(rf_model.predict(xs_df)).flatten()
return AnatomyModel(pred_fn_rf)We now have all we need to train the models and estimate theAnatomy:provider = AnatomyModelProvider(
n_periods=subsets.n_periods,
n_features=xy.shape[1]-1,
model_names=["ols", "rf"],
y_name="y",
provider_fn=mapper
)
anatomy = Anatomy(provider=provider, n_iterations=10).precompute(
n_jobs=16, save_path="anatomy.bin"
)At this point, theAnatomyis stored asanatomy.binin our working directory. We can load it at any later point usinganatomy = Anatomy.load("anatomy.bin").AnatomizingWe can now use our estimatedAnatomyto anatomize our forecasts. In this example, we are using two models,rfandols, as well as an equal-weighted combination of the two:groups = {
"rf": ["rf"],
"ols": ["ols"],
"ols+rf": ["ols", "rf"]
}Anatomize the out-of-sample $R^2$ of the forecasts:To decompose the out-of-sample $R^2$ of our forecasts produced by the two models and their combination, we use the unconditional forecasts as benchmark and provide a function transforming forecasts into out-of-sample $R^2$ to theAnatomy:prevailing_mean = np.array([
xy.iloc[subsets.get_train_subset(period=i)]["y"].mean()
for i in range(subsets.n_periods)
])
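# (added illustrative check, not part of the original example) the prevailing-mean
# benchmark holds exactly one value per out-of-sample period, aligned with the
# forecasts that the transform below receives:
assert prevailing_mean.shape == (subsets.n_periods,)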
def transform(y_hat, y):
return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - prevailing_mean) ** 2)
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)This yields the change in out-of-sample $R^2$ attributable to each predictor:>>> df
base_contribution x_0 x_1 x_2
rf 2021-07-28 -> 2022-08-31 0.000759 0.257240 0.238458 0.215062
ols 2021-07-28 -> 2022-08-31 0.000000 0.285329 0.267940 0.249870
ols+rf 2021-07-28 -> 2022-08-31 0.000383 0.279073 0.259201 0.240548InterpretationThe Shapley-based decomposition can be understood as a means to fairly allocate a single value (in this case the out-of-sample $R^2$) amongst multiple actors contributing to it (the predictors in our model). This implies that the individual contributions of the actors and the contribution of the empty set of actors (base_contribution) sum up exactly to the original value that was decomposed.The above depicts the individual contributions to the out-of-sample $R^2$, which can be negative, if a given predictor hurts accuracy, or positive, if a given predictor increases accuracy. In this case, all predictors contribute positively to the out-of-sample $R^2$; in practice, a predictor can also hurt accuracy and thereby reduce the out-of-sample $R^2$.Note: In this example, we use the prevailing mean (average of the target in the training sets) as benchmark to compute the out-of-sample R². The average forecast of an OLS model coincides with this benchmark, which explains why the base_contribution of OLS is exactly zero.... the RMSE of the forecasts:def transform(y_hat, y):
return np.sqrt(np.mean((y - y_hat) ** 2))
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)which yields the change in root mean squared error attributable to each predictor:>>> df
base_contribution x_0 x_1 x_2
rf 2021-07-28 -> 2022-08-31 2.105513 -0.351430 -0.326065 -0.296708
ols 2021-07-28 -> 2022-08-31 2.106313 -0.414488 -0.386125 -0.371149
ols+rf 2021-07-28 -> 2022-08-31 2.105910 -0.397903 -0.368193 -0.350080InterpretationWe previously decomposed the out-of-sample $R^2$. In this case, we use the RMSElossfunction, implying that a predictor with a negative contribution increases forecasting accuracy. Because the RMSE cannot be negative, thebase_contribution, which is the RMSE of the average forecasts of the models, can only be positive.Similar to the previous case, we find that all predictors contribute positively to forecasting accuracy (by contributing negatively to the RMSE).... the MAE:def transform(y_hat, y):
return np.mean(np.abs(y - y_hat))
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)which yields the change in mean absolute error attributable to each predictor:>>> df
base_contribution x_0 x_1 x_2
rf 2021-07-28 -> 2022-08-31 1.679359 -0.299382 -0.249591 -0.221651
ols 2021-07-28 -> 2022-08-31 1.679946 -0.345583 -0.300613 -0.288960
ols+rf 2021-07-28 -> 2022-08-31 1.679652 -0.330303 -0.283129 -0.262270InterpretationThe interpretation is similar to that of the RMSE.... the SE:def transform(y_hat, y):
return (y - y_hat) ** 2
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)which yields the change in squared error attributable to each predictor for each forecast:>>> df
base_contribution x_0 x_1 x_2
rf 2021-07-28 0.026485 -0.063311 -0.115985 0.163743
2021-07-29 0.021582 0.238569 -0.370448 0.694773
2021-07-30 2.451742 -2.702365 1.660915 -1.407192
... ... ... ... ...Note: Thetransformfunction in this case does not aggregate (returns a vector instead of a scalar). TheAnatomythus yields one decomposition per forecast, which is also known as a local (as opposed to global) decomposition.InterpretationThe previous decompositions have consistently shown that all predictors increase forecasting accuracy when it is gauged over the entire period (2021-07-28 to 2022-08-31). Anatomizing instead each individual forecast reveals that this is not always true, at least not at the local level. We now see that individual predictors are contributing positively to the squared error of some forecasts (thus reducing forecast accuracy).... the RMSE of the forecasts in a subperiod:subset = pd.date_range("2021-07-28", "2021-08-06").map(lambda x: x.date())
def transform(y_hat, y):
return np.sqrt(np.mean((y - y_hat) ** 2))
df_pbsv_rmse = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform),
explanation_subset=subset
)which yields the change in root mean squared error in the ten-day period attributable to each predictor:>>> df_pbsv_rmse
base_contribution x_0 x_1 x_2
rf 2021-07-28 -> 2021-08-06 1.402950 -0.149847 -0.199615 0.118059
ols 2021-07-28 -> 2021-08-06 1.404761 -0.155275 -0.323760 0.317055
ols+rf 2021-07-28 -> 2021-08-06 1.403853 -0.169584 -0.275408 0.211672InterpretationIn this short subperiod of ten days, we find that our last predictor contributed positively to the RMSE (and thus negatively to forecasting accuracy).... or just the raw forecasts:def transform(y_hat):
return y_hat
df_oshapley = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)which yields the change in the forecast attributable to each predictor:>>> df_oshapley
base_contribution x_0 x_1 x_2
rf 2021-07-28 0.070861 0.222932 0.360182 -0.315820
2021-07-29 0.070354 0.250389 -0.617779 0.984992
2021-07-30 0.071163 -1.514547 0.839562 -0.835136
... ... ... ... ...InterpretationDecomposing the forecasts themselves yields contributions that bear no relation to forecasting accuracy. Hence, a negative or positive contribution means no more than a decrease or increase in the forecast at that period attributable to the given predictor, which may or may not have been good for forecasting accuracy.Beware: a high average absolute contribution does not necessarily translate into a high gain in accuracy. That is precisely why we need to decompose the loss directly and, in consequence, take into account our target and how far away our forecasts were from it.From the anatomized raw forecasts, we can compute the $\text{oShapley-VI}$ by averaging over the magnitudes of the individual contributions (here for the ols+rf combination):>>> df_oshapley.loc["ols+rf"].abs().mean(axis=0)
base_contribution 0.078415
x_0 0.864057
x_1 0.786044
x_2 0.770115The Efficiency propertyDuring estimation, the Anatomy checks that the individual attributions of the predictors to the forecasts sum up exactly to the forecasts produced by the models. The estimation would be aborted if efficiency did not hold.Due to the efficiency property of Shapley values, summing the individual contributions yields the decomposed value exactly. We can check that the results that Anatomy yields are consistent: we can recover the RMSE from the decomposed RMSE, but we can also recover it from the decomposed forecasts. Let's make sure that they match.We first anatomize the RMSE:def transform(y_hat, y):
return np.sqrt(np.mean((y - y_hat) ** 2))
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)The RMSE (or any other decomposed value) is the sum of the individual attributions,rmse_a = df.sum(axis=1):>>> rmse_a
rf 2021-07-28 -> 2022-08-31 1.131310
ols 2021-07-28 -> 2022-08-31 0.934551
ols+rf 2021-07-28 -> 2022-08-31 0.989733We next decompose the raw forecasts and compute the RMSE from these:def transform(y_hat):
return y_hat
df = anatomy.explain(
model_sets=AnatomyModelCombination(groups=groups),
transformer=AnatomyModelOutputTransformer(transform=transform)
)The forecasts are recovered as the sum of the individual attributions,y_hat = df.sum(axis=1):>>> y_hat
rf 2021-07-28 0.338154
2021-07-29 0.687956
2021-07-30 -1.438958
...From the forecasts, we can compute the RMSE:y_true = np.hstack([
xy.iloc[subsets.get_test_subset(period=i)]["y"]
for i in range(subsets.n_periods)
])
rmse_b = pd.Series({
key: np.sqrt(np.mean((y_true - y_hat.xs(key)) ** 2))
for key in groups.keys()
})which yields the same RMSE as the sum of the contributions of the RMSE decomposition:>>> rmse_b
rf 1.131310
ols 0.934551
ols+rf 0.989733Model Accordance Score:We can compute the MAS for our combination model (ols+rf) using $\text{oShapley-VI}$ and, for instance, the
$\text{PBSV}$ for the root-mean-square error:vi = df_oshapley.loc["ols+rf"].abs().mean(axis=0)
pbsv = df_pbsv_rmse.loc["ols+rf"].iloc[0]
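# (added note) vi is the oShapley-VI Series obtained above from the decomposition of the
# raw forecasts, while pbsv is the ten-day RMSE decomposition (df_pbsv_rmse); MAS below
# relates "importance for the forecasts" to "contribution to the loss".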
# loss type is rmse, so lower is better:
loss_type = MAS.LossType.LOWER_IS_BETTER
mas = MAS(vi, pbsv, loss_type).compute()which yields the MAS:>>> mas
{'mas': 1.0, 'mas_p_value': 0.020796}InterpretationIn this example, the ranking of $\text{oShapley-VI}$ is identical to the signed-ranking of $\text{PBSV}$.
Thus, MAS is 1 (perfect) and the null hypothesis of no relation between $\text{oShapley-VI}_p$ and $\text{PBSV}_p$
is rejected at the 5% level (mas_p_value is the probability of observing a MAS at least as extreme as mas under
the null).
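As a closing illustration (purely a recombination of calls already shown above, with no new API assumed): a precomputed Anatomy can be reloaded in a later session and decomposed with any transform, for example the MAE restricted to the same ten-day subperiod:
anatomy = Anatomy.load("anatomy.bin")
def transform(y_hat, y):
    return np.mean(np.abs(y - y_hat))
df_pbsv_mae = anatomy.explain(
    model_sets=AnatomyModelCombination(groups=groups),
    transformer=AnatomyModelOutputTransformer(transform=transform),
    explanation_subset=subset
)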
|
anatools
|
Rendered.ai's SDK: anatoolsanatoolsis an SDK for connecting to the Rendered.ai Platform.
Withanatoolsyou can generate and access synthetic datasets, and much more!>>>importanatools>>>ana=anatools.client()'Enter your credentials for the Rendered.ai Platform.''email:'[email protected]'password:'***************>>>channels=ana.get_channels()>>>graphs=ana.get_staged_graphs()>>>datasets=ana.get_datasets()Install theanatoolsPackage(Optional) Create a new Conda EnvironmentInstall conda for your operating system:https://www.anaconda.com/products/individual.Create a new conda environment and activate it.Install anatools from the Python Package Index.$condacreate-nrenderedaipython=3.7
$condaactivaterenderedaiInstall AnaTools to the Python EnvironmentInstall AnaTools from the Python Package Index.$pipinstallanatoolsDependenciesThe anatools package requires python 3.6 or higher and has dependencies on the following packages:PackageDescriptiondockerA python library for the Docker Engine API.numpyA python library used for array-based processing.pillowA fork of the Python Image Library.pyyamlA python YAML parser and emitter.requestsA simple HTTP python library.If you have any questions or comments, contact Rendered.AI [email protected] ways to loginLogin with emailExecute the python command line, create a client and login to Rendered.ai.
In this example we are instantiating a client with no workspace or environment variables, so it is setting our default workspace.
To access the tool, you will need to use your email and password forhttps://deckard.rendered.ai.>>>importanatools>>>ana=anatools.client()'Enter your credentials for the Rendered.ai Platform.''email:'[email protected]'password:'***************API KeysYou can generate as many API keys as you desire with custom expiration dates in order to bypass the email login. Create keys via your email login. To do this, run create_api_key and save the resulting output that is your new API Key. This will only be shown once.ana.create_api_key(name='name',expires='mm-dd-yyyy',organizationId='OrgId')'apikey-12345...'This API Key can be used for future logins as demonstrated in the following two examples.API Key Param>>>importanatools>>>ana=anatools.client(APIKey='API KEY')UsingprovidedAPIKeytologin....Environment VariableexportRENDEREDAI_API_KEY=API_KEYpython>>>importanatools>>>ana=anatools.client()UsingenvironmentRENDEREDAI_API_KEYkeytologin....Quickstart GuideWhat is the Rendered.ai Platform?The Rendered.ai Platform is a synthetic dataset generation tool where graphs describe what and how synthetic datasets are generated.TermsDefinitionsworkspaceA workspace is a collection of data used for a particular use-case, for example workspaces can be used to organize data for different projects.datasetA dataset is a collection of data, for many use-cases these are images with text-based annotation files.graphA graph is defined by nodes and links, it describes the what and the how a dataset is generated.nodeA node can be described as an executable block of code, it has inputs and runs some algorithm to generate outputs.linkA link is used to transfer data from the output of one node, to the input of other nodes.channelA channel is a collection of nodes, it is used to limit the scope of what is possible to generate in a dataset (like content from a tv channel).How do you use the SDK?The Rendered.ai Platform creates synthetic datasets by processing a graph, so we will need to create the client to connect to the Platform API, create a graph, then create a dataset.Login using email or API KeyCreate a graph file calledgraph.ymlwith the code below.
We are defining a simplistic graph for this example with multiple children's toys dropped into a container.
WhileYAMLfiles are used in channel development and for this example, the Platform SDK and API only supportJSON. Ensure that theYAMLfile is valid in order for the SDK to convertYAMLtoJSONfor you. Otherwise, provide a graph inJSONformat.version:2nodes:Rubik's Cube:nodeClass:"Rubik'sCube"Mix Cube:nodeClass:Mix CubeBubbles:nodeClass:BubblesYoyo:nodeClass:Yo-yoSkateboard:nodeClass:SkateboardMouldingClay:nodeClass:PlaydoughColorToys:nodeClass:ColorVariationvalues:{Color:"<random>"}links:Generators:-{sourceNode:Bubbles,outputPort:Bubbles Bottle Generator}-{sourceNode:Yoyo,outputPort:Yoyo Generator}-{sourceNode:MouldingClay,outputPort:Play Dough Generator}-{sourceNode:Skateboard,outputPort:Skateboard Generator}ObjectPlacement:nodeClass:RandomPlacementvalues:{Number of Objects:20}links:Object Generators:-{sourceNode:ColorToys,outputPort:Generator}-{sourceNode:"Rubik'sCube",outputPort:"Rubik'sCubeGenerator"}-{sourceNode:Mix Cube,outputPort:Mixed Cube Generator}Container:nodeClass:Containervalues:{Container Type:"LightWoodenBox"}Floor:nodeClass:Floorvalues:{Floor Type:"Granite"}DropObjects:nodeClass:DropObjectsNodelinks:Objects:-{sourceNode:ObjectPlacement,outputPort:Objects}Container Generator:-{sourceNode:Container,outputPort:Container Generator}Floor Generator:-{sourceNode:Floor,outputPort:Floor Generator}Render:nodeClass:RenderNodelinks:Objects of Interest:-{sourceNode:DropObjects,outputPort:Objects of Interest}Create a graph using the client.
To create a new graph, we load the graph defined above into a python dictionary using the yaml python package.
Then we create a graph using the client. This graph is being namedtestgraphand is using theexamplechannel. We will first find thechannelIdmatching to theexamplechannel and use that in thecreate_staged_graphcall.
The client will return agraphIdso we can reference this graph later.>>>importyaml>>>withopen('graph.yml')asgraphfile:>>>graph=yaml.safe_load(graphfile)>>>channels=ana.get_channels()>>>channelId=list(filter(lambdachannel:channel['name']=='example',channels))[0]['channelId']>>>graphId=ana.create_staged_graph(name='testgraph',channelId=channelId,graph=graph)>>>print(graphId)'010f9362-daa8-4c10-a3e8-1e81e0f2e4f4'Create a dataset using the client.
Using thegraphId, we can create a new job to generate a dataset. The job takes some time to run.The client will return adatasetIdthat can be used for reference later. You can use thisdatasetIdto check the job status and, once the job is complete, download the dataset. You have now generated Synthetic Data!>>>datasetId=ana.create_dataset(name='testdataset',graphId=graphId,interpretations='10',priority='1',seed='1',description='A simple dataset with cubes in a container.')>>>datasetId'ce66e81c-23a6-11eb-adc1-0242ac120002'
|
anatqc
|
No description available on PyPI.
|
an-at-sync
|
an-at-syncPython package & cli for syncing between ActionNetwork & AirTable.How to UseTo set up a new project withan-at-sync, create a new folder for your project:mkdirproject-name&&cdproject-nameIn that folder, create arequirements.txtand addan-at-syncas a dependency:an-at-syncInstall it with pip:pipinstall-rrequirements.txtCreate a folder for your project namespace:mkdir project_nameIn that folder, create amodels.pywith this default content:fromdatetimeimportdatetimefromtypingimportAny,Dict,Optionalfroman_at_sync.formatimportconvert_adr,standardize_phonefroman_at_sync.modelimportBaseActivist,BaseEvent,BaseRSVPfromdateutilimporttzfrompyairtable.utilsimportdatetime_to_iso_strfrompydanticimportHttpUrl,validatorfrompydantic.networksimportEmailStreastern=tz.gettz("America/New_York")utc=tz.gettz("UTC")classActivist(BaseActivist):first_name:Optional[str]last_name:Optional[str]email:EmailStrzip_code:Optional[str]phone_number:Optional[str]address:Optional[str]city:Optional[str]state:Optional[str]@classmethoddeffrom_actionnetwork(cls,source:dict,**kwargs:Any):address,city,state,zip_code=convert_adr(source["postal_addresses"][0])returncls(first_name=source.get("given_name"),last_name=source.get("family_name"),email=source["email_addresses"][0]["address"].strip(),address=address,city=city,state=state,zip_code=zip_code,phone_number=standardize_phone(source["phone_numbers"][0].get("number")),)defdisplay_name(self)->str:returnf"{self.first_name}{self.last_name}"defpk(self)->Dict:return{"Email":self.email}defto_airtable(self):return{"Email":self.email,"First Name":self.first_name,"Last Name":self.last_name,"Phone":self.phone_number,"Address":self.address,"City":self.city,"State":self.state,"Zip":self.zip_code,}classRSVP(BaseRSVP):id:strrsvpd_at:datetime@classmethoddeffrom_actionnetwork(cls,source,**kwargs:Any):returncls(id=f"{kwargs['activist_record']['id']}-{kwargs['event_record']['id']}",activist=kwargs["activist"],event=kwargs["event"],rsvpd_at=source["created_date"],)defdisplay_name(self)->str:returnf"{self.activist.display_name()}to{self.event.display_name()}"defpk(self):return{"Id":self.id}defto_airtable(self)->dict:return{"Id":self.id,"RSVP'd At":datetime_to_iso_str(self.rsvpd_at.replace(tzinfo=None)),}defactivist_column(self)->str:return"Volunteer"defevent_column(self)->str:return"Event"classEvent(BaseEvent):url:HttpUrlname:strlocation:strstart_date:datetimeend_date:Optional[datetime]status:str@validator("start_date","end_date")defdates_must_be_eastern(cls,v:datetime):returnv.replace(tzinfo=eastern)ifvisnotNoneelsev@classmethoddeffrom_actionnetwork(cls,source:dict,**kwargs):address,city,state,*_=convert_adr(source["location"])event_location="Zoom"ifnotaddresselsef"{address},{city},{state}"returncls(url=source["browser_url"],name=source["title"],start_date=source["start_date"],end_date=source.get("end_date"),status=source["status"],location=event_location,)defdisplay_name(self)->str:returnself.namedefpk(self)->Dict:return{"Url":str(self.url)}defto_airtable(self)->dict:return{"Url":str(self.url),"Name":self.name,"Start At":self.start_date.astimezone(tz=utc).replace(tzinfo=None).isoformat(timespec="milliseconds")+"Z","End At":self.end_date.astimezone(tz=utc).replace(tzinfo=None).isoformat(timespec="milliseconds")+"Z"ifself.end_dateelseNone,"Status":self.status.capitalize(),"Location":self.location,}These models represent the various part of the system we're going to interact independently from the two systems that will use them. The Activist represents an individual person in your campaign. 
The Event represents a single campaign event, and an RSVP represents a response from an Activist to attend an event.You can use this as a baseline for your own models, allowing you to customize which fields are synced to AirTable & how.Next, create this.envfile:AN_AT_SYNC_MODELS="project_name.models"AN_API_KEY="TODO: ActionNetwork API Key"AT_API_KEY="TODO: Airtable API Key"AT_BASE="TODO: Base"AT_ACTIVISTS_TABLE="TODO: Volunteers Table"AT_EVENTS_TABLE="TODO: Events Table"AT_RSVP_TABLE="TODO: RSVP Table"Given theproject_nameabove, theAN_AT_SYNC_MODELSpoints to the module thatan-at-synccan load our custom models from. The rest of the env vars require you to get the proper keys and configure your AirTable account.Getting Your ActionNetwork API KeyDetails -> API & Sync. "Your API Key" -> Generate Key. Copy into.env.Creating Your AirTable BaseNeed to create the base first. Create Volunteers, Events, & RSVPs table. Get base id from first part of the URL (starts withapp).Getting Your AirTable API KeyUser stetings -> Developer Hub -> Personal Access token.data.records:read&data.records:write. Access to the created base. Copy the token, add to.env.Creating Your AirTable Volunteers TableCreate fields returned byto_airtablemethod. Copy from second part of URL and add to.env.Creating Your AirTAble Events TableCreate fields returned byto_airtablemethod. Copy from second part of URL and add to.env.Creating Your AirTable RSVP TableCreate fields returned byto_airtablemethod. Copy from second part of URL and add to.env.Run your first sync!python-man_at_syncsyncevents--rsvpsConfiguring the Webhook Handler on Fly.ioCreate a Dockerfile in your repo:FROMmaadhattah/an-at-sync:latestCOPY./project_name/app/project_nameThis will copy your configuration onto the Docker image.Next, deploy it to Fly.io:flylaunchIt will create an app based on the Dockerfile image. Next, import your secrets:cat.env|flysecretsimportLastly, create the webhook in ActionNetwork with the urlhttps://<projet-name>.fly.dev/api/webhooks/actionnetwork.
|
anatta_collector
|
UNKNOWN
|
anatta_common
|
UNKNOWN
|
anatta_logger
|
UNKNOWN
|
anatta_publisher
|
UNKNOWN
|
anavnet
|
AnavNetClient library to theAnavNetwebsite, which provides the current warnings from the Portuguese maritime ports.Includes all the available port names and identifiers.Counts the total messages per port.Extracts all the information from the messages.Includes a console script to consume the library.RequirementsBeautiful SoupRequestsInstallation$pipinstallanavnetUsageLibrary>>>fromanavnetimportAnavNet>>>anavnet=AnavNet()>>>anavnet.set_port(16)>>>anavnet.get_total_messages()>>>12>>>anavnet.get_message(1){'num_aviso':'288/18','dt_promulgacao':'23-Ago-2018','dt_inicio':'24-Ago-2018','dt_fim':'05-Set-2018','ent_promulgacao':'Capitania do Porto de Lisboa - CAPIMARLISBOA','local':'Rio Tejo - Cais Militar do Portinho da Costa.','assunto':'Área interdita à navegação','descricao':'No período de 24AGO a 05SET, está interdita a navegação a menos de 50 metros do Cais Militar do Portinho da Costa.','dt_cancelamento':'Data de cancelamento: 05-Set-2018'}Console script:$anavclient--help
usage: anavclient [-h] (--list | --total TOTAL | --text TEXT TEXT | --json JSON JSON)
optional arguments:
  -h, --help        show this help message and exit
  --list            Lists available ports
  --total TOTAL     Gets the total of messages. Argument: PORT_ID
  --text TEXT TEXT  Get message as formatted text. Arguments: PORT_ID, MESSAGE_INDEX
  --json JSON JSON  Get message as JSON. Arguments: PORT_ID, MESSAGE_INDEX
Tests$ python -m unittest discover -s tests
LicenseBSD-3-Clause
|
anawesomepackage
|
No description available on PyPI.
|
anawsutils
|
anawsutilsA python3 module with miscellaneous AWS utils.Installpython3 -m pip install anawsutils
|
anax
|
AnaxDbAn encrypted non-linear database based on PandasInstall AnaxDbpip install anaxGetting Startedimport anaxNext, bootstrap a new database.anax.Database(bootstrap=True);Using the DatabaseConnect:anax = anax.Database()Show tables:anax.tables()
['users']Read a table:anax.read("users")
uid username email password admin
0 54c355db7d3d432ca8bfea093affb501 admin [email protected] YWRtaW4= TrueThere is much more to Anax. For examples and explanations clickhere.
|
anaximander
|
AnaximanderAnaximander aims to evolve into a full-fledge rapid application development framework for data-intensive backends. In its initial release, currently under development, it will take the form of a data modeling tool that facilitates storage and integration across multiple database technologies. Modern data applications are polyglot, in the sense that they use a plurality of storage engines -say a relational database for modeling entities like people, places and things, a document database for nested specifications, a columnar database for time series, and a cloud data warehouse for analytics and report generation. Anaximander allows Python developers to declare all application data models as object classes, using unifying semantics. Hence it generalizes the concept of Object-Relational Mapper to no-SQL databases, and enables programmers to work with richly featured objects that embed the application's domain modeling. Further, the object-oriented paradigm automatically extends to data collections, either in the form of the built-in list, set and dict primitives, or in the form of vectorized Series and Dataframes from the pandas library. For instance, if an application programmer declares aTemperatureProbeentity model and aTemperatureSamplerecord model, then typingnx.Set[TemperatureProbe]ornx.Table[TemperatureSample]returns concrete object classes that have been programmatically generated by the framework's metaclasses and that carry metadata and attributes derived from the model class declarations. Among other advantages, this provides automated data validation, intuitive data exploration, and clean, expressive programming.At the highest level, the framework considers three primary data types: entities, records and specs.Entitiesrepresent discrete, identifiable application constructs such as physical assets, products, people, places, etc.Recordsare purely informational objects that represent observations, such as sensor readings, sales reports, or alert notifications.Specsare arbitrarily nested key-value mappings that capture specifications or configurations with a flexible schema, be it detailed entity attributes, schedules or model parameters.Generally speaking, entities are relational and fit best in a relational database or an object store. Records are indexed by key and time, making them equivalent to event messages, and fit well in columnar databases or data lakes. In the data warehouse terminology, entities and records are analogous to dimensions and facts. Finally, specs are document-like objects that may be stored in a document database, a data lake, or embedded in a JSON-typed column as part of an entity table.For a basic usage demonstration, consider an industrial IoT application in which temperature probes are deployed on manufacturing machines. The probes send regular messages containing temperature readings, which are logged and summarized on preset periodicities. Besides, a stream processor analyzes the values in near-real time to emit overheat alerts, and any such occurrence eventually gets logged as an overheat session, i.e. a time interval during which overheating conditions were met.The following code is a partial demonstration, using the current implementation state. The most notable gaps separating it from the target release are as follows:Model relations are still lacking, particularly the ability to link records to entities.Tabular data indexing is still limited.Most crucially, there is no I/O yet. 
The first step will be to link models to Arrow datasets so that tabular data can be imported and exported to and from the parquet format. JSON and .csv formats will also be available. Once this is done, the framework will be integrated with database engines.Tabular data validation works but with a sub-optimal implementation that converts dataframes to individual records and back. This will be fixed by integrating the panderas library.Installation is straightforward:pipinstallanaximanderHere are model declarations:fromdatetimeimportdatetimefromtypingimportOptionalimportanaximanderasnxfromanaximander.operatorsimportSessionizerimportnumpyasnpimportpandasaspd# =========================================================================== ## Model declarations ## =========================================================================== ## Entities are identifiable thingsclassMachine(nx.Entity):id:int=nx.id()machine_type:str=nx.data()machine_floor:Optional[str]=nx.data()# Measurements feature units that get printed# The metadata is carried into model schemas that use this data type and# can be used by plotting libraries, or for unit conversions.# Also note the validation input (greater or equal to -273), using# Pydantic's notations.classTemperature(nx.Measurement):unit="Celsius"ge=-273# Samples are timestamped records expected to show up at a somewhat set frequency,# though not necessarily strictly so. In other words, the freq metadata is# used as a time characteristic in summarization operations, but missing or# irregular samples are tolerated.# Note that the 'machine_id' field will eventually be replaced by a relational# 'machine' field of type Machine. This functionality is still pending.# Also note that the temperature field defines its own validation parameters,# supplemental to those already defined in the Temperature class (not easy!)classTemperatureSample(nx.Sample):machine_id:int=nx.key()timestamp:datetime=nx.timestamp(freq="5T")temperature:Temperature=nx.data(ge=0,le=200)# Unlike samples, Journals are strictly periodic -by construction, since they# are intended as regular summaries, and hence feature a period field, whose# type is a pandas Period.classTemperatureJournal(nx.Journal):machine_id:int=nx.key()period:pd.Period=nx.period(freq="1H")avg_temp:Temperature=nx.data()min_temp:Temperature=nx.data()max_temp:Temperature=nx.data()# Spec models are intended as general-purpose nested documents, for storing# specifications, configuration, etc. They have no identifier because they# always have an 'owner' -typically an Entity or Record object. This bit is# not implemented yet. If it was, the Machine model would carry an operating# spec as a data attribute.# Here the spec defines the nominal operating temperature range, and will# be used to compute overheat sessions.classMachineOperatingSpec(nx.Spec):min_temp:Temperature=nx.data()max_temp:Temperature=nx.data()# Sessions are timestamped-records with two entries: a start and end times.# These are ubiquitous in natural data processing, particularly for aggregating# events, such as oveheat events in this case.classOverheatSession(nx.Session):machine_id:int=nx.key()start_time:datetime=nx.start_time()end_time:datetime=nx.end_time()# This is a very rudimentary implementation of a parametric operator that will# be used to compute overheat sessions. 
Note that the Sessionizer class reads# metadata from the TemperatureSample class, such as the names of the key# and timestamp field, as well as the timestamp frequency.sessionizer=Sessionizer(TemperatureSample,OverheatSession,feature="temperature")# In Anaximander, types are composable, and automatically assembled without# the need for a class declaration. Here we explicitly name a class for# tables of temperature samples. Unlike the use of generics in type annotations,# Table[TemperatureSample] is an actual class -which is cached once it is# created.# Table is a so-called archetype, and Table[TemperatureSample] is a concrete# subtype. A table instance wraps a dataframe along with metadata inherited# from its class -primarily the model's schema, which is used to conform and# validate the data.TempTable=nx.Table[TemperatureSample]And some data inputs:# =========================================================================== ## Data inputs ## =========================================================================== ## Machine instance and operating specm0=Machine(id=0,machine_type="motor")m0_spec=MachineOperatingSpec(min_temp=40.0,max_temp=55.0)# Building a table of temperature samplestimes=pd.date_range(start="2022-2-18 12:00",freq="5T",periods=12)temperatures=[45.0,46.0,45.0,50.0,59.0,50.0,48.0,51.0,52.0,56.0,58.0,53.0]sample_log=TempTable(dict(machine_id=0,timestamp=times,temperature=temperatures))# Computing stats over an hour to fill a Journal instance. Note that eventually# this kind of summarization will be specified in model declarations and# carried out by operators -which will be at least partially automated, see# the sessionizer for a prototype.avg_temp=round(np.mean(temperatures),0)min_temp=min(temperatures)max_temp=max(temperatures)hourly=TemperatureJournal(machine_id=0,period="2022-2-18 12:00",avg_temp=avg_temp,min_temp=min_temp,max_temp=max_temp,)# Computing overheat sessions# Note that the key and threshold will not be necessary once relations are# established between models -the sample log will be able to point to a machine# as part of its metadata, and the machine will own its operating spec,# providing a path to the threshold.overheat_sessions=sessionizer(sample_log,key=0,threshold=m0_spec.max_temp.data)And some evaluations:# TempTable exposes a data frame, that is automatically indexed by key# and timestamp>>>assertisinstance(sample_log.data,pd.DataFrame)>>>print(sample_log)temperaturemachine_idtimestamp02022-02-1812:00:0045.02022-02-1812:05:0046.02022-02-1812:10:0045.02022-02-1812:15:0050.02022-02-1812:20:0059.02022-02-1812:25:0050.02022-02-1812:30:0048.02022-02-1812:35:0051.02022-02-1812:40:0052.02022-02-1812:45:0056.02022-02-1812:50:0058.02022-02-1812:55:0053.0# TempTable can broken down into individual records of type TemperatureSample.# These feature temperature attributes with unit metadata.# Likewise, the temperature attribute of the TempTable is a series of# temperatures, and individual data points are measurements with metadata.>>>r0=next(sample_log.records())>>>assertisinstance(r0,TemperatureSample)>>>print(r0.temperature)>>>assertnext(sample_log.temperature.values())==r0.temperature45.0Celsius# Temperature defines a lower bound, which is used to validate inputs>>>try:>>>Temperature(-300)>>>exceptValueErrorase:>>>print(e)Couldnotvalidate-300.0asa<nxtype:Temperature>instance# TemperatureSample defines its own bounds as well# This would also work if one tried to directly instantiate a table of# temperature samples from a dataframe -though in the current 
implementation# the framework converts the data frame to records and uses Pydantic, which# is obviously very inefficient. The target is to use the Panderas library,# with no change in the interface.>>>try:>>>TemperatureSample(machine_id=m0.id,timestamp=times[0],temperature=250)>>>exceptValueErrorase:>>>print(e)Couldnotvalidatemachine_id0timestamp2022-02-1812:00:00temperature250.0dtype:objectasa<nxtype:TemperatureSample>instance# Here is a printout of our Journal instance>>>print(hourly)machine_id0period2022-02-1812:00avg_temp51.0min_temp45.0max_temp59.0dtype:object# And finally a printout of our overheating sessions' timespans, a pandas# IntervalIndex>>>print(overheat_sessions.timespans)IntervalIndex([[2022-02-1812:17:30,2022-02-1812:22:30),[2022-02-1812:42:30,2022-02-1812:52:30)],dtype='interval[datetime64[ns], left]',name='timespan')
|
anaynayak-tut
|
anaynayak_tutThis package was created only to demonstrate
how to create and publish a Python package on pypi.org using the twine library;
you will find this on GeeksForGeeks as soon as it is published.New Features!There are two functions written in this packageFunctions:import anaynayak_tut as a
get_sum(a, b) prints and returns the addition of the arguments
get_mul(a, b) prints and returns the multiplication of the arguments
# Output will appear on the terminalInstallationpip install anaynayak_tutYou can reach out to me at [email protected]
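A minimal end-to-end sketch (assuming the installed module is importable as anaynayak_tut, matching the pip name above):
import anaynayak_tut as a
total = a.get_sum(2, 3)    # prints and returns 5
product = a.get_mul(2, 3)  # prints and returns 6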
|
anbang-getip
|
Just a simple program to get temporary proxy IPs.———————–2019-4-18 11:06:51—————-from Arbert
|
anbani
|
AnbaniPyGeorgian Python toolkit for NLP, Transliteration and more. Partially based onanbani.js.InstallpipinstallanbaniQuickstartTransliteration example:fromanbani.core.converterimportconvert,interpretinterpret("გამარჯობა","asomtavruli")# 'ႢႠႫႠႰႿႭႡႠ'Georgianisation example:fromanbani.nlp.georgianisationimportgeorgianisegeorgianise("gamarjoba - rogor xar - rasa iqm - kaia kata - kai erti")# 'გამარჯობა - როგორ ხარ - რასა იქმ - კაია კატა - კაი ერთი'Convert ebooks with qwerty encoding to unicode Mkhedruli:fromanbani.nlp.utilsimportebook2textfromanbani.core.converterimportclassify_textfromanbani.core.converterimportconverttext=ebook2text("/home/george/Dev/georgian-text-corpus/sources/mylibrary/raw/files/ჩარლზ დიკენსი - დევიდ კოპერფილდი.pdf")print(text[:300])print(classify_text(text))print(convert(text,"qwerty","mkhedruli")[:300])# Carlz dikensi daviT koperfildi Tavi pirveli dabadeba me viqnebi gmiri Cemive sakuTari Tavgadasavlisa Tu sxva...# latin# ჩარლზ დიკენსი დავით კოპერფილდი თავი პირველი დაბადება მე ვიქნები გმირი ჩემივე საკუთარი თავგადასავლისა თუ სხვა...Expand contractions:fromanbani.nlp.contractionsimportexpand_texttext="ილია ჭავჭავაძე (დ. 8 ნოემბერი, 1837, სოფელი ყვარელი — გ. 12 სექტემბერი, 1907, წიწამური)"print(text)print(expand_text(text))# ილია ჭავჭავაძე (დ. 8 ნოემბერი, 1837, სოფელი ყვარელი — გ. 12 სექტემბერი, 1907, წიწამური)# ილია ჭავჭავაძე (დაბადება 8 ნოემბერი, 1837, სოფელი ყვარელი — გარდაცვალება 12 სექტემბერი, 1907, წიწამური)To-DoFeel free to fork this repo!TokenizerTransliterationExpand contractionsebook2pdf converterStemmerLemmatizerStopwordsResources usedhttp://www.nplg.gov.ge/civil/statiebi/wignebi/qartul_enis_marTlwera/qartul_enis_marTlwera_tavi-12.htmhttp://www.nplg.gov.ge/civil/upload/Semokleba.htm
|
anbefm
|
anbefm is a backend framework. manage: application management; handle_payload: parses the request body; jwt: stores login information; model: database connection; plus request-data validation and sorting. Start an application: python -m anbefm.manage startapp --port=9201 --app=client. How to publish a Python package to PyPI (you can refer to this tutorial): the main steps are writing setup.py, building the package (pip install --user --upgrade setuptools wheel twine, then python setup.py sdist bdist_wheel), and uploading it to PyPI (register a PyPI account, then python -m twine upload dist/*). How to debug a Python package: npm's npm link command makes it easy to link the current project into the global install directory, and Python has a similar command: pip install -e . (run in the folder that contains setup.py), or python setup.py develop.
|
anbima_calendar
|
Anbima Calendar 🏦 📆IntroductionAnbima Calendaris a Python library designed to simplify handling banking holidays specific to Brazil.ANBIMA (Associação Brasileira das Entidades dos Mercados Financeiro e de Capitais), the Brazilian Financial and Capital Markets Association, plays a crucial role in the development of financial markets in Brazil. This library provides a robust set of tools for determining business days, calculating due dates, and identifying holidays based on the official holiday calendar published by ANBIMA.The holiday data used inAnbima Calendaris sourced directly from ANBIMA's official website:https://www.anbima.com.br/feriados/feriados.asp. This ensures that the library stays up-to-date with the most accurate and relevant holiday information, making it an invaluable resource for financial applications, scheduling systems, and any software dealing with date calculations in the Brazilian context.Whether you're developing a finance-related application, a scheduling tool, or simply need to be aware of Brazilian banking holidays, Anbima Calendar offers a straightforward and efficient solution to navigate through the complexities of holiday scheduling in Brazil's financial markets.FeaturesIdentify Business Days: Quickly determine if a specific date is a business day in Brazil.Calculate Due Dates: Accurately calculate due dates taking into account weekends and holidays.Discover Holidays: Retrieve information about specific Brazilian holidays.Add Business Days: Add a specified number of business days to a given date.InstallationInstall Anbima Calendar using pip:pipinstallanbima_calendarQuickstartHere's a quick example to get you started with Anbima Calendar:fromanbima_calendarimportis_business_day,add_business_days,get_holiday# Check if a date is a business dayprint(is_business_day('2023-04-21'))# False, as it's Tiradentes' Day# Add business days to a datenew_date=add_business_days(5,'2023-04-18')print(new_date)# Returns the date after adding 5 business days# Retrieve the holiday nameholiday=get_holiday('2023-04-21')print(holiday)# TiradentesChecking Business Days:Determine if a given date is a business day in Brazil.fromanbima_calendarimportis_business_dayprint(is_business_day('2023-12-25'))# False, Christmas Day is a holidayAdding Business Days:Add business days to a date, automatically skipping weekends and holidays.fromanbima_calendarimportadd_business_daysnew_date=add_business_days(10,'2023-12-20')print(new_date)# Date 10 business days from December 20, 2023Identifying HolidaysFind out if a date is a holiday and get its name.fromanbima_calendarimportget_holidayholiday_name=get_holiday('2023-05-01')print(holiday_name)# Dia do Trabalho (Labor Day)ContributingContributions to Anbima Calendar are welcome! Whether it's bug reports, feature requests, or code contributions, your input is highly valued. Please refer to our Please refer to ourcontributing guidelinesfor more information.LicenseAnbima Calendar is licensed under the MIT License. See the LICENSE file for more details.
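Putting the calls above together, a small sketch (function names and signatures as shown in the README; the expected date assumes the standard ANBIMA holiday calendar):
from anbima_calendar import add_business_days
due = add_business_days(3, '2023-04-20')
print(due)  # expected 2023-04-26: Tiradentes (2023-04-21) and the weekend are skipped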
|
anbimapi
|
No description available on PyPI.
|
anbufirstpdf
|
This is the updated version 1.1.
|
ancalagon
|
No description available on PyPI.
|
ancb
|
Another NumPy Circular BufferAnother NumPy Circular Buffer (or ANCB for short) is an attempt to make a circular buffer work with NumPy ufuncs for
real-time data processing. One can think of a NumpyCircularBuffer in ANCB as being a fixed-length deque with random access
functionality (unlike the deque). For users more familiar with NumPy, one can think of this buffer as a way of automatically
rolling the array into the right order.ANCB was developed by Drason "Emmy" Chow during their time as an undergraduate researcher at IU: Bloomington for use in
making Savitzky-Golay filters, which take an array of positions in chronological or reverse-chronological order and produce
estimates of velocity, acceleration, and possibly higher order derivatives if desired.Looking for the documentation? You can find it here:https://ancb-docs.readthedocs.io/en/latest/
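For intuition, here is what that "automatic rolling" amounts to in plain NumPy (a concept sketch only; it does not use the ANCB API):
import numpy as np
n = 4
buf = np.zeros(n)              # fixed-length storage
head = 0                       # slot that will be overwritten next
for x in [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]:
    buf[head] = x              # overwrite the oldest element
    head = (head + 1) % n      # advance the write position
chronological = np.roll(buf, -head)   # oldest ... newest -> array([3., 4., 5., 6.])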
|
ancer-python
|
See the project repository for more information.
|
ancestralcost
|
Ancestral CostAncestral Cost is a tool for validating multiple sequence alignments prior to performing ancestral sequence reconstruction.It checks, for each position in a given ancestor, that the presence of ancestral content implied to be there by a given alignment and tree is not substantially less parsimonious than the alternative of not having ancestral content there.InstallationUsing Pip$ pip install ancestralcostManual$ git clone https://github.com/gabefoley/ancestralcost$ cd ancestralcost$ python setup.py installUsage$ python -m ancestralcost -a <alignment> -t <tree>WorkflowBefore performing ancestral sequence reconstruction (ASR) we can recognise that a multiple sequence alignment implies that every aligned column should have a common ancestor.Ancestral Cost checks that for every ancestral position that is implied by a given alignment and tree, the parsimony cost of having ancestral content there isn't far greater than not having ancestral content.Ancestral Cost is intended to be run before ASR in order to validate alignments and trees. It highlights positions that may be erroneously aligned.If an alignment suggests two positions should be aligned but they are only present in distant clades, then they shouldn't be one column but should be split into two columns. Failing to do this will influence ancestors that are predicted at these positions.First, Ancestral Cost calculates all of the positions required to be there. In the example this is done by simply looking at the highest ancestral position implied by each column. From the example, N3 is the only ancestral node that has content at each of the four alignment positions; all of the other nodes have content at three alignment positions.It then calculates the parsimony cost for each implied position and reports on the cost of content being present and the cost of content being absent.This allows users to filter on particularly informative sites or particularly large discrepancies in parsimony scores.The intention is to look at the positions identified by Ancestral Cost and potentially amend the multiple sequence alignment as a result.All commands
-t Path to phylogenetic tree
-n Node to return cost for (default is root)
-p Just return the positions required to be there
-f Return all ancestors as a FASTA file
-to Write out the ancestor tree
|
ancestry
|
ancestry
|
anchcloud-sdk
|
This repository allows you to accessAnchCloudand control your resources from your applications.This SDK is licensed underApache Licence, Version 2.0.NoteRequires Python 2.7,
for more information please seeAnchCloud SDK DocumentationInstallationInstall viapip$ pip install anchcloud-sdkUpgrade to the latest version$ pip install --upgrade anchcloud-sdkGetting StartedIn order to operate AnchCloud IaaS.
you need applyaccess keyonanchcloud consolefirst.AnchCloud IaaS APIPass access key id and secret key into classAPIConnectionto create connection>>> from anchcloud.iaas.instances import *
>>> conn = APIConnection('CLIENTID','SECRETKEY')The variableAPIConnectionis the instance ofanchcloud.conn.iaas_client,
we can use it to call resource related methods.Example:# launch instances
>>> d = {
"instance": {
"zone": "ac2",
"image_id": "centos64x64c",
"instance_type": "PERFORMANCE",
"cpu": 1,
"memory": 1024,
"count": 1,
"login_mode": "passwd",
"login_passwd": "Abcd1234"
},
"order": {
"payment_type": "POSTPAY"
}
}
>>> ret = Instances(conn).create("ac2",d)
# stop instances
>>> d = {
"instances": [
"ins-Y4DFAOQ"
]
}
>>> ret = Instances(conn).stop("ac2",d)
# describe instances
>>> d = {"status": "running,stopped"]}
>>> ret = Instances(conn).list("ac2",d)
|
ancho
|
Failed to fetch description. HTTP Status Code: 404
|
anchor
|
Anchor======.. image:: https://img.shields.io/pypi/v/anchor.svg:target: https://pypi.python.org/pypi/anchor/:alt: Latest Version.. image:: https://img.shields.io/pypi/pyversions/anchor.svg:target: https://pypi.python.org/pypi/anchor/:alt: Python Versions.. image:: https://img.shields.io/pypi/format/anchor.svg:target: https://pypi.python.org/pypi/anchor/:alt: Format.. image:: https://img.shields.io/badge/license-Apache%202-blue.svg:target: https://git.openstack.org/cgit/openstack/anchor/plain/LICENSE:alt: LicenseAnchor is an ephemeral PKI service that, based on certain conditions,automates the verification of CSRs and signs certificates for clients.The validity period can be set in the config file with hour resolution.Ideas behind Anchor===================A critical capability within PKI is to revoke a certificate - to ensurethat it is no longer trusted by any peer. Unfortunately research hasdemonstrated that the two typical methods of revocation (CertificateRevocation Lists and Online Certificate Status Protocol) both havefailings that make them unreliable, especially when attempting toleverage PKI outside of web-browser software.Through the use of short-lifetime certificates Anchor introduces theconcept of "passive revocation". By issuing certificates with lifetimesmeasured in hours, revocation can be achieved by simply not re-issuingcertificates to clients.The benefits of using Anchor instead of manual long-term certificatesare:* quick certificate revoking / rotation* always tested certificate update mechanism (used daily)* easy integration with certmonger for service restarting* certificates are signed only when validation is passed* signing certificates follows consistent processInstallation============In order to install Anchor from source, the following systemdependencies need to be present:* python 2.7* python (dev files)* libffi (dev)* libssl (dev)When everything is in place, Anchor can be installed in one of threeways: a local development instance in a python virtual environment, a localproduction instance or a test instance in a docker container.For a development instance with virtualenv, run:virtualenv .venv && source .venv/bin/activate && pip install .For installing in production, either install a perpared system package,or install globally in the system:python setup.py installRunning the service===================In order to run the service, it needs to be started via the `pecan`application server. The only extra parameter is a config file:pecan serve config.pyFor development, an additional `--reload` parameter may be used. It willcause the service to reload every time a source file is changed, howeverit requires installing an additional `watchdog` python module.In the default configuration, Anchor will wait for web requests on port5016 on local network interface. This can be adjusted in the `config.py`file.Preparing a test environment============================In order to test Anchor with the default configuration, the followingcan be done to create a test CA. 
The test certificate can be then usedto sign the new certificates.openssl req -out CA/root-ca.crt -keyout CA/root-ca-unwrapped.key \-newkey rsa:4096 -subj "/CN=Anchor Test CA" -nodes -x509 -days 365chmod 0400 CA/root-ca-unwrapped.keyNext, a new certificate request may be generated:openssl req -out anchor-test.example.com.csr -nodes \-keyout anchor-test.example.com.key -newkey rsa:2048 \-subj "/CN=anchor-test.example.com"That reqest can be submitted using curl (while `pecan serve config.py`is running):curl http://0.0.0.0:5016/v1/sign/default -F user='myusername' \-F secret='simplepassword' -F encoding=pem \-F 'csr=<anchor-test.example.com.csr'This will result in the signed request being created in the `certs`directory.Docker test environment=======================We have provided a Dockerfile that can be used to build a container thatwill run anchorThese instructions expect the reader to have a working Docker installalready. Docker should *not* be used to serve Anchor in any productionenvironments.Assuming you are already in the anchor directory, build a containercalled 'anchor' that runs the anchor service, with any local changesthat have been made in the repo:docker build -t anchor .To start the service in the container and serve Anchor on port 5016:docker run -p 5016:5016 anchorThe anchor application should be accessible on port 5016. If you arerunning docker natively on Linux, that will be 5016 on localhost(127.0.0.1). If you are running docker under Microsoft Windows or AppleOSX it will be running in a docker machine. To find the docker machineIP address run:docker-machine ip defaultRunning Anchor in production============================Anchor shouldn't be exposed directly to the network. It's running via anapplication server (Pecan) and doesn't have all the features you'dnormally expect from a http proxy - for example dealing well withdeliberately slow connections, or using multiple workers. Anchor canhowever be run in production using a better frontend.To run Anchor using uwsgi you can use the following command:uwsgi --http-socket :5016 --venv path/to/venv --pecan config.py -p 4In case a more complex scripted configuration is needed, for example tohandle custom headers, rate limiting, or source filtering a completeHTTP proxy like Nginx may be needed. This is however out of scope forAnchor project. You can read more about production deployment in[Pecan documentation](http://pecan.readthedocs.org/en/latest/deployment.html).Additionally, using an AppArmor profile for Anchor is a good idea toprevent exploits relying on one of the native libraries used by Anchor(for example OpenSSL). This can be done with sample profiles which youcan find in the `tools/apparmor.anchor_*` files. The used file needs tobe reviewed and updated with the right paths depending on the deploymentlocation.Validators==========One of the main features of Anchor are the validators which make surethat all requests match a given set of rules. 
They're configured in`config.json` and the sample configuration includes a few of them.Each validator takes a dictionary of options which provide the specificmatching conditions.Currently available validators are:* `common_name` ensures CN matches one of names in `allowed_domains` orranges in `allowed_networks`* `alternative_names` ensures alternative names match one of the namesin `allowed_domains`* `alternative_names_ip` ensures alternative names match one of thenames in `allowed_domains` or IP ranges in `allowed_networks`* `blacklist_names` ensures CN and alternative names do not contain anyof the configured `domains`* `server_group` ensures the group the requester is contained within`group_prefixes`* `extensions` ensures only `allowed_extensions` are present in therequest* `key_usage` ensures only `allowed_usage` is requested for thecertificate* `ca_status` ensures the request does/doesn't require the CA flag* `source_cidrs` ensures the request comes from one of the ranges in`cidrs`A configuration entry for a validator might look like one from thesample config:"key_usage": {"allowed_usage": ["Digital Signature","Key Encipherment","Non Repudiation"]}Authentication==============Anchor can use one of the following authentication modules: static,keystone, ldap.Static: Username and password are present in `config.json`. This modeshould be used only for development and testing."auth": {"static": {"secret": "simplepassword","user": "myusername"}}Keystone: Username is ignored, but password is a token valid in theconfigured keystone location."auth": {"keystone": {"url": "https://keystone.example.com"}}LDAP: Username and password are used to bind to an LDAP user in aconfigured domain. User's groups for the `server_group` filter areretrieved from attribute `memberOf` in search for`(sAMAccountName=username@domain)`. The search is done in the configuredbase."auth": {"ldap": {"host": "ldap.example.com","base": "ou=Users,dc=example,dc=com","domain": "example.com""port": 636,"ssl": true}}Signing backends================Anchor allows the use of configurable signing backend. Currently it provides twoimplementation: one based on cryptography.io ("anchor"), the other using PKCS#11libraries ("pkcs11"). The first one is used in the sample config. Other backendsmay have extra dependencies: pkcs11 requires the PyKCS11 module, not required byanchor by default.The resulting certificate is stored locally if the `output_path` is setto any string. This does not depend on the configured backend.Backends can specify their own options - please refer to the backenddocumentation for the specific list. The default backend takes thefollowing options:* `cert_path`: path where local CA certificate can be found* `key_path`: path to the key for that certificate* `signing_hash`: which hash method to use when producing signatures* `valid_hours`: number of hours the signed certificates are valid forSample configuration for the default backend:"ca": {"backend": "anchor""cert_path": "CA/root-ca.crt","key_path": "CA/root-ca-unwrapped.key","output_path": "certs","signing_hash": "sha256","valid_hours": 24}Other backends may be created too. For more information, please refer to thedocumentation.Fixups======Anchor can modify the submitted CSRs in order to enforce some rules,remove deprecated elements, or just add information. Submitted CSR maybe modified or entirely redone. 
Fixups are loaded from the "anchor.fixups" namespace and can take parameters just like validators.Reporting bugs and contributing===============================For bug reporting and contributing, please check the CONTRIBUTING.rst file.
|