package | package-description
---|---|
alphanum
|
# alphanum

Simple Python library to generate pseudo-random alphanumeric strings of arbitrary length. Requires Python 3.5+.

## Installation

alphanum can be obtained from PyPI with `pip install alphanum`.

Alternatively, build it with Poetry:

```bash
pip install poetry
git clone https://github.com/clpo13/alphanum
cd alphanum
poetry build
pip install dist/alphanum-x.y.z-py3-none-any.whl
```

## Usage

```python
import alphanum

foo = alphanum.generate(10)
print(foo)
```

## License

Copyright (c) 2020 Cody Logan. MIT licensed.
|
alphanum-code
|
# alphanum_code

This module generates unique consecutive alphanumeric codes of a specified size. A comment can be associated with a code on request.

## Requirements

- python >= 2.7
- SQLAlchemy

Package dependencies are included in the alphanum_code package and will be installed automatically. For more details, see requirements.txt.

## Install

Install from PyPI: `pip install alphanum_code`

Install from source:

```bash
git clone https://github.com/ylaizet/alphanum_code
cd alphanum_code
pip install -e .
```

## Usage

```python
>>> from alphanum_code import AlphaNumCodeManager
>>> dbname = "sqlite:///test_alphanum.sqlite"
>>> manager = AlphaNumCodeManager(dbname)
>>> first_code = manager.next_code("with comment")
>>> print("my first code:", first_code)
```

## Notes

- Alphanumeric order is digits then letters: 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ.
- Letters in the alphanumeric code are UPPERCASE.

## Tips

At manager instantiation, you can set (see the sketch below):

- `code_size` to specify the length of the code you want to generate each time
- `init_code` to specify the starting point for code generation

## Test

Install pytest: `pip install pytest`

Run the tests from the base directory: `python -m pytest tests/`
|
alphanym
|
Official Alphanym Python Client
|
alpha-orm
|
# alpha-orm

An extraordinary Python database ORM.

## Features

- Automatically creates tables and columns.
- No configuration required, simply create the database.
- Currently supported databases include MySQL.

## Examples

### Setup (MySQL)

```python
import alphaorm.AlphaORM as DB

DB.setup('mysql', {
    'host': 'localhost',
    'user': 'root',
    'password': '',
    'database': 'alphaorm'
})
```

### CREATE

```python
# --------------------------------------
# CREATE 1
# --------------------------------------
product = DB.create('product')
product.name = 'Running shoes'
product.price = 5000
DB.store(product)

# --------------------------------------
# CREATE 2
# --------------------------------------
author = DB.create('author')
author.name = 'Chimamanda Adichie'

book = DB.create('book')
book.title = 'Purple Hibiscus'
book.author = author
DB.store(book)
```

### READ

```python
# --------------------------------------
# READ 1 [get all records]
# --------------------------------------
books = DB.getAll('book')
for book in books:
    print(f'{book.title} by {book.author.name}')

# --------------------------------------
# READ 2 [filter one]
# --------------------------------------
book = DB.find('book', 'id = :bid', {'bid': 1})
print(f'{book.title} by {book.author.name}')

# --------------------------------------
# READ 3 [filter all]
# --------------------------------------
author = DB.find('author', 'name = :authorName', {'authorName': 'William Shakespeare'})
booksByShakespeare = DB.findAll('book', 'author_id = :authorId', {'authorId': author.getID()})
print('Books by William Shakespeare are:')
for book in booksByShakespeare:
    print(book.title)
```

### UPDATE

```python
product = DB.find('product', 'id = :pid', {'pid': 1})
product.price = 500

book = DB.find('book', 'id = :bid', {'bid': 1})
book.author.name = 'New author'
book.isbn = '3847302-SD'
book.title = 'New Title'
DB.store(book)
print(book)
```

### DELETE

```python
# DELETE 1 [delete single record]
book = DB.find('book', 'id = :bid', {'bid': 1})
DB.drop(book)

# DELETE 2 [delete all records]
DB.dropAll('book')
```
|
alpha-pca
|
# Alpha-PCA

Alpha-PCA is more robust to outliers than standard PCA. Standard PCA is a special case of alpha-PCA (when alpha = 1).

## Usage

The model inherits from a scikit-learn module and works the same way as the standard PCA. It also supports PyTorch tensors (on CPU and GPU).

```python
from alpha_pca import AlphaPCA
import torch

X = torch.randn(16, 10)  # also works with numpy

pca = AlphaPCA(n_components=5, alpha=0.7, random_state=123)  # alpha=1 -> standard PCA
pca.fit(X)

# to project X in the latent space
X_transformed = pca.transform(X)  # (16, 10) -> (16, 5)

# fit inverse
X_ = pca.inverse_transform(X_transformed)  # (16, 5) -> (16, 10)

# directly approximate X_ == inverse_transform(transform(X))
X_ = pca.approximate(X)  # (16, 10) -> (16, 10)

# find the optimal alpha via a reconstruction loss
best_alpha = pca.compute_optimal_alpha(X, n_components=5)
```
|
alphapept
|
# AlphaPept

DOI: 10.1101/2021.07.23.453379

## Preprint

Our preprint "AlphaPept, a modern and open framework for MS-based proteomics" is now available here.

Be sure to check out the other packages of our ecosystem:

- alphatims: Fast access to TimsTOF data.
- alphamap: Peptide-level MS data exploration.
- alphapeptdeep: Predicting properties from peptides.
- alphaviz: Visualization of MS data.

## Windows Quickstart

1. Download the latest installer here, install it and click the shortcut on the desktop. A browser window with the AlphaPept interface should open. If Windows Firewall asks for network access for AlphaPept, please allow it.
2. In the New Experiment, select a folder with raw files and FASTA files.
3. Specify additional settings such as modifications with Settings.
4. Click Start and run the analysis.

See also below for more detailed instructions.

## Current functionality

| Feature | Implemented |
|---|---|
| Type | DDA |
| Filetypes | Bruker, Thermo |
| Quantification | LFQ |
| Isobaric labels | None |
| Platform | Windows |

Linux and macOS should, in principle, work but are not heavily tested
and might require additional work to set up (see detailed instructions
below). To read Thermo files, we use Mono, which can be used on Mac and
Linux. For Bruker files, we can use Linux but not yet macOS.

## Python Installation Instructions

### Requirements

We highly recommend the Anaconda or Miniconda Python distribution, which comes with a powerful package manager. See below for additional instructions for Linux and Mac, as they require an additional installation of Mono to use the RawFileReader.

AlphaPept can be used as an application as a whole or as a Python
Package where individual modules are called. Depending on the use case,
AlphaPept will need different requirements, and you might not want to
install all of them.

Currently, we have the default requirements.txt, additional requirements to run the GUI (gui) and packages used for development (develop). Therefore, you can install AlphaPept in multiple ways:

- The default: `alphapept`
- With GUI packages: `alphapept[gui]`
- With packages for development: `alphapept[develop]` (or `alphapept[develop,gui]`, respectively)

The requirements typically contain pinned versions and are automatically upgraded and tested with dependabot. This stable version allows having a reproducible workflow. However, in order to avoid conflicts with package versions that are too strict, the requirements are not pinned when being installed. To use the strict versions, use the `-stable`-flag, e.g. `alphapept[stable]`.

For end users who want to set up a processing environment in Python, `"alphapept[stable,gui-stable]"` is the batteries-included version that you want to use.

### Python

It is strongly recommended to install AlphaPept in its own
environment.

1. Open the console and create a new conda environment: `conda create --name alphapept python=3.8`
2. Activate the environment: `conda activate alphapept`
3. Install AlphaPept via pip: `pip install "alphapept[stable,gui-stable]"`. If you want to use AlphaPept as a package without the GUI dependencies and without strict version dependencies, use `pip install alphapept`.

If AlphaPept is installed correctly, you should be able to import AlphaPept as a package within the environment; see below.

### Linux

1. Install the build essentials: `sudo apt-get install build-essential`.
2. Install AlphaPept via pip: `pip install "alphapept[stable,gui-stable]"`. If you want to use
AlphaPept as a package without the GUI dependencies and strict version dependencies, use `pip install alphapept`.
3. Install libgomp.1 with `sudo apt-get install libgomp1`.

Bruker Support

- Copy the Bruker library for feature finding to your /usr/lib folder with `sudo cp alphapept/ext/bruker/FF/linux64/alphapeptlibtbb.so.2 /usr/lib/libtbb.so.2`.

Thermo Support

- Install Mono from the mono-project website (Mono Linux). NOTE: the installed Mono version should be at least 6.10, which requires you to add the PPA to your trusted sources!
- Install pythonnet with `pip install pythonnet>=2.5.2`.

### Mac

1. Install AlphaPept via pip: `pip install "alphapept[stable,gui-stable]"`. If you want to use
AlphaPept as a package without the GUI dependencies and strict version dependencies, use `pip install alphapept`.

Bruker Support

- Only supported for preprocessed files.

Thermo Support

- Install brew and pkg-config: `brew install pkg-config`
- Install Mono from the mono-project website (Mono Mac).
- Register the Mono path on your system. For macOS Catalina, open the zsh configuration via the terminal:
  - Type `cd` to navigate to the home directory.
  - Type `nano ~/.zshrc` to open the configuration of the terminal.
  - Add the path to your Mono installation: `export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:/usr/lib/pkgconfig:/Library/Frameworks/Mono.framework/Versions/Current/lib/pkgconfig:$PKG_CONFIG_PATH`. Make sure that the path matches your version (here 6.12.0).
  - Save everything and execute `. ~/.zshrc`.
- Install pythonnet with `pip install pythonnet>=2.5.2`.

### Developer

1. Redirect to the folder of choice and clone the repository: `git clone https://github.com/MannLabs/alphapept.git`
2. Navigate to the alphapept folder with `cd alphapept` and install the
package with `pip install .` (default users) or with `pip install -e .` to enable developer mode. Note that you can use the different requirements here as well (e.g. `pip install ".[gui-stable]"`).

### GPU Support

Some functionality of AlphaPept is GPU-optimized and uses Nvidia's CUDA. To enable this, additional packages need to be installed.

1. Make sure to have a working CUDA toolkit installation that is compatible with CuPy. To check, type `nvcc --version` in your terminal.
2. Install cupy. Make sure to install the cupy version matching your CUDA toolkit (e.g. `pip install cupy-cuda110` for CUDA toolkit 11.0); a quick sanity check follows below.
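To verify the setup, a minimal CuPy sanity check (a sketch using only standard CuPy calls; not part of the AlphaPept documentation) can confirm the GPU is visible:

```python
import cupy as cp

# Number of CUDA devices CuPy can see; raises if the toolkit is misconfigured.
print(cp.cuda.runtime.getDeviceCount())

# A tiny computation that executes on the GPU.
x = cp.arange(10)
print(x.sum())
```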
### Additional Notes

To access Thermo files, we have integrated RawFileReader into AlphaPept. We rely on Mono for Linux/Mac systems. To access Bruker files, we rely on the timsdata library. Currently, only Windows is supported. For feature finding, we use the Bruker Feature Finder, which can be found in the ext folder of this repository.

### Notes for NBDEV

- For developing with the notebooks, install the nbdev package (see the
development requirements).
- To facilitate navigating the notebooks, use jupyter notebook extensions. They can be called from a running jupyter instance like so: http://localhost:8888/nbextensions. The extensions collapsible headings and toc2 are very beneficial.

## Standalone Windows Installer

To use AlphaPept as a stand-alone program for end users, it can be
installed on Windows machines via a one-click installer. Download the
latest version here.

## Docker

It is possible to run AlphaPept in a Docker container. For this, we provide two Dockerfiles: Dockerfile_thermo and Dockerfile_bruker, depending on which filetypes you want to analyse. They are split because of drastically different requirements.

To run, navigate to the AlphaPept repository and rename the Dockerfile you want to use, e.g. Dockerfile_thermo to Dockerfile.

Build the image with: `docker build -t docker-alphapept:latest .`

To run, use: `docker run -p 8505:8505 -v /Users/username/Desktop/docker:/home/alphapept/ docker-alphapept:latest alphapept gui` (note that -v maps a local folder for convenient file transfer).

Access the AlphaPept GUI via localhost:8505 in your browser.

Note 1: The Thermo Dockerfile is built on a Jupyter image, so you can
also start a jupyter instance: `docker run -p 8888:8888 -v /Users/username/Desktop/docker:/home/jovyan/ docker-alphapept:latest jupyter notebook --allow-root`

### Docker Troubleshooting on M1-Mac

The Thermo Dockerfile was tested on an M1-Mac, with resources set to 18 GB RAM, 2 CPUs and 200 GB disk. It was possible to build the Bruker Dockerfile with the platform tag `--platform linux/amd64`; however, it was very slow, and the Bruker Dockerfile is not recommended for an M1-Mac. Windows worked nicely.

## Additional Documentation

The documentation is automatically built based on the jupyter notebooks
(nbs/index.ipynb) and can be found here.

## Version Performance

An overview of the performance of different versions can be found here. We re-run multiple tests on datasets for different versions so that users can assess what changes from version to version. Feel free to suggest a test set.

## How to use

AlphaPept is meant to be a framework to implement and test new ideas
quickly but also to serve as a performant processing pipeline. In
principle, there are three use cases:

- GUI: Use the graphical user interface to select settings and process files manually.
- CMD: Use the command-line interface to process files. Useful when building automatic pipelines.
- Python: Use Python modules to build individual workflows. Useful when building customized pipelines and using Python as a scripting language or when implementing new ideas.

### Windows Standalone Installation

For the Windows installation, simply click on the shortcut after installation. The Windows installation also installs the command-line tool so that you can call alphapept via `alphapept` in the command line.

### Python Package

Once AlphaPept is correctly installed, you can use it like any other
Python module.

```python
from alphapept.fasta import get_frag_dict, parse
from alphapept import constants

peptide = 'PEPT'
get_frag_dict(parse(peptide), constants.mass_dict)
```

```
{'b1': 98.06004032687,
 'b2': 227.10263342687,
 'b3': 324.15539728686997,
 'y1': 120.06551965033,
 'y2': 217.11828351033,
 'y3': 346.16087661033}
```

### Using as a tool

If alphapept is installed in a conda or virtual environment, launch this
environment first.

- To launch the command-line interface, use: `alphapept`. This allows us to select different modules.
- To start the GUI, use: `alphapept gui`
- To run a workflow, use: `alphapept workflow your_own_workflow.yaml`

An example workflow is easily generated by running the GUI once and saving the settings, which can be modified on a per-project basis.

### CMD / Python

1. Create a settings file. This can be done by changing the default_settings.yaml in the repository or using the GUI.
2. Run the analysis with the new settings file: `alphapept run new_settings.yaml`

Within Python (i.e., a Jupyter notebook), the following code would be
required:

```python
from alphapept.settings import load_settings
import alphapept.interface

settings = load_settings('new_settings.yaml')
r = alphapept.interface.run_complete_workflow(settings)
```

This also allows you to break the workflow down into individual steps, e.g.:

```python
settings = alphapept.interface.import_raw_data(settings)
settings = alphapept.interface.feature_finding(settings)
```

## Notebooks

Within the notebooks, we try to cover most aspects of a proteomics
workflow:

- Settings: General settings to define a workflow
- Chem: Chemistry-related functions, e.g., for calculating isotope distributions
- Input / Output: Everything related to importing and exporting and the file formats used
- FASTA: Generating theoretical databases from FASTA files
- Feature Finding: How to extract MS1 features for quantification
- Search: Comparing theoretical databases to experimental spectra and getting Peptide-Spectrum-Matches (PSMs)
- Score: Scoring PSMs
- Recalibration: Recalibration of data based on identified peptides
- Quantification: Functions for quantification, e.g., LFQ
- Matching: Functions for match-between-runs
- Constants: A collection of constants
- Interface: Code that generates the command-line interface (CLI) and makes workflow steps callable
- Performance: Helper functions to speed up code with CPU / GPU
- Export: Helper functions to make exports compatible with other software tools
- Label: Code to support isobaric label search
- Display: Code related to displaying in the streamlit GUI
- Additional code: Overview of additional code not covered by the notebooks
- How to contribute: Contribution guidelines
- AlphaPept workflow and files: Overview of the workflow, files and column names

## Contributing

If you have a feature request or a bug report, please post it either as
an idea in the discussions or as an issue on the GitHub issue tracker. Upvoting features on the discussions page will help to prioritize what to implement next. If you want to contribute, open a PR for it. You can find more guidelines for contributing and how to get started here. We will gladly guide you through the codebase and credit you accordingly. Additionally, you can check out the Projects page on GitHub. You can also contact us at [email protected]. If you like the project, consider starring it!

## Cite us

```bibtex
@article {Strauss2021.07.23.453379,
author = {Strauss, Maximilian T and Bludau, Isabell and Zeng, Wen-Feng and Voytik, Eugenia and Ammar, Constantin and Schessner, Julia and Ilango, Rajesh and Gill, Michelle and Meier, Florian and Willems, Sander and Mann, Matthias},
title = {AlphaPept, a modern and open framework for MS-based proteomics},
elocation-id = {2021.07.23.453379},
year = {2021},
doi = {10.1101/2021.07.23.453379},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2021/07/26/2021.07.23.453379},
eprint = {https://www.biorxiv.org/content/early/2021/07/26/2021.07.23.453379.full.pdf},
journal = {bioRxiv}
}
```
|
alphaplinkpython
|
No description available on PyPI.
|
alphapose
|
No description available on PyPI.
|
alphaPredict
|
# alphaPredict: a predictor of AlphaFold2 confidence scores

Last updated November 2023.

alphaPredict uses a bidirectional recurrent neural network (BRNN) trained on the per-residue pLDDT (predicted lDDT-Cα) confidence scores generated by AlphaFold2 (AF2). The confidence scores from 21 proteomes (about 363,000 total protein sequences) were used to train the BRNN behind alphaPredict. These confidence scores measure the local confidence that AlphaFold2 has in its predicted structure. The scores range from 0 to 100, where 0 represents low confidence and 100 represents high confidence. For more information, please see: Highly accurate protein structure prediction with AlphaFold, https://doi.org/10.1038/s41586-021-03819-2. In describing these scores, the team states that regions with pLDDT scores of less than 50 should not be interpreted except as possible disordered regions.

## Which proteomes were used to generate the network used by alphaPredict?

The confidence scores (pLDDT) from the proteomes of Danio rerio, Candida albicans, Mus musculus, Escherichia coli, Drosophila melanogaster, Methanocaldococcus jannaschii, Plasmodium falciparum, Mycobacterium tuberculosis, Caenorhabditis elegans, Dictyostelium discoideum, Trypanosoma cruzi, Saccharomyces cerevisiae, Schizosaccharomyces pombe, Rattus norvegicus, Homo sapiens, Arabidopsis thaliana, Zea mays, Leishmania infantum, Staphylococcus aureus, Glycine max and Oryza sativa were used to generate the BRNN (V4 and up).

## Why is alphaPredict useful?

alphaPredict allows rapid generation of predicted AF2 residue-by-residue confidence scores for any protein of interest. This can be used for many applications, such as generating a quick preview of which regions of your protein of interest AF2 might be able to predict with high confidence, or which regions of your protein might be disordered. AF2 is not (strictly speaking) a disorder predictor, and the confidence scores are not directly representative of protein disorder. Therefore, any conclusions drawn with regard to disorder from predicted AF2 confidence scores should be interpreted with care, but they may provide an additional metric to assess the likelihood that any given protein region is disordered.

## How accurate is alphaPredict?

The current BRNN (V7) has on average an ~8% error per residue. This is the average error rate for the V7 network. We are currently waiting on a new computer to try to generate new networks using more data, so this will be the most accurate network for the foreseeable future. If you choose to use other networks:

- V1 has an average per-residue error rate of about 11.5%.
- V2 has an average per-residue error rate of about 9.5%.
- V3 through V6 were all networks that were tweaked before finishing V7, so I don't have those numbers on hand. They have errors of between 8.5% and 11% per residue.

## Installation

alphaPredict is available through PyPI - to install, simply run

```bash
$ pip install alphaPredict
```

Alternatively, you can get alphaPredict directly from GitHub. To clone the GitHub repository and gain the ability to modify a local copy of the code, run

```bash
$ git clone https://github.com/ryanemenecker/alphaPredict.git
$ cd alphapredict
$ pip install .
```

This will install alphapredict locally.

## Usage

alphaPredict is usable from Python.

### Using alphaPredict from Python

First import alphaPredict:

```python
import alphaPredict as alpha
```

Once imported, you can begin to generate predicted confidence scores.

### Predicting Confidence Scores

The `alpha.predict()` function will return a list of predicted confidence scores for each residue of the input sequence. The input sequence should be a string. Running

```python
alpha.predict("DSSPEAPAEPPKDVPHDWLYSYVFLTHHPADFLR")
```

would output

```
[39.5097, 43.5166, 46.9381, 55.6352, 54.2278, 56.5101, 60.3866, 58.0785, 60.2979, 65.6772, 69.3595, 66.0048, 68.0264, 68.4496, 71.1201, 70.3302, 73.5393, 76.7108, 81.8086, 85.8871, 86.4789, 87.4088, 88.8859, 87.3609, 84.9879, 79.5814, 80.5888, 79.3752, 79.8667, 83.2751, 83.6576, 81.2429, 78.8213, 72.8758]
```

### Graphing Confidence Scores

The `alpha.graph()` function will return a graph of predicted confidence scores for each residue of the input sequence. The input sequence should be a string. Running

```python
alpha.graph("MASNDYTQQATQSYGAYPTQPGQGYSQQSSQPYGQQSYSGYSQSTDTSGYGQSSYSSYGQSQNTGYGTQSTPQGYGSTGGYGSSQSSQSSYGQQSSYPGYGQQPAPSSTSGSYGSSSQSSSYGQPQSGSYSQQPSYGGQQQSYGQQQSYNPPQGYGQQNQYNSSSGGGGGGGGGGNYGQDQSSMSSGGGSGGGYGNQDQSGGGGSGGYGQQDRGGRGRGGSGGGGGGGGGGYNRSSGGYEPRGRGGGRGGRGGMGGSDRGGFNKFGGPRDQGSRHDSEQDNSDNNTIFVQGLGENVTIESVADYFKQIGIIKTNKKTGQPMINLYTDRETGKLKGEATVSFDDPPSAKAAIDWFDGKEFSGNPIKVSFATRRADFNRGGGNGRGGRGRGGPMGRGGYGGGGSGGGGRGGFPSGGGGGGGQQRAGDWKCPNPTCENMNFSWRNECNQCKAPKPDGPGGGPGGSHMGGNYGDDRRGGRGGYDRGGYRGRGGDRGGFRGGRGGGDRGGFGPGKMDSRGEHRQDRRERPY")
```

would generate a plot of the per-residue confidence scores (figure omitted here).

## Acknowledgements

We would like to thank the DeepMind team for developing AlphaFold. We would like to thank Dan Griffith from the Holehouse Lab at Washington University School of Medicine for developing PARROT, which is the tool that was used to generate the BRNN behind alphaPredict. For more info (and if you want to generate machine-learning networks for predicting anything related to proteins) see: https://idptools-parrot.readthedocs.io/en/latest/.

## Changes

V1.01 - November 2023

- Changed prediction to get rid of a deprecation warning for numpy 1.25 and later.
- Restricted the maximum Python version to less than 3.12.0.
|
alpha-public-registry-grpc
|
No description available on PyPI.
|
alphapulldown
|
# AlphaPulldown

🥳 AlphaPulldown has entered the era of version 1.x. We have brought some exciting, useful features to AlphaPulldown and updated its computing environment.

AlphaPulldown is a Python package that streamlines protein-protein interaction screens and high-throughput modelling of higher-order oligomers using AlphaFold-Multimer:

- provides a convenient command-line interface to screen a bait protein against many candidates, calculate all-versus-all pairwise comparisons, test alternative homo-oligomeric states, and model various parts of a larger complex
- separates the CPU stages (MSA and template feature generation) from the GPU stages (the actual modelling)
- allows modelling fragments of proteins without recalculation of MSAs, keeping the original full-length residue numbering in the models
- summarizes the results in a CSV table with AlphaFold scores, pDockQ and mpDockQ, PI-score, and various physical parameters of the interface
- provides a Jupyter notebook for an interactive analysis of PAE plots and models
- 🆕 integrates cross-link mass spec data with AlphaFold predictions via AlphaLink2 models
- 🆕 is able to integrate experimental models into the AlphaFold pipeline using custom multimeric databases

## Pre-installation

Check that you have downloaded the necessary parameters and databases (e.g. BFD, MGnify etc.) as instructed in AlphaFold's documentation. You should have a directory like below:

```
alphafold_database/                        # Total: ~ 2.2 TB (download: 438 GB)
bfd/ # ~ 1.7 TB (download: 271.6 GB)
# 6 files.
mgnify/ # ~ 64 GB (download: 32.9 GB)
mgy_clusters_2018_12.fa
params/ # ~ 3.5 GB (download: 3.5 GB)
# 5 CASP14 models,
# 5 pTM models,
# 5 AlphaFold-Multimer models,
# LICENSE,
# = 16 files.
pdb70/ # ~ 56 GB (download: 19.5 GB)
# 9 files.
pdb_mmcif/ # ~ 206 GB (download: 46 GB)
mmcif_files/
# About 180,000 .cif files.
obsolete.dat
pdb_seqres/ # ~ 0.2 GB (download: 0.2 GB)
pdb_seqres.txt
small_bfd/ # ~ 17 GB (download: 9.6 GB)
bfd-first_non_consensus_sequences.fasta
uniclust30/ # ~ 86 GB (download: 24.9 GB)
uniclust30_2018_08/
# 13 files.
uniprot/ # ~ 98.3 GB (download: 49 GB)
uniprot.fasta
    uniref90/                              # ~ 58 GB (download: 29.7 GB)
        uniref90.fasta
```

## Create Anaconda environment

Firstly, install Anaconda and create the AlphaPulldown environment, gathering the necessary dependencies:

```bash
conda create -n AlphaPulldown -c omnia -c bioconda -c conda-forge python==3.10 openmm==8.0 pdbfixer==1.9 kalign2 cctbx-base pytest importlib_metadata hhsuite
```

Optionally, if you do not have it yet on your system, install HMMER from Anaconda:

```bash
source activate AlphaPulldown
conda install -c bioconda hmmer
```

This usually works, but on some compute systems users may wish to use other versions or optimized builds of already installed HMMER and HH-suite.

## Installation using pip

Activate the AlphaPulldown environment and install AlphaPulldown:

```bash
source activate AlphaPulldown
python3 -m pip install alphapulldown==1.0.3
pip install jax==0.4.23 jaxlib==0.4.23+cuda11.cudnn86 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

For older versions of AlphaFold: if you haven't updated your databases according to the requirements of AlphaFold 2.3.0, you can still use AlphaPulldown with your older version of the AlphaFold database. Please follow the installation instructions on the dedicated branch.

## How to develop

Follow the instructions at Developing guidelines.

## Manuals

AlphaPulldown supports four different modes of massive predictions:

- pulldown - to screen a list of "bait" proteins against a list or lists of other proteins
- all_vs_all - to model all pairs of a protein list
- homo-oligomer - to test alternative oligomeric states
- custom - to model any combination of proteins and their fragments, such as a pre-defined list of pairs or fragments of a complex

AlphaPulldown will return models of all interactions, summarize the results in a score table, and provide a Jupyter notebook for an interactive analysis, including PAE plots and 3D displays of models coloured by chain and pLDDT score.

## Examples

- Example 1 is a case where pulldown mode is used. Manual: example_1
- Example 2 is a case where custom and homo-oligomer modes are used. Manual: example_2
- Example 3 demonstrates the usage of multimeric templates for guiding AlphaFold predictions. Manual: example_3

The all_vs_all mode can be viewed as a special case of the pulldown mode; the instructions for this mode are added as an appendix in both manuals mentioned above.

## Citations

If you use this package, please cite it as follows:

```bibtex
@Article{AlphaPulldown,
  author  = {Dingquan Yu, Grzegorz Chojnowski, Maria Rosenthal, and Jan Kosinski},
  journal = {Bioinformatics},
  title   = {AlphaPulldown—a python package for protein–protein interaction screens using AlphaFold-Multimer},
  year    = {2023},
  volume  = {39},
  issue   = {1},
  doi     = {https://doi.org/10.1093/bioinformatics/btac749}
}
```
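For orientation only: a pulldown screen is typically a two-step CLI run. The script names and flags below follow the AlphaPulldown example manuals, but treat them as assumptions and verify them against the version you install:

```bash
# Step 1 (CPU): compute MSAs and template features for every sequence.
create_individual_features.py \
  --fasta_paths=baits.fasta,candidates.fasta \
  --data_dir=/path/to/alphafold_database \
  --output_dir=/path/to/features \
  --max_template_date=2023-01-01

# Step 2 (GPU): model bait-candidate pairs in pulldown mode.
run_multimer_jobs.py \
  --mode=pulldown \
  --protein_lists=baits.txt,candidates.txt \
  --monomer_objects_dir=/path/to/features \
  --output_path=/path/to/models \
  --data_dir=/path/to/alphafold_database
```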
|
alphapy
|
alphapy is a Python library for machine learning using scikit-learn. We have a stock market pipeline and a sports pipeline so that speculators can test predictive models, along with functions for trading systems and portfolio management.
|
alphaqso
|
No description available on PyPI.
|
alpha_quant
|
No description available on PyPI.
|
alphaquantum
|
# AlphaQuantum

Open source algorithmic trading platform.

- Free software: Apache Software License 2.0
- Documentation: https://alphaquantum.readthedocs.io

## Features

- TODO

## Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

- Cookiecutter: https://github.com/audreyr/cookiecutter
- audreyr/cookiecutter-pypackage: https://github.com/audreyr/cookiecutter-pypackage
|
alpharaw
|
# AlphaRaw

## About

An open-source Python package of the AlphaPept ecosystem from the Mann Labs at the Max Planck Institute of Biochemistry to unify raw MS data accession and storage. To enable all hyperlinks in this document, please view it at GitHub.

Contents: About, License, Installation (Pip installer, Developer installer), Usage (Python and jupyter notebooks), Troubleshooting, Citations, How to contribute, Changelog.

## License

AlphaRaw was developed by the Mann Labs at the Max Planck Institute of Biochemistry and is freely available with an Apache License. External Python packages (available in the requirements folder) have their own licenses, which can be consulted on their respective websites.

## Installation

Pythonnet must be installed to access Thermo or Sciex raw data.

### For Windows

Pythonnet will be automatically installed via pip.

### For Linux (or macOS without M1/M2/M3/..., not tested yet)

1. `conda install mono`.
2. Install pythonnet with `pip install pythonnet`.

If `conda install mono` does not work, we can install Mono from the mono-project website (Mono Linux). NOTE: the installed Mono version should be at least 6.10, which requires you to add the PPA to your trusted sources!

### For macOS including M1/M2 platforms

1. Install brew.
2. Install mono: `brew install mono`.
3. If the pseudo mono folder /Library/Frameworks/Mono.framework/Versions does not exist, create it by running `sudo mkdir -p /Library/Frameworks/Mono.framework/Versions`.
4. Link homebrew mono to the pseudo mono folder: `sudo ln -s /opt/homebrew/Cellar/mono/6.12.0.182 /Library/Frameworks/Mono.framework/Versions/Current`. Here, 6.12.0.182 is the brew-installed mono version; please check your installed version. Navigate to /Library/Frameworks/Mono.framework/Versions and run `ls -l` to verify that the link Current points to /opt/homebrew/Cellar/mono/6.12.0.182. If Current points to a different installation and/or /opt/homebrew/Cellar/mono/6.12.0.182 is referenced by a different link, delete the corresponding links and run `sudo ln -s /opt/homebrew/Cellar/mono/6.12.0.182 Current`.
5. Install pythonnet: `pip install pythonnet`.

AlphaRaw can be installed and used on all major operating systems
(Windows, macOS and Linux). There are two different types of installation possible:

- Pip installer: Choose this installation if you want to use AlphaRaw as a Python package in an existing Python 3.8 environment (e.g. a Jupyter notebook).
- Developer installer: Choose this installation if you are familiar with CLI tools, conda and Python. This installation allows access to all available features of AlphaRaw and even allows you to modify its source code directly. Generally, the developer version of AlphaRaw outperforms the precompiled versions, which makes this the installation of choice for high-throughput experiments.

### Pip

AlphaRaw can be installed in an existing Python 3.8 environment with a single bash command. This bash command can also be run directly from within a Jupyter notebook by prepending it with a `!`:

```bash
pip install alpharaw
```

Installing AlphaRaw like this avoids conflicts when integrating it into other tools, as this does not enforce strict versioning of dependencies. However, if new versions of dependencies are released, they are not guaranteed to be fully compatible with AlphaRaw. While this should only occur in rare cases where dependencies are not backwards compatible, you can always force AlphaRaw to use dependency versions which are known to be compatible with:

```bash
pip install "alpharaw[stable]"
```

NOTE: You might need to run `pip install pip --upgrade` before installing AlphaRaw like this. Also note the double quotes `"`.

For those who are really adventurous, it is also possible to directly install any branch (e.g. @development) with any extras (e.g. #egg=alpharaw[stable,development-stable]) from GitHub with e.g.

```bash
pip install "git+https://github.com/MannLabs/alpharaw.git@development#egg=alpharaw[stable,development-stable]"
```

### Developer

AlphaRaw can also be installed in editable (i.e. developer) mode with a
few bash commands. This allows you to fully customize the software and even modify the source code to your specific needs. When an editable Python package is installed, its source code is stored in a transparent location of your choice. While optional, it is advised to first (create and) navigate to e.g. a general software folder:

```bash
mkdir ~/folder/where/to/install/software
cd ~/folder/where/to/install/software
```

The following commands assume you do not perform any additional cd commands anymore.

Next, download the AlphaRaw repository from GitHub either directly or with a git command. This creates a new AlphaRaw subfolder in your current directory:

```bash
git clone https://github.com/MannLabs/alpharaw.git
```

For any Python package, it is highly recommended to use a separate conda virtual environment, as otherwise dependency conflicts can occur with already existing packages:

```bash
conda create --name alpharaw python=3.9 -y
conda activate alpharaw
```

Finally, AlphaRaw and all its dependencies need to be installed. To take advantage of all features and allow development (with the -e flag), this is best done by also installing the development dependencies instead of only the core dependencies:

```bash
pip install -e "./alpharaw[development]"
```

By default this installs loose dependencies (no explicit versioning), although it is also possible to use stable dependencies (e.g. `pip install -e "./alpharaw[stable,development-stable]"`).

By using the editable flag -e, all modifications to the AlphaRaw source code folder are directly reflected when running AlphaRaw. Note that the AlphaRaw folder cannot be moved and/or renamed if an editable version is installed.

## Usage

### Python

NOTE: The first time you use a fresh installation of AlphaRaw, it is
often quite slow because some functions might still need compilation on
your local operating system and architecture. Subsequent use should be a
lot faster.

### Python and Jupyter notebooks

AlphaRaw can be imported as a Python package into any Python script or notebook with the command `import alpharaw`. A brief Jupyter notebook tutorial on how to use the API is also present in the nbs folder.

## Troubleshooting

In case of issues, check out the following:

- Issues: Try a few different search terms to find out if a similar problem has been encountered before.
- Discussions: Check if your problem or feature request has been discussed before.

## Citations

There are currently no plans to draft a manuscript.

## How to contribute

If you like this software, you can give us a star to boost our visibility! All direct contributions are also welcome. Feel free to post a new issue or clone the repository and create a pull request with a new branch. For an even more interactive participation, check out the discussions and the Contributors License Agreement.

## Changelog

See the HISTORY.md for a full overview of the changes made in each version.
|
alphareader
|
# AlphaReader

After several attempts to use the csv package or pandas for reading large files with custom delimiters, I ended up writing a little program that does the job without complaints. AlphaReader is a high-performance, pure-Python, 15-line library that reads chunks of bytes from your files and retrieves the content line by line.

The inspiration for this library came from having to extract data from an MS-SQL Server database and having to deal with the CP1252 encoding. By default AlphaReader uses this encoding, as it was useful in our use case. It also works with HDFS through the pyarrow library, but that is not a dependency.

## CSVs

```python
# !cat file.csv
# 1,John,Doe,2010
# 2,Mary,Smith,2011
# 3,Peter,Jones,2012
> reader = AlphaReader(open('file.csv', 'rb'), encoding='cp1252', terminator=10, delimiter=44)
> next(reader)
> ['1', 'John', 'Doe', '2010']
```

## TSVs

```python
# !cat file.tsv
# 1  John  Doe  2010
# 2  Mary  Smith  2011
# 3  Peter  Jones  2012
> reader = AlphaReader(open('file.tsv', 'rb'), encoding='cp1252', terminator=10, delimiter=9)
> next(reader)
> ['1', 'John', 'Doe', '2010']
```

## XSVs

```python
# !cat file.tsv
# 1¦John¦Doe¦2010
# 2¦Mary¦Smith¦2011
# 3¦Peter¦Jones¦2012
> ord('¦')
> 166
> chr(166)
> '¦'
> reader = AlphaReader(open('file.tsv', 'rb'), encoding='cp1252', terminator=10, delimiter=166)
> next(reader)
> ['1', 'John', 'Doe', '2010']
```

## HDFS

```python
# !hdfs dfs -cat /raw/tsv/file.tsv
# 1  John  Doe  2010
# 2  Mary  Smith  2011
# 3  Peter  Jones  2012
> import pyarrow as pa
> fs = pa.hdfs.connect()
> reader = AlphaReader(fs.open('/raw/tsv/file.tsv', 'rb'), encoding='cp1252', terminator=10, delimiter=9)
> next(reader)
> ['1', 'John', 'Doe', '2010']
```

## Transformations

```python
# !cat file.csv
# 1,2,3
# 10,20,30
# 100,200,300
> fn = lambda x: int(x)
> reader = AlphaReader(open('/raw/tsv/file.tsv', 'rb'), encoding='cp1252', terminator=10, delimiter=44, fn_transform=fn)
> next(reader)
> [1, 2, 3]
> next(reader)
> [10, 20, 30]
```

## Chain Transformations

```python
# !cat file.csv
# 1,2,3
# 10,20,30
# 100,200,300
> fn_1 = lambda x: x + 1
> fn_2 = lambda x: x * 10
> reader = AlphaReader(open('/raw/tsv/file.tsv', 'rb'), encoding='cp1252', terminator=10, delimiter=44, fn_transform=[int, fn_1, fn_2])
> next(reader)
> [20, 30, 40]
> next(reader)
> [110, 210, 310]
```

## Caution

```python
> reader = AlphaReader(open('large_file.xsv', 'rb'), encoding='cp1252', terminator=172, delimiter=173)
> records = list(reader)  # Avoid this as it will load the whole file into memory
```

## Limitations

- No support for multi-byte delimiters.
- Relatively slower performance than the csv library. Use csv and dialects when your files have \r\n terminators.
- Transformations are per row; perhaps vectorization could aid performance.
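For intuition, the core idea (buffer chunks of bytes, split records on the terminator byte, decode, then split fields on the delimiter) can be sketched as below. This is an illustrative re-implementation, not the actual AlphaReader source:

```python
def alpha_reader(fh, encoding='cp1252', terminator=10, delimiter=44, chunk_size=4096):
    """Illustrative chunked record reader: yields one list of fields per line."""
    buffer = b''
    term = bytes([terminator])
    delim = chr(delimiter)
    while True:
        chunk = fh.read(chunk_size)
        if not chunk:
            break
        buffer += chunk
        # Emit every complete record currently held in the buffer.
        while term in buffer:
            line, buffer = buffer.split(term, 1)
            yield line.decode(encoding).split(delim)
    if buffer:  # trailing record without a final terminator
        yield buffer.decode(encoding).split(delim)
```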
## Performance

24 MB file loaded with `list(AlphaReader(file_handle))`:

```
tests/test_profile.py::test_alphareader_with_encoding
-------------------------------- live log call --------------------------------
INFO     root:test_profile.py:22
         252343 function calls in 0.386 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
   119605    0.039    0.000    0.386    0.000  .\alphareader\__init__.py:39(AlphaReader)
   122228    0.266    0.000    0.266    0.000  {method 'split' of 'str' objects}
     2625    0.005    0.000    0.054    0.000  {method 'decode' of 'bytes' objects}
     2624    0.001    0.000    0.049    0.000  .\Python-3.7.4\lib\encodings\cp1252.py:14(decode)
     2624    0.048    0.000    0.048    0.000  {built-in method _codecs.charmap_decode}
     2625    0.027    0.000    0.027    0.000  {method 'read' of '_io.BufferedReader' objects}
        1    0.000    0.000    0.000    0.000  .\__init__.py:5(_validate)
        1    0.000    0.000    0.000    0.000  {built-in method _codecs.lookup}
```
|
alpharelu
|
No description available on PyPI.
|
alpharotate
|
Documentation: https://rotationdetection.readthedocs.io/
|
alphascope
|
# alphascope

~ under dev

Developing and deploying an end-to-end Python package for a subsection of computational finance.

## Installation

To install alphascope, along with the tools required to develop and run tests, run the following in your virtual environment:

```bash
$ pip install -e .[dev]
```

## Usage

tbd

## Examples

tbd
|
alphascreen
|
No description available on PyPI.
|
alphaseeker
|
UNKNOWN
|
alphaserve
|
# Project AlphaServe

Currently running on Python 3 and Flask.

## Idea

Create a webserver that receives control commands from clients, where:

- Flask is used for running the webserver
- Hammer.js serves implementations for mouse and click events
- socket.io lets us establish a TCP connection, so that the commands are sent faster

A minimal sketch of this architecture follows the TODO list below.

## TODOs

- Fix CSS for the remote to properly render and disable scrolling when using the touchpad
- Fix the scrolling bar
- Add an argument option for en-/disabling debug mode
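As an illustration of the Flask + socket.io idea described above (a minimal sketch; the event name and payload are hypothetical and not part of AlphaServe):

```python
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)  # upgrades the Flask app with a socket.io endpoint

@socketio.on('mouse_move')  # hypothetical event emitted by the remote client
def handle_mouse_move(data):
    # data could carry deltas from a Hammer.js pan gesture, e.g. {'dx': 3, 'dy': -1}
    print(f"move pointer by ({data['dx']}, {data['dy']})")

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)
```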
|
alphashape
|
# Alpha Shape Toolbox

Toolbox for generating n-dimensional alpha shapes.

Alpha shapes are often used to generalize bounding polygons containing sets of points. The alpha parameter is defined as the value a, such that an edge of a disk of radius 1/a can be drawn between any two edge members of a set of points and still contain all the points. The convex hull, a shape resembling what you would see if you wrapped a rubber band around pegs at all the data points, is an alpha shape where the alpha parameter is equal to zero. In this toolbox we will be generating alpha complexes, which are closely related to alpha shapes, but which consist of straight lines between the edge points instead of arcs of circles.

- https://en.wikipedia.org/wiki/Alpha_shape
- https://en.wikipedia.org/wiki/Convex_hull

Creating alpha shapes around sets of points usually requires a visually interactive step where the alpha parameter for a concave hull is determined by iterating over or bisecting values to approach a best fit. The alpha shape toolbox provides workflows to shorten the development loop on this manual process, or to bypass it completely by solving for an alpha shape with particular characteristics. A Python API is provided to aid in the scripted generation of alpha shapes. A console application is also provided as an example usage of the alpha shape toolbox, and to facilitate generation of alpha shapes from the command line.

- Free software: MIT license
- Documentation: https://alphashape.readthedocs.io

## Features

### Import Dependencies

```python
import os
import sys

import pandas as pd
import numpy as np
from descartes import PolygonPatch
import matplotlib.pyplot as plt

sys.path.insert(0, os.path.dirname(os.getcwd()))
import alphashape
```

### 2-Dimensional Example

Define a set of points:

```python
points_2d = [(0., 0.), (0., 1.), (1., 1.), (1., 0.),
             (0.5, 0.25), (0.5, 0.75), (0.25, 0.5), (0.75, 0.5)]
```

Visualize the test coordinates:

```python
fig, ax = plt.subplots()
ax.scatter(*zip(*points_2d))
plt.show()
```

Generate an alpha shape (alpha = 0.0, the convex hull). Every convex hull is an alpha shape, but not every alpha shape is a convex hull. When the alphashape function is called with an alpha parameter of 0, a convex hull will always be returned. You can visualize the shape within Jupyter notebooks using the built-in shapely renderer:

```python
alpha_shape = alphashape.alphashape(points_2d, 0.)
alpha_shape
```

Plot the alpha shape over the input data with Matplotlib:

```python
fig, ax = plt.subplots()
ax.scatter(*zip(*points_2d))
ax.add_patch(PolygonPatch(alpha_shape, alpha=0.2))
plt.show()
```

Generate an alpha shape (alpha = 2.0, a concave hull). As we increase the alpha parameter value, the bounding shape will begin to fit the sample data more tightly:

```python
alpha_shape = alphashape.alphashape(points_2d, 2.0)
```

Generate an alpha shape (alpha = 3.5). If you go too high on the alpha parameter, you will start to lose points from the original data set:

```python
alpha_shape = alphashape.alphashape(points_2d, 3.5)
```

Generate an alpha shape (alpha = 5.0). If you go too far, you will lose everything:

```python
alpha_shape = alphashape.alphashape(points_2d, 5.0)
print(alpha_shape)
# GEOMETRYCOLLECTION EMPTY
```

### Using a varying alpha parameter

The alpha parameter can be defined locally within a region of points by supplying a callback that returns the alpha parameter to use. This can be utilized to create tighter-fitting alpha shapes where point densities differ across a data set. In the following example, the alpha parameter changes based on the value of the x-coordinate of the points:

```python
alpha_shape = alphashape.alphashape(
    points_2d,
    lambda ind, r: 1.0 + any(np.array(points_2d)[ind][:, 0] == 0.0))
```

### Generate an alpha shape by solving for an optimal alpha value

The alpha parameter can be solved for if it is not provided as an argument, but with large datasets this can take a long time to calculate:

```python
alpha_shape = alphashape.alphashape(points_2d)
```

### 3-Dimensional Example

Define a set of points and visualize them:

```python
points_3d = [
    (0., 0., 0.), (0., 0., 1.), (0., 1., 0.), (1., 0., 0.),
    (1., 1., 0.), (1., 0., 1.), (0., 1., 1.), (1., 1., 1.),
    (.25, .5, .5), (.5, .25, .5), (.5, .5, .25),
    (.75, .5, .5), (.5, .75, .5), (.5, .5, .75)
]

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter(*zip(*points_3d))
plt.show()
```

Alpha shape with a static alpha parameter. You can visualize the shape within Jupyter notebooks using the built-in trimesh renderer by calling the `.show()` method:

```python
alpha_shape = alphashape.alphashape(points_3d, 1.1)
alpha_shape.show()

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_trisurf(*zip(*alpha_shape.vertices), triangles=alpha_shape.faces)
plt.show()
```

Alpha shape with a dynamic alpha parameter:

```python
alpha_shape = alphashape.alphashape(
    points_3d,
    lambda ind, r: 1.0 + any(np.array(points_3d)[ind][:, 0] == 0.0))
alpha_shape.show()
```

Alpha shape found by solving for the alpha parameter:

```python
alpha_shape = alphashape.alphashape(points_3d)
alpha_shape.show()
```

### 4-Dimensional Example

Define a set of points and visualize them:

```python
points_4d = [
    (0., 0., 0., 0.), (0., 0., 0., 1.), (0., 0., 1., 0.), (0., 1., 0., 0.),
    (0., 1., 1., 0.), (0., 1., 0., 1.), (0., 0., 1., 1.), (0., 1., 1., 1.),
    (1., 0., 0., 0.), (1., 0., 0., 1.), (1., 0., 1., 0.), (1., 1., 0., 0.),
    (1., 1., 1., 0.), (1., 1., 0., 1.), (1., 0., 1., 1.), (1., 1., 1., 1.),
    (.25, .5, .5, .5), (.5, .25, .5, .5), (.5, .5, .25, .5), (.5, .5, .5, .25),
    (.75, .5, .5, .5), (.5, .75, .5, .5), (.5, .5, .75, .5), (.5, .5, .5, .75)
]
df_4d = pd.DataFrame(points_4d, columns=['x', 'y', 'z', 'r'])

fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter(df_4d['x'], df_4d['y'], df_4d['z'], c=df_4d['r'])
plt.show()
```

The edges of a 4-dimensional alpha shape are tetrahedrons defined by the following coordinates (no visualizations):

```python
alphashape.alphashape(points_4d, 1.0)
```

```
{(16,1,2,0),(16,1,3,0),(16,2,3,0),(16,4,2,3),(16,4,7,2),(16,4,7,3),(16,5,1,3),(16,5,7,1),(16,5,7,3),(16,6,1,2),(16,6,7,1),(16,6,7,2),(17,1,2,0),(17,1,8,0),(17,2,8,0),(17,6,1,2),(17,6,14,1),(17,6,14,2),(17,9,1,8),(17,9,14,1),(17,9,14,8),(17,10,2,8),(17,10,14,2),(17,10,14,8),(18,1,3,0),(18,1,8,0),(18,3,8,0),(18,5,1,3),(18,5,13,1),(18,5,13,3),(18,9,1,8),(18,9,13,1),(18,9,13,8),(18,11,3,8),(18,11,13,3),(18,11,13,8),(19,2,3,0),(19,2,8,0),(19,3,8,0),(19,4,2,3),(19,4,12,2),(19,4,12,3),(19,10,2,8),(19,10,12,2),(19,10,12,8),(19,11,3,8),(19,11,12,3),(19,11,12,8),(20,9,13,8),(20,9,14,8),(20,9,14,13),(20,10,12,8),(20,10,14,8),(20,10,14,12),(20,11,12,8),(20,11,13,8),(20,11,13,12),(20,13,12,15),(20,14,12,15),(20,14,13,15),(21,4,7,3),(21,4,7,12),(21,4,12,3),(21,5,7,3),(21,5,7,13),(21,5,13,3),(21,7,12,15),(21,7,13,15),(21,11,12,3),(21,11,13,3),(21,11,13,12),(21,13,12,15),(22,4,7,2),(22,4,7,12),(22,4,12,2),(22,6,7,2),(22,6,7,14),(22,6,14,2),(22,7,12,15),(22,7,14,15),(22,10,12,2),(22,10,14,2),(22,10,14,12),(22,14,12,15),(23,5,7,1),(23,5,7,13),(23,5,13,1),(23,6,7,1),(23,6,7,14),(23,6,14,1),(23,7,13,15),(23,7,14,15),(23,9,13,1),(23,9,14,1),(23,9,14,13),(23,14,13,15)}
```

### Alpha Shapes with GeoPandas

#### Sample Data

The data used in this notebook can be obtained from the Alaska Department of Transportation and Public Facilities website at the link below. It consists of a point collection for each of the public airports in Alaska. http://www.dot.alaska.gov/stwdplng/mapping/shapefiles.shtml

#### Load the Shapefile

```python
import os
import geopandas

data = os.path.join(os.getcwd(), 'data', 'Public_Airports_March2018.shp')
gdf = geopandas.read_file(data)

%matplotlib inline
gdf.plot()
gdf.crs
# {'init': 'epsg:4269'}
```

#### Generate Alpha Shape

The alpha shape will be generated in the coordinate frame the geodataframe is in. In this example, we will project into an Albers Equal Area projection, construct our alpha shape in that coordinate system, and then convert back to the source projection.

Project to an Albers Equal Area spatial reference:

```python
import cartopy.crs as ccrs

gdf_proj = gdf.to_crs(ccrs.AlbersEqualArea().proj4_init)
gdf_proj.plot()
```

Determine the alpha shape:

```python
import alphashape

alpha_shape = alphashape.alphashape(gdf_proj)
alpha_shape.plot()
```

#### Plotting the Alpha Shape over the Data Points

Plate Carree projection:

```python
import matplotlib.pyplot as plt

ax = plt.axes(projection=ccrs.PlateCarree())
ax.scatter([p.x for p in gdf_proj['geometry']],
           [p.y for p in gdf_proj['geometry']],
           transform=ccrs.AlbersEqualArea())
ax.add_geometries(alpha_shape['geometry'], crs=ccrs.AlbersEqualArea(), alpha=.2)
plt.show()
```

Robinson projection:

```python
ax = plt.axes(projection=ccrs.Robinson())
ax.scatter([p.x for p in gdf_proj['geometry']],
           [p.y for p in gdf_proj['geometry']],
           transform=ccrs.AlbersEqualArea())
ax.add_geometries(alpha_shape['geometry'], crs=ccrs.AlbersEqualArea(), alpha=.2)
plt.show()
```

### St. Sulpice Point Cloud Data

The following data can be obtained from the Lib E57 example data set found at the link below. To reduce the amount of data included in the alphashape toolbox repository, only a subset of the point data was converted to a shapefile format and all data except point locations were dropped. http://www.libe57.org/data.html

```python
import os
import geopandas
from alphashape import alphashape

data = os.path.join(os.getcwd(), 'data', 'Trimble_StSulpice-Cloud-50mm.shp')
gdf = geopandas.read_file(data)
alphashape([point.coords[0] for point in gdf['geometry'][0]], 0.7).show()
```

## Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

## History

- 1.3.1 (2021-04-16): Small bug fixes. Documentation cleanup.
- 1.3.0 (2021-04-02): Support for generating alpha shapes for 3 or more dimensional input data. GeoJSON support in the command-line interface.
- 1.2.1 (2021-03-13): Adding in support for Python 3.6 and 3.9.
- 1.2.0 (2021-02-25): Updated dependencies for geopandas notebook examples. Updated source information for the Alaska Airports example data set. Dropping support for Python 3.6.
- 1.1.0 (2020-08-19): Updated dependency version numbers. Including optional bounds for the alpha parameter solver.
- 1.0.1 (2019-05-06): Added gallery plot for the optimized alpha function. Documentation cleanup.
- 1.0.0 (2019-05-06): #1 Update features in README.md. #2 Create application utilizing the alphashape toolbox.
- 0.1.10 (2019-05-05): Correcting formatting of the PyPI long description.
- 0.1.9 (2019-05-05): #7 Include GeoPandas integration.
- 0.1.8 (2019-05-05): #8 Include capability to optimize the alpha parameter.
- 0.1.7 (2019-04-26): Complete code coverage of existing capabilities.
- 0.1.6 (2019-04-24): #6 Include Jupyter notebook in examples.
- 0.1.5 (2019-04-24): #5 Create an example gallery in the documentation.
- 0.1.4 (2019-04-24): Bug fixes.
- 0.1.3 (2019-04-24): Bug fixes.
- 0.1.2 (2019-04-24): Bug fixes.
- 0.1.1 (2019-04-24): Bug fixes.
- 0.1.0 (2019-04-23): First release on PyPI.
|
alpha-shapes
|
# Alpha Shapes

A Python package for reconstructing the shape of a 2D point cloud on the plane.

## Introduction

Given a finite set of points (or point cloud) in the Euclidean plane, alpha shapes are members of a family of closed polygons on the 2D plane associated with the shape of this point cloud. Each alpha shape is associated with a single non-negative parameter α.

Intuitively, an alpha shape can be conceptualized as follows. Imagine carving out the plane using a cookie scoop of radius 1/α, without removing any of the points in the point cloud. The shape that remains is the shape of the point cloud. If we replace the arc-like edges, due to the circular rim of the scoop, with straight segments, we are left with the alpha shape of parameter α.

## Installation

```bash
pip install alpha_shapes
```

## Usage

```python
import matplotlib.pyplot as plt
from alpha_shapes import Alpha_Shaper, plot_alpha_shape
```

Define a set of points. Care must be taken to avoid duplicate points:

```python
points = [(0., 0.), (0., 1.), (1., 1.1), (1., 0.), (0.25, 0.15),
          (0.65, 0.45), (0.75, 0.75), (0.5, 0.5), (0.5, 0.25),
          (0.5, 0.75), (0.25, 0.5), (0.75, 0.25), (0., 2.),
          (0., 2.1), (1., 2.1), (0.5, 2.5), (-0.5, 1.5),
          (-0.25, 1.5), (-0.25, 1.25), (0, 1.25), (1.5, 1.5),
          (1.25, 1.5), (1.25, 1.25), (1, 1.25), (1., 2.),
          (0.25, 2.15), (0.65, 2.45), (0.75, 2.75), (0.5, 2.25),
          (0.5, 2.75), (0.25, 2.5), (0.75, 2.25)]
```

Create the alpha shaper:

```python
shaper = Alpha_Shaper(points)
```

For the alpha shape to be calculated, the user must choose a value for the alpha parameter. Here, let us set alpha to 5.3:

```python
alpha = 5.3
alpha_shape = shaper.get_shape(alpha=alpha)
```

Visualize the result:

```python
fig, (ax0, ax1) = plt.subplots(1, 2)
ax0.scatter(*zip(*points))
ax0.set_title('data')
ax1.scatter(*zip(*points))
plot_alpha_shape(ax1, alpha_shape)
ax1.set_title(f"$\\alpha={alpha:.3}$")
for ax in (ax0, ax1):
    ax.set_aspect('equal')
```

Good results depend on a successful choice for the value of alpha. If, for example, we choose a smaller value, e.g. alpha = 3.7:

```python
alpha = 3.7
alpha_shape = shaper.get_shape(alpha=alpha)
```

we find that the hole is no longer there. On the other hand, for larger alpha values, e.g. alpha = 5.6:

```python
alpha = 5.6
alpha_shape = shaper.get_shape(alpha=alpha)
```

we find that maybe we have cut out too much: the point on the bottom right is no longer included in the shape.

## Features

### Optimization

A satisfactory calculation of the alpha shape requires a successful guess of the alpha parameter. While trial and error might work well in some cases, users can let the Alpha_Shaper choose a value for them. That is what the optimize method is about. It calculates the largest possible value for alpha, so that no points from the point cloud are left out:

```python
alpha_opt, alpha_shape = shaper.optimize()
print(alpha_opt)
# 5.391419185032161
```

The optimize method runs efficiently for relatively large point clouds. Here we calculate the optimal alpha shape of an ensemble of 1000 random points uniformly distributed on the unit square:

```python
from time import time
import numpy as np

np.random.seed(42)  # for reproducibility

# Define a set of random points
points = np.random.random((1000, 2))

# Prepare the shaper
alpha_shaper = Alpha_Shaper(points)

# Estimate the optimal alpha value and calculate the corresponding shape
ts = time()
alpha_opt, alpha_shape = alpha_shaper.optimize()
te = time()
print(f'optimization took: {te-ts:.2} sec')
# optimization took: 0.081 sec
```

### Use as a triangulation

The Alpha_Shaper class implements the interface of matplotlib.tri.Triangulation. This means that it will work with algorithms that expect a triangulation as input (e.g. for contour plotting or interpolation):

```python
# Define a set of points
np.random.seed(42)  # for reproducibility
points = np.random.random((1000, 2))
x = points[:, 0]
y = points[:, 1]
z = x**2 * np.cos(5*x*y - 8*x + 9*y) + y**2 * np.sin(5*x*y - 8*x + 9*y)

# If the characteristic scale along each axis varies significantly,
# it may make sense to turn on the `normalize` option.
shaper = Alpha_Shaper(points, normalize=True)
alpha_opt, alpha_shape_scaled = shaper.optimize()

fig, ax = plt.subplots()
ax.tricontourf(shaper, z)
ax.triplot(shaper)
ax.plot(x, y, ".k", markersize=2)
ax.set_aspect('equal')
```

### Normalization

Before calculating the alpha shape, Alpha_Shaper by default normalizes the input points so that they are distributed on the unit square. When there is a scale separation along the x and y directions, deactivating this feature may yield surprising results:

```python
# Define a set of points
points = [(0.0, 2.1), (-0.25, 1.5), (0.25, 0.5), (-0.25, 1.25), (0.75, 2.75),
          (0.75, 2.25), (0.0, 2.0), (1.0, 0.0), (0.25, 0.15), (1.25, 1.5),
          (1.25, 1.25), (1.0, 2.1), (0.65, 2.45), (0.25, 2.5), (0.0, 1.0),
          (0.5, 0.5), (0.5, 0.25), (0.5, 0.75), (0, 1.25), (1.5, 1.5),
          (1.0, 2.0), (0.25, 2.15), (1.0, 1.1), (0.75, 0.75), (0.75, 0.25),
          (0.0, 0.0), (-0.5, 1.5), (1, 1.25), (0.5, 2.5), (0.5, 2.25),
          (0.5, 2.75), (0.65, 0.45)]

# Scale the points along the x-dimension
x_scale = 1e-3
points = np.array(points)
points[:, 0] *= x_scale

# Create the alpha shape without accounting for the x and y scale separation
unnormalized_shaper = Alpha_Shaper(points, normalize=False)
_, alpha_shape_unscaled = unnormalized_shaper.optimize()

# If the characteristic scale along each axis varies significantly,
# it may make sense to turn on the `normalize` option.
shaper = Alpha_Shaper(points, normalize=True)
alpha_opt, alpha_shape_scaled = shaper.optimize()

# Compare the alpha shapes calculated with and without scaling.
fig, (ax0, ax1, ax2) = plt.subplots(1, 3, sharey=True, sharex=True,
                                    constrained_layout=True)
ax0.scatter(*zip(*points))
ax0.set_title("data")
ax1.scatter(*zip(*points))
ax2.scatter(*zip(*points))
plot_alpha_shape(ax1, alpha_shape_scaled)
ax1.set_title("with normalization")
ax2.set_title("without normalization")
plot_alpha_shape(ax2, alpha_shape_unscaled)
for ax in (ax1, ax2):
    ax.set_axis_off()
for ax in (ax0, ax1, ax2):
    ax.set_aspect(x_scale)
```

## Inspiration

This library is inspired by the alphashape library.
|
alpha-shifter-cypher
|
No description available on PyPI.
|
alphasign
|
Implementation of the Alpha Sign Communications Protocol, which is used by many commercial LED signs, including the Betabrite.
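For context, a minimal usage sketch based on the library's documented example follows; the serial device path is an assumption and depends on how the sign is attached.

```python
import alphasign

# Connect to a sign over a serial link (device path is an assumption).
sign = alphasign.interfaces.local.Serial(device="/dev/ttyUSB0")
sign.connect()
sign.clear_memory()

# Create a text object, allocate memory for it on the sign, and display it.
msg = alphasign.Text("Hello, Betabrite!", label="A", mode=alphasign.modes.HOLD)
sign.allocate((msg,))
sign.set_run_sequence((msg,))
sign.write(msg)
```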
|
alphasms-client
|
UNKNOWN
|
alphaspace2
|
AlphaSpace2 is a surface topographical mapping tool. Based on the algorithm of the original AlphaSpace published
[here](http://pubs.acs.org/doi/abs/10.1021/acs.jcim.5b00103),
alphaspace2 is a rewritten implementation with multiple new features
added for a friendlier user interface and better performance.
|
alphastats
|
Documentation · Streamlit WebApp

An open-source Python package for downstream data analysis of mass spectrometry-based proteomics from the Mann Group at the University of Copenhagen.

Citation · Installation · Troubleshooting · License · How to contribute · Changelog

Citation
Publication: AlphaPeptStats: an open-source Python package for automated and scalable statistical analysis of mass spectrometry-based proteomics
Citation: Krismer, E., Bludau, I., Strauss M. & Mann M. (2023). AlphaPeptStats: an open-source Python package for automated and scalable statistical analysis of mass spectrometry-based proteomics. Bioinformatics https://doi.org/10.1093/bioinformatics/btad461

Installation
AlphaPeptStats can be used as a Python library (pip installation), or as a Graphical User Interface (either pip installation or one-click installer). Furthermore, we provide a Docker image for the GUI.

Pip Installation
AlphaStats can be installed in an existing Python 3.8/3.9/3.10 environment with a single bash command: `pip install alphastats`
In case you want to use the Graphical User Interface, use the following command in the command line: `alphastats gui`
AlphaStats can be imported as a Python package into any Python script or notebook with the command `import alphastats`.
A briefJupyter notebook tutorialon how to use the API is also present in thenbs folder.One Click InstallerOne click Installer for MacOS, Windows and Linux can be foundhere.Docker ImageWe provide two Dockerfiles, one for the library and one for the Graphical User Interface.
The Image can be pulled from Dockerhubdockerpullelenakrismer/alphapeptstats_streamlitGUITroubleshootingIn case of issues, check out the following:Issues: Try a few different search terms to find out if a similar problem has been encountered beforeLicenseAlphaStats was developed by theMann Group at the University of Copenhagenand is freely available with anApache License. External Python packages (available in therequirementsfolder) have their own licenses, which can be consulted on their respective websites.How to contributeIf you like this software, you can give us astarto boost our visibility! All direct contributions are also welcome. Feel free to post a newissueor clone the repository and create apull requestwith a new branch. For an even more interactive participation, check out thediscussionsand thethe Contributors License Agreement.ChangelogSee theHISTORY.mdfor a full overview of the changes made in each version.FAQHow can I resolve the Microsoft visual error message when installing: error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools"?Please, find a description on how to update required toolshere.How to resolve ERROR:: Could not find a local HDF5 installation. on Mac M1?Before installing AlphaPeptStats you might need to install pytables first:conda install -c anaconda pytables
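To sketch the library route end-to-end: the snippet below is an illustrative, unverified outline assuming the MaxQuant loader route described in the package documentation; the file paths and the sample column name are placeholders, not shipped defaults.

```python
import alphastats

# Placeholder paths; substitute your own MaxQuant output and metadata table.
loader = alphastats.MaxQuantLoader(file="proteinGroups.txt")
dataset = alphastats.DataSet(
    loader=loader,
    metadata_path="metadata.xlsx",
    sample_column="sample",  # assumed name of the sample column in the metadata
)
dataset.preprocess()  # run the default preprocessing pipeline
```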
|
alphatims
|
AlphaTimsAlphaTims is an open-source Python package that provides fast accession and visualization of unprocessed LC-TIMS-Q-TOF data fromBruker’s timsTOF Proinstruments. It indexes the data such that it can easily be sliced along all five dimensions: LC, TIMS, QUADRUPOLE, TOF and DETECTOR. It was developed by theMann Labs at the Max Planck Institute of Biochemistryas a modular tool of theAlphaPept ecosystem. To enable all hyperlinks in this document, please view it atGitHub.AlphaTimsAboutLicenseInstallationOne-click GUIPip installerDeveloper installerInstallation issuesTest dataTest sampleLCDDADIAUsageGUICLIPython and jupyter notebooksOther toolsPerformanceSpeedRAMTroubleshootingHow it worksBruker raw dataTimsTOF objects in PythonSlicing TimsTOF objectsFuture perspectivesCiting AlphaTimsHow to contributeChangelog1.0.00.3.20.3.10.3.00.2.80.2.7AboutHigh-resolution quadrupole time-of-flight (Q-TOF) tandem mass spectrometry can be coupled to several other analytical techniques such as liquid chromatography (LC) and trapped ion mobility spectrometry (TIMS). LC-TIMS-Q-TOF has gained considerable interest since the introduction of theParallel Accumulation–Serial Fragmentation (PASEF)method in both data-dependent (DDA) and data-independent acquisition (DIA). With this setup, ion intensity values are acquired as a function of the chromatographic retention time, ion mobility, quadrupole mass to charge and TOF mass to charge. As these five-dimensional data points are detected at GHz rates, datasets often contain billions of data points which makes them impractical and slow to access. Raw data are therefore frequently binned for faster data analysis or visualization. In contrast, AlphaTims is a Python package that provides fast accession and visualization of unprocessed raw data. By recognizing that all measurements are ultimately arrival times linked to intensity values, it constructs an efficient set of indices such that raw data can be interpreted as a sparse five-dimensional matrix. On a modern laptop, this indexing takes less than half a minute for raw datasets of more than two billion datapoints. Following this step, interactive visualization of the same dataset can also be done in milliseconds. AlphaTims is freely available, open-source and available on all major Operating Systems. It can be used with a graphical user interface (GUI), a command-line interface (CLI) or as a regular Python package.LicenseAlphaTims was developed by theMann Labs at the Max Planck Institute of Biochemistryand is freely available with anApache License. Since AlphaTims uses Bruker libraries (available in thealphatims/extfolder) additionalthird-party licensesare applicable. External Python packages (available in therequirementsfolder) have their own licenses, which can be consulted on their respective websites.InstallationAlphaTims can be installed and used on all major operating systems (Windows, macOS and Linux).
There are four different types of installation possible:

One-click GUI installer: Choose this installation if you only want the GUI and/or keep things as simple as possible.
Pip installer: Choose this installation if you want to use AlphaTims as a Python package in an existing Python 3.8 environment (e.g. a Jupyter notebook). If needed, the GUI and CLI can be installed with pip as well.
Developer installer: Choose this installation if you are familiar with CLI tools, conda and Python. This installation allows access to all available features of AlphaTims and even allows you to modify its source code directly. Generally, the developer version of AlphaTims outperforms the precompiled versions, which makes this the installation of choice for high-throughput experiments.
Docker: Use this installation if you want to use a container-based workflow. This is useful to preserve a clean environment or when running multiple tools that might have conflicting dependencies.

IMPORTANT: While AlphaTims is mostly platform independent, some calibration functions require Bruker libraries which are only available on Windows and Linux.

One-click GUI
The GUI of AlphaTims is a completely stand-alone tool that requires no knowledge of Python or CLI tools. Click on one of the links below to download the latest release for: Windows, macOS or Linux.
IMPORTANT: Please refer to the GUI manual for detailed instructions on the installation, troubleshooting and usage of the stand-alone AlphaTims GUI.
Older releases remain available on the release page, but no backwards compatibility is guaranteed.

Pip
AlphaTims can be installed in an existing Python 3.8 environment with a single bash command. This bash command can also be run directly from within a Jupyter notebook by prepending it with a `!`. The lightweight version of AlphaTims that purely focuses on data accession (no plotting without additional packages) can be installed with: `pip install alphatims`
Installing AlphaTims like this avoids conflicts when integrating it in other tools, as this does not enforce strict versioning of dependencies. However, if new versions of dependencies are released, they are not guaranteed to be fully compatible with AlphaTims. While this should only occur in rare cases where dependencies are not backwards compatible, you can always force AlphaTims to use dependency versions which are known to be compatible with: `pip install "alphatims[stable]"`
NOTE: You might need to run `pip install pip==21.0` before installing AlphaTims like this. Also note the double quotes ".
Alternatively, some basic plotting functions can be installed with the following command: `pip install "alphatims[plotting]"`
While the above command does allow usage of the full GUI, there are some known compatibility issues with newer versions of bokeh. As such, it is generally advised not to use loose plotting dependencies and to force a stable installation with: `pip install "alphatims[plotting-stable]"`
When older samples need to be analyzed, it might be essential to install the legacy version as well (see also the troubleshooting section): `pip install "alphatims[legacy]"`
When a new version of AlphaTims becomes available, the old version can easily be upgraded by running e.g. the command again with an additional `--upgrade` flag: `pip install "alphatims[plotting,legacy,stable]" --upgrade`
The following extra options are available: stable, plotting, plotting-stable, legacy, legacy-stable, development, development-stable.
NOTE: Multiple dependency packs can be installed by comma-separation.
Note however that this only works without spaces!DeveloperAlphaTims can also be installed in editable (i.e. developer) mode with a fewbashcommands. This allows to fully customize the software and even modify the source code to your specific needs. When an editable Python package is installed, its source code is stored in a transparent location of your choice. While optional, it is advised to first (create and) navigate to e.g. a general software folder:mkdir~/folder/where/to/install/softwarecd~/folder/where/to/install/softwareThe following commands assume you do not perform any additionalcdcommands anymore.Next, download the AlphaTims repository from GitHub either directly or with agitcommand. This creates a new AlphaTims subfolder in your current directory.gitclonehttps://github.com/MannLabs/alphatims.gitFor any Python package, it is highly recommended to use aconda virtual environment. The best way to install an editable version of AlphaTims is to use AlphaTims' pre-built conda development environment (note that the--forceflag overwrites an already existing AlphaTims environment):condaenvcreate--force--namealphatims--filealphatims/misc/conda_development_environment.yaml
condaactivatealphatimsAlternatively, a new conda environment can manually be created or AlphaTims can be installed in an already existing environment.Note that dependancy conflicts can occur with already existing packages in the latter case! Once a conda environment is activated, AlphaTims and all itsdependanciesneed to be installed. To take advantage of all features and allow development (with the-eflag), this is best done by installing both theplotting dependenciesanddevelopment dependenciesinstead of only thecore dependencies:condacreate-nalphatimspython=3.8-y
condaactivatealphatims
pipinstall-e"./alphatims[plotting-stable,development]"By using the editable flag-e, all modifications to the AlphaTimssource code folderare directly reflected when running AlphaTims. Note that the AlphaTims folder cannot be moved and/or renamed if an editable version is installed.The following steps are optional, but make working with AlphaTims slightly more convenient:To avoid callingconda activate alphatimsandconda deactivateevery time AlphaTims is used, the binary execution (which still reflects all modifications to the source code) can be added as an alias. On linux and MacOS, this can be done with e.g.:condaactivatealphatimsalphatims_bin="$(whichalphatims)"echo"alias alphatims='"${alphatims_bin}"'">>~/.bashrc
condadeactivateWhenzshis the default terminal instead ofbash, replace~/.bashrcwith~/.zshrc. On Windows, the commandwhere alphatimscan be used to find the location of the binary executable. This path can then be (permanently) added to Windows' path variable.When using Jupyter notebooks and multiple conda environments direcly from the terminal, it is recommended toconda install nb_conda_kernelsin the conda base environment. Hereafter, running ajupyter notebookfrom the conda base environment should have apython [conda env: alphatims]kernel available, in addition to all other conda kernels in which the commandconda install ipykernelwas run.Docker(WIP)dockerpullghcr://MannLabs/alphatims:latestInstallation issuesSee the generaltroubleshootingsection.Test dataAlphaTims is compatible with bothddaPASEFanddiaPASEF.Test sampleA test sample of human cervical cancer cells (HeLa, S3, ATCC) is provided for AlphaTims. These cells were cultured in Dulbecco's modified Eagle's medium (all Life Technologies Ltd., UK). Subsequently, the cells were collected, washed, flash-frozen, and stored at -80 °C.
Following the previously publishedin-StageTip protocol, cell lysis, reduction, and alkylation with chloroacetamide were carried out simultaneously in a lysis buffer (PreOmics, Germany). The resultant dried peptides were reconstituted in water comprising 2 vol% acetonitrile and 0.1% vol% trifluoroacetic acid, yielding a 200 ng/µL solution. This solution was further diluted with water containing 0.1% vol% formic acid. The manufacturer's instructions were followed to load approximately 200ng peptides onto Evotips (Evosep, Denmark).LCSingle-run LC-MS analysis was executed via anEvosep One LC system (Evosep). This was coupled online with a hybridTIMS quadrupole TOF mass spectrometer (Bruker timsTOF Pro, Germany). A silica emitter (Bruker) was placed inside a nano-electrospray ion source (Captive spray source, Bruker) and connected to an 8 cm x 150 µm reverse phase column to perform LC. The column was packed with 1.5 µm C18-beads (Pepsep, Denmark). Mobile phases were water and acetonitrile, buffered with 0.1% formic acid. The samples were separated with a predefined 60 samples per day method (Evosep).DDAA ddaPASEF dataset is available fordownload from the release page. Each topN acquisition cycle consisted of 10 PASEF MS/MS scans, and the accumulation and ramp times were set to 100 ms. Single-charged precursors were excluded using a polygon filter in the m/z-ion mobility plane. Furthermore, all precursors, which reached the target value of 20000, were excluded for 0.4 min from the acquisition. Precursors were isolated with a quadrupole window of 2 Th for m/z <700 and 3 Th for m/z >700.DIAThe same sample was acquired with diaPASEF and is also available fordownload from the release page. The "high-speed" method (mass range: m/z 400 to 1000, 1/K0: 0.6 – 1.6 Vs cm- 2, diaPASEF windows: 8 x 25 Th) was used, as described inMeier et al.UsageThere are three ways to use AlphaTims:GUI:This allows to interactively browse, visualize and export the data.CLI:This allows to incorporate AlphaTims in automated workflows.Python:This allows to access data and explore it interactively with custom code.NOTE: The first time you use a fresh installation of AlphaTims, it is often quite slow because some functions might still need compilation on your local operating system and architecture. Subsequent use should be a lot faster.GUIPlease refer to theGUI manualfor detailed instructions on the installation, troubleshooting and usage of the stand-alone AlphaTims GUI.If the GUI was not installed through a one-click GUI installer, it can be activate with the followingbashcommand:alphatimsguiNote that this needs to be prepended with a!when you want to run this from within a Jupyter notebook. When the command is run directly from the command-line, make sure you use the right environment (activate it with e.g.conda activate alphatimsor set an alias to the binary executable).CLIThe CLI can be run with the following command (after activating thecondaenvironment withconda activate alphatimsor if an alias was set to the alphatims executable):alphatims-hIt is possible to get help about each function and their (required) parameters by using the-hflag. For instance, the commandalphatims export hdf -hwill produce the following output:************************
* AlphaTims 0.0.210310 *
************************
Usage: alphatims export hdf [OPTIONS] BRUKER_D_FOLDER
Export BRUKER_D_FOLDER as hdf file.
Options:
--disable_overwrite Disable overwriting of existing files.
--enable_compression Enable compression of hdf files. If set, this
roughly halves files sizes (on-disk), at the
cost of taking 2-10 longer accession times.
-o, --output_folder DIRECTORY A directory for all output (blank means
`input_file` root is used).
-l, --log_file PATH Save all log data to a file (blank means
'log_[date].txt' with date format
yymmddhhmmss in 'log' folder of AlphaTims
directory). [default: ]
-t, --threads INTEGER The number of threads to use (0 means all,
negative means how many threads to leave
available). [default: -1]
-s, --disable_log_stream Disable streaming of log data.
-p, --parameter_file FILE A .json file with (non-required) parameters
(blank means default parameters are used).
NOTE: Parameters defined herein override all
default and given CLI parameters.
-e, --export_parameters FILE Save currently selected parameters to a
parameter file.
-h, --help Show this message and exit.For this particular command, the lineUsage: alphatims export hdf [OPTIONS] BRUKER_D_FOLDERshows that you always need to provide a path to aBRUKER_D_FOLDERand that all other options are optional (indicated by the brackets in[OPTIONS]). Each option can be called with a double dash--followed by a long name, while common options also can be called with a single dash-followed by their short name. It is indicated what type of parameter is expected, e.g. aDIRECTORYfor--output_folderor nothing forenable/disableflags. Defaults are also shown and all parameters will be saved in a log file. Alternatively, all used parameters can be exported with the--export_parametersoption and the non-required ones can be reused with the--parameter_file.IMPORTANT: Please refer to theCLI manualfor detailed instructions on the usage and troubleshooting of the stand-alone AlphaTims CLI.Python and Jupyter notebooksAlphaTims can be imported as a Python package into any Python script or notebook with the commandimport alphatims. Documentation for all functions is available in theRead the Docs API.A briefJupyter notebook tutorialon how to use the API is also present in thenbs folder. When running locally it provides interactive plots, which are not rendered on GitHub. Instead, they are available as individual html pages in thenbs folder.Other toolsInitial exploration of Bruker TimsTOF data files can be done by opening the .tdf file in the .d folder with anSQL browser.HDF filescan be explored withHDF CompassorHDFView.Annotating Bruker TimsTOF data files can be done withAlphaPeptVisualization of identified Bruker TimsTOF data files can be done withAlphaVizPerformancePerformance can be measured in function ofspeedorRAMusage.SpeedTypical time performance statistics on data in-/output and slicing of standardHeLa datasetsare available in theperformance notebook. All result can be summarized as follows:RAMOn average, RAM usage is twice the size of a raw Bruker .d folder. Since most .d folders have file sizes of less than 10 Gb, a modern computer with 32 Gb RAM suffices to explore most datasets with ease.TroubleshootingCommon installation/usage issues include:Always make sure you have activated the AlphaTims environment withconda activate alphatims.If this fails, make sure you have installedcondaand have created an AlphaTims environment withconda create -n alphatims python=3.8.Nogitcommand. Make suregitis installed. In a notebook!conda install git -ymight work.Wrong Python version.AlphaTims is only guaranteed to be compatible with Python 3.8. You can check if you have the right version with the commandpython --version(or!python --versionin a notebook). If not, reinstall the AlphaTims environment withconda create -n alphatims python=3.8.Dependancy conflicts/issues.Pip changed their dependancy resolver withpip version 20.3. Downgrading or upgrading pip to version 20.2 or 21.0 withpip install pip==20.2orpip install pip==21.0(before runningpip install alphatims) could solve dependancy conflicts.AlphaTims is not found.Make sure you use the right folder. Local folders are best called by prefixing them with./(e.g.pip install "./alphatims"). On some systems, installation specifically requires (not) to use single quotes'around the AlphaTims folder, e.g.pip install "./alphatims[plotting-stable,development]".Modifications to the AlphaTims source code are not reflected.Make sure you use the-eflag when usingpip install -e alphatims.Numpy does not work properly.On Windows,numpy==1.19.4has some issues. 
After installing AlphaTims, downgrade NumPy withpip install numpy==1.19.3.Exporting PNG images with the CLI or Python package might not work out-of-the-box. If a conda environment is used, this can be fixed by runningconda install -c conda-forge firefox geckodriverin the AlphaTims conda environment. Alternatively, a file can be exported as html and opened in a browser. From the browser there is asave as pngbutton available.GUI does not open.In some cases this can be simply because of using an incompatible (default) browser. AlphaTims has been tested with Google Chrome and Mozilla Firefox. Windows IE and Windows Edge compatibility is not guaranteed.When older Bruker files need to be processed as well,thelegacy dependenciesare also needed. However, note that this requiresMicrosoft Visual C++to be manually installed (on Windows machines) prior to AlphaTims installation! To include the legacy dependencies, install AlphaTims withpip install "alphatims[legacy]"orpip install "alphatims[legacy]" --upgradeif already pre-installed.When installed throughpip, the GUI cannot be started.Make sure you install AlphaTims withpip install "alphatims[plotting-stable]"to include the GUI with stable dependancies. If this was done and it still fails to run the GUI, a possible fix might be to runpip install panel==0.10.3after AlphaTims was installed.Some external libraries are missing.On some OS, there might be libraries missing. As an exmaple, the following error message might pop up:OSError: libgomp.so.1: cannot open shared object file: No such file or directory. This can be solved by installing those manually, e.g. on Linux:apt-get install libgomp1.How it worksThe basic workflow of AlphaTims looks as follows:Read data from aBruker.dfolder.Convert data to aTimsTOF object in Pythonand optionally store them as a persistentHDF5 file.Use Python'sslicing mechanismto retrieve data from this object e.g. for visualisation.Also checkout:Thepaperfor a complete overview.ThepresentationatISCBfor a brief video.Bruker raw dataBruker stores TimsTOF raw data in a.dfolder. The two main files in this folder areanalysis.tdfandanalysis.tdf_bin.Theanalysis.tdffile is an SQL database, in which all metadata are stored together with summarised information. This includes theFramestable, wherein information about each individual TIMS cycle is summarised including the retention time, the number of scans (i.e. a single TOF push is related to a single ion mobility value), the summed intensity and the total number of ions that have hit the detector. More details about individual scans of the frames are available in thePasefFrameMSMSInfo(for PASEF acquisition) orDiaFrameMsMsWindows(for diaPASEF acquisition) tables. This includes quadrupole and collision settings of the frame/scan combinations.Theanalysis.tdf_binfile is a binary file that contains the number of detected ions per individual scan, all detector arrival times and their intensity values. These values are grouped and compressed per frame (i.e. TIMS cycle), thereby allowing fast appendage during online acquisition.TimsTOF objects in PythonAlphaTims first reads relevant metadata from theanalysis.tdfSQL database and creates a Python object of thebruker.TimsTOFclass. Next, AlphaTims reads the summary information from theFramestable and creates three empty arrays:An emptytof_indicesarray, in which all TOF arrival times of each individual detector hit will be stored. 
Its size is determined by summing the number of detector hits for all frames.An emptyintensitiesarray of the same size, in which all intensity values of each individual detector hit will be stored.An emptytof_indptrarray, that will store the number of detector hits per scan. Its size is equal to(frame_max_index + 1) * scans_max_index + 1. It includes one additional frame to compensate for the fact that Bruker arrays are 1-indexed, while Python uses 0-indexing. The final+1is because this array will be converted to an offset array, similar to the index pointer array of acompressed sparse row matrix. Typical values arescans_max_index = 1000andframe_max_index = gradient_length_in_seconds * 10, resulting in approximatelylen(tof_indptr) = 10000 * gradient_length_in_seconds.After reading thePasefFrameMSMSInfoorDiaFrameMsMsWindowstable from theanalysis.tdfSQL database, four arrays are created:Aquad_indptrarray that indexes thetof_indptrarray. Each element points to an index of thetof_indptrwhere the voltage on the quadrupole and collision cell is adjusted. For PASEF acquisitions, this is typically 20 times per MSMS frame (turning on and off a value for 10 precursor selections) and once per change from an MS (precursor) frame to an MSMS (fragment) frame. For diaPASEF, this is typically twice to 10 times per frame and with a repetitive pattern over the frame cycle. This results in an array of approximatelylen(quad_indptr) = 100 * gradient_length_in_seconds. As with thetof_indptrarray, this array is converted to an offset array with size+1.Aquad_low_valuesarray oflen(quad_indptr) - 1. This array stores the lower m/z boundary that is selected with the quadrupole. For precursors without quadrupole selection, this value is set to -1.Aquad_high_valuesarray, similar toquad_low_values.Aprecursor_indicesarray oflen(quad_indptr) - 1. For PASEF this array stores the index of the selected precursor. For diaPASEF, this array stores theWindowGroupof the fragment frame. A value of 0 indicates an MS1 ion (i.e. precursor) without quadrupole selection.After processing this summarising information from theanalysis.tdfSQL database, the actual raw data from theanalysis.tdf_binbinary file is read and stored in the emptytof_indices,intensitiesandtof_indptrarrays.Finally, three arrays are defined that allow quick translation offrame_,scan_andtof_indicestort_values,mobility_valuesandmz_valuesarrays.Thert_valuesarray is read read directly from theFramestable inanalysis.tdfand has a length equal toframe_max_index + 1. Note that an empty zeroth frame withrt = 0is created to make Python's 0-indexing compatible with Bruker's 1-indexing.Themobility_valuesarray is defined by using the functiontims_scannum_to_oneoverk0fromtimsdata.dllon the first frame and typically has a length of1000.Similarly, themz_valuesarray is defined by using the functiontims_index_to_mzfromtimsdata.dllon the first frame. Typically this has a length of400000.All these arrays can be loaded into memory, taking up roughly twice as much RAM as the.dfolder on disk. This increase in RAM memory is mainly due to the compression used in theanalysis.tdf_binfile. The HDF5 file can also be compressed so that its size is roughly halved and thereby has the same size as the Bruker.dfolder, but (de)compression reduces accession times by 3-6 fold.Slicing TimsTOF objectsOnce a Python TimsTOF object is available, it can be loaded into memory for ultrafast accession. 
Accession of thedataobject is done by simple Python slicing such as e.g.selected_ion_indices = data[frame_selection, scan_selection, quad_selection, tof_selection]. This slicing returns apd.DataFramefor subsequent analysis. The columns of this dataframe contain all information for all selected ions, i.e.frame,scan,precursorandtofindices andrt,mobility,quad_low,quad_high,mzandintensityvalues. See thetutorial jupyter notebookfor usage examples.Future perspectivesDetection of:precursor and fragment ionsisotopic envelopes (i.e. features)fragment clusters (i.e. pseudo MSMS spectra)Citing AlphaTimsCheck out thepaper.How to contributeIf you like AlphaTims you can give us astarto boost our visibility! All direct contributions are also welcome. Feel free to post a newissueor clone the repository and create apull requestwith a new branch. For an even more interactive participation, check out thediscussions.
For more information seethe Contributors License Agreement.ChangelogThe following changes were introduced in the following versions of AlphaTims. Download the latest version in theinstallation section.1.0.0FEAT: tempmmap for large arrays by default.0.3.2FEAT: cli/gui allow bruker data as argument.FEAT/FIX: Polarity included in frame table.FIX: utils cleanup.FIX: utils issues.FEAT: by default use -1 threads in utils.FIX: disable cla check.0.3.1FIX/FEAT: Intensity correction when ICC is used. Note that this is only for exported data, not for visualized data.FEAT: By default, hdf files are now mmapped, making them much faster to initially load and use virtual memory in favor of residual memory.0.3.0FEAT: Introduction of global mz calibration.FEAT: Introduction of dia_cycle for diaPASEF.CHORE: Verified Python 3.9 compatibility.FEAT: Included option to open Bruker raw data when starting the GUI.FEAT: Provided hash for TimsTOF objects.FEAT: Filter push indices.CHORE: included stable and loose versions for all dependancies0.2.8FIX: Ensure stable version for one click GUI.FIX: Do not require plotting dependancies for CLI export csv selection.FIX: Import of very old diaPASEF samples where the analysis.tdf file still looks like ddaPASEF.FIX: frame pointers of fragment_frame table.FEAT: Include visual report in performance notebook.FEAT: Include DIA 120 sample in performance tests.FEAT: Show performance in README.FIX: Move python-lzf dependancy (to decompress older Bruker files) to legacy requirements, as pip install on Windows requires visual c++ otherwise.DOCS: BioRxiv paper link.FEAT/FIX: RT in min column.FEAT: CLI manual.FEAT: Inclusion of more coordinates in CLI.0.2.7CHORE: Introduction of changelog.CHORE: Automated publish_and_release action to parse version numbers.FEAT/FIX: Include average precursor mz in MGF titles and set unknown precursor charges to 0.FIX: Properly resolve set_global argument ofalphatims.utils.set_threads.FIX: Set nogil option foralphatims.bruker.indptr_lookup.DOCS: GUI Manual typos.FEAT: Include buttons to download test data and citation in GUI.FEAT: Include option for progress_callback in alphatims.utils.pjit.FIX/FEAT: Older samples with TimsCompressionType 1 can now also be read. This is at limited performance.FEAT: By default use loose versioning for the base dependancies. Stable dependancy versions can be enforced withpip install "alphatims[stable]". NOTE: This option is not guaranteed to be maintained. Future AlphaTims versions might opt for an intermediate solution with semi-strict dependancy versioning.
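To make the slicing described under "Slicing TimsTOF objects" above concrete, here is a minimal sketch. The .d path is a placeholder and the slice bounds are illustrative; the convention that floats in slices select physical values (RT in seconds, 1/K0 mobility, m/z) rather than integer indices follows AlphaTims' own documentation.

```python
import alphatims.bruker

# Placeholder path to a local Bruker .d folder.
data = alphatims.bruker.TimsTOF("/path/to/sample.d")

# Slice along LC, TIMS, QUADRUPOLE and TOF; floats select physical values.
df = data[
    100.0:200.0,  # retention time window, in seconds
    0.8:1.2,      # ion mobility window, in 1/K0
    :,            # any quadrupole selection
    500.0:600.0,  # m/z window
]
print(df.head())  # one row per selected ion, as described above
```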
|
alphatools
|
No description available on PyPI.
|
alphatools-jv
|
About project
A small wrapper over smartapi to backtest basic trading strategies.
Easy installation and usageInstallationpip install alphatools_jvUsageCreating and running a strategyfromalphatools.backtesting_appimportBackTestingAppfromdatetimeimportdatetimeclassTestSmartApiApp(BackTestingApp):defon_md(self,data_row):# your strat code goes hereprint("New row found:{}".format(data_row))app=TestSmartApiApp('/Users/jaskiratsingh/projects/smart-api-creds.ini')app.set_start_date(datetime.strptime('2022-12-20 11:39:00+05:30','%Y-%m-%d%H:%M:%S%z'))app.set_end_date(datetime.strptime('2022-12-29 11:39:00+05:30','%Y-%m-%d%H:%M:%S%z'))app.set_interval('ONE_MINUTE')app.add_instrument(53825,"NFO")app.add_instrument(48756,"NFO")app.load_data()# Loads the data into a dataframeapp.get_candle_info_df()# Returns the entire simulation dataframeapp.simulate()# Starts simulation from the beginning# To place a trade, use trade api to send the orders to the pnl calculator.# Pnl calculator uses last tick prices to calculate the observed Pnlapp.trade(53825,1)# Buys 1 unit for token 53825app.trade(53825,-3)# Sells 3 units for token 53825app.get_total_pnl()# Returns pnl after all trades are madeHelper for instrumentsfromalphatools.utils.token_managerimportTokenManagertok=TokenManager()# Refer documentation for more overridesinfo=tok.get_instrument(53825)# Returns instrument infoinfo=tok.get_instrument('NIFTY23FEB23FUT')# Returns instrument info
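Returning to BackTestingApp: since all strategy logic lives in on_md and orders go through trade, a simple moving-average strategy can be sketched on top of the interface shown above. This is illustrative only; the data_row field names ('close' and 'token') are guesses, not confirmed by the package docs.

```python
from alphatools.backtesting_app import BackTestingApp

class MovingAverageApp(BackTestingApp):
    """Toy SMA strategy: buy one lot when price crosses above its 20-tick mean."""

    def __init__(self, creds_path):
        super().__init__(creds_path)
        self.closes = []
        self.in_position = False

    def on_md(self, data_row):
        # `data_row['close']` and `data_row['token']` are assumed field names.
        self.closes.append(data_row['close'])
        if len(self.closes) < 20:
            return
        sma = sum(self.closes[-20:]) / 20
        if not self.in_position and data_row['close'] > sma:
            self.trade(data_row['token'], 1)   # buy 1 unit
            self.in_position = True
        elif self.in_position and data_row['close'] < sma:
            self.trade(data_row['token'], -1)  # sell 1 unit
            self.in_position = False
```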
|
alphatrade
|
Python APIs for SAS Online Alpha Trade Web Platform

MAJOR CHANGES: NEW VERSION 1.0.0
- API endpoints are changed to match the new ones, bugs expected
- Removed check for enabled exchanges; you can now download or search symbols from MCX as well even if it is not enabled
- TOTP SECRET or TOTP can both be given as an argument while creating the AlphaTrade object (if it is 6 digits it will be considered a TOTP, else a TOTP SECRET)
- Added a new search function to search scrips, which will return JSON for found scrips; you need to process it further
- More functions to come.
- Check whether the streaming websocket is working or not
- The examples folder is removed and examples are renamed and kept in the root directory for ease of development

STEPS to work
1. Clone the repo locally - `git clone https://github.com/algo2t/alphatrade.git`
2. Create a virtualenv - `python -m pip install virtualenv`, then `python -m virtualenv venv` and activate the venv environment.
3. Install dev-requirements.txt - `python -m pip install -r dev-requirements.txt` - this is to ensure `setuptools==57.5.0` is installed. There is a bug with protlib; the target is to get rid of protlib in the future.
4. Install requirements - `python -m pip install -r requirement.txt`
5. Create the config.py file in the root of the cloned repo with login_id, password and TOTP SECRET; you can add the access_token.txt if you want to use an existing access_token.
6. Try the examples: `python zlogin_example.py`, `python zexample_sas_login.py`, `python zhistorical_data.py` and `python zstreaming_data.py`
7. Expecting issues with the streaming data !!! :P

NOTE:: This is an unofficial Python module; don't ask the SAS support team for help, use it AS-IS.

The Python APIs for communicating with the SAS Online Alpha Trade Web Platform. The Alpha Trade Python library provides an easy-to-use Python wrapper over the HTTPS APIs. The HTTP calls have been converted to methods and JSON responses are wrapped into Python-compatible objects. Websocket connections are handled automatically within the library. This work is completely based on the Python SDK / APIs for AliceBlueOnline. Thanks to krishnavelu.
Author: algo2t
Github Repository: alphatrade

Installation
This module is installed via pip: `pip install git+https://github.com/algo2t/alphatrade.git`
It can also be installed from pypi: `pip install alphatrade`
To force upgrade existing installations: `pip uninstall alphatrade`
pip --no-cache-dir install --upgrade alphatradePrerequisitesPython 3.xAlso, you need the following modules:protlibwebsocket_clientrequestspandasThe modules can also be installed usingpipExamples - Start Here - ImportantPlease clone this repository and check the examples folder to get started.CheckhereGetting started with APIOverviewThere is only one class in the whole library:AlphaTrade. When theAlphaTradeobject is created an access token from the SAS Online alpha trade server is stored in text fileaccess_token.txtin the same directory. An access token is valid for 24 hours. See the examples folder withconfig.pyfile to see how to store your credentials.
With an access token, you can instantiate an AlphaTrade object again. Ideally you only need to create an access_token once every day.REST DocumentationThe original REST API that this SDK is based on is available online.Alice Blue API REST documentationUsing the APILoggingThe whole library is equipped with python‘sloggingmodule for debugging. If more debug information is needed, enable logging using the following code.importlogginglogging.basicConfig(level=logging.DEBUG)Get an access tokenImport alphatradefromalphatradeimport*Createconfig.pyfileAlways keep credentials in a separate filelogin_id="XXXXX"password="XXXXXXXX"Totp='XXXXXXXXXXXXXXXX'try:access_token=open('access_token.txt','r').read().rstrip()exceptExceptionase:print('Exception occurred ::{}'.format(e))access_token=NoneImport the configimportconfigCreate AlphaTrade ObjectCreateAlphaTradeobject with yourlogin_id,password,TOTP/TOTP_SECRETand/oraccess_token.Useconfigobject to getlogin_id,password,TOTPandaccess_token.fromalphatradeimportAlphaTradeimportconfigimportpyotpTotp=config.Totppin=pyotp.TOTP(Totp).now()totp=f"{int(pin):06d}"iflen(pin)<=5elsepinsas=AlphaTrade(login_id=config.login_id,password=config.password,twofa=totp,access_token=config.access_token)## filename config.pylogin_id="RR24XX"password="SuperSecretPassword!!!"TOTP_SECRET='YOURTOTPSECRETEXTERNALAUTH'try:access_token=open('access_token.txt','r').read().rstrip()exceptExceptionase:print(f'Exception occurred ::{e}')access_token=NonefromalphatradeimportAlphaTradeimportconfigimportpyotpsas=AlphaTrade(login_id=config.login_id,password=config.password,twofa=config.TOTP_SECRET,access_token=config.access_token)You can run commands here to check your connectivityprint(sas.get_balance())# get balance / margin limitsprint(sas.get_profile())# get profileprint(sas.get_daywise_positions())# get daywise positionsprint(sas.get_netwise_positions())# get netwise positionsprint(sas.get_holding_positions())# get holding positionsGet master contractsGetting master contracts allow you to search for instruments by symbol name and place orders.
Master contracts are stored as an OrderedDict by token number and by symbol name. Whenever you get a trade update, order update, or quote update, the library will check if master contracts are loaded. If they are, it will attach the instrument object directly to the update. By default all master contracts of all enabled exchanges in your personal profile will be downloaded. i.e. If your profile contains the following as enabled exchanges['NSE', 'BSE', 'CDS', 'MCX', NFO']all contract notes of all exchanges will be downloaded by default. If you feel it takes too much time to download all exchange, or if you don‘t need all exchanges to be downloaded, you can specify which exchange to download contract notes while creating the AlphaTrade object.sas=AlphaTrade(login_id=config.login_id,password=config.password,twofa=totp,access_token=config.access_token,master_contracts_to_download=['NSE','BSE'])This will reduce a few milliseconds in object creation time of AlphaTrade object.Get tradable instrumentsSymbols can be retrieved in multiple ways. Once you have the master contract loaded for an exchange, you can get an instrument in many ways.Get a single instrument by it‘s name:tatasteel_nse_eq=sas.get_instrument_by_symbol('NSE','TATASTEEL')reliance_nse_eq=sas.get_instrument_by_symbol('NSE','RELIANCE')ongc_bse_eq=sas.get_instrument_by_symbol('BSE','ONGC')india_vix_nse_index=sas.get_instrument_by_symbol('NSE','India VIX')sensex_nse_index=sas.get_instrument_by_symbol('BSE','SENSEX')Get a single instrument by it‘s token number (generally useful only for BSE Equities):ongc_bse_eq=sas.get_instrument_by_token('BSE',500312)reliance_bse_eq=sas.get_instrument_by_token('BSE',500325)acc_nse_eq=sas.get_instrument_by_token('NSE',22)Get FNO instruments easily by mentioning expiry, strike & call or put.bn_fut=sas.get_instrument_for_fno(symbol='BANKNIFTY',expiry_date=datetime.date(2019,6,27),is_fut=True,strike=None,is_call=False)bn_call=sas.get_instrument_for_fno(symbol='BANKNIFTY',expiry_date=datetime.date(2019,6,27),is_fut=False,strike=30000,is_call=True)bn_put=sas.get_instrument_for_fno(symbol='BANKNIFTY',expiry_date=datetime.date(2019,6,27),is_fut=False,strike=30000,is_call=False)Search for symbolsSearch for multiple instruments by matching the name. This works case insensitive and returns all instrument which has the name in its symbol.all_sensex_scrips=sas.search_instruments('BSE','sEnSeX')print(all_sensex_scrips)The above code results multiple symbol which has ‘sensex’ in its symbol.[Instrument(exchange='BSE', token=1, symbol='SENSEX', name='SENSEX', expiry=None, lot_size=None), Instrument(exchange='BSE', token=540154, symbol='IDFSENSEXE B', name='IDFC Mutual Fund', expiry=None, lot_size=None), Instrument(exchange='BSE', token=532985, symbol='KTKSENSEX B', name='KOTAK MAHINDRA MUTUAL FUND', expiry=None, lot_size=None), Instrument(exchange='BSE', token=538683, symbol='NETFSENSEX B', name='NIPPON INDIA ETF SENSEX', expiry=None, lot_size=None), Instrument(exchange='BSE', token=535276, symbol='SBISENSEX B', name='SBI MUTUAL FUND - SBI ETF SENS', expiry=None, lot_size=None)]Search for multiple instruments by matching multiple namesmultiple_underlying=['BANKNIFTY','NIFTY','INFY','BHEL']all_scripts=sas.search_instruments('NFO',multiple_underlying)Instrument objectInstruments are represented by instrument objects. These are named-tuples that are created while getting the master contracts. They are used when placing an order and searching for an instrument. 
The structure of an instrument tuple is as follows:Instrument=namedtuple('Instrument',['exchange','token','symbol','name','expiry','lot_size'])All instruments have the fields mentioned above. Wherever a field is not applicable for an instrument (for example, equity instruments don‘t have strike prices), that value will beNoneQuote updateOnce you have master contracts loaded, you can easily subscribe to quote updates.Four types of feed data are availableYou can subscribe any one type of quote update for a given scrip. Using theLiveFeedTypeenum, you can specify what type of live feed you need.LiveFeedType.MARKET_DATALiveFeedType.COMPACTLiveFeedType.SNAPQUOTELiveFeedType.FULL_SNAPQUOTEPlease refer to the original documentationherefor more details of different types of quote update.Subscribe to a live feedsas.subscribe(sas.get_instrument_by_symbol('NSE','TATASTEEL'),LiveFeedType.MARKET_DATA)sas.subscribe(sas.get_instrument_by_symbol('BSE','RELIANCE'),LiveFeedType.COMPACT)Subscribe to multiple instruments in a single call. Give an array of instruments to be subscribed.sas.subscribe([sas.get_instrument_by_symbol('NSE','TATASTEEL'),sas.get_instrument_by_symbol('NSE','ACC')],LiveFeedType.MARKET_DATA)Note: There is a limit of 250 scrips that can be subscribed on total. Beyond this point the server may disconnect web-socket connection.Start getting live feed via socketsocket_opened=Falsedefevent_handler_quote_update(message):print(f"quote update{message}")defopen_callback():globalsocket_openedsocket_opened=Truesas.start_websocket(subscribe_callback=event_handler_quote_update,socket_open_callback=open_callback,run_in_background=True)while(socket_opened==False):passsas.subscribe(sas.get_instrument_by_symbol('NSE','ONGC'),LiveFeedType.MARKET_DATA)sleep(10)Unsubscribe to a live feedUnsubscribe to an existing live feedsas.unsubscribe(sas.get_instrument_by_symbol('NSE','TATASTEEL'),LiveFeedType.MARKET_DATA)sas.unsubscribe(sas.get_instrument_by_symbol('BSE','RELIANCE'),LiveFeedType.COMPACT)Unsubscribe to multiple instruments in a single call. Give an array of instruments to be unsubscribed.sas.unsubscribe([sas.get_instrument_by_symbol('NSE','TATASTEEL'),sas.get_instrument_by_symbol('NSE','ACC')],LiveFeedType.MARKET_DATA)Get All Subscribed Symbolssas.get_all_subscriptions()# AllMarket Status messages & Exchange messages.Subscribe to market status messagessas.subscribe_market_status_messages()Getting market status messages.print(sas.get_market_status_messages())Example result ofget_market_status_messages()[{'exchange': 'NSE', 'length_of_market_type': 6, 'market_type': b'NORMAL', 'length_of_status': 31, 'status': b'The Closing Session has closed.'}, {'exchange': 'NFO', 'length_of_market_type': 6, 'market_type': b'NORMAL', 'length_of_status': 45, 'status': b'The Normal market has closed for 22 MAY 2020.'}, {'exchange': 'CDS', 'length_of_market_type': 6, 'market_type': b'NORMAL', 'length_of_status': 45, 'status': b'The Normal market has closed for 22 MAY 2020.'}, {'exchange': 'BSE', 'length_of_market_type': 13, 'market_type': b'OTHER SESSION', 'length_of_status': 0, 'status': b''}]Note: As peralice bluedocumentationall market status messages should be having a timestamp. 
But in actual the server doesn‘t send timestamp, so the library is unable to get timestamp for now.Subscribe to exchange messagessas.subscribe_exchange_messages()Getting market status messages.print(sas.get_exchange_messages())Example result ofget_exchange_messages()[{'exchange': 'NSE', 'length': 32, 'message': b'DS : Bulk upload can be started.', 'exchange_time_stamp': 1590148595}, {'exchange': 'NFO', 'length': 200, 'message': b'MARKET WIDE LIMIT FOR VEDL IS 183919959. OPEN POSITIONS IN VEDL HAVE REACHED 84 PERCENT OF THE MARKET WIDE LIMIT. ', 'exchange_time_stamp': 1590146132}, {'exchange': 'CDS', 'length': 54, 'message': b'DS : Regular segment Bhav copy broadcast successfully.', 'exchange_time_stamp': 1590148932}, {'exchange': 'MCX', 'length': 7, 'message': b'.......', 'exchange_time_stamp': 1590196159}]Market Status messages & Exchange messages through callbackssocket_opened=Falsedefmarket_status_messages(message):print(f"market status messages{message}")defexchange_messages(message):print(f"exchange messages{message}")defopen_callback():globalsocket_openedsocket_opened=Truesas.start_websocket(market_status_messages_callback=market_status_messages,exchange_messages_callback=exchange_messages,socket_open_callback=open_callback,run_in_background=True)while(socket_opened==False):passsas.subscribe_market_status_messages()sas.subscribe_exchange_messages()sleep(10)Place an orderPlace limit, market, SL, SL-M, AMO, BO, CO ordersprint(sas.get_profile())# TransactionType.Buy, OrderType.Market, ProductType.Deliveryprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Market,product_type=ProductType.Delivery,price=0.0,trigger_price=None,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.Market, ProductType.Intradayprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%2%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Market,product_type=ProductType.Intraday,price=0.0,trigger_price=None,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.Market, ProductType.CoverOrderprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%3%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Market,product_type=ProductType.CoverOrder,price=0.0,trigger_price=7.5,# trigger_price Here the trigger_price is taken as stop loss (provide stop loss in actual amount)stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.Limit, ProductType.BracketOrder# OCO Order can't be of type marketprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%4%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Limit,product_type=ProductType.BracketOrder,price=8.0,trigger_price=None,stop_loss=6.0,square_off=10.0,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.Limit, 
ProductType.Intradayprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%5%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Limit,product_type=ProductType.Intraday,price=8.0,trigger_price=None,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.Limit, ProductType.CoverOrderprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%6%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.Limit,product_type=ProductType.CoverOrder,price=7.0,trigger_price=6.5,# trigger_price Here the trigger_price is taken as stop loss (provide stop loss in actual amount)stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))################################ TransactionType.Buy, OrderType.StopLossMarket, ProductType.Deliveryprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%7%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.StopLossMarket,product_type=ProductType.Delivery,price=0.0,trigger_price=8.0,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.StopLossMarket, ProductType.Intradayprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%8%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.StopLossMarket,product_type=ProductType.Intraday,price=0.0,trigger_price=8.0,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.StopLossMarket, ProductType.CoverOrder# CO order is of type Limit and And Market Only# TransactionType.Buy, OrderType.StopLossMarket, ProductType.BO# BO order is of type Limit and And Market Only#################################### TransactionType.Buy, OrderType.StopLossLimit, ProductType.Deliveryprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%9%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.StopLossMarket,product_type=ProductType.Delivery,price=8.0,trigger_price=8.0,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.StopLossLimit, ProductType.Intradayprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%10%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.StopLossLimit,product_type=ProductType.Intraday,price=8.0,trigger_price=8.0,stop_loss=None,square_off=None,trailing_sl=None,is_amo=False))# TransactionType.Buy, OrderType.StopLossLimit, ProductType.CoverOrder# CO order is of type Limit and And Market Only# TransactionType.Buy, OrderType.StopLossLimit, ProductType.BracketOrderprint("%%%%%%%%%%%%%%%%%%%%%%%%%%%%11%%%%%%%%%%%%%%%%%%%%%%%%%%%%%")print(sas.place_order(transaction_type=TransactionType.Buy,instrument=sas.get_instrument_by_symbol('NSE','INFY'),quantity=1,order_type=OrderType.StopLossLimit,product_type=ProductType.BracketOrder,price=8.0,trigger_price=8.0,stop_loss=1.0,square_off=1.0,trailing_sl=20,is_amo=False))Place basket orderBasket order is used to buy or sell group of securities 
simultaneously.order1={"instrument":sas.get_instrument_by_symbol('NSE','INFY'),"order_type":OrderType.Market,"quantity":1,"transaction_type":TransactionType.Buy,"product_type":ProductType.Delivery}order2={"instrument":sas.get_instrument_by_symbol('NSE','SBIN'),"order_type":OrderType.Limit,"quantity":2,"price":280.0,"transaction_type":TransactionType.Sell,"product_type":ProductType.Intraday}order=[order1,order2]print(sas.place_basket_order(orders))Cancel an ordersas.cancel_order('170713000075481')#Cancel an open orderGetting order history and trade detailsGet order history of a particular orderprint(sas.get_order_history('170713000075481'))Get order history of all orders.print(sas.get_order_history())Get trade bookprint(sas.get_trade_book())Get historical candles dataThis will provide historical data butnot for current day.This returns apandasDataFrameobject which be used withpandas_tato get various indicators values.fromdatetimeimportdatetimeprint(sas.get_historical_candles('MCX','NATURALGAS NOV FUT',datetime(2020,10,19),datetime.now(),interval=30))OutputInstrument(exchange='MCX', token=224365, symbol='NATURALGAS NOV FUT', name='', expiry=datetime.date(2020, 11, 24), lot_size=None)open high low close volumedate2020-10-19 09:00:00+05:30 238.9 239.2 238.4 239.0 3732020-10-19 09:30:00+05:30 239.0 239.0 238.4 238.6 2102020-10-19 10:00:00+05:30 238.7 238.7 238.1 238.1 2132020-10-19 10:30:00+05:30 238.0 238.4 238.0 238.1 1162020-10-19 11:00:00+05:30 238.1 238.2 238.0 238.0 69... ... ... ... ... ...2020-10-23 21:00:00+05:30 237.5 238.1 237.3 237.6 3312020-10-23 21:30:00+05:30 237.6 238.5 237.6 237.9 7542020-10-23 22:00:00+05:30 237.9 238.1 237.2 237.9 5182020-10-23 22:30:00+05:30 237.9 238.7 237.7 238.1 8972020-10-23 23:00:00+05:30 238.2 238.3 236.3 236.5 1906Better way to get historical data, first get the latest version from githubpython -m pip install git+https://github.com/algo2t/alphatrade.gitfromdatetimeimportdatetimeindia_vix_nse_index=sas.get_instrument_by_symbol('NSE','India VIX')print(sas.get_historical_candles(india_vix_nse_index.exchange,india_vix_nse_index.symbol,datetime(2020,10,19),datetime.now(),interval=30))Get intraday candles dataThis will give candles data forcurrent day only.This returns apandasDataFrameobject which be used withpandas_tato get various indicators values.print(sas.get_intraday_candles('MCX','NATURALGAS NOV FUT',interval=15))Better way to get intraday data, first get the latest version from githubpython -m pip install git+https://github.com/algo2t/alphatrade.gitfromdatetimeimportdatetimenifty_bank_nse_index=sas.get_instrument_by_symbol('NSE','Nifty Bank')print(sas.get_intraday_candles(nifty_bank_nse_index.exchange,nifty_bank_nse_index.symbol,datetime(2020,10,19),datetime.now(),interval=10))Order properties as enumsOrder properties such as TransactionType, OrderType, and others have been safely classified as enums so you don‘t have to write them out as stringsTransactionTypeTransaction types indicate whether you want to buy or sell. Valid transaction types are of the following:TransactionType.Buy- buyTransactionType.Sell- sellOrderTypeOrder type specifies the type of order you want to send. Valid order types include:OrderType.Market- Place the order with a market priceOrderType.Limit- Place the order with a limit price (limit price parameter is mandatory)OrderType.StopLossLimit- Place as a stop loss limit orderOrderType.StopLossMarket- Place as a stop loss market orderProductTypeProduct types indicate the complexity of the order you want to place. 
Valid product types are:ProductType.Intraday- Intraday order that will get squared off before market closeProductType.Delivery- Delivery order that will be held with you after market closeProductType.CoverOrder- Cover orderProductType.BracketOrder- One cancels other order. Also known as bracket orderWorking with examplesHere, examples directory there are 3 fileszlogin_example.py,zstreaming_data.pyandstop.txtStepsClone the repository to your local machinegit clone https://github.com/algo2t/alphatrade.gitCopy the examples directory to any location where you want to write your codeInstall thealphatrademodule usingpip=>python -m pip install https://github.com/algo2t/alphatrade.gitOpen the examples directory in your favorite editor, in our case it isVSCodiumOpen thezlogin_example.pyfile in the editorNow, createconfig.pyfile as per instructions given below and in the above fileProvide correct login credentials like login_id, password and 16 digit totp code (find below qr code)This is generally set from the homepage of alpha web trading platformhereClick onFORGET PASSWORD?=> SelectReset 2FAradio button.Enter the CLIENT ID (LOGIN_ID), EMAIL ID and PAN NUMBER, click onRESETbutton.Click onBACK TO LOGINand enterCLIENT IDandPASSWORD, click onSECURED SIGN-INSet same answers for 5 questions and click onSUBMITbutton.config.pylogin_id="XXXXX"password="XXXXXXXX"Totp='XXXXXXXXXXXXXXXX'try:access_token=open('access_token.txt','r').read().rstrip()exceptExceptionase:print('Exception occurred ::{}'.format(e))access_token=NoneExample strategy using alpha trade APIHereis an example moving average strategy using alpha trade web API.
This strategy generates a buy signal when 5-EMA > 20-EMA (golden cross) or a sell signal when 5-EMA < 20-EMA (death cross).Example for getting historical and intraday candles dataHereis an example for getting historical data using alpha trade web API.For historical candles datastart_timeandend_timemust be provided in format as shown below.
It can also be provided as atimedelta. Check the scriptzhistorical_data.pyin examples.fromdatetimeimportdatetime,timedeltastart_time=datetime(2020,10,19,9,15,0)end_time=datetime(2020,10,21,16,59,0)df=sas.get_historical_candles('MCX','NATURALGAS OCT FUT',start_time,end_time,5)print(df)end_time=start_time+timedelta(days=5)df=sas.get_historical_candles('MCX','NATURALGAS NOV FUT',start_time,end_time,15)print(df)For intraday or today's / current day's candles data.df=sas.get_intraday_candles('MCX','NATURALGAS OCT FUT')print(df)df=sas.get_intraday_candles('MCX','NATURALGAS NOV FUT',15)print(df)Read this before creating an issueBefore creating an issue in this library, please follow these steps.Search whether the problem you are facing has already been asked by someone else. There might be some issues already there, either solved or unsolved, related to your problem. Go to theissuespage, useis:issueas the filter and search for your problem.If you feel your problem has not been asked by anyone or no issues are related to your problem, then create a new issue.Describe your problem in detail while creating the issue. If you don't have time to detail/describe the problem you are facing, assume that I also won't have time to respond to your problem.Post sample code of the problem you are facing. If I copy paste the code directly from the issue, I should be able to reproduce the problem you are facing.Before posting the sample code, test your sample code yourself once. Only the sample code should be tested, no other addition should be there while you are testing.Have some print() function calls to display the values of some variables related to your problem.Post the results of the print() functions also in the issue.Use the insert code feature of github to insert code and print outputs, so that the code is displayed neatly.
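One last practical note on the candle data above: since it comes back as a plain pandas DataFrame, the pandas_ta indicators mentioned earlier can be applied directly. The following is a small illustrative sketch (not from the original examples) of the 5-EMA/20-EMA comparison used by the example strategy; it assumes pandas_ta's DataFrame accessor, the sas session object from the snippets above, and the column names shown in the sample output (open, high, low, close, volume):

from datetime import datetime
import pandas_ta as ta  # registers the .ta accessor on pandas DataFrames

df = sas.get_historical_candles('NSE', 'Nifty Bank', datetime(2020, 10, 19), datetime.now(), interval=30)
df['ema_fast'] = df.ta.ema(length=5)   # 5-period EMA of the close column
df['ema_slow'] = df.ta.ema(length=20)  # 20-period EMA of the close column

latest = df.dropna().iloc[-1]  # last candle with both EMAs defined
if latest['ema_fast'] > latest['ema_slow']:
    print('5-EMA above 20-EMA (golden cross side, buy bias)')
else:
    print('5-EMA below 20-EMA (death cross side, sell bias)')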
|
alpha-trader-python
|
Alpha-Trader Python SDKWelcome to the Python SDK for Alpha-Trader.
The functionality will be extended and Pull Requests are welcome.For documentation, mkdocs is used; please see:DOCUMENTATION
|
alpha-trainer
|
Alpha Trainer - Machine Learning Game Training FrameworkThe Alpha Trainer package is a versatile framework designed for training machine learning models for custom games. It simplifies the process of training and evaluating models on custom game environments. This README provides an overview of the package and focuses on the main function,simulate_game.InstallationYou can install thealpha_trainerpackage using pip:pipinstallalpha_trainerExampleHere's an example of how to use thesimulate_gamefunction:

from alpha_trainer import simulate_game, AlphaTrainableGame, AlphaMove

# Define your custom game class
class MyGame(AlphaTrainableGame):
    # Implement your custom game logic here
    pass

# Simulate a game and collect data (my_model is a placeholder for your model object)
game_results = simulate_game(MyGame, num_simulations=1000, model=my_model)

# Use the collected data to train and evaluate your machine learning model
# ...
|
alpha-transform
|
Adaptive transform for manifold-valued dataThis package contains all software developed as part of Workpackage 2.1 (Adaptive transform for manifold-valued data)
of the DEDALE project. For an in-depth description of this workpackage, we refer to the associated technical report.The software consists of three main parts:An implementation of the (bandlimited)α-shearlet transform(inAlphaTransform.py),
in three versions:Afully sampled(non-decimated), translation invariant, fast, but memory-consumingimplementationA fully sampled, translation invariant, slightly slower, but memory-efficientimplementationAsubsampled(decimated),nottranslation invariant, but fast and memory-efficient implementation.Implementations (inAdaptiveAlpha.py) of three criteria that can be used to adaptively choose
the value of α, namely:Theasymptotic approximation rate(AAR),themean approximation error(MAE),thethresholding denoising performance(TDP).Achart-based implementation(inSphereTransform.py) of the α-shearlet transform for
functions defined on the sphere.In the following, we provide brief explanations and hands-on experiments for all of these aspects. The following
table of content can be used for easy navigation:Table of ContentsThe α-shearlet transform: A crash courseAdaptively choosing αa)Asymptotic approximation rate (AAR)b)Mean approximation error (MAE)c)Thresholding denoising performance (TDP)The α-shearlet transform for functions defined on the sphereThe α-shearlet transform: A crash courseThe following demonstrates a very simple use case of the α-shearlet transform: We compute the transform of an example image,
threshold the coefficients, reconstruct and compute the error. The code is longer than strictly necessary, since
along the way we give a demonstration of the general usage of the transform.>>># Importing necessary packages>>>fromAlphaTransformimportAlphaShearletTransformasAST>>>importnumpyasnp>>>importmatplotlib.pyplotasplt>>>fromscipyimportmisc>>>im=misc.face(gray=True)>>>im.shape(768,1024)>>># Setting up the transform>>>trafo=AST(im.shape[1],im.shape[0],[0.5]*3)# 1Precomputingshearletsystem:100%|███████████████████████████████████████|52/52[00:05<00:00,8.83it/s]>>># Computing and understanding the α-shearlet coefficients>>>coeff=trafo.transform(im)# 2>>>coeff.shape# 3(53,768,1024)>>>trafo.indices# 4[-1,(0,-1,'r'),(0,0,'r'),(0,1,'r'),(0,1,'t'),(0,0,'t'),(0,-1,'t'),(0,-1,'l'),(0,0,'l'),(0,1,'l'),(0,1,'b'),(0,0,'b'),(0,-1,'b'),(1,-2,'r'),(1,-1,'r'),(1,0,'r'),(1,1,'r'),(1,2,'r'),(1,2,'t'),(1,1,'t'),(1,0,'t'),(1,-1,'t'),(1,-2,'t'),(1,-2,'l'),(1,-1,'l'),(1,0,'l'),(1,1,'l'),(1,2,'l'),(1,2,'b'),(1,1,'b'),(1,0,'b'),(1,-1,'b'),(1,-2,'b'),(2,-2,'r'),(2,-1,'r'),(2,0,'r'),(2,1,'r'),(2,2,'r'),(2,2,'t'),(2,1,'t'),(2,0,'t'),(2,-1,'t'),(2,-2,'t'),(2,-2,'l'),(2,-1,'l'),(2,0,'l'),(2,1,'l'),(2,2,'l'),(2,2,'b'),(2,1,'b'),(2,0,'b'),(2,-1,'b'),(2,-2,'b')]>>># Thresholding the coefficients and reconstructing>>>np.max(np.abs(coeff))# 52041.1017181588547>>>np.sum(np.abs(coeff)>200)/coeff.size# 60.020679905729473761>>>thresh_coeff=coeff*(np.abs(coeff)>200)# 7>>>recon=trafo.inverse_transform(thresh_coeff,real=True)# 8>>>np.linalg.norm(im-recon)/np.linalg.norm(im)# 90.13912540983541383>>>plt.imshow(recon,cmap='gray')<matplotlib.image.AxesImageobjectat0x2b0f568c25c0>>>>plt.show()Importing necessary packagesThe first few (unnumbered) lines import relevant packages and load (a gray scale version of) the following
test image, which has a resolution of 1024 x 768 pixels:Setting up the transformThen, in line 1, we create an instancetrafoof the classAlphaShearletTransform(which is
calledASTabove for brevity).
During construction of this object, the necessary shearlet filters are precomputed. This may take some time
(5 seconds in the example above), but speeds up later computations.The three parameters passed to the constructor oftraforequire some explanation:The first parameter is thewidthof the images which can be analyzed using thetrafoobject.Similarly, the second parameter determines theheight.The third parameter is of the form[alpha] * N, whereNdetermines the number of scales of the transform andalphadetermines the value of α.The reason for this notation is that in principle, one can choose a
different value of α on each scale. Since[0.5] * 3 = [0.5, 0.5, 0.5], one can use this notation to obtain a
system using a single value of α across all scales.All in all, we see that line 1 creates a shearlet system (i.e., α = 0.5) with 3 scales (plus a low-pass)
for images of dimension 1024 x 768.Computing and understanding the α-shearlet coefficientsLine 2 shows that the α-shearlet coefficients of the imageimcan be readily computed using thetransformmethod oftrafo.
As seen in line 3, this results in an array of size53, where each of the elements of the array is an array (an image) of
size 1024 x 768, i.e., of the same size as the input image.To help understand the meaning of each of thesecoefficient imagescoeff[i], fori = 0, ..., 52, the output of line 4
is helpful: Associated to each coefficient imagecoeff[i], there is anindextrafo.indices[i]which encodes themeaningof the coefficient image, i.e., the shearlet used to compute it.The special index-1stands for thelow-pass part. All other indices are of the form(j, k, c), wherejencodes thescale. In the present case,jranges from0to2, since we have 3 scales.kencodes theamount of shearing, ranging from -⌈2^(j(1 - α))⌉ to ⌈2^(j(1 - α))⌉ on scalej.cencodes theconeto which the shearlet belongs (in the Fourier domain). Precisely, we have the following correspondence
between the value ofcand the corresponding frequency cones:
value of c:     'r'   | 't' | 'l'  | 'b'
Frequency cone: right | top | left | bottom
Note that if we divide the frequency plane into four cones, such that each shearlet has a real-valued Fourier transform
which is supported in one of these cones, then the shearlets themselves (i.e., in space)can notbe real-valued.
Hence, if real-valued shearlets are desired, one can pass the constructor of the classAlphaShearletTransformthe
additional argumentreal=True. In this case, the frequency plane is split into a horizontal (encoded by'h') and
a vertical (encoded by'v') cone, as is indicated by the following example:>>>trafo_real=AST(im.shape[1],im.shape[0],[0.5]*3,real=True)Precomputingshearletsystem:100%|██████████████████████████████████████|26/26[00:03<00:00,5.62it/s]>>>trafo_real.indices[-1,(0,-1,'h'),(0,0,'h'),(0,1,'h'),(0,1,'v'),(0,0,'v'),(0,-1,'v'),(1,-2,'h'),(1,-1,'h'),(1,0,'h'),(1,1,'h'),(1,2,'h'),(1,2,'v'),(1,1,'v'),(1,0,'v'),(1,-1,'v'),(1,-2,'v'),(2,-2,'h'),(2,-1,'h'),(2,0,'h'),(2,1,'h'),(2,2,'h'),(2,2,'v'),(2,1,'v'),(2,0,'v'),(2,-1,'v'),(2,-2,'v')]Thresholding the coefficients & reconstructingWhen called without further parameters, the methodtrafo.transformcomputes anormalizedtransform,
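Because trafo.indices runs parallel to the list of coefficient images, the index structure described above can be used to post-process the coefficients scale by scale. The following is a small illustrative sketch (not part of the package itself), assuming the trafo and coeff objects from the session above:

from collections import defaultdict

# Group the coefficient images by scale, using the conventions above:
# -1 marks the low-pass part, every other index is (scale, shearing, cone).
low_pass = None
by_scale = defaultdict(list)
for image, index in zip(coeff, trafo.indices):
    if index == -1:
        low_pass = image
    else:
        j, k, c = index
        by_scale[j].append(image)

print({j: len(images) for j, images in by_scale.items()})
# expected for the 3-scale system above: {0: 12, 1: 20, 2: 20}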
so that effectively all shearlets are normalized to have L² norm 1. With this normalization, line 5 shows
that the largest coefficient has size about 2041. We now (arbitrarily) pick a threshold of 200 and see (in line 6)
that only about 2% of the coefficients are larger than this threshold. Next, we set all coefficients which are smaller
(in absolute value) than 200 to zero and save the resulting thresholded coefficients asthresh_coeff, in line 7.In line 8, we then use the methodinverse_transformof thetrafoobject to compute the inverse transform.
Since we know that the original image was real-valued, we pass the additional argumentreal=True.
This has the same effect as reconstructing without this additional argument and then taking the real part.
Line 9 shows that the relative error is about 13.9%.Finally, the last two lines display the reconstructed image.Changes for the subsampled transformAbove, we showed how our implementation of thefully sampledα-shearlet transform can be used to compute the α-shearlet
transform of an image and reconstruct (with thresholded coefficients). For thesubsampledtransform, this can be
done very similarly; the main difference one has to keep in mind is that for the fully sampled transform,
one obtains an array of "coefficient images" which are all of the same size. In contrast, due to the subsampling,
the "coefficient images" for the subsampled transform are all of different sizes:>>>fromAlphaTransformimportAlphaShearletTransformasAST>>>importmatplotlib.pyplotasplt>>>importnumpyasnp>>>fromscipyimportmisc>>>im=misc.face(gray=True)>>>trafo=AST(im.shape[1],im.shape[0],[0.5]*3,subsampled=True)# 1Precomputingshearlets:100%|████████████████████████████████████████████|52/52[00:00<00:00,69.87it/s]>>>coeff=trafo.transform(im)# 2>>>type(coeff).__name__# 3'list'>>>[c.shapeforcincoeff]# 4[(129,129),(364,161),(364,161),(364,161),(97,257),(97,257),(97,257),(364,161),(364,161),(364,161),(97,257),(97,257),(97,257),(514,321),(514,321),(514,321),(514,321),(514,321),(193,364),(193,364),(193,364),(193,364),(193,364),(514,321),(514,321),(514,321),(514,321),(514,321),(193,364),(193,364),(193,364),(193,364),(193,364),(727,641),(727,641),(727,641),(727,641),(727,641),(385,513),(385,513),(385,513),(385,513),(385,513),(727,641),(727,641),(727,641),(727,641),(727,641),(385,513),(385,513),(385,513),(385,513),(385,513)]>>>np.max([np.max(np.abs(c))forcincoeff])# 52031.0471969998314>>>np.sum([np.sum(np.abs(c)>200)forcincoeff])# 622357>>>np.sum([np.sum(np.abs(c)>200)forcincoeff])/np.sum([c.sizeforcincoeff])# 70.0023520267754635542>>>thresh_coeff=[c*(np.abs(c)>200)forcincoeff]# 8>>>recon=trafo.inverse_transform(thresh_coeff,real=True)# 9>>>np.linalg.norm(im-recon)/np.linalg.norm(im)0.13945789596375507Up to the first marked line, everything is identical to the code for the fully sampled transform.
The only difference in line 1 is the additional argumentsubsampled=Trueto obtain a subsampled transform.
Then, in line 2, the transform of the imageimis computed just as for the fully sampled case.The main difference to the fully sampled transform becomes visible in lines 3 and 4:
In contrast to the fully sampled transform, where the coefficients are a 3-dimensional numpyarray,
the subsampled transform yields alistof 2-dimensional numpy arrays, withvarying shapes.
This shape is constant with respect to the shearkas long as the scalejand the conecare kept fixed,
but varies strongly withjandc. In fact, for asquareimage, the shape would only depend on the scalej.Since we have a list of numpy arrays instead of a single numpy array, all operations on the coefficients are more
cumbersome to write down (using list comprehensions), but are identical in spirit to the case of the fully sampled
transform, cf. lines 5-8.The actual reconstruction (in line 9) is exactly identical to the fully sampled case. It is interesting to note that
only about 0.24% of the coefficients - and thus much less than the 2% for the fully sampled transform - are larger than
the threshold. Nevertheless, the relative error is essentially the same.Adaptively choosing αIn the following, we show for each of the three optimality criteria (AAR,MAEandTDP) how our
implementation can be used to determine the optimal value of α for a given set of images.Asymptotic approximation rate (AAR)The following code uses a grid search to determine the value of α which yields the bestasymptotic approximation rate(as described in the technical report) for the given set of images:>>>fromAdaptiveAlphaimportoptimize_AAR>>>shearlet_args={'real':True,'verbose':False}# 1>>>images=['./Review/log_dust_test.npy']# 2>>>num_scales=4# 3>>>num_alphas=3# 4>>>optimize_AAR(images,num_scales,1/(num_alphas-1),shearlet_args=shearlet_args)Firststep:Determinethemaximumrelevantvalue...alphaloop:100%|████████████████████████████████████████████████████████|3/3[00:10<00:00,3.48s/it]imageloop:100%|████████████████████████████████████████████████████████|1/1[00:01<00:00,1.60s/it]Maximumrelevantvalue:0.04408709120918954Secondstep:Computingtheapproximationerrors...alphaloop:100%|████████████████████████████████████████████████████████|3/3[01:40<00:00,30.99s/it]Imageloop:100%|████████████████████████████████████████████████████████|1/1[00:46<00:00,46.89s/it]Thresh.loop:100%|██████████████████████████████████████████████████████|50/50[00:45<00:00,1.10it/s]Thirdstep:Computingtheapproximationrates...Commonbreakpoints:[0,4,34,50]Lastcommonlinearpart:[34,50)lastslopes:alpha=1.00:-0.161932+0.012791alpha=0.50:-0.161932+0.000108*alpha=0.00:-0.161932-0.012900Optimalvalue:alpha=0.00In addition to the output shown above, executing this code will display the following plot:We now briefly explain the abovecodeand output:
In the short program above, we first import the functionoptimize_AARwhich will do the actual work.
Then, we define the parameters to be passed to this function:In line 1, we determine the properties of the α-shearlet systems that will be used:'real': Trueensures that real-valued shearlets are used.'verbose': Falsesuppresses some output, e.g. the progress bar for precomputing the shearlet system.Another possible option would be'subsampled': Trueif one wants to use the subsampled transform. Note though
that this is incompatible with the'real': Trueoption.In line 2, we determine the set of images that is to be used. To ensure fast computations, we only take
a single image for this example. Specifically, the used image is the logarithm of one of the 12 faces of cosmic
dust data provided by CEA, as depicted in the following figure:The variablenum_scalesdetermines how many scales the α-shearlet systems should use.Since we are using a grid search (i.e., we are only considering finitely many values of α), the
variablenum_alphasis used to determine how many different values of α should be distinguished.
These are then uniformly spread in the interval [0,1]. Again, to ensure fast computations, we only consider
three different values, namely α=0, α=0.5 and α=1.Finally, we invoke the functionoptimize_AARwith the chosen parameters. As described in the technical report,
this function does the following:It determines a range [0, c0] such that for c≥c0, all α-shearlet transforms yield thesame errorwhen all coefficients of absolute value ≤c are set to zero ("thresholded").It computes the reconstruction errors for the different values of α after thresholding the coefficients with
a threshold of c0·b^k for k=0,...,K-1.The default value is K=50. This can be adjusted by passing e.g. the argumentnum_x_values=40as an
additional argument tooptimize_AAR. Likewise, the baseb(with default valueb=1.25) can be adjusted
by passing e.g.base=1.3as a further argument.It determines a partition of {0, ..., K-1} into at most 4 intervals such that on each of these intervals,
each of the (logarithmic) error curves is almost linear.In the example run above, the end points of the
resulting partition are given byCommon breakpoints: [0, 4, 34, 50]. In this case, the resulting
partition has only three intervals instead of 4, since on each of these intervals, the best linear approximation
is already sufficiently good.For each value of α, the function then determines theslopesof the (logarithmic) error curve on the
last of these intervals and compares these slopes.The optimal value of α (in this case α=0) is the one with the smallest slope, i.e., with the highest
decay of the error. To allow for a visual comparison,optimize_AARalso displays aplotof the
(logarithmic) error curves, including the partition into the almost linear parts.Mean approximation error (MAE)The following code uses a grid search to determine the value of α which yields the bestmean approximation error(as described in the technical report) for the given set of images:>>>fromAdaptiveAlphaimportoptimize_MAE>>>shearlet_args={'real':True,'verbose':False}# 1>>>images=['./Review/log_dust_test.npy']# 2>>>num_scales=4>>>num_alphas=3>>>optimize_MAE(images,num_scales,1/(num_alphas-1),shearlet_args=shearlet_args)Firststep:Determinethemaximumrelevantvalue...alphaloop:100%|████████████████████████████████████████████████████████|3/3[00:11<00:00,3.61s/it]imageloop:100%|████████████████████████████████████████████████████████|1/1[00:01<00:00,1.61s/it]Maximumrelevantvalue:0.006053772894213893Secondstep:Computingtheapproximationerrors...alphaloop:100%|████████████████████████████████████████████████████████|3/3[01:40<00:00,31.13s/it]imageloop:100%|████████████████████████████████████████████████████████|1/1[00:48<00:00,48.25s/it]thresholdingloop:100%|█████████████████████████████████████████████████|50/50[00:46<00:00,1.08it/s]Finalstep:Computingoptimalvalueofalpha...meanerrors:alpha=1.00:+0.024105+0.000258alpha=0.50:+0.024105+0.000015*alpha=0.00:+0.024105-0.000273Optimalvalue:alpha=0.00In addition to the output shown above, executing this code will display the following plot:We now briefly explain the abovecodeand output:
In the short program above, we first import the functionoptimize_MAEwhich will do the actual work.
Then, we define the parameters to be passed to this function:In line 1, we determine the properties of the α-shearlet systems that will be used:'real': Trueensures that real-valued shearlets are used.'verbose': Falsesuppresses some output, e.g. the progress bar for precomputing the shearlet system.Another possible option would be'subsampled': Trueif one wants to use the subsampled transform. Note though
that this is incompatible with the'real': Trueoption.In line 2, we determine the set of images that is to be used. To ensure fast computations, we only take
a single image for this example. Specifically, the used image is the logarithm of one of the 12 faces of cosmic
dust data provided by CEA, as depicted in thefigureabove.The variablenum_scalesdetermines how many scales the α-shearlet systems should use.Since we are using a grid search (i.e., we are only considering finitely many values of α), the
variablenum_alphasis used to determine how many different values of α should be distinguished.
These are then uniformly spread in the interval [0,1]. Again, to ensure fast computations, we only consider
three different values, namely α=0, α=0.5 and α=1.Finally, we invoke the functionoptimize_MAEwith the chosen parameters. As described in the technical report,
this function does the following:It determines a range [0, c0] such that for c≥c0, all α-shearlet transforms yield thesame errorwhen all coefficients of absolute value ≤c are set to zero ("thresholded").It computes the reconstruction errors for the different values of α after thresholding the coefficients with
a threshold of c = c0·i / K for i = 0, ..., K-1.The default value is K=50. This can be adjusted by passing e.g. the argumentnum_x_values=40as an additional
argument tooptimize_MAE.For each value of α, the function then determines themeanof all these approximation errors. The optimal
value of α (in this case α=0) is the one with the smallest mean approximation error.To allow for a visual comparison,optimize_MAEalso displays aplotof the error curves.Thresholding Denoising Performance (TDP)In the following, we show how the denoising performance of an α-shearlet system can be used as an optimality
criterion for adaptively choosing the correct value of α.Since the(logarithmic) dust dataused for the previous experiments does not allow for an
easy visual comparison between the original image and the different denoised versions, we decided to instead
use the following 684 x 684 cartoon image (taken fromSMBC) as a toy example:The following code uses a grid search over different values of α to determine the value with the
optimal denoising performance:>>>fromAdaptiveAlphaimportoptimize_denoising>>>image_paths=['./Review/cartoon_example.png']# 1>>>num_alphas=3# 2>>>num_scales=5# 3>>>num_noise_levels=5# 4>>>shearlet_args={'real':True,'verbose':False}# 5>>>optimize_denoising(image_paths,num_scales,1/(num_alphas-1),num_noise_levels,shearlet_args=shearlet_args)imageloop:100%|██████████████████████████████████████████████████████████|1/1[01:33<00:00,93.16s/it]alphaloop:100%|██████████████████████████████████████████████████████████|3/3[01:33<00:00,28.44s/it]noiseloop:100%|██████████████████████████████████████████████████████████|5/5[00:46<00:00,9.38s/it]Averagederroroverallimagesandallnoiselevels:alpha=1.00:0.0961alpha=0.50:0.0900alpha=0.00:0.0948Optimalvalueonwholeset:alpha=0.50In addition to the output shown above, executing the sample code also displays the following plot:We now briefly explain the abovecodeand output:
First, we import fromAdaptiveAlpha.pythe functionoptimize_denoisingwhich will do the actual work.
We then set the parameters to this function. In the present case, we want toanalyze thecartoon imageshown above,use α-shearlet transforms withreal-valuedshearlets (cf. line 5, the second part of that line suppresses some output),use α-shearlet transforms with 5 scales (line 4),use 5 different noise levels λ (line 4) which are uniformly spaced in [0.02, 0.4],compare three different values of α, which are uniformly spaced in [0,1], i.e, α=1, α=0.5 and α=0.In more realistic experiment, one would of course use a larger set of test images and consider more different values of
α, possibly also with a larger number of different noise levels. But here, we are mainly interested in a quick
execution time, so we keep everything small.Finally, we invoke the functionoptimize_denoisingwhich - briefly summarized - does the following:It normalizes each of the K x K input images to have L² norm equal to 1.
Then, for each image, each value of α and each noise level λ in [0.02,0.4], a distorted image is calculated
by adding artificial Gaussian noise with standard deviation σ=λ/K to the image.This standard deviation is chosen in such a way that the expected squared L² norm of the noise is λ².One can specify other ranges for the noise level than the default range [0.02, 0.4] by using the parametersnoise_minandnoise_max.The α-shearlet transform of the distorted image is determined.Hard thresholding is performed on the set of α-shearlet coefficients. The thresholding parameter (i.e., the cutoff value)
c is chosen scale- and noise dependent via c = mσ, with m being a scale-dependentmultiplier.Numerical experiments showed that good results are obtained by taking m=3 for all scales except the highest and m=4
for the highest scale. This is the default choice made inoptimize_denoising. If desired, this default choice can be
modified using the parameterthresh_multiplier.The inverse α-shearlet transform of the thresholded coefficients is determined and the L²-error between this
reconstruction and the original image is calculated.The optimal value of α is the one for which the L²-error averaged over all images and all noise levels is the smallest.Theplotwhich is displayed byoptimize_denoisingdepicts the mean error over all images as a function of
the employed noise level λ for different values of α. In the present case, we are only considering one image (N=1).
Clearly, shearlets (i.e., α=0.5) turn out to be optimal for our toy example.For a visual inspection,optimize_denoisingalso saves the noisy image and the reconstructions for the largest noise level
(λ=0.4) and the different α values to the current working directory.
A small zoomed part of these reconstructions - together with the same part of the original image and the noisy image - can
be seen below (image panels: original image, noisy image, α=0, α=0.5, α=1).The α-shearlet transform for functions defined on the sphereAs described in the technical report, we use achart-basedapproach for computing the
α-shearlet transform of functions defined on the sphere. Precisely, we use the charts provided
by theHEALPixpixelizationof the sphere,
which divides the sphere into 12 faces and provides cartesian charts for each of these faces. The crucial property
of this pixelization is that each pixel has exactly the same spherical area.Now, given a functionfdefined on the sphere (in a discretized version, as a so-calledHEALPix map), one can use
the functionget_all_faces()fromSphereTransform.pyto obtain a family of 12 cartesian images, each of which
represents the functionfrestricted to one of the 12 faces of the sphere. As detailed in the technical report, analyzing
each of these 12 cartesian images and concatenating the coefficients is equivalent to analyzingfusing the sphere-based
α-shearlets.Conversely, one needs a way to reconstruct (a slightly modified version of)f, given the (possibly thresholded or
otherwise modified) α-shearlet coefficients. To this end, one first reconstructs each of the 12 cartesian images using
the usual α-shearlet transform and then concatenates these to obtain a functiongdefined on the sphere, via the
functionput_all_facesdefined inSphereTransform.py.Here, we just briefly indicate howput_all_facescan be used to obtain a plot of certain (randomly selected)
α-shearlets on the sphere. Below, we use an alpha-shearlet system with 6 scales, but only use alpha-shearlets
from the first three scales for plotting, since the alpha-shearlets on higher scales are very small/short
and thus unsuitable for producing a nice plot.>>>importhealpyashp>>>importnumpyasnp>>>importmatplotlib.pyplotasplt>>>fromSphereTransformimportput_all_facesasCartesian2Sphere>>>fromAlphaTransformimportAlphaShearletTransformasAST>>>width=height=512>>>alpha=0.5>>>num_scales=6# we use six scales, so that the shearlets on lower scales are big (good for plotting)>>>trafo=AST(width,height,[alpha]*num_scales,real=True)>>>all_shearlets=trafo.shearlets# get a list of all shearlets>>>cartesian_faces=np.empty((12,height,width))>>># for each of the 12 faces, select a random shearlet from one of the first three scales>>>upper_bound=trafo.scale_slice(3).start>>>shearlet_indices=np.random.choice(np.arange(upper_bound),size=12,replace=False)>>>fori,indexinenumerate(shearlet_indices):cartesian_faces[i]=all_shearlets[index]# normalize, so that the different shearlets are comparable in sizemax_val=np.max(np.abs(cartesian_faces[i]))cartesian_faces[i]/=max_val>>># use HEALPix charts to push the 12 cartesian faces onto the sphere>>>sphere_shearlets=Cartesian2Sphere(cartesian_faces)>>>hp.mollview(sphere_shearlets,cbar=False,hold=True)>>>plt.title(r"Random $\alpha$-shearlets on the sphere",fontsize=20)>>>plt.show()The above code produces a plot similar to the following:Required packagesIn addition to Python 3, the software requires the following Python packages:numpymatplotlibnumexprpyfftwtqdmhealpy(only required forSphereTransform.py)PIL, the Python Imaging Libraryscipy.ndimage
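To complement the plotting example, the chart-based round trip described earlier (get_all_faces, per-face α-shearlet transform, put_all_faces) can be sketched as follows. This is an illustrative sketch rather than part of the package: it assumes a HEALPix map whose 12 faces are square images with side length face_size, and the threshold value is an arbitrary assumption:

import numpy as np
from SphereTransform import get_all_faces, put_all_faces
from AlphaTransform import AlphaShearletTransform as AST

def hard_threshold_on_sphere(healpix_map, face_size, threshold, alpha=0.5, num_scales=3):
    # Pull the spherical map back to its 12 cartesian HEALPix faces
    faces = get_all_faces(healpix_map)
    trafo = AST(face_size, face_size, [alpha] * num_scales, real=True)
    processed = np.empty((12, face_size, face_size))
    for i, face in enumerate(faces):
        coeff = trafo.transform(face)
        coeff = coeff * (np.abs(coeff) > threshold)   # hard thresholding
        processed[i] = trafo.inverse_transform(coeff, real=True)
    # Push the 12 processed faces back onto the sphere
    return put_all_faces(processed)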
|
alphaturtle
|
> To Draw Alphabet And Number Using Turtle ModuleAbout Turtle:This project is made using the in-built python library called “Turtle”.Turtle graphics is a popular way for introducing programming to kids.It was part of the original Logo programming language developed by Wally Feurzeig and Seymour Papert in 1966.InstallationGui Installation(How To Use):gitclonehttps://github.com/C0DE-SLAYER/alphaturtlecdalphaturtlepipinstall-rrequirement.txtpythonalphaturtle_gui.pyTo Use Function On Your Own / CLI(How To Use):pipinstallalphaturtleUsage/ExamplesGui Usage :1.Run usingpython alphaturtle_gui.py2.Fill All The Required Fields3.Hit The Draw Button :CLI Usage :If you have installed alphaturtle using pip just run this in terminal/powershell/cmd :alphaturtle -i pythonFor help runalphaturtle -hUse Functions On Your Own :Include alphaturtle in your project :from alphaturtle import * # import alphaturtle
import turtle # importing turtle module
SetVar(2, 5, 20, 'black', 'yellow', 0) # setting values
drawLetterP() # calling function to draw P
whiteSpace() # drawing whitespace
drawLetterY() # calling function to draw Y
whiteSpace() # drawing whitespace
drawLetterT() # calling function to draw T
whiteSpace() # drawing whitespace
drawLetterH() # calling function to draw H
whiteSpace() # drawing whitespace
drawLetterO() # calling function to draw O
whiteSpace() # drawing whitespace
drawLetterN() # calling function to draw N
turtle.hideturtle() # turtle method to hide the cursor
turtle.done() # To see the outputLimitationsThe code works fine for all capital alphabets and numerals, but not for special characters except dot and hyphenLicenseMIT
|
alphatwirl
|
A Python library for summarizing event data into multivariate categorical dataDescriptionAlphaTwirlis a Python library that summarizes event data into multivariate categorical data as data frames. Event data, input to AlphaTwirl, are data with one entry (or row) for one event: for example, data inROOTTTreeswith one entry per collision event of anLHCexperiment atCERN. Event data are often large—too large to be loaded in memory—because they have as many entries as events. Multivariate categorical data, the output of AlphaTwirl, have one row for one category. They are usually small—small enough to be loaded in memory—because they only have as many rows as categories. Users can, for example, import them as data frames intoRandpandas, which usually load all data in memory, and can perform categorical data analyses with a rich set of data operations available in R and pandas.Quick startJupyter Notebook:Quick start of AlphaTwirlPublicationTai Sakuma,"AlphaTwirl: A Python library for summarizing event data into multivariate categorical data",
EPJ Web of Conferences214, 02001 (2019),doi:10.1051/epjconf/201921402001,1905.06609LicenseAlphaTwirl is licensed under the BSD license.ContactTai Sakuma [email protected]
|
alphaui
|
No description available on PyPI.
|
alphausblue
|
No description available on PyPI.
|
alphav
|
alphavalpha vantage api wrapperDescriptionUses a symbol object that pulls the data once a property is accessed.The data is then saved for subsequent property calls.Examplefromalphavimportos# generate symbolapikey=os.environ.get('API_KEY')s=Symbol('IBM',apikey)# print the data it providesprint(s.balance_sheet)print(s.earnings)print(s.income_statement)print(s.cash_flow)print(s.overview)print(s.global_quote)print(s.time_series_daily)print(s.time_series_monthly)print(s.time_series_monthly_adjusted)print(s.time_series_weekly)print(s.time_series_weekly_adjusted)Each property returns a data object, supporting# for balance sheet dataprop.main# the main data slice, annual by defaultprop.annual# annual data when providedprop.quarterly# quarterly data when providedprop.set_main('quarterly')prop.main# will now be prop.quarterly# for the rest of the dataprop.main# only
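For instance, switching the balance sheet's main slice to quarterly data (a short sketch based on the property interface above):

bs = s.balance_sheet        # data object, fetched once and then cached
print(bs.annual)            # annual slice
bs.set_main('quarterly')
print(bs.main)              # main now returns the quarterly slice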
|
alphavant
|
This is the homepage of the AlphaVantage package.
|
alphavantage
|
alphavantagealphavantage is a Python wrapper for the Alpha Vantage API.The API wrapper can be used to retrieve historical prices such as intraday or daily prices for global equities and ETFs.StatusThe API aims to support equity time-series data as a first step.The package is currently in alpha status. It has not been used extensively yet and therefore many of the potential quirks of Alpha Vantage's actual API may not be accounted for. We plan on using this wrapper for price history charting in ourcompany lookup and ratings tool.Design ConsiderationThis library is intended to provide a simple wrapper with minimal dependencies, and does not intend to introduce pydata stack dependencies (numpy, pandas, etc.) in the future. Differences with existing wrappers for the Alpha Vantage API include:Library DifferencesNo Pandas dependencies or optional dependencyFocuses on simplifying data for ingestingAvoids logical branching making the code simpler (only two if statements at the moment)Provides symbology mapping referencesThe library carries out some conveniences versus using the API without a wrapper.ConveniencesConverts timestamps to UTC time when applicable.Simplifies record field names i.e. "4. close" -> "close".Appends the timestamp field to record vs. having the timestamp act as dictionary key.Uses time ascending list versus a dictionary for price record data structure.Returns multiple tickers over a given parameter set using threads.Maps ticker symbology from other vendors.Excludes intraday data in daily price history requests.Examplesfromalphavantage.price_historyimport(AdjustedPriceHistory,get_results,PriceHistory,IntradayPriceHistory,filter_dividends)# weekly priceshistory=PriceHistory(period='W',output_size='compact')results=history.get('AAPL')# intraday prices, 5 minute intervalhistory=IntradayPriceHistory(utc=True,interval=5)results=history.get('AAPL')# adjusted daily priceshistory=AdjustedPriceHistory(period='D')results=history.get('AAPL')dividends=list(filter_dividends(results.records))# Return multiple tickersparameters={'output_size':'compact','period':'D'}tickers=['AAPL','MSFT']results=dict(get_results(PriceHistory,tickers,parameters))ContributingContributions are welcome. Someone can immediately contribute by building out wrappers for the rest of the API such as FX rates or crypto prices.Getting StartedInstallingpipinstallalphavantageDeveloper InstallationThese instructions assume Python 3.6. It is recommended that you use conda or a virtualenv.For conda install follow:Download theconda installer.
And follow setupinstructions.Conda Environmentcondacreate--name<environment_name>python=3.6
activate<environment_name>
condainstall--filerequirements.txt
pythonsetup.pyinstallbdist_wheeldebian installationInstructionFollow the instructions in the link provided.DO NOT SUDO PIP INSTALL. Alias the preferred Python installation by adding, for example:aliaspython='/usr/bin/python3.6'When using Pippipinstall--upgradepip
pipinstallwheel
pipinstall-rrequirements.txt
pythonsetup.pyinstallbdist_wheelRunning the Testspy.testRunning Coverage Reportpy.test--cov
|
alpha-vantage
|
alpha_vantagePython module to get stock data/cryptocurrencies from the Alpha Vantage APIAlpha Vantage delivers a free API for real time financial data and most used finance indicators in a simple json or pandas format. This module implements a python interface to the free API provided byAlpha Vantage. It requires a free API key, which can be requested fromhttp://www.alphavantage.co/support/#api-key. You can have a look at all the API calls available in theirAPI documentation.For code-less access to the APIs, you may also consider the officialGoogle Sheet Add-onor theMicrosoft Excel Add-onby Alpha Vantage. Check outthisguide for some common tips on working with financial market data.NewsFrom version 2.3.0 onwards, fundamentals data and extended intraday are supported.From version 2.2.0 onwards, asyncio support is now provided. See below for more information.From version 2.1.3 onwards,rapidAPIkey integration is now available.From version 2.1.0 onwards, error logging of bad API calls has been made more apparent.From version 1.9.0 onwards, urllib was substituted by python's requests library, which is thread safe. If you have any error, post an issue.From version 1.8.0 onwards, the column names of the data frames have changed, they are now exactly what alphavantage gives back in their json response. You can see the examples in better detail in the following git repo:https://github.com/RomelTorres/av_exampleFrom version 1.6.0, pandas was taken out as a hard dependency.InstallTo install the package use:pipinstallalpha_vantageOr install with pandas support, simply install pandas too:pipinstallalpha_vantagepandasIf you want to install from source, then use:gitclonehttps://github.com/RomelTorres/alpha_vantage.git
pipinstall-ealpha_vantageUsageTo get data from the API, simply import the library and call the object with your API key. Next, get ready for some awesome, free, realtime finance data. Your API key may also be stored in the environment variableALPHAVANTAGE_API_KEY.fromalpha_vantage.timeseriesimportTimeSeriests=TimeSeries(key='YOUR_API_KEY')# Get json object with the intraday data and another with the call's metadatadata,meta_data=ts.get_intraday('GOOGL')You may also get a key fromrapidAPI. Use your rapidAPI key for the key variable, and setrapidapi=Truets=TimeSeries(key='YOUR_API_KEY',rapidapi=True)Internally there is a retries counter, that can be used to minimize connection errors (in case that the API is not able to respond in time), the default is set to
5 but can be increased or decreased whenever needed.ts=TimeSeries(key='YOUR_API_KEY',retries='YOUR_RETRIES')The library supports giving its results as json dictionaries (default), pandas dataframe (if installed) or csv, simply pass the parameter output_format='pandas' to change the format of the output for all the API calls in the given class. Please note that some API calls do not support the csv format (namelyForeignExchange, SectorPerformances and TechIndicators) because the API endpoint does not support the format on their calls either.ts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')The pandas data frame given by the call, can have either a date string indexing or an integer indexing (by default the indexing is 'date'),
depending on your needs, you can use both.# For the default date string index behaviorts=TimeSeries(key='YOUR_API_KEY',output_format='pandas',indexing_type='date')# For the default integer index behaviorts=TimeSeries(key='YOUR_API_KEY',output_format='pandas',indexing_type='integer')Data frame structureThe data frame structure is given by the call on alpha vantage rest API. The column names of the data frames
are the ones given by their data structure. For example, the following call:fromalpha_vantage.timeseriesimportTimeSeriesfrompprintimportpprintts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ts.get_intraday(symbol='MSFT',interval='1min',outputsize='full')pprint(data.head(2))Would result on:The headers from the data are specified from Alpha Vantage (in previous versions, the numbers in the headers were removed, but long term is better to have the data exactly as Alpha Vantage produces it.)PlottingTime SeriesUsing pandas support we can plot the intra-minute value for 'MSFT' stock quite easily:fromalpha_vantage.timeseriesimportTimeSeriesimportmatplotlib.pyplotaspltts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ts.get_intraday(symbol='MSFT',interval='1min',outputsize='full')data['4. close'].plot()plt.title('Intraday Times Series for the MSFT stock (1 min)')plt.show()Giving us as output:Technical indicatorsThe same way we can get pandas to plot technical indicators like Bollinger Bands®fromalpha_vantage.techindicatorsimportTechIndicatorsimportmatplotlib.pyplotaspltti=TechIndicators(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ti.get_bbands(symbol='MSFT',interval='60min',time_period=60)data.plot()plt.title('BBbands indicator for MSFT stock (60 min)')plt.show()Giving us as output:Sector PerformanceWe can also plot sector performance just as easy:fromalpha_vantage.sectorperformanceimportSectorPerformancesimportmatplotlib.pyplotaspltsp=SectorPerformances(key='YOUR_API_KEY',output_format='pandas')data,meta_data=sp.get_sector()data['Rank A: Real-Time Performance'].plot(kind='bar')plt.title('Real Time Performance (%) per Sector')plt.tight_layout()plt.grid()plt.show()Giving us as output:Crypto currencies.We can also plot crypto currencies prices like BTC:fromalpha_vantage.cryptocurrenciesimportCryptoCurrenciesimportmatplotlib.pyplotaspltcc=CryptoCurrencies(key='YOUR_API_KEY',output_format='pandas')data,meta_data=cc.get_digital_currency_daily(symbol='BTC',market='CNY')data['4b. close (USD)'].plot()plt.tight_layout()plt.title('Daily close value for bitcoin (BTC)')plt.grid()plt.show()Giving us as output:Foreign Exchange (FX)The foreign exchange endpoint has no metadata, thus only available as json format and pandas (using the 'csv' format will raise an Error)fromalpha_vantage.foreignexchangeimportForeignExchangefrompprintimportpprintcc=ForeignExchange(key='YOUR_API_KEY')# There is no metadata in this calldata,_=cc.get_currency_exchange_rate(from_currency='BTC',to_currency='USD')pprint(data)Giving us as output:{
'1. From_Currency Code': 'BTC',
'2. From_Currency Name': 'Bitcoin',
'3. To_Currency Code': 'USD',
'4. To_Currency Name': 'United States Dollar',
'5. Exchange Rate': '5566.80500105',
'6. Last Refreshed': '2017-10-15 15:13:08',
'7. Time Zone': 'UTC'
}Asyncio supportFrom version 2.2.0 on, asyncio support will now be available. This is only for python versions 3.5+. If you do not have 3.5+, the code will break.The syntax is simple, just mark your methods with theasynckeyword, and use theawaitkeyword.Here is an example of a for loop for getting multiple symbols asyncronously. This greatly improving the performance of a program with multiple API calls.importasynciofromalpha_vantage.async_support.timeseriesimportTimeSeriessymbols=['AAPL','GOOG','TSLA','MSFT']asyncdefget_data(symbol):ts=TimeSeries(key='YOUR_KEY_HERE')data,_=awaitts.get_quote_endpoint(symbol)awaitts.close()returndataloop=asyncio.get_event_loop()tasks=[get_data(symbol)forsymbolinsymbols]group1=asyncio.gather(*tasks)results=loop.run_until_complete(group1)loop.close()print(results)We have written a much more in depth article to explain asyncio for those who have never used it but want to learn about asyncio, concurrency, and multi-threading. Check it out here:Which Should You Use: Asynchronous Programming or Multi-Threading?ExamplesI have added a repository with examples in a python notebook to better see the
usage of the library:https://github.com/RomelTorres/av_exampleTestsIn order to run the tests you have to first export your API key so that the test can use it to run, also the tests require pandas, mock and nose.exportAPI_KEY=YOUR_API_KEYcdalpha_vantage
nosetestsDocumentationThe code documentation can be found athttps://alpha-vantage.readthedocs.io/en/latest/ContributingContributing is always welcome. Just contact us on how best you can contribute, add an issue, or make a PR.TODOs:The integration tests are not being run at the moment within travis, gotta fix them to run.Add test for csv calls as well.Add tests for incompatible parameter raise errors.Github actions & other items in the issues page.Contact:You can reach/follow the Alpha Vantage team on any of the following platforms:SlackTwitter: @alpha_vantageMedium-PatrickMedium-AlphaVantageEmail:[email protected] events:https://alphavhack.devpost.com/Star if you like it.If you like or use this project, consider showing your support by starring it.:venezuela:-:de:
|
alphaVantage-api
|
An Opinionated AlphaVantage API Wrapper in Python 3.9 and compatible with Pandas TA
|
alphavantage-api-cesar
|
alphav_clientA simple API client for the AlphaVantage API:https://www.alphavantage.co/documentation/
|
alphavantage-api-client
|
Alpha Vantage API ClientOur MissionCreate a simple python wrapper aroundalpha vantage api. Normalize responses so you have consistency across end points. Provide direct access to each end point so customers who already use the API can have the flexibility. Make it easy to debug, so users can track down issues quickly.You can find alpha vantage here:https://www.alphavantage.co/See the alpha vantage api documentation:https://www.alphavantage.co/documentation/Get your free api key here:https://www.alphavantage.co/support/#api-keyOverviewHow to InstallSpecify API KeyObtain Stock PriceObtain Accouting / Financial StatementsDebugging / LoggingRetry / Cache(optimize your free account!)Our WikiCalculate free cash flow and free cash flow per shareGet Financial Statements and Company DetailsHow to Installpip install alphavantage_api_clientSpecifying API KeyThere are a few ways you include your API Key:1. Within each requestfrom alphavantage_api_client import AlphavantageClient
client = AlphavantageClient()
event = {
"symbol": "ibm",
"interval": "5min",
"apikey" : "[your key here]"
}
global_quote = client.get_global_quote(event)
assert global_quote.success, "Success field is missing or False"
assert not global_quote.limit_reached, "Limit reached is true but not hitting API"
assert global_quote.symbol == event["symbol"], "Symbol from results don't match event"
assert "meta_data" not in global_quote, "Metadata should not be present since it's not in the api"
assert len(global_quote.data) > 0, "Data field is zero or not present"
print(f"Response data {global_quote.json()}")2. Within the Clientfrom alphavantage_api_client import AlphavantageClient
client = AlphavantageClient().with_api_key("[your api key here]")
event = {
"symbol": "ibm",
"interval": "5min"
}
global_quote = client.get_global_quote(event)
assert global_quote.success, "Success field is missing or False"
assert not global_quote.limit_reached, "Limit reached is true but not hitting API"
assert global_quote.symbol == event["symbol"], "Symbol from results don't match event"
assert "meta_data" not in global_quote, "Metadata should not be present since it's not in the api"
assert len(global_quote.data) > 0, "Data field is zero or not present"
print(f"Response data {global_quote.json()}")3. Within a system environment variableOn mac/linux based machines run the following command BUT use your own API KEYexport ALPHAVANTAGE_API_KEY=[your key here]Now try the belowfrom alphavantage_api_client import AlphavantageClient
client = AlphavantageClient()
event = {
"symbol": "ibm",
"interval": "5min"
}
global_quote = client.get_global_quote(event)
assert global_quote.success, "Success field is missing or False"
assert not global_quote.limit_reached, "Limit reached is true but not hitting API"
assert global_quote.symbol == event["symbol"], "Symbol from results don't match event"
assert "meta_data" not in global_quote, "Metadata should not be present since it's not in the api"
assert len(global_quote.data) > 0, "Data field is zero or not present"
print(f"Response data {global_quote.json()}")4. Within an ini fileOn mac/linux based machines run the following command BUT use your own API KEYecho -e "[access]\napi_key=[your key here]" > ~/.alphavantageNow try the belowfrom alphavantage_api_client import AlphavantageClient
client = AlphavantageClient()
event = {
"symbol": "ibm",
"interval": "5min"
}
global_quote = client.get_global_quote(event)
assert global_quote.success, "Success field is missing or False"
assert not global_quote.limit_reached, "Limit reached is true but not hitting API"
assert global_quote.symbol == event["symbol"], "Symbol from results don't match event"
assert "meta_data" not in global_quote, "Metadata should not be present since it's not in the api"
assert len(global_quote.data) > 0, "Data field is zero or not present"
print(f"Response data {global_quote.json()}")Obtain Stock Pricefrom alphavantage_api_client import AlphavantageClient, GlobalQuote
def sample_get_stock_price():
client = AlphavantageClient()
event = {
"symbol": "TSLA"
}
global_quote = client.get_global_quote(event)
if not global_quote.success:
raise ValueError(f"{global_quote.error_message}")
print(global_quote.json()) # convenience method that will convert to json
print(f"stock price: ${global_quote.get_price()}") # convenience method to get stock price
print(f"trade volume: {global_quote.get_volume()}") # convenience method to get volume
print(f"low price: ${global_quote.get_low_price()}") # convenience method to get low price for the day
if __name__ == "__main__":
sample_get_stock_price()returns the following output{"success": true, "limit_reached": false, "status_code": 200, "error_message": null, "csv": null, "symbol": "TSLA", "data": {"01. symbol": "TSLA", "02. open": "259.2900", "03. high": "262.4500", "04. low": "252.8000", "05. price": "256.6000", "06. volume": "177460803", "07. latest trading day": "2023-06-23", "08. previous close": "264.6100", "09. change": "-8.0100", "10. change percent": "-3.0271%"}}
stock price: $256.6000
trade volume: 177460803
low price: $252.8000Obtain Accounting Reports / Financial StatementsThere are 4 different accounting reports:Cash Flow- A cash flow statement is a financial statement that provides information about the cash inflows and outflows of a company during a specific period of time. It helps investors understand how a company generates and uses cash.Balance Sheet- a financial statement that provides a snapshot of a company's financial position at a specific point in time. It shows the company's assets, liabilities, and shareholders' equity.Income Statement- also known as a profit and loss statement or P&L statement, is a financial statement that provides an overview of a company's revenues, expenses, and net income or loss over a specific period of time. It is one of the key financial statements used by investors to assess a company's profitability and financial performance.Earnings Statements- An earnings statement, also known as an earnings report or earnings statement, is a financial statement that provides an overview of a company's revenue, expenses, and profit or loss for a specific period of time. It is commonly used by investors to evaluate a company's financial performance.from alphavantage_api_client import AlphavantageClient, GlobalQuote, AccountingReport
def sample_accounting_reports():
client = AlphavantageClient()
earnings = client.get_earnings("TSLA")
cash_flow = client.get_cash_flow("TSLA")
balance_sheet = client.get_balance_sheet("TSLA")
income_statement = client.get_income_statement("TSLA")
reports = [earnings,cash_flow, balance_sheet, income_statement]
# show that each report has the same type and how to access the annual and quarterly reports
for accounting_report in reports:
if not accounting_report.success:
raise ValueError(f"{accounting_report.error_message}")
print(accounting_report.json())
print(accounting_report.quarterlyReports) # array of all quarterly report
print(accounting_report.annualReports) # array of all annual reports
print(accounting_report.get_most_recent_annual_report()) # get the most recent annual report
print(accounting_report.get_most_recent_quarterly_report()) # get the most recent quarterly report;
if __name__ == "__main__":
sample_accounting_reports()Debugging / LoggingWe use the built inimport logginglibrary in python. Obtaining more information from the client behavior
is as simple as adjusting your log levels.logging.INFO- This will get you json log statements (in case you put these into splunk or cloudwatch)
that show which method is doing the work, the action, and the value or data produced (where applicable).Example log showing where it found your API key{
"method": "__init__",
"action": "/home/[your user name]/.alphavantage config file found"
}Example log during client.global_quote(...) call. The data property is the raw response from alpha vantage api:{
"method": "get_data_from_alpha_vantage",
"action": "response_from_alphavantage",
"status_code": 200,
"data": "{\n \"Global Quote\": {\n \"01. symbol\": \"TSLA\",\n \"02. open\": \"712.4050\",\n \"03. high\": \"738.2000\",\n \"04. low\": \"708.2600\",\n \"05. price\": \"737.1200\",\n \"06. volume\": \"31923565\",\n \"07. latest trading day\": \"2022-06-24\",\n \"08. previous close\": \"705.2100\",\n \"09. change\": \"31.9100\",\n \"10. change percent\": \"4.5249%\"\n }\n}"
}Example log after converting response text into dictionary before returning to client:{
"method": "get_data_from_alpha_vantage",
"action": "return_value",
"data": {
"success": true,
"limit_reached": false,
"status_code": 200,
"Global Quote": {
"01. symbol": "TSLA",
"02. open": "712.4050",
"03. high": "738.2000",
"04. low": "708.2600",
"05. price": "737.1200",
"06. volume": "31923565",
"07. latest trading day": "2022-06-24",
"08. previous close": "705.2100",
"09. change": "31.9100",
"10. change percent": "4.5249%"
},
"symbol": "tsla"
}
}
logging.DEBUG- This will get you all of the log statements from #1 and from the dependent libraries.Example:INFO:root:{"method": "__init__", "action": "/home/[your username]/.alphavantage config file found"}
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): www.alphavantage.co:443
DEBUG:urllib3.connectionpool:https://www.alphavantage.co:443 "GET /query?symbol=tsla&function=GLOBAL_QUOTE&apikey=YRV1XL63GDIFS42A HTTP/1.1" 200 None
INFO:root:{"method": "get_data_from_alpha_vantage", "action": "response_from_alphavantage", "status_code": 200, "data": "{\n \"Global Quote\": {\n \"01. symbol\": \"TSLA\",\n \"02. open\": \"712.4050\",\n \"03. high\": \"738.2000\",\n \"04. low\": \"708.2600\",\n \"05. price\": \"737.1200\",\n \"06. volume\": \"31923565\",\n \"07. latest trading day\": \"2022-06-24\",\n \"08. previous close\": \"705.2100\",\n \"09. change\": \"31.9100\",\n \"10. change percent\": \"4.5249%\"\n }\n}"}
INFO:root:{"method": "get_data_from_alpha_vantage", "action": "return_value", "data": {"success": true, "limit_reached": false, "status_code": 200, "Global Quote": {"01. symbol": "TSLA", "02. open": "712.4050", "03. high": "738.2000", "04. low": "708.2600", "05. price": "737.1200", "06. volume": "31923565", "07. latest trading day": "2022-06-24", "08. previous close": "705.2100", "09. change": "31.9100", "10. change percent": "4.5249%"}, "symbol": "tsla"}}Retry and CacheA free account only allows so many calls per min. You can configure the client to use a simple cache and retry
if it detects your limit has been reached. This way you can get the most out of your free tier :-)from alphavantage_api_client import AlphavantageClient, GlobalQuote
def sample_retry_when_limit_reached():
client = AlphavantageClient().use_simple_cache().should_retry_once()
symbols = ["TSLA","F","C","WFC","ZIM","PXD","PXD","POOL","INTC","INTU"] # more than 5 calls, which would exceed the free-tier limit without cache/retry
for symbol in symbols:
event = {
"symbol": symbol
}
global_quote = client.get_global_quote(event)
if not global_quote.success:
raise ValueError(f"{global_quote.error_message}")
if global_quote.limit_reached:
raise ValueError(f"{global_quote.error_message}")
print(f"symbol: {global_quote.symbol}, Price: {global_quote.get_price()}, success {global_quote.success}")
client.clear_cache() # when you are done making calls, clear cache
if __name__ == "__main__":
sample_retry_when_limit_reached()Produces outputsymbol: TSLA, Price: 256.6000, success True
symbol: F, Price: 14.0200, success True
symbol: C, Price: 46.0200, success True
symbol: WFC, Price: 40.6100, success True
symbol: ZIM, Price: 12.1800, success True
symbol: PXD, Price: 198.6600, success True
symbol: PXD, Price: 198.6600, success True
symbol: POOL, Price: 352.3400, success True
symbol: INTC, Price: 33.0000, success True
symbol: INTU, Price: 452.6900, success True
Process finished with exit code 0More!Check out ourwikifor more info!
|
alpha-vantage-atarax
|
alpha_vantagePython module to get stock data/cryptocurrencies from the Alpha Vantage APIAlpha Vantage delivers a free API for real time financial data and most used finance indicators in a simple json or pandas format. This module implements a python interface to the free API provided byAlpha Vantage. It requires a free API key, that can be requested fromhttp://www.alphavantage.co/support/#api-key. You can have a look at all the API calls available in theirAPI documentation.For code-less access to the APIs, you may also consider the officialGoogle Sheet Add-onor theMicrosoft Excel Add-onby Alpha Vantage. Check outthisguide for some common tips on working with financial market data.NewsFrom version 2.3.0 onwards, fundamentals data and extended intraday is supported.From version 2.2.0 onwards, asyncio support now provided. See below for more information.From version 2.1.3 onwards,rapidAPIkey integration is now available.From version 2.1.0 onwards, error logging of bad API calls has been made more apparent.From version 1.9.0 onwards, the urllib was substituted by pythons request library that is thread safe. If you have any error, post an issue.From version 1.8.0 onwards, the column names of the data frames have changed, they are now exactly what alphavantage gives back in their json response. You can see the examples in better detail in the following git repo:https://github.com/RomelTorres/av_exampleFrom version 1.6.0, pandas was taken out as a hard dependency.InstallTo install the package use:pipinstallalpha_vantageOr install with pandas support, simply install pandas too:pipinstallalpha_vantagepandasIf you want to install from source, then use:gitclonehttps://github.com/RomelTorres/alpha_vantage.git
pipinstall-ealpha_vantageUsageTo get data from the API, simply import the library and call the object with your API key. Next, get ready for some awesome, free, realtime finance data. Your API key may also be stored in the environment variableALPHAVANTAGE_API_KEY.fromalpha_vantage.timeseriesimportTimeSeriests=TimeSeries(key='YOUR_API_KEY')# Get json object with the intraday data and another with the call's metadatadata,meta_data=ts.get_intraday('GOOGL')You may also get a key fromrapidAPI. Use your rapidAPI key for the key variable, and setrapidapi=Truets=TimeSeries(key='YOUR_API_KEY',rapidapi=True)Internally there is a retries counter, that can be used to minimize connection errors (in case that the API is not able to respond in time), the default is set to
5 but can be increased or decreased whenever needed.ts=TimeSeries(key='YOUR_API_KEY',retries='YOUR_RETRIES')The library supports giving its results as json dictionaries (default), pandas dataframe (if installed) or csv, simply pass the parameter output_format='pandas' to change the format of the output for all the API calls in the given class. Please note that some API calls do not support the csv format (namelyForeignExchange, SectorPerformances and TechIndicators) because the API endpoint does not support the format on their calls either.ts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')The pandas data frame given by the call, can have either a date string indexing or an integer indexing (by default the indexing is 'date'),
depending on your needs, you can use both.# For the default date string index behaviorts=TimeSeries(key='YOUR_API_KEY',output_format='pandas',indexing_type='date')# For the integer index behaviorts=TimeSeries(key='YOUR_API_KEY',output_format='pandas',indexing_type='integer')Data frame structureThe data frame structure is given by the call on alpha vantage rest API. The column names of the data frames
are the ones given by their data structure. For example, the following call:fromalpha_vantage.timeseriesimportTimeSeriesfrompprintimportpprintts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ts.get_intraday(symbol='MSFT',interval='1min',outputsize='full')pprint(data.head(2))Would result on:The headers from the data are specified from Alpha Vantage (in previous versions, the numbers in the headers were removed, but long term is better to have the data exactly as Alpha Vantage produces it.)PlottingTime SeriesUsing pandas support we can plot the intra-minute value for 'MSFT' stock quite easily:fromalpha_vantage.timeseriesimportTimeSeriesimportmatplotlib.pyplotaspltts=TimeSeries(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ts.get_intraday(symbol='MSFT',interval='1min',outputsize='full')data['4. close'].plot()plt.title('Intraday Times Series for the MSFT stock (1 min)')plt.show()Giving us as output:Technical indicatorsThe same way we can get pandas to plot technical indicators like Bollinger Bands®fromalpha_vantage.techindicatorsimportTechIndicatorsimportmatplotlib.pyplotaspltti=TechIndicators(key='YOUR_API_KEY',output_format='pandas')data,meta_data=ti.get_bbands(symbol='MSFT',interval='60min',time_period=60)data.plot()plt.title('BBbands indicator for MSFT stock (60 min)')plt.show()Giving us as output:Sector PerformanceWe can also plot sector performance just as easy:fromalpha_vantage.sectorperformanceimportSectorPerformancesimportmatplotlib.pyplotaspltsp=SectorPerformances(key='YOUR_API_KEY',output_format='pandas')data,meta_data=sp.get_sector()data['Rank A: Real-Time Performance'].plot(kind='bar')plt.title('Real Time Performance (%) per Sector')plt.tight_layout()plt.grid()plt.show()Giving us as output:Crypto currencies.We can also plot crypto currencies prices like BTC:fromalpha_vantage.cryptocurrenciesimportCryptoCurrenciesimportmatplotlib.pyplotaspltcc=CryptoCurrencies(key='YOUR_API_KEY',output_format='pandas')data,meta_data=cc.get_digital_currency_daily(symbol='BTC',market='CNY')data['4b. close (USD)'].plot()plt.tight_layout()plt.title('Daily close value for bitcoin (BTC)')plt.grid()plt.show()Giving us as output:Foreign Exchange (FX)The foreign exchange endpoint has no metadata, thus only available as json format and pandas (using the 'csv' format will raise an Error)fromalpha_vantage.foreignexchangeimportForeignExchangefrompprintimportpprintcc=ForeignExchange(key='YOUR_API_KEY')# There is no metadata in this calldata,_=cc.get_currency_exchange_rate(from_currency='BTC',to_currency='USD')pprint(data)Giving us as output:{
'1. From_Currency Code': 'BTC',
'2. From_Currency Name': 'Bitcoin',
'3. To_Currency Code': 'USD',
'4. To_Currency Name': 'United States Dollar',
'5. Exchange Rate': '5566.80500105',
'6. Last Refreshed': '2017-10-15 15:13:08',
'7. Time Zone': 'UTC'
}Asyncio supportFrom version 2.2.0 on, asyncio support will now be available. This is only for python versions 3.5+. If you do not have 3.5+, the code will break.The syntax is simple, just mark your methods with theasynckeyword, and use theawaitkeyword.Here is an example of a for loop for getting multiple symbols asyncronously. This greatly improving the performance of a program with multiple API calls.importasynciofromalpha_vantage.async_support.timeseriesimportTimeSeriessymbols=['AAPL','GOOG','TSLA','MSFT']asyncdefget_data(symbol):ts=TimeSeries(key='YOUR_KEY_HERE')data,_=awaitts.get_quote_endpoint(symbol)awaitts.close()returndataloop=asyncio.get_event_loop()tasks=[get_data(symbol)forsymbolinsymbols]group1=asyncio.gather(*tasks)results=loop.run_until_complete(group1)loop.close()print(results)We have written a much more in depth article to explain asyncio for those who have never used it but want to learn about asyncio, concurrency, and multi-threading. Check it out here:Which Should You Use: Asynchronous Programming or Multi-Threading?ExamplesI have added a repository with examples in a python notebook to better see the
usage of the library:https://github.com/RomelTorres/av_exampleTestsIn order to run the tests you have to first export your API key so that the test can use it to run, also the tests require pandas, mock and nose.exportAPI_KEY=YOUR_API_KEYcdalpha_vantage
nosetestsDocumentationThe code documentation can be found athttps://alpha-vantage.readthedocs.io/en/latest/ContributingContributing is always welcome. Just contact us on how best you can contribute, add an issue, or make a PR.TODOs:The integration tests are not being run at the moment within travis, gotta fix them to run.Add test for csv calls as well.Add tests for incompatible parameter raise errors.Github actions & other items in the issues page.Contact:You can reach/follow the Alpha Vantage team on any of the following platforms:SlackTwitter: @alpha_vantageMedium-PatrickMedium-AlphaVantageEmail:[email protected] events:https://alphavhack.devpost.com/Star if you like it.If you like or use this project, consider showing your support by starring it.:venezuela:-:de:
|
alpha-vantage-cli
|
Command line interface for Alpha Vantage APIs (WIP)Command line interface to get stock data from the Alpha Vantage APIAlpha Vantage offers an API for financial data and other popular finance indicators.
This library provides a series of commands that you can use to query the API from your terminal in an easy way.Getting startedGet an alpha vantage free api key. Visithttp://www.alphavantage.co/support/#api-keyInstallalpha-vantage-cli:pipinstallalpha-vantage-cliSet your api key:av set-keyTry it out:avstockquoteibmUsage examplesav--helpOutput:Usage: av [OPTIONS] COMMAND [ARGS]...
Unofficial Alpha Vantage command line interface.
Get stocks data from the command line.
Options:
--version Show the version and exit.
--help Show this message and exit.
Commands:
crypto Manages the Cryptocurrencies APIs (Not yet implemented)
data Manages the Fundamental Data APIs (Not yet implemented)
econ Manages the Economic Indicators APIs (Not yet implemented)
forex Manages the Forex APIs (Not yet implemented)
intel Manages the Alpha Intelligence APIs (Not yet implemented)
set-key Set your API key so that you can send requests to Alpha...
stock Manages the Core Stocks APIs
tech Manages the Technical Indicators APIs (Not yet implemented)Get quote for stockavstockquoteaaplSample output:{'Global Quote': {'01. symbol': 'AAPL', '02. open': '151.2100', '03. high': '151.3500', '04. low': '148.3700', '05. price': '150.7000', '06. volume': '162278841', '07. latest trading day': '2022-09-16', '08. previous close': '152.3700', '09. change': '-1.6700', '10. change percent': '-1.0960%'}}Download monthly data as CSVavstockmonthlyibm--datatype=csv>ibm.csv
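The CSV written by the last command can then be post-processed with ordinary Python tooling. A minimal sketch using pandas (the exact column names in the file depend on the Alpha Vantage endpoint, so treat them as an assumption):

```python
# Hypothetical follow-up to: av stock monthly ibm --datatype=csv > ibm.csv
import pandas as pd

df = pd.read_csv("ibm.csv")  # columns depend on the endpoint's CSV schema
print(df.head())             # quick sanity check of the downloaded data
```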
|
alpha-vantage-downloader
|
No description available on PyPI.
|
alpha-vantage-proxy
|
No description available on PyPI.
|
alpha-vantage-py
|
Alpha Vantage Pythonalpha_vantageis a simple Python wrapper around the Alpha Vantage API:fromalpha_vantageimportClientfromalpha_vantage.functionsimportTimeSeries# setup API client and TimeSeries interface for retrieving IBM stock dataclient=Client("API_TOKEN")ts=TimeSeries(client=client,symbol="IBM")# retrieve historic stock data on a daily leveldaily=ts.daily()# loop over resultsfordayindaily.timeseries:print(f"-{day.timestamp}:{day.high}")# close clientclient.close()Installationalpha_vantage is available on PyPi:$python-mpipinstallalpha-vantage-pyFeaturesEasy integration with your Python projectAlpha Vantage endpoints wrapped in functions
|
alphaver
|
# CG_Accumulator
Accumulates CG of students in IIT Kharagpur
|
alphavideo
|
No description available on PyPI.
|
alpha-video
|
No description available on PyPI.
|
alpha-viewer
|
alpha_viewerClass to view Alphafold modelsalpha_viewer is based on the alphafold2 colab notebook visualization. It automatically chooses the best available prediction. alpha_viewer includes functions to plot PAE (plot_pae) and pLDDT (plot_pLDDT). It also allows for coloring of the py3Dmol view based on pLDDT withshow_confidence. alpha_viewer contains.obs, a pandas data-frame, that can be used for custom annotations based on aa position within the chain(s). You can use a key of.obsto color the py3Dmol view of your protein(s) based on that annotation withshow_annotation. This can also be used to inspect substructures withshow_substructures.alpha-viewer has been tested to work withmonomer,monomer_ptmandmultimermodels.
So far it's sadly not possible to save the py3Dmol view for exporting.If you use alpha_viewer in your publication please cite:Installyou can install this repository from pypi with:pip install alpha-viewerIt's recommended to use alpha_viewer within jupyterlab. Please enable jupyterlab extensions and installjupyter-widgets/jupyterlab-managerandjupyterlab_3dmolA tutorial notebook can be found inhttps://github.com/Intron7/alpha_viewer/tree/main/tutorials
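Based on the functions named in the description, usage looks roughly like the sketch below. The import path and loader call are assumptions (the tutorial notebooks show the real entry point); only the method names come from the text above:

```python
# Hypothetical sketch -- the method names are from the description above, but
# the constructor/loader is an assumption; see the tutorials for the real API.
from alpha_viewer import alpha_viewer  # import path is an assumption

viewer = alpha_viewer("path/to/prediction_dir")  # hypothetical loader
viewer.plot_pae()                  # PAE plot
viewer.plot_pLDDT()                # pLDDT plot
viewer.show_confidence()           # py3Dmol view colored by pLDDT
viewer.obs["region"] = "domain A"  # .obs: pandas DataFrame for aa-position annotations
viewer.show_annotation("region")   # color the view by a custom .obs key
```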
|
alphaviz
|
AlphaVizAlphaVizis a cutting-edge browser-based interactive visualization tool allowing to visualize the processed mass spectrometry data acquired withBrukerinstrument. TheAlphaVizdashboard facilitates easy quality control of your analyzed samples and a clear inspection of the raw data of significant peptides/proteins.To enable all hyperlinks in this document, please view it atGitHub.AboutLicenseInstallationOne-click GUIPip installerDeveloper installerUsageGUICLIPython and jupyter notebooksTroubleshootingCitationsHow to contributeChangelogAboutSoftware tools such asMaxQuantorDIA-NNidentify and quantify high amounts of proteins. After downstream processing inPerseus,MSstatsor theClinical Knowledge Graph, differentially expressed proteins become possible candidates for biomarker discovery.AlphaVizis an automated visualization pipeline to link these identifications with the original raw data and easily assess their individual quality or the overall quality whole samples.An open-source Python package of the AlphaPept ecosystem from theMann Labs at the Max Planck Institute of Biochemistry. This project is built purely in Python using a new cutting-edgeHoloviz ecosystemand Plotly library to create interactive dashboards and plots.LicenseAlphaViz was developed by theMann Labs at the Max Planck Institute of Biochemistryand is freely available with anApache License. External Python packages (available in therequirementsfolder) have their own licenses, which can be consulted on their respective websites.InstallationAlphaViz can be installed and used on all major operating systems (Windows, macOS, Linux).
There are three different types of installation possible:One-click GUI installer:Choose this installation if you only want the GUI and/or keep things as simple as possible.Pip installer:Choose this installation if you want to use AlphaViz as a Python package in an existing Python 3.8 environment (e.g. a Jupyter notebook). If needed, the GUI and CLI can be installed with pip as well.Developer installer:Choose this installation if you are familiar with CLI tools,condaand Python. This installation allows access to all available features of AlphaViz and even allows to modify its source code directly. Generally, the developer version of AlphaViz outperforms the precompiled versions which makes this the installation of choice for high-throughput experiments.One-click GUIThe GUI of AlphaViz is a completely stand-alone tool that requires no knowledge of Python or CLI tools. Click on one of the links below to download the latest release for:WindowsmacOSLinuxOlder releases remain available on therelease page, but no backwards compatibility is guaranteed.IMPORTANT: Please refer to theGUI manualfor detailed instructions on the installation, troubleshooting and usage of the stand-alone AlphaViz GUI.PipAlphaViz can be installed in an existing Python 3.8 environment with a singlebashcommand.Thisbashcommand can also be run directly from within a Jupyter notebook by prepending it with a!:pipinstallalphavizInstalling AlphaViz like this avoids conflicts when integrating it in other tools, as this does not enforce strict versioning of dependancies. However, if new versions of dependancies are released, they are not guaranteed to be fully compatible with AlphaViz. While this should only occur in rare cases where dependencies are not backwards compatible, you can always force AlphaViz to use dependancy versions which are known to be compatible with:pipinstall"alphaviz[gui-stable]"NOTE: You might need to runpip install pip==21.0before installing alphaviz like this. Also note the double quotes".For those who are really adventurous, it is also possible to directly install any branch (e.g.@development) with any extras (e.g.#egg=alphaviz[stable,development-stable]) from GitHub with e.g.pipinstall"git+https://github.com/MannLabs/alphaviz.git@development#egg=alphaviz[stable,development-stable]"DeveloperAlphaViz can also be installed in editable (i.e. developer) mode with a fewbashcommands. This allows to fully customize the software and even modify the source code to your specific needs. When an editable Python package is installed, its source code is stored in a transparent location of your choice. While optional, it is advised to first (create and) navigate to e.g. a general software folder:mkdir~/folder/where/to/install/softwarecd~/folder/where/to/install/softwareThe following commands assume you do not perform any additionalcdcommands anymore.Next, download the AlphaViz repository from GitHub either directly or with agitcommand. This creates a new AlphaViz subfolder in your current directory.gitclonehttps://github.com/MannLabs/alphaviz.gitFor any Python package, it is highly recommended to use a separateconda virtual environment, as otherwisedependancy conflicts can occur with already existing packages.condacreate--namealphavizpython=3.8-y
condaactivatealphavizFinally, AlphaViz and all itsdependanciesneed to be installed. To take advantage of all features and allow development (with the-eflag), this is best done by also installing thedevelopment dependenciesand/or thegui dependenciesinstead of only thecore dependencies:pipinstall-e"./alphaviz[gui,development]"By using the editable flag-e, all modifications to theAlphaViz source code folderare directly reflected when running AlphaViz. Note that the AlphaViz folder cannot be moved and/or renamed if an editable version is installed.UsageThere are two ways to use AlphaViz:GUIPythonNOTE: The first time you use a fresh installation of AlphaViz, it is often quite slow because some functions might still need compilation on your local operating system and architecture. Subsequent use should be a lot faster.GUIIf the GUI was not installed through a one-click GUI installer, it can be activate with the followingbashcommand:alphavizguiNote that this needs to be prepended with a!when you want to run this from within a Jupyter notebook. When the command is run directly from the command-line, make sure you use the right environment (activate it with e.g.conda activate alphavizor set an alias to the binary executable (can be obtained withwhere alphavizorwhich alphaviz)).Python and Jupyter notebooksAlphaViz can be imported as a Python package into any Python script or notebook with the commandimport alphaviz.An ‘nbs’ folder in the GitHub repository contains several Jupyter Notebooks as tutorials regarding using AlphaViz as a Python package for all available pipelines: for DDA data analyzed with MaxQuant, for DIA data analyzed with DIA-NN, and for the targeted mode.TroubleshootingIn case of issues, check out the following:Issues: Try a few different search terms to find out if a similar problem has been encountered beforeDiscussions: Check if your problem or feature requests has been discussed before.CitationsPre-print published online: bioRxiv (2022), doi: 10.1101/2022.07.12.499676v1.How to contributeIf you like this software, you can give us astarto boost our visibility! All direct contributions are also welcome. Feel free to post a newissueor clone the repository and create apull requestwith a new branch. For an even more interactive participation, check out thediscussionsand thethe Contributors License Agreement.ChangelogSee theHISTORY.mdfor a full overview of the changes made in each version.
|
alphav-pkg
|
alphav_clientA simple API client for the AlphaVantage API:https://www.alphavantage.co/documentation/
|
alphaware
|
No description available on PyPI.
|
alphawave
|
AlphaWaveminor bug fixes, OS Client now correctly handles host, port, temperature, top_p, max_tokensNew: SearchCommand will search the web. You will need a google api key.
See tests/SearchCommandAgentTest.pyAlphaWave is a very opinionated client for interfacing with Large Language Models (LLM). It usesPromptrixfor prompt management and has the following features:Supports calling OpenAI and Azure OpenAI hosted models out of the box but a simple plugin model lets you extend AlphaWave to support any LLM.Supports OS LLMs through an OSClient. Currently assumes a server on port 5004, see details below.Promptrix integration means that all prompts are universal and work with either Chat Completion or Text Completion API's.Automatic history management. AlphaWave manages a prompts conversation history and all you have todo is tell it where to store it. It uses an in-memory store by default but a simple plugin interface (provided by Promptrix) lets you store short term memory, like conversation history, anywhere.State-of-the-art response repair logic. AlphaWave lets you provide an optional "response validator" plugin which it will use to validate every response returned from an LLM. Should a response fail validation, AlphaWave will automatically try to get the model to correct its mistake. More below...Automatic Response RepairA key goal of AlphaWave is to be the most reliable mechanisms for talking to an LLM on the planet. If you lookup the wikipedia definition for Alpha Waves you see that it's believed that they may be used to help predict mistakes in the human brain. One of the key roles of the AlphaWave library is to help automatically correct for mistakes made by an LLM, leading to more reliable output. It can correct for everything from hallucinations to just malformed output. It does this by using a series of techniques.First it uses validation to programmatically verify the LLM's output. This would be the equivalent of a "guard" in other libraries like LangChain. When a validation fails, AlphaWave immediately forks the conversation to isolate the mistake. This is critical because the last thing you want to do is promote a mistake/hallucination to the conversation history as the LLM will just double down on the mistake. They are primarily pattern matchers.Once AlphaWave has isolated the mistake, it will attempt to get the model to repair the mistake itself. It uses a process called "feedback" which simply tells the model the mistake it made and asks it to correct it. For GPT-4 this works more often then not in 1 turn. For the other models it sometimes works but it depends on the type of mistake. AlphaWave will even ask the model to slow down and think step-by-step on the last try, to give it every shot at fixing itself.If the LLM can correct its mistake, AlphaWave will delete the conversation fork, write the corrected response to the conversation history, and move forward as if nothing ever happened. For GPT-4, you should be able to make several hundred sequential model calls before running into a sequence that can't be repaired.In the event that the model isn't able to repair itself, a result with a status ofinvalid_responsewill be returned and the app can either abort the task or give it one more go. For well defined prompts and tasks I'd recommend given it one more go. The reason for that is that, if you've made it hundreds of model calls without it making a mistake, the odds of it making a mistake if you simply try again are low. You just hit the stochastic nature of talking to LLMs.So why even use "feedback" at all if retrying can work? It doesn't always work. Some mistakes, especially hallucinations, the LLM will make over and over again. 
They need to be confronted with their mistake and then they will happily correct it. You need both appproaches, feedback & retry, to build a system that's as reliable as possible.InstallationTo get started, you'll want to install the latest versions of both AlphaWave and Promptrix. Pip should pull both if you just install alphawavepipinstallalphawaveBasic UsageYou'll need to import a couple of components from "alphawave", along with the various prompt parts you want to use from "promptrix". Here's a super simple wave that creates a basic ChatGPT like bot:importosfrompathlibimportPathimportreadlinefromalphawave.AlphaWaveimportAlphaWavefromalphawave.alphawaveTypesimportPromptCompletionOptionsfromalphawave.OpenAIClientimportOpenAIClientimportpromptrixfrompromptrix.PromptimportPromptfrompromptrix.SystemMessageimportSystemMessagefrompromptrix.ConversationHistoryimportConversationHistoryfrompromptrix.UserMessageimportUserMessageimportasyncio# Create an OpenAI or AzureOpenAI clientclient=OpenAIClient(apiKey=os.getenv("OPENAI_API_KEY"))# Create a wavewave=AlphaWave(client=client,prompt=Prompt([SystemMessage('You are an AI assistant that is friendly, kind, and helpful',50),ConversationHistory('history',1.0),UserMessage('{{$input}}',450)]),prompt_options=PromptCompletionOptions(completion_type='chat',model='gpt-3.5-turbo',temperature=0.9,max_input_tokens=2000,max_tokens=1000))# Define main chat loopasyncdefchat(bot_message=None):# Show the bots messageifbot_message:print(f"\033[32m{bot_message}\033[0m")# Prompt the user for inputuser_input=input('User: ')# Check if the user wants to exit the chatifuser_input.lower()=='exit':# Exit the processexit()else:# Route users message to waveresult=awaitwave.completePrompt(user_input)ifresult['status']=='success':print(result)awaitchat(result['message']['content'])else:ifresult['message']:print(f"{result['status']}:{result['message']}")else:print(f"A result status of '{result['status']}' was returned.")# Exit the processexit()# Start chat sessionasyncio.run(chat("Hello, how can I help you?"))One of the key features of Promptrix is its ability to proportionally layout prompts, so this prompt has an overall budget of 2000 input tokens. It will give theSystemMessageup to 50 tokens, theUserMessageup to 450 tokens, and then theConversationHistorygets 100% of the remaining tokens.Once the prompt is formed, we just need to callcompletePrompt()on the wave to process the users inputThe parameter to wave.completePrompt is optional and the wave can also take input directly from memory, but you don't have to pass prompts input. You can see in the example that if the prompt doesn't reference the input via a{{$input}}template variable it won't use it anyway.Loggingif you want to see the traffic with the server, the Client constructors (OSClient and OpenAIClient) take a logRequests parameter - False by default, set it to True to see prompts and responses on the console.OSClientthe 'default' way to use Alphawave-py with OpenAI is to use the OpenAI client as in line 49 of the example above.
If you want to use your own LLM, you can use instead:client=OSClient(apiKey=None)The current OSClient assumes a server exists on localhost port 5004, using my own unique protocol.
Not very useful, I know.
Short term plans include:
- allow specification of the host and port in the client constructor
- allow FastChat-like specification of the conversation template (user/assistant/etc). Support for this is already in the OSClient, just need to bring it out to the constructor
- implementation of a FastChat-compatible API. Again, this was running in a dev version of the code, just need to re-insert it now that the basic port is stable.

OSClient protocolOSClient sends JSON to the server:

server_message = {'prompt': prompt, 'temp': temp, 'top_p': top_p, 'max_tokens': max_tokens}
smj = json.dumps(server_message)
client_socket.sendall(smj.encode('utf-8'))
client_socket.sendall(b'\x00\xff')

where prompt is a string containing the messages:

{"role": "system", "content": "You are an AI assistant that is friendly, kind, and helpful"}
{"role": "user", "content": "Hi. How are you today?"}

and the b'\x00\xff' sentinel marks the end of the send. OSClient expects to receive from the server, streaming or all at once, the text followed by b'\x00\xff'. That's it: no return code, no JSON wrapper, no {role: assistant, content: str}, just the response, with the b'\x00\xff' sentinel signaling the end of the message.
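Given that description, a minimal client for this protocol can be sketched with plain Python sockets. This is an illustrative sketch, not the library's actual OSClient code; the helper name and default parameters are assumptions:

```python
# Minimal client sketch for the protocol above: send JSON + sentinel,
# then read raw text back until the sentinel arrives.
import json
import socket

SENTINEL = b"\x00\xff"  # end-of-message marker described above

def query_os_server(prompt: str, host: str = "localhost", port: int = 5004,
                    temp: float = 0.7, top_p: float = 1.0, max_tokens: int = 256) -> str:
    message = {"prompt": prompt, "temp": temp, "top_p": top_p, "max_tokens": max_tokens}
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(message).encode("utf-8"))
        sock.sendall(SENTINEL)
        buffer = b""
        while not buffer.endswith(SENTINEL):
            chunk = sock.recv(4096)
            if not chunk:  # connection closed before the sentinel
                break
            buffer += chunk
    return buffer.removesuffix(SENTINEL).decode("utf-8")
```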
|
alphawaves
|
Alpha Waves DatasetRepository with basic scripts for using the Alpha Waves Dataset developed at GIPSA-lab [1]. The dataset files and their documentation are all available athttps://zenodo.org/record/2348892The code of this repository was developed inPython 3.9 and 3.10using MNE-Python [2, 3] as a tool for the EEG processing.The package can be downloaded with pip:pip install alphawavesAlternatively, you might want to clone the GitHub repository on your local computer, and install the package by running:python setup.py developAll dependencies are listed inrequirements.txtfor your interest.Then, to ensure that your code finds the right scripts, open a Python shell and type:import alphawavesNote that you might want to create avirtual environmentbefore doing all these installations, e.g.:conda create -n eegalpha python=3.9References[1] Cattan et al. "EEG Alpha Waves dataset"DOI[2] Gramfort et al. "MNE software for processing MEG and EEG data"DOI[3] Gramfort et al. "MEG and EEG data analysis with MNE-Python"DOI
|
alpha-wordle
|
AlphaWordleTaking the fun out of WORDLE! We use AI to solve wordle!
|
alphax
|
alpha rough estimator
|
alphax-anishpyan
|
alpha rough estimator
|
alphax-robot
|
Python library to control a robot from 'Alpha X Robot'
|
alphaz
|
No description available on PyPI.
|
alphazero
|
A project template for playing with Alpha-Zero.
|
alphazerocode
|
AlphaZeroCodepip install AlphaZeroCodeYou must implement the training code with the Udemy lectures.On python,import AlphaZeroCode
|
alphazerogeneral
|
# alpha_zero_general
|
alpha-zero-general
|
No description available on PyPI.
|
alphazeta.warden
|
Welcome to WARden implementation for Specter ServerThis is a light weight version of the original WARden designed for integration with Specter Server.Transactions will be imported automatically from Specter.This app was built with a couple of goals:Easily track portfolio values in fiat (private requests through Tor)Monitor Wallets and Addresses for activity using your own node and notify user.Track your full node statuswarden (wɔːʳdən )
A warden is responsible for making sure that the laws or regulations are obeyed.InstallationPlease note that the WARden needs to be installed at the same machine running Specter Server.Installation instructions for Specter can be foundhere.Log in to your computer running Specter, open Terminal and type:pip3installalphazeta.wardenThen run the WARden server:python3-mwardenOpen your browser and navigate to:http://localhost:5000/UpgradeFrom the WARden directory, type:pip3installalphazeta.warden--upgradeThis is an Open Source projectWe believe Open Source is the future of development for bitcoin. There is no other way when transparency and privacy are critical.The code is not compiled and it can be easily audited.Sats for FeaturesAs interest for the app grows and if the community contributes, new features will be added like:
. Import of other transactions
. Editing of transactions
. Enhanced statistics - volatility, compare performance, heatmaps, ...
. Specter implementation without MyNode
. Email notifications
. And suggested improvementsBut the app is also open source so anyone can contribute. Anyone looking to contribute / get a bounty is welcome.PrivacyMost portfolio tracking tools ask for personal information and may track your IP and other information. Our experience is that even those who say they don't, may have log files at their systems that do track your IP and could be easily linked to your data.Why NAV is important?NAV is particularly important to anyone #stackingsats since it tracks performance relative to current capital allocated.
For example, a portfolio going from $100 to $200 may seem like it doubled, but the performance really depends on whether any new capital was invested or divested during this period.NAV adjusts for cash inflows and outflows.NAV TrackingNAV tracks performance based on the amount of capital allocated. For example, a portfolio starts at $100.00 on day 0. On day 1, there is a capital inflow of an additional $50.00. Now, if on day 2, the Portfolio value is $200, it's easy to conclude that there's a $50.00 profit. But in terms of % appreciation, there are different ways to calculate performance.
The app calculates a daily NAV (starting at 100 on day zero).
In this example:

| Day | Portfolio Value* | Cash Flow | NAV | Performance |
| --- | --- | --- | --- | --- |
| 0 | $0.00 | + $100.00 | 100 | -- |
| 1 | $110.00 | + $50.00 | 110 | +10.00% (1) |
| 2 | $200.00 | None | 125 | +25.00% (2) |

*Portfolio Market Value at beginning of day
(1) 10% = 110 / 100 - 1
(2) 25% = 200 / (110 + 50) - 1

Tracking NAV is particularly helpful when #stackingsats. It calculates performance based on capital invested at any given time. A portfolio starting at $100 and ending at $200 over a given time frame may, at first sight, seem like +100%, but that depends entirely on the amount of capital invested along that time frame.
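To make the footnote arithmetic concrete, here is a minimal sketch of the cash-flow-adjusted return described above (plain Python for illustration, not the WARden's actual code; treating the day-0 funding as the initial capital base is an assumption):

```python
# Cash-flow-adjusted return: deposits raise the capital base instead of
# counting as performance. Mirrors footnotes (1) and (2) above.
def adjusted_return(value_today: float, value_yesterday: float, inflow: float) -> float:
    return value_today / (value_yesterday + inflow) - 1

print(f"{adjusted_return(110.0, 100.0, 0.0):+.2%}")   # +10.00%, footnote (1)
print(f"{adjusted_return(200.0, 110.0, 50.0):+.2%}")  # +25.00%, footnote (2)
```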
TroubleshootingIf you get a message telling you that pip is not installed:

sudo apt-get -y install python3-pip

If you get a message that git was not found:

sudo apt-get install git

Please note that this is ALPHA software. There is no guarantee that the information and analytics are correct. Also expect no customer support. Issues are encouraged to be raised through GitHub, but they will be answered on a best-efforts basis.
|
alphaz-next
|
The alphaz-next library is a Python library designed to simplify the setup of a REST API using FastAPI & pydantic. It provides a useful toolkit to set up Logger, Config, ...Installing Alphaz-NextTo install alphaz-next, if you already have Python, you can install with:pip install alphaz-nextHow to import alphaz-nextTo access alphaz-next and its functions, import it in your Python code like this (the module name uses an underscore, even though the package installs as alphaz-next):from alphaz_next import DataBase, Logger
from alphaz_next.models.config.config_settings import create_config_settings
|
alphinity
|
alphinityalphinity encodes integers to alphabetize in ascending order.fromalphinityimportencodeencode(100)# -> "ccv"# passesassertsorted(map(encode,range(10**6)))==[*map(encode,range(10**6))]For stable encoding,fromalphinity.v1importencodeNotesAlphabetization doesn't work strictly infinitely, just through very, very high integers i.e.,1e35.Non-negative integers only.
|
alphonse
|
Alphonse Database ManagerWhat is?Alphonse is an interface that makes it easier to manage databases using SQLAlchemy.
It provides a set of methods that make complex database queries more performant and easier to write.
For a project already leveraging SQLAlchemy, it only requires a few lines of code to start using it immediately.Getting startedRequirementsSQLAlchemy version1.4or higher (2.0+for best performance)A SQLAlchemy "Engine" object.This is the connection to the database. It can be created using thesqlalchemy.create_enginemethod.A list of all defined SQLAlchemy models that all inherit from the same instance of asqlalchemy.ext.declarative.declarative_base.Example setup""" Example setup for Alphonse Database Manager """fromalphonseimportDbManagerfromsqlalchemyimportcreate_enginefromsqlalchemy.engineimportEnginefrommy_modelsimportMyModel1,MyModel2,MyModel3# Create the singelton instance of the enginedb_connection_url:str="postgresql://user:password@localhost:port/my_db_name"engine:Engine=create_engine(db_connection_url)# Create a list of all defined models that all inherit from the same# instance of a "declarative_base" from sqlalchemy.ext.declarativemodel_list:list=[MyModel1,MyModel2,MyModel3]# Initialize the db_managerdb_manager:DbManager=DbManager(engine,model_list)Methods and usageThe main interface to making queries is the singletonDbManagerobject created in the setup.
It provides a set of methods that make database queries more performant and easier to write.
This singleton instance acts as an abstraction layer to prevent circular imports and to provide
a single point of access to the database, ensuring that all sessions are managed correctly.create()Thedb_manager.create()method is used to create new rows in the database.
A boolean is returned to determine if the creation was successful.It takes two arguments:table_key: A string representing the model name. This will be the same as the class name of the model.req_payload: A dictionary mapping of the columns and their values representing the row to be created.Example:fromdb.objectsimportdb_manager# Create a new row in the "User" tablecreation_map:dict={"username":"new_user","password":"password123","email":"[email protected]"}creation_was_successful:bool=db_manager.create("User",creation_map)# creation_was_successful = True or False depending on if the creation was successful.map()methodIf your table model has specific requirements for creating a new row, you can define amap()method directly on the model.This method must take in a dictionary mapping of the columns, assign the key value pairs to the
attributes of the model, and return a mapped instance of the model class orNoneif the mapping fails.The returned class instance must be a valid instance of the model with all required fields populated
(excepting nullable fields and auto-incrementing fields like primary key ids).Themap()method will be called automatically when thedb_manger.create()method is used
and does not need to be called directly.
This means that thereq_payloaddictionary passed to
thecreate()method only needs to include the values to be assigned to the model in the user-definedmap()method.Example model:""" User table model. """fromtypingimportOptionalfromsqlalchemyimportColumn,Integer,String,VARCHARfromsqlalchemy.ormimportrelationship# The below imports are theorhetical and are not included in the package.# --------------------------------------------------------------------# Base is the instatiated singleton instance of `declarative_base` orignally imported from# sqlalchemy.ext.declarative. All models in this theorhetical project should inherit from this same instance.fromdb.objects.baseimportBasefromutils.loggerimportloggerclassUser(Base):""" User orm model. """__tablename__="users"id=Column(Integer,primary_key=True,nullable=False)name=Column(String(50),unique=True,nullable=False)status=Column(String(20),nullable=False)website=Column(VARCHAR(255),nullable=True)@classmethoddefmap(cls,req_payload:dict)->Optional["User"]:"""Map the `request_payload` dictionary to the User model.:param req_payload: A dictionary mapping of the columns in the table and their values representing the new row to be created.:return: A mapped instance of the User model or None if the mapping fails."""try:# Create a new instance of the User model.new_user:User=User(# Use the `.get()` method to safely access the dictionary values and return None as a default if the key is not found.name=req_payload.get("name"),status=req_payload.get("status"),partner_key=req_payload.get("partner_key"),# In this example, the id column is an auto-incrementing primary key and the website column is nullable.# Therefore, they not required to be filled in when the row is created. If this method wasn't defined,# the `db_manager.create()` method would fail if the `req_payload` dictionary didn't include a `website` value.)# Return the mapped instance of the User model.returnnew_usersexcept(KeyError,AttributeError):logger.log_error("Error mapping the request payload to the User model.")# If the mapping fails, None is the expected return value to indicate that the mapping failed.returnNoneread()The read method is used to retrieve rows from a single table in the database.
A dictionary of result(s) is returned, or None if the query fails.It takes two required arguments and one optional argument:table_key: A string representing the model name. This will be the same as the class name of the model.search_params: A dictionary mapping of the columns in the table and their values.
Represents the search criteria for the query.
See thesearch_params argumentsection below for more information.select_paramsAn optional list of strings representing the columns to select from the db.
Represents the filter parameters for the query.
See theselect_params argumentsection below for more information.-search_paramsargumentThe values in thesearch_paramsdictionary are used to filter the results of the query.
You can supply a single value or a list of values for each key in this dictionary (column in the table.)
For example:{"status": "ACTIVE"}or{"status": ["ACTIVE", "SUSPENDED"]}Select the rows where the status is "ACTIVE" vs. select the rows where the status is "ACTIVE" or "SUSPENDED."-select_paramsargumentTheselect_paramsargument is an optional list of strings representing the columns to select from the db.
They can be used to return partial data from the table if only certain values are needed.
The strings must match the column names in the table exactly as they are defined on the models.
If a valid list is provided, only the columns in the list will be returned.
Returns the full table row if the list is empty or not provided.
For example:["id", "status"]Whatever thesearch_paramsreturn, only return the 'id' and 'status'
columns (and their value) for any results from the queried table.Example queries:fromtypingimportOptionalfromdb.objectsimportdb_manager# =============# BASIC EXAMPLE# =============# Return all rows from the "User" table with the a status of "ACTIVE".user:Optional[dict]=db_manager.read("User",{"status":"ACTIVE"})# If one result is found:# user = {# "id": 1,# "name": "test_user",# "status": "ACTIVE",# "website": "www.testwebsite.com"# }# If multiple results are found, they will be within a list at a key of "result."# user = {# "result": [# {"id": 1, "name": "test_user", "status": "ACTIVE", "website": "www.testwebsite.com"},# {"id": 55, "name": "test_user_55", "status": "ACTIVE", "website": None}# ]# }# If the no rows are found meeting the criteria:# user = {}# If an exception was raised during the read operation# user = None# ====================================# EXAMPLE USING MULTIPLE SEARCH PARAMS# ====================================# Return all rows from the "User" table with the a status of "ACTIVE" or "SUSPENDED".user:Optional[dict]=db_manager.read("User",{"status":["ACTIVE","SUSPENDED"]})# If multiple results are found,# user = {# "result": [# {"id": 1, "name": "test_user", "status": "ACTIVE", "website": "www.testwebsite.com"},# {"id": 55, "name": "test_user_55", "status": "ACTIVE", "website": None},# {"id": 55, "name": "test_user_56", "status": "SUSPENDED", "website": "www.othertestwebsite.com"}# ]# }# ===========================# EXAMPLE USING SELECT PARAMS# ===========================# Return the id and status of all active users.user:Optional[dict]=db_manager.read("User",{"status":"ACTIVE"},["id","status",])# If one result is found:# user = {# "id": 1,# "status": "ACTIVE",# }update()The update method is used to edit existing rows in the database.
A boolean is returned to determine if the creation was successful.It takes three arguments:table_key: A string representing the model name. This will be the same as the class name of the model.search_params: A dictionary mapping of parameters pertinent to specifying the query. Represents the search criteria for the query. All columns in the dictionary must be present in the table. See thesearch_params argumentsection below for more information.insert_params: Mapped dictionary of key/value pairs corresponding to db columns to be updated.
All columns in the dictionary must be present in the table.
Operations that leave orphaned rows will not be performed and will result in the operation failing.-search_paramsargumentThe values in thesearch_paramsdictionary are used to filter the results of the query.
You can supply a single value or a list of values for each key in this dictionary (column in the table.)
For example:{"status": "ACTIVE"}or{"status": ["ACTIVE", "SUSPENDED"]}Select the rows where the status is "ACTIVE" vs. select the rows where the status is "ACTIVE" or "SUSPENDED."Example queries:fromdb.objectsimportdb_manager# Find the row in the "User" table with the id of 1 and update the website column.params_to_update:dict={"website":"www.newwebsite.com"}update_was_successful:bool=db_manager.update("User",{"id":1},params_to_update)# update_was_successful = True or False depending on if the update was successful# Find the all rows in the "User" table with a status of "ACTIVE" or "SUSPENDED" and update the status column to "DELETED"update_was_successful:bool=db_manager.update("User",{"status":["ACTIVE","SUSPENDED"]},{"status":"DELETED"})# update_was_successful = True or False depending on if the update was successfuldelete()The delete method is used to remove existing rows from the database.
A boolean is returned to determine if the creation was successful.It takes two arguments:table_key: A string representing the model name. This will be the same as the class name of the model.search_params:A dictionary mapping of parameters pertinent to specifying the query. Represents the search criteria for the query. All columns in the dictionary must be present in the table. See thesearch_params argumentsection below for more information.-search_paramsargumentThe values in thesearch_paramsdictionary are used to filter the results of the query.
You can supply a single value or a list of values for each key in this dictionary (column in the table.)
For example:{"status": "ACTIVE"}or{"status": ["ACTIVE", "SUSPENDED"]}Select the rows where the status is "ACTIVE" vs. select the rows where the status is "ACTIVE" or "SUSPENDED."Example queries:fromdb.objectsimportdb_manager# Find the row in the "User" table with the id of 1 and delete it.delete_was_successful:bool=db_manager.delete("User",{"id":1})# delete_was_successful = True or False depending on if the delete was successful# Find all row(s) in the "User" table with a status of "ACTIVE" or "SUSPENDED" and delete them.delete_was_successful:bool=db_manager.delete("User",{"status":["DELETE","SUSPENDED"]})# delete_was_successful = True or False depending on if the delete was successfuljoined_read()The joined_read method is used to retrieve rows from multiple tables in the database.
A dictionary of results is returned, or None is returned if the query fails.It takes three required arguments and one optional argument:starting_table: A string representing the table where the read should start looking. This will be the same as the class name of the model.ending_table: A string representing the table where the read should, inclusively, stop looking. This will be the same as the class name of the model.search_params: This can be one of two datastructures:A dictionary mapping of the columns in the starting table and their values representing the search criteria for the query.A list of dictionary mappings each representing a table that will be traversed.
Represents the search criteria for each table (in order traversed) for the query.
Seesearch_params argumentsection below for more information.select_paramsAn optional list representing the columns to select from the db to be used as filter parameters for the query.
This can be one of two datastructures:A list of strings representing the columns to select from the starting table.A list of lists containing strings representing the columns to select from each table in the order they are traversed.
Seeselect_params argumentsection below for more information.-search_params argumentIf only a single dict ofsearch_paramsis provided,
the JOINS statement will find all rows from related tables with a foreign key pointing at the found of the starting table.
For example, if thestarting_tableis the "User" table, the list ofsearch_paramscould look like:# In these examples there are three related tables: "User", "Post", "Comments" and "Likes".# A User can have many Posts, a Post can have many Comments and a comment can have many Likes.db_manager.joined_read("User","Comments",{"id":1})# Ordb_manager.joined_read("User","Comments",[{"id":1}])# This reads as:# find the User with an 'id' of 1,# then find the all Posts that have a 'user_id' of 1,# then find all Comments that have a 'post_id' that matches any of the found Posts.You can also use a list of values to broaden the search criteria, just like in theread()method.db_manager.joined_read("User","Comments",{"status":["ACTIVE","SUSPENDED"]})# Ordb_manager.joined_read("User","Comments",[{"status":["ACTIVE","SUSPENDED"]}])# This reads as:# find the all Users with a status of "ACTIVE" or "SUSPENDED",# then find the all Posts that have a 'user_id's that match any of the found Users,# then find all Comments that have a 'post_id' that matches any of the found Posts.If a list of these dictionaries is supplied, it must be the same length as the number of tables to be traversed
in the order that they are traversed.
An empty dict is supplied if no additional search criteria is needed for a table in the JOINS statement.
For example, if the starting table is "User"
from the below examples and the ending table is "Likes," the list ofsearch_paramswould look like:# In these examples there are three related tables: "User", "Post", "Comments" and "Likes".# A User can have many Posts, a Post can have many Comments and a comment can have many Likes.db_manager.joined_read("User","Likes",[{"id":1},{"title":"test_post"},{}])# This reads as find the User with an 'id' of 1,# then find the Post with a 'user_id' of 1 and a 'title' of "test_post,"# then find all Likes that have a 'post_id' that matches the id(s) of the Post called "test_post."-select_params argumentIf noselect_paramsare provided, the full row of each table will be returned.If only a single list ofselect_paramsis provided,
the JOINS statement will only apply the filter to the first table in the JOINS statement.
For example, if thestarting_tableis the "User" table from the below examples and
a filter is applied, the list of select params would look like:["name"],or[["name"]]This reads as, "whatever thesearch_paramsfind,
only return the 'name' column for any results from the User table."If a list of these lists is supplied, the filter is applied in order as the tables are traversed. For example:[["name"],[],["id", "content"]]This reads as, "whatever thesearch_paramsfind, only return
the 'name' column for any results from the User table,
all columns (or the full row) for any results from the Post table,
and only return the 'id' and 'content' columns for any results from the Comments table."Example queries:fromtypingimportOptionalfromdb.objectsimportdb_manager# In these examples there are three related tables: "User", "Post", "Comments" and "Likes".# A User can have many Posts, a Post can have many Comments and a comment can have many Likes.# =============# BASIC EXAMPLE# =============# Return the user with an 'id' of 1, all of the user's posts, & all post's comments.result_object:Optional[dict]=db_manager.joined_read("User","Comments",{"id":1})# If some results are found:# result_object: dict = {# "User": [# {"id": 1, "name": "test_user", "status": "ACTIVE", "website": "www.testwebsite.com"}# ],# "Posts": [# {"id": 1, "user_id": 1, "title": "test_post", "content": "This is a test post."},# {"id": 2, "user_id": 1, "title": "test_post_2", "content": "This is a test post."}# ],# "Comments": [# {"id": 1, "post_id": 1, "content": "This is a test comment."},# {"id": 2, "post_id": 1, "content": "This is a test comment."},# {"id": 3, "post_id": 2, "content": "This is a test comment."},# ]# }# If no results are found:# result_object: dict = {}# If an exception was raised during the read operation# result_object: dict = None# ===========================# EXAMPLE USING SELECT PARAMS# ===========================# Return the name of the user with an 'id' of 1, all of the user's posts, & all posts' comments.result_object:Optional[dict]=db_manager.joined_read("User","Comments",{"id":1},["name"])# If some results are found:# result_object: dict = {# "User": [{"name": "test_user"}],# "Posts": [# {"id": 1, "user_id": 1, "title": "test_post", "content": "This is a test post."},# {"id": 2, "user_id": 1, "title": "test_post_2", "content": "This is a test post."}# ],# "Comments": [# {"id": 1, "post_id": 1, "content": "This is a test comment."},# {"id": 2, "post_id": 1, "content": "This is a test comment."},# {"id": 3, "post_id": 2, "content": "This is a test comment."},# ]# }# ====================================# EXAMPLE USING MULTIPLE SEARCH PARAMS# ====================================# Return the the user with the id of 1, the post belonging to the user with a title of "test_post", & all comments belonging to the post.result_object:Optional[dict]=db_manager.joined_read("User","Comments",[{"id":1},{"title":"test_post"},{}],)# If some results are found:# result_object: dict = {# "User": [# {"id": 1, "name": "test_user", "status": "ACTIVE", "website": "www.testwebsite.com"}# ],# "Posts": [# {"id": 1, "user_id": 1, "title": "test_post", "content": "This is a test post."},# ],# "Comments": [# {"id": 1, "post_id": 1, "content": "This is a test comment."},# {"id": 2, "post_id": 1, "content": "This is a test comment."},# ]# }# ====================================# EXAMPLE USING MULTIPLE SELECT PARAMS# ====================================# Return the name of the user with an "id" of 1, full rows ofall of that user's posts, & the "id" and "content" of all posts'"comments."result_object:Optional[dict]=db_manager.joined_read("User","Comments",{"id":1},[["name"],[],["id","content"]])# If some results are found:# result_object: dict = {# "User": [# {"name": "test_user"}# ],# "Posts": [# {"id": 1, "user_id": 1, "title": "test_post", "content": "This is a test post."},# {"id": 2, "user_id": 1, "title": "test_post_2", "content": "This is a test post."}# ],# "Comments": [# {"id": 1, "content": "This is a test comment."},# {"id": 2, "content": "This is a test comment."},# {"id": 3, "content": "This is a test comment."},# ]# 
}# ===============================================================# EXAMPLE USING MULTIPLE SEARCH PARAMS AND MULTIPLE SELECT PARAMS# ===============================================================# Return the 'name' of the user with an `id` of 1, all posts belonging to the user with a title of "test_post", & and the 'id' and 'conent' of each comment belonging to the post.result_object:Optional[dict]=db_manager.joined_read("User","Comments",[{"id":1},{"title":"test_post"},{}],[["name"],[],["id","content"]])# If some results are found:# result_object: dict = {# "User": [# {"name": "test_user"}# ],# "Posts": [# {"id": 1, "user_id": 1, "title": "test_post", "content": "This is a test post."},# ],# "Comments": [# {"id": 1, "content": "This is a test comment."},# {"id": 2, "content": "This is a test comment."},# {"id": 3, "content": "This is a test comment."},# ]# }count()The count method is used to count existing rows that meet criteria in the database.
A dictionary is returned with a count of the rows that meet the criteria, or None is returned if the count fails.It takes two arguments:table_key: A string representing the model name. This will be the same as the class name of the model.search_params:A dictionary mapping of parameters pertinent to specifying the query. Represents the search criteria for the query. All columns in the dictionary must be present in the table. See thesearch_params argumentsection below for more information.-search_paramsargumentThe values in thesearch_paramsdictionary are used to filter the results of the query.
You can supply a single value or a list of values for each key in this dictionary (column in the table.)Example queries:fromdb.objectsimportdb_manager# Count the number of rows in the "User" table that have a status of "DELETED".count:dict=db_manager.count("User",{"status":"DELETED"})# Count the number of rows in the "User" table that have a status of "DELETED" or "SUSPENDED".count:dict=db_manager.count("User",{"status":["DELETED","SUSPENDED"]})# If no rows are found meeting the search criteria:# count = {"count": 0}# If some rows are found meeting the search criteria:# count = {"count": 5}# If an exception was raised during the count operation:# count = NoneAdvanced OptionsCertain methods have additional options that can be used to further specify the query.search_params optionsAvailable to theread(),update(),delete(),joined_read(), andcount()methods.equality operators:You can apply equality operators to the search parameters concatenating the operator to the end of the column name key. The valid operators are:==for "equals" (default if no operator is provided)!=for "not equals"<for "less than"<=for "less than or equal to">for "greater than">=for "greater than or equal to"For example, if you want to return all rows from the "User"
table where the "id" is greater than 5, you would use: {"id>": 5}.

If a column name has an operator concatenated to the end of it, that operator will be used instead of the default "==" operator.

The data type of the value must be compatible with the operator used. For example, if you use the "<" operator, the value must be a number or a date. If the operator is not valid for the column type, the query will fail.

If an equality operator is used when multiple values are provided for a column,
the operator will be applied to each value in the list.
For example, if you use{"status!=": ["ACTIVE", "SUSPENDED"]},
the query will return all rows where the status is not "ACTIVE"
or "SUSPENDED."Example queries:importdatetimefromdb.objectsimportdb_manager# Return all rows from the "User" table with the a status is not "ACTIVE" or "SUSPENDED".users:dict=db_manager.read("User",{"status!=":["ACTIVE","SUSPENDED"]})# Delete all rows from the Users table with an "id" that is less than or equal to 4000 and a status of "DELETED."delete_was_succesful:bool=db_manager.delete("User",{"id<=":4000,"status":"DELETED"})# Find all rows in the User table that were created on or before October 1, 2021# with a "status" of "DELETED" or "SUSPENDED" and update each rows' "status" to "ACTIVE".update_was_successful:bool=db_manager.update("User",{"created_date<=":datetime.date(year=2021,month=10,day=1),"status":["DELETED","SUSPENDED"]},{"status":"ACTIVE"})select_params optionsAvailable to theread()andjoined_read()methods.Distinctly selected columns:You can specify that only distinct rows are returned by using the%concatenated
select_params options

Available to the read() and joined_read() methods.

Distinctly selected columns:

You can specify that only distinct rows are returned by concatenating % onto the back of a string value from the select_params list.
For example, if you want to return only distinct rows for the "name" column, you would use: ["name%"]

Distinctly selected columns can be used in conjunction with normal select_params. For example,
if you want to return only distinct rows for the "name" column and all rows for the "id" and "content" columns,
you would use: ["name%", "id", "content"]

All distinct columns must be at the beginning of the select_params list. If they are not, the query will fail.

Example queries:

from db.objects import db_manager

# Return the User rows with an "ACTIVE" status, returning distinct "name" values and all "id" values.
users: dict = db_manager.read("User", {"status": "ACTIVE"}, ["name%", "id"])

Complex Examples

import datetime

from db.objects import db_manager

# In these examples there are four related tables: "User", "Post", "Comments" and "Likes".

# Return the distinct name and any id of all rows from the Users table that do not have a status of "DELETED" or "SUSPENDED".
# Return the full row of all Posts that have a user_id that matches any of the found Users and were created on or before October 1, 2021.
# Return the id and content of all Comments that have a post_id that matches any of the found Posts.
query_results: dict = db_manager.joined_read(
    "User",
    "Likes",
    [{"status!=": ["DELETED", "SUSPENDED"]}, {"created_date<": datetime.date(year=2021, month=10, day=1)}, {}],
    [["name%", "id"], [], ["id", "content"]]
)
|
alphorder
|
Alphorder

Alphorder is a tool that lets you sort your folders with a single line of code, either from the command line or by importing it into your Python project.

Installation

You can install alphorder with pip by running the following command:

pip install alphorder

Usage

There are two main ways to use alphorder: from the command line, or by importing it into your project.

Project

You can import alphorder into your project with this line of code:

from alphorder import Alphorder

This imports the class that contains the methods to sort your folders.

Alphabetic order

You can move every file and folder into the folder corresponding to its name: a file called Casa will be moved to the folder C. Folders and files whose names start with a digit will be moved to a folder called #. All of this is done with the sort method:

Alphorder.sort("path/to/folder")

Move to a specific folder

You can do the same, but only for the content that matches the target you specify, with the moveToFolder method. The method receives two params: the first is the path of the folder, and the second is the folder to move content to. If the second param is not a valid path, the program will create a folder with the second param as its name and move all the content to it.

Alphorder.sort("/path/to/folder", "path/to/second/folder")
# or you can simply add a word or sentence
Alphorder.sort("/path/to/folder", "ordenado con alphorder")

Move to a specific folder with params

You can do the same, but only for the content that matches one of the keywords you specify, with the moveToFolderByKeywords method. This method receives one more param than the previous one: an array of strings to match. The target is moved if any of the keywords is found in its name.

Alphorder.sort("/path/to/folder", "second path", ["sort", "alph"])

Commandline

You can use the same methods as in a project directly from the command line with the alphorder keyword.

Alphabetic order

alphorder "/path/to/folder"

As you can see, it is almost the same as before.

Move to a specific folder

As in a project, you need to specify two params: the main path and the target path.

alphorder "/path/to/folder" "/target/path"

If the second param is not an existing folder, a folder with that name will be created in the main folder.

Move to a specific folder with params

If you add more params after the two paths, they will be taken as keywords, and only the files or folders that contain any of the keywords in their name will be moved.

alphorder "/path/to/folder" "/target/path" word1 word2 "sample of a sentence"
|
alpina
|
No description available on PyPI.
|
alpine
|
Python wrapper for the Alpine API

Welcome to the official Python library for the Alpine API. In this first release we’ve focused on a subset of the full
API that we feel users will most frequently use.

This library can be used to automate, add, or simplify functionality of Alpine.

Documentation (and examples): http://python-alpine-api.readthedocs.io/
Source code: https://github.com/AlpineNow/python-alpine-api
Python Package Index: https://pypi.python.org/pypi/alpine
Setup: pip install alpine
Requirements: Using this package requires access to a TIBCO Team Studio instance. For more information, see the TIBCO Team Studio homepage: https://community.tibco.com/products/tibco-data-science
License: We use the MIT license. See the LICENSE file on GitHub for details.

Example

Running a workflow and downloading the results:

>>> import alpine as AlpineAPI
>>> session = AlpineAPI.APIClient(host, port, username, password)
>>> process_id = session.workfile.process.run(workfile_id)
>>> session.workfile.process.wait_until_finished(workfile_id, process_id)
>>> results = session.workfile.process.download_results(workfile_id, process_id)
|
alpineer
|
Alpineer

Toolbox for Multiplexed Imaging. Contains scripts and little tools which are used throughout ark-analysis, mibi-bin-tools, and toffy.

Contents: Requirements, Setup, Development Notes, Questions?

Requirements

Python Poetry

Recommended to install it with either:

- Official Installer: curl -sSL https://install.python-poetry.org | python3 -
- pipx (requires pipx)

If you are using pipx, run the following installation commands:

brew install pipx
pipx ensurepath

pre-commit

brew install pre-commit

Setup

1. Clone the repo: git clone https://github.com/angelolab/alpineer.git
2. cd into alpineer.
3. Install the pre-commit hooks with pre-commit install
4. Set up python-poetry for alpineer:
   - Run poetry install to install alpineer into your virtual environment. (Poetry utilizes Python's Virtual Environments)
   - Run poetry install --with test: Installs all the dependencies needed for tests (labeled under tool.poetry.group.test.dependencies)
   - Run poetry install --with dev: Installs all the dependencies needed for development (labeled under tool.poetry.group.dev.dependencies)
   - You may combine these as well with poetry install --with dev,test, installing the base dependencies and the two optional groups.
5. In order to test whether Poetry is working properly, run poetry show --tree. This will output the dependency tree for the base dependencies (labeled under tool.poetry.dependencies).

Sample Output:

matplotlib 3.6.1 Python plotting package
├── contourpy >=1.0.1
│   └── numpy >=1.16
├── cycler >=0.10
├── fonttools >=4.22.0
├── kiwisolver >=1.0.1
├── numpy >=1.19
├── packaging >=20.0
│   └── pyparsing >=2.0.2,<3.0.5 || >3.0.5
├── pillow >=6.2.0
├── pyparsing >=2.2.1
├── python-dateutil >=2.7
│   └── six >=1.5
└── setuptools-scm >=7
    ├── packaging >=20.0
    │   └── pyparsing >=2.0.2,<3.0.5 || >3.0.5
    ├── setuptools *
    ├── tomli >=1.0.0
    └── typing-extensions *
natsort 8.2.0 Simple yet flexible natural sorting in Python.
numpy 1.23.4 NumPy is the fundamental package for array computing with Python.
pillow 9.1.1 Python Imaging Library (Fork)
pip 22.3 The PyPA recommended tool for installing Python packages.
tifffile 2022.10.10 Read and write TIFF files
└── numpy >=1.19.2

Development Notes

I'd highly suggest referring to Poetry's extensive documentation on installing packages, updating packages and more.

Tests can be run with poetry run pytest. No additional arguments are needed; they are all stored in the pyproject.toml file.

As an aside, if you need to execute code in the Poetry venv, prefix your command with poetry run.

Updating

In order to update alpineer's dependencies we can run:

- poetry update: for all dependencies
- poetry update <package>: where <package> can be something like numpy.

To update Poetry itself, run poetry self update.

Questions?

Feel free to open an issue on our GitHub page.
|
alpinejsstate
|
No description available on PyPI.
|
alpinejswidget
|
No description available on PyPI.
|
alpinemath-sympy
|
Failed to fetch description. HTTP Status Code: 404
|
alpinepkgs
|
alpinepkgs

Gives you information about packages from pkgs.alpinelinux.org.

NOTE: This package uses web scraping to gather the information.

Install

pip install alpinepkgs

Example

from alpinepkgs.packages import get_package

print(get_package('python3'))

> {'package': 'python3', 'branch': 'v3.16',
   'x86_64': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'x86': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'aarch64': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'armhf': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   's390x': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'armv7': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'ppc64le': {'version': '3.10.5-r0', 'date': '2022-07-25', 'licence': 'PSF-2.0', 'maintainer': 'Natanael Copa', 'url': 'https://www.python.org/'},
   'versions': ['3.10.5-r0']}
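Because the result is a plain dict keyed by architecture (plus the 'package', 'branch' and 'versions' keys, as the output above shows), individual fields can be read directly:

pkg = get_package('python3')

print(pkg['branch'])             # 'v3.16'
print(pkg['x86_64']['version'])  # '3.10.5-r0'
print(pkg['versions'])           # ['3.10.5-r0']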
|
alpine-release-info
|
Alpine Linux Release Info

Command line utility to query Alpine Linux’s distribution tree.

The targeted use case is the continuous delivery of products based on Alpine Linux, such as Docker images.

This script will deliver the latest release given branch, architecture, flavor, etc. There are many other parameters
that can be queried, such as: url to download, sha512, gpg signature, etc.

Demo

Usage

To install the latest release of this utility:

pip install alpine_release_info

For help on the available parameters:

alpine_release_info -h

To query the download url for the latest release on the v3.5 branch for armhf architecture and minirootfs flavor:

alpine_release_info -a armhf -b v3.5 -f alpine-minirootfs -q url
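The other queryable fields mentioned above should work the same way; for example, to fetch the checksum instead of the download url (treat the exact value sha512 for -q as an assumption based on the parameter list, not a verified flag value):

alpine_release_info -a armhf -b v3.5 -f alpine-minirootfs -q sha512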
|
alpinonaf
|
UNKNOWN
|
alpino-query
|
Alpino Query

pip install alpino-query

When running locally without installing, instead of alpino-query use python -m alpino_query.

Parse

Parse a tokenized sentence using the Alpino instance running on gretel.hum.uu.nl.

For example:

alpino-query parse Dit is een voorbeeldzin .

Note that the period is a separate token.

It also works when the sentence is passed as a single argument.

alpino-query parse "Dit is een voorbeeldzin ."

Mark

Mark which part of the treebank should be selected for filtering. It has three inputs:

- Lassy/Alpino XML
- the tokens of the sentence
- for each token, the properties which should be marked

For example:

alpino-query mark "$(<tests/data/001.xml)" "Dit is een voorbeeldzin ." "pos pos pos pos pos"

It is also possible to mark multiple properties for a token; this is done by separating them with a comma. Each of these can also be specified to be negated. These will then be marked as 'exclude' in the tree.

alpino-query mark "$(<tests/data/001.xml)" "Dit is een voorbeeldzin ." "pos pos,-word,rel pos pos pos"

Subtree

Generates a subtree containing only the marked properties. It will also contain additional attributes to mark that properties should be excluded and/or case sensitive.

The second argument can be empty, cat, rel or both (i.e. catrel or cat,rel). This indicates which attributes should be removed from the top node. When only one node is left in the subtree, this argument is ignored.

alpino-query subtree "$(<tests/data/001.marked.xml)" cat

XPath

Generates an XPath to query a treebank from the generated subtree. The second argument indicates whether a query should be generated which is order-sensitive.

alpino-query xpath "$(<tests/data/001.subtree.xml)" 0

Using as Module

from alpino_query import AlpinoQuery

tokens = ["Dit", "is", "een", "voorbeeldzin", "."]
attributes = ["pos", "pos,-word,rel", "pos", "pos", "pos"]

query = AlpinoQuery()
alpino_xml = query.parse(tokens)
query.mark(alpino_xml, tokens, attributes)
print(query.marked_xml)  # query.marked contains the lxml Element

query.generate_subtree(["rel", "cat"])
print(query.subtree_xml)  # query.subtree contains the lxml Element

query.generate_xpath(False)  # True to make order sensitive
print(query.xpath)

Considerations

Exclusive

When querying a node this could be exclusive in multiple ways.
For example:

- a node should not be a noun: node[@pos!="noun"]
- it should not have a node which is a noun: not(node[@pos="noun"])

The first statement does require the existence of a node, whereas the second also holds true if there is no node at all. When a token is only exclusive (e.g. not a noun) a query of the second form will be generated; if a token has both inclusive and exclusive properties a query of the first form will be generated.

Relations

@cat and @rel are always preserved for nodes which have children. The only way for these to be dropped is when all the children are removed by specifying the na property for the child tokens.

Upload to PyPi

pip install twine
python setup.py sdist
twine upload dist/*
|
alp-objectifier
|
ALP Objectifier

As AnyLogic has hundreds of built-in example models that cover a wide range of industries and use-cases, it is extremely time consuming to find a model that fulfills more conditions than can be searched for by descriptions alone. For example, looking for a model that uses System Dynamics inside of a non-Main agent, or one that contains more than three layers of nested agents.

This library is able to parse AnyLogic source files (.alp) and turn them into a parsable, read-only Python object.

Getting Started

To get started, simply install and use!

Installing

To install, you'll need a version of Python 3.5 or later. Use pip to install:

pip install alp_objectifier

Check installation by attempting to import:

import alp_objectifier

Usage

<TODO: Explain how to use the library>

Contributing

<TODO: Add contributing info>

Authors

Tyler Wolfe-Adam

License

This project is licensed under the MIT License - see the LICENSE file for details
|
alppb
|
Amazon Linux Python Package Builder (alppb)

alppb builds Python packages using the same version of Amazon Linux that the AWS Lambda service uses. Using alppb helps guarantee that any PyPi package your AWS Lambda app depends on will run properly.

Why is this a problem that needs to be solved? AWS Lambda requires you to package up your Python project along with all of its dependencies in order to run. If your AWS Lambda Python project has package(s) with C extension modules (or dependencies that do), you will need to build them on Amazon Linux for your app to work. alppb uses the AWS CodeBuild service (perpetual free tier includes 100 build minutes per month) to build the package(s) on Amazon Linux and download them to your local machine for you. Simply unzip the downloaded package(s) into your deployment bundle and upload to the AWS Lambda service.

How To Use alppb

pip install alppb
alppb -h

Build package requests in bucket foo:

alppb requests foo

TODO

Pre 1.0.0

- Foundation - create a CodeBuild project with hardcoded build that puts an artifact in s3
- Fix artifact so it's a zip of the contents (excluding parent dir)
- Download the module locally to the dir alppb was run from
- Move codebuild stuff to a module
- Delete the artifact from s3 as part of cleanup
- Add creation of IAM role for CodeBuild instead of using a hardcoded, pre-built role
- Add deletion of IAM role as part of cleanup
- Move aws-cli stuff to boto3
- Allow user specification of the desired module to be built using alppb
- Cleanup existing docstrings
- Remove base64 stuff in iam.py as it obscures what's happening
- Axe the examples dir
- Allow user specification of the bucket

1.0.0

- Exception handling
- Update and overwrite if resources already exist
- Pre-req checking:
  - Valid PyPi package
  - Bucket and CodeBuild need to be in same region
  - Bucket exists (NoSuchBucket)
  - Bucket has valid name (botocore.exceptions.ParamValidationError)
- Unit tests
- Integration tests:
  - Test each version of Python supported
  - Verify they're using the actual right python versions as part of each test
  - Inspect the zip and make sure it contains what's expected
- Package and Submit to PyPi
- Make CodeBuild Docker image details more clear and documented
- Add verbosity levels
- Add Sphinx docs on readthedocs.org

Planned

- One or more modules can be specified in one invocation of alppb
- Allow specification of a requirements.txt file to use as a list of all modules to build
- Specify download location of the artifact
- Create an s3 bucket when an arg is specified
- Allow user to optionally specify an IAM role
- Specify the Python version that should be used to build the package (choices come from supported AWS Lambda versions)
- Dockerize and submit to Dockerhub

FAQs

Why AWS CodeBuild? Why not X instead?

AWS CodeBuild has a perpetual free tier and it's super easy to spin up, and tear down, a build job. Further, we can easily specify various Docker images to use for the build that match the AWS Lambda environment. I will likely add support for other build methods/services. If you have a suggestion, please open an issue or contact me.

What image is being used for CodeBuild? Can I inspect the image being used for the build?

There are three images, one for each version of Python supported by AWS Lambda:

- Python 2.7 - https://hub.docker.com/r/irlrobot/alppb-python27/ (Dockerfile: https://github.com/irlrobot/dockerfiles/tree/master/alppb-python27)
- Python 3.6 - https://hub.docker.com/r/irlrobot/alppb-python36/ (Dockerfile: https://github.com/irlrobot/dockerfiles/tree/master/alppb-python36)
- Python 3.7 - https://hub.docker.com/r/irlrobot/alppb-python37/ (Dockerfile: https://github.com/irlrobot/dockerfiles/tree/master/alppb-python37)

Each image is running Amazon Linux 1 version 2017.03.1.20170812 which is what AWS Lambda uses.
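Putting the workflow together, a typical round trip might look like the following sketch (the artifact file name requests.zip is an assumption; the tool only promises to download the built package as a zip):

# Build the "requests" package on Amazon Linux via CodeBuild, staging through bucket "foo"
alppb requests foo

# Unzip the downloaded artifact into the AWS Lambda deployment bundle (file name assumed)
unzip requests.zip -d ./lambda-bundle/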
|
alp-proj
|
Failed to fetch description. HTTP Status Code: 404
|
alpr
|
ALPR

Automatic License Plate Recognition software that works in all environments, optimized for your location.

Instructions

1. Install:

pip install alpr

2. Generate an aesthetic ASCII visual:

from ALPR import license_recognition as lr

# DEMO Version
license_plates = lr.GetLicensePlateDemo("/path/to/image.jpg")
license_plates.get_license_img("/path/to/image.jpg")

# REAL Version
# initialize object with token
app = lr.GetLicensePlateDemo("token")
# Get more info from your image
app.get_license("/path/to/image.jpg")

3. Enjoy!

To Do

- Add more options
- Add ability to input video or stream
|
alprotobuff
|
This is a simple exercise in creating a Python module and packaging it with dependencies.

Change Log

0.0.1 (30/01/2022)

- First Release
|
alps
|
No description available on PyPI.
|
alpsplot
|
alpsplot

Python plotting library of alps-lab style using matplotlib.
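No usage is documented on PyPI; purely as an illustrative, unverified sketch, one might expect a thin matplotlib-style wrapper along these lines (the Figure class and every method below are guesses, not confirmed API):

# Hypothetical usage -- all names below are guesses, not confirmed alpsplot API.
from alpsplot import Figure

fig = Figure('demo')                 # hypothetical figure wrapper
fig.set_title('Demo curve')          # hypothetical title helper
fig.curve(x=[0, 1, 2], y=[0, 1, 4])  # hypothetical plotting call
fig.save()                           # hypothetical save-to-disk helper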
|
alps-py
|
ALPS-PY

A Python package to help implement Application-Level Profile Semantics (ALPS) in projects.

Package Development

I am developing this package as part of work on the Open Distributed Information Service (ODIS).
Hopefully, it will be useful beyond this project, but I am not ready to make any promises just yet.
At the moment it should be considered an experimental work.

Related Work

There is also the WSTL-PY package, developed for the same reason,
which helps with Web Service Transition Language (WSTL) implementations.

Feedback

Any feedback will be welcomed. Create an issue, or start a discussion on the ODIS repo.

Usage

At the moment it is possible to create a valid ALPS representation from code:

alps = Alps(title='Sample API')
alps.add_doc(MarkDownDoc('A sample MarkDown documentation'))
alps.add_descriptor(Semantic(id='identifier', text='An identifier of a thing', ref='https://schema.org/identifier'))
alps.add_descriptor(Semantic(id='email', text='Email address for a person or an organisation', ref='https://schema.org/email'))
print(alps.to_data())

and the output is

{
    "alps": {
        "version": "1.0",
        "title": "Sample API",
        "doc": {"format": "markdown", "value": "A sample MarkDown documentation"},
        "descriptor": [
            {"id": "identifier", "type": "semantic", "text": "An identifier of a thing", "ref": "https://schema.org/identifier"},
            {"id": "email", "type": "semantic", "text": "Email address for a person or an organisation", "ref": "https://schema.org/email"}
        ]
    }
}

Plans

[ ] Ability to read ALPS documents with validation
[ ] Standard descriptors from Schema.org
[ ] Integration with WSTL-PY project
|
alps-unified-ts
|
alps-unified-ts

This is an enhanced TypeScript library of alps-unified. With it you can convert an ALPS API spec to other API specs like OpenAPI or a GraphQL schema.

This video on YouTube is very useful for understanding the idea of the ALPS API: https://www.youtube.com/watch?v=oG6-r3UdenE

Want to know more about ALPS? --> please visit:

- http://alps.io/
- https://github.com/alps-io/
- https://github.com/mamund/alps-unified

Features

- Generating and publishing alps unified libraries for JavaScript, TypeScript, Python, Java and .NET
- Type support for ALPS specs (see example 'Create from Spec' down below)

Examples

Load from YAML file

You can load the ALPS spec directly from a YAML file. JSON is currently not supported.

Convert to OpenApi

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Alps.unified(Alps.load_yaml("test/todo-alps.yaml"), format_type=FormatType.OPENAPI)

Convert to GraphQL Schema

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Alps.unified(Alps.load_yaml("test/todo-alps.yaml"), format_type=FormatType.SDL)

Create from Spec

Creating the API specification from the spec is very powerful, as it gives you a lot of support in an IDE like VS Code since everything is typed and documented. That way you always produce valid API specs.

# Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
Alps.unified(Alps.spec(alps={
    "version": "1.0",
    "doc": {"value": "Simple Todo list example"},
    "ext": [
        {"type": "metadata", "name": "title", "value": "simpleTodo", "tags": "oas"},
        {"type": "metadata", "name": "root", "value": "http://api.example.org/todo", "tags": "oas"}
    ],
    "descriptor": [
        {"id": "id", "type": "semantic", "text": "storage id of todo item"}
    ]
}))

For Python to benefit from the types better, do this:

import alps_unified_ts as alps

alps_def = alps.AlpsDef(
    version='1.0',
    descriptor=[alps.DescriptorDef(id="id", type="semantic", text="storage id of todo item")],
    doc=alps.DocDef(value="Simple Todo list example"),
    ext=[
        alps.ExtDef(name="root", tags="oas", type="metadata", value="http://api.example.org/todo"),
        alps.ExtDef(name="title", tags="oas", type="metadata", value="simpleTodo")
    ])

alps.Alps.unified(alps_document=alps.Alps.spec(alps=alps_def), format_type=alps.FormatType.OPENAPI)

Thanks to

The AWS CDK Community for the repo tool projen which I use for this repo.
|
alpy
|
Test network virtual appliance using Docker containers

Contents

- General information (The project, Author, License)
- Description
- Examples
- Features (The simplest docker to QEMU networking connection; Reliable packet capture; First-class Docker container support; Logging; No trash left behind; No root required)
- API documentation
- Network design
- Building a network of nodes
- FAQ (How do I watch serial console? How do I watch traffic on an interface? Can I use Wireshark to watch traffic on an interface? How do I debug my program? How do I enter node network namespace?)
- A note about GitLab Container Registry
- Related projects

General information

The project

This project is a Python library for testing network virtual appliances.

Author

Alexey Bogdanenko

License

Alpy is licensed under SPDX-License-Identifier: GPL-3.0-or-later. See COPYING for more details.

Description

Alpy manages containers via the Docker Python API.

Alpy interacts with QEMU using the Python API of the QEMU Monitor Protocol (QMP). QMP is a JSON-based protocol that allows applications to communicate with
a QEMU instance.

Alpy gives the user a Pexpect object to interact with a serial console. The Pexpect
object is configured to log console input and output via the standard logging
module.

Alpy is packaged and deployed to PyPI. The package can be installed using pip.

There are unit tests (pytest) and integration tests in the GitLab CI pipeline.
Alpy is tested and works on the latest Ubuntu and the latest Ubuntu LTS release.

Examples

The alpy library repository includes scripts and modules to build a simple
appliance called Rabbit. Rabbit is Alpine Linux with a few packages
pre-installed. Having this simple DUT allows us to demonstrate the library
features and capabilities. The tests verify a few features of the network
appliance, for example:

- IPv4 routing (see rabbit/tests/forward-ipv4/main.py)
- rate-limiting network traffic (see rabbit/tests/rate-limit/main.py)
- load-balancing HTTP requests (see rabbit/tests/load-balancing/main.py)

The tests are executed automatically in the GitLab CI pipeline.

Example network (test rate-limit):

+-------------------------------------+
| |
| Device under test |
| rate limit = 1mbps |
+-------+--------------------+--------+
| |
| |
| |
+-------+--------+ +-------+--------+
| | | |
| 192.168.1.1/24 | | 192.168.1.2/24 |
| | | |
| node0 | | node1 |
| iperf3 client | | iperf3 server |
+----------------+ +----------------+

Example test output:

INFO __main__ Test description: Check that rabbit rate-limits traffic.
INFO alpy.node Create tap interfaces...
INFO alpy.node Create tap interfaces... done
INFO alpy.qemu Initialize QMP monitor...
INFO alpy.qemu Initialize QMP monitor... done
INFO alpy.qemu Start QEMU...
INFO alpy.qemu Start QEMU... done
INFO alpy.qemu Accept connection from QEMU to QMP monitor...
INFO alpy.qemu Accept connection from QEMU to QMP monitor... done
INFO alpy.node Create nodes...
INFO alpy.node Create nodes... done
INFO alpy.console Connect to console...
INFO alpy.console Connect to console... done
INFO alpy.utils Enter test environment
INFO __main__ Start iperf3 server on node 1...
INFO __main__ Start iperf3 server on node 1... done
INFO alpy.qemu Start virtual CPU...
INFO alpy.qemu Start virtual CPU... done
INFO alpine Wait for the system to boot...
INFO alpine Wait for the system to boot... done
INFO alpine Login to the system...
INFO alpine Login to the system... done
INFO alpy.remote_shell Type in script configure-rabbit...
INFO alpy.remote_shell Type in script configure-rabbit... done
INFO alpy.remote_shell Run script configure-rabbit...
INFO alpy.remote_shell Run script configure-rabbit... done
INFO __main__ Start iperf3 client on node 0...
INFO __main__ Measure rate...
INFO __main__ Measure rate... done
INFO __main__ Parse iperf3 report...
INFO __main__ Parse iperf3 report... done
INFO __main__ Start iperf3 client on node 0... done
INFO alpine Initiate system shutdown...
INFO alpine Initiate system shutdown... done
INFO alpy.qemu Wait until the VM is powered down...
INFO alpy.qemu Wait until the VM is powered down... done
INFO alpy.qemu Wait until the VM is stopped...
INFO alpy.qemu Wait until the VM is stopped... done
INFO __main__ Rate received, bits per second: 976321
INFO __main__ Check rate...
INFO __main__ Check rate... done
INFO alpy.utils Exit test environment with success
INFO alpy.console Close console...
INFO alpy.console Close console... done
INFO alpy.qemu Quit QEMU...
INFO alpy.qemu Quit QEMU... done
INFO alpy.utils Test passed

The tests for the Rabbit device share a lot of code so the code is organized as a library. The library is called carrot.

Features

The simplest docker to QEMU networking connection

Nothing in the middle. No bridges, no veth pairs, no NAT etc. Each layer 2 frame emitted is delivered unmodified, reliably.

Reliable packet capture

Each frame is captured reliably thanks to the QEMU filter-dump feature.

First-class Docker container support

Alpy follows and encourages single process per container design.

Logging

Test logs are easy to configure and customize. Alpy consistently uses the Python logging module. Alpy collects the serial console log in binary as well as text (escaped) form.

No trash left behind

Alpy cleans up after itself:

- processes stopped with error codes and logs collected,
- files, directories unmounted,
- temporary files removed,
- sockets closed,
- interfaces removed…

… reliably.

No root required

Run as a regular user.

API documentation

The documentation is published on GitLab Pages of your GitLab project (if GitLab
Pages is enabled on your GitLab instance). For example, upstream project
documentation lives at https://abogdanenko.gitlab.io/alpy.

Alpy API documentation is generated using Sphinx. To generate HTML API
documentation locally, install the Sphinx package and run the following
command:

PYTHONPATH=. sphinx-build docs public

To view the generated documentation, open public/index.html in a browser.

Network design

The appliance being tested is referred to as a device under test or DUT. The DUT communicates with containers attached to each of its network links.

Guest network adapters are connected to the host via tap devices (Figure 1):

+-----QEMU hypervisor------+
| | +-------------+
| +-----Guest OS-----+ | | |
| | | | | docker |
| | +--------------+ | | | container |
| | | | | | | network |
| | | NIC driver | | | | namespace |
| | | | | | | |
| +------------------+ | | +-----+ |
| | | | | | | |
| | NIC hardware +---+-----------+ tap | |
| | | | | | | | |
| +--------------+ | | | +-----+ |
| | | | |
+--------------------------+ +-------------+
|
|
v
+-----------+
| |
| pcap file |
| |
+-----------+

Figure 1. Network link between QEMU guest and a docker container.

Each tap device lives in its network namespace. This namespace belongs to a
dedicated container - a node. The node’s purpose is to keep the namespace
alive during the lifetime of a test.

For an application to be able to communicate with the DUT, the application is
containerized. The application container must be created in a special way: it
must share network namespace with one of the nodes.

Figure 2 shows an example where application containers app0 and app1 share
network namespace with node container node0. Application container app2 shares another network namespace with node2.

This sharing is supported by Docker. All we have to do is to create the
application container with the --network=container:NODE_NAME Docker option.
For example, if we want to send traffic to the DUT via its first link, we create
a traffic generator container with the Docker option --network=container:node0.

+----QEMU---+ +------shared network namespace-----+
| | | |
| | | eth0 |
| +---+ | | +---+ +-----+ +----+ +----+ |
| |NIC+-----------+tap| |node0| |app0| |app1| |
| +---+ | | +---+ +-----+ +----+ +----+ |
| | | |
| | +-----------------------------------+
| |
| |
| |
| | +------shared network namespace-----+
| | | |
| | | eth0 |
| +---+ | | +---+ +-----+ |
| |NIC+-----------+tap| |node1| |
| +---+ | | +---+ +-----+ |
| | | |
| | +-----------------------------------+
| |
| |
| |
| | +------shared network namespace-----+
| | | |
| | | eth0 |
| +---+ | | +---+ +-----+ +----+ |
| |NIC+-----------+tap| |node2| |app2| |
| +---+ | | +---+ +-----+ +----+ |
| | | |
+-----------+ +-----------------------------------+

Figure 2. Application containers attached to the DUT links.
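For illustration, attaching an application container to the first DUT link takes only that one option; here the image name and command are placeholders, not part of alpy:

docker run --rm --network=container:node0 my-iperf3-image iperf3 -c 192.168.1.2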
Building a network of nodes

Network configuration operations are performed by temporary one-off Docker containers by calling ip commands inside the containers.

A distinction is made between a simplified version of the ip binary and the
full version. The simplified version is a busybox applet. The full version is
shipped in the iproute2 package.

Here is a list of features which alpy requires but which are missing from the
simplified version:

- Move a network interface to a different namespace (“ip link set netns …”)
- Create a tap interface (“ip tuntap add mode tap …”)

The image which contains the simplified version is called busybox_image while
the full image is called iproute2_image.

The images must be provided by the caller and must be present on the system. For
example, set:

busybox_image = "busybox:latest"
iproute2_image = "registry.gitlab.com/abogdanenko/alpy/iproute2:latest"
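As an illustration of such a one-off configuration container (the interface name is a placeholder, and depending on the Docker setup the command may additionally need --cap-add NET_ADMIN):

docker run --rm --network=container:node0 \
    registry.gitlab.com/abogdanenko/alpy/iproute2:latest \
    ip tuntap add dev tap0 mode tap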
FAQ

How do I watch serial console?

Use tail:

tail --follow name --retry console.log

The same command, but shorter:

tail -F console.log

How do I watch traffic on an interface?

Use tcpdump:

tail --bytes +0 --follow name --retry link0.pcap | tcpdump -n -r -

The same command, but shorter:

tail -Fc +0 link0.pcap | tcpdump -nr-

Can I use Wireshark to watch traffic on an interface?

Yes, you can:

tail --bytes +0 --follow name --retry link0.pcap | wireshark -k -i -

The same command, but shorter:

tail -Fc +0 link0.pcap | wireshark -ki-

How do I debug my program?

Use The Python Debugger.

How do I enter node network namespace?

Get node pid:

docker inspect --format '{{.State.Pid}}' node0

Jump into node namespace using that pid:

nsenter --net --target "$pid"

One-liner:

nsenter --net --target "$(docker inspect --format '{{.State.Pid}}' node0)"

A note about GitLab Container Registry

Many CI jobs use one of the custom images built on the “build-docker-images”
stage. The images are stored in the GitLab Container Registry.

The images are pulled from locations specified by GitLab variables. By default,
the variables point to the registry of the current GitLab project.

If you forked this project and GitLab Container Registry is disabled in your
project, override the variables on a project level so that the images are pulled
from some other registry.

For example, set IMAGE_UBUNTU_LTS=registry.gitlab.com/abogdanenko/alpy/ubuntu-lts:latest.

Related projects

- Containernet
- Kathara
- Netkit
- GNS3
- Virtual Networks over linuX (VNX)
- Pipework: Software-Defined Networking for Linux Containers
- Eve-NG
|
alpyca
|
alpyca (2.0.4)

Python 3.7+ API library for all ASCOM Alpaca universal interfaces.

Produced by the ASCOM Initiative, and derived from Ethan Chappel's
Alpyca 1.0.0. Ethan kindly released the name Alpyca to the ASCOM Initiative, hence this expanded
package starts life as Version 2.0.

Requirements

This package runs under Python 3.7 or later. It is compatible with most Linux distros, Windows, and MacOS.
Dependencies are minimal: requests, netifaces, typing-extensions, python-dateutil, and enum-tools.

Installation

The package installs from PyPi as

pip install alpyca

or if you have the source code in a tar file, extract it and run (with Python 3)

python setup.py install

The dependencies listed above (and others they may depend on) are automatically
installed with alpyca.

Current Status & Documentation

This version 2.0.4 is the third production release (2.0.3 is unpublished).
The documentation is extensive and available
online as Alpyca: API Library for Alpaca, as well as a PDF Document here. See CHANGES.rst (on GitHub) for
change log.

Feedback and Discussion

Feedback can be given on the ASCOM Driver and Application Development Support Forum.
Please note that the protocols are universal and strictly curated. This library is an implementation of the protocols, not the protocols themselves. For background please visit About Alpaca and ASCOM, as well as the ASCOM Interface Principle, The Standards Process, and
the General Requirements.

Example

First download, install and run the cross-platform Alpaca Omni Simulator, which will give you fully functional simulators for all Alpaca devices, as well as a live OpenAPI/Swagger interface to the Alpaca RESTful endpoints (see the details below). This example will
use the Telescope simulator. Assuming you are running the Omni Simulator on your local host
at its default port of 32323, its address is then localhost:32323. Here is a sample
program using alpaca:

import time
from alpaca.telescope import *      # Multiple Classes including Enumerations
from alpaca.exceptions import *     # Or just the exceptions you want to catch

T = Telescope('localhost:32323', 0) # Local Omni Simulator
try:
    T.Connected = True
    print(f'Connected to {T.Name}')
    print(T.Description)
    T.Tracking = True               # Needed for slewing (see below)
    print('Starting slew...')
    T.SlewToCoordinatesAsync(T.SiderealTime + 2, 50)    # 2 hrs east of meridian
    while(T.Slewing):
        time.sleep(5)               # What do a few seconds matter?
    print('... slew completed successfully.')
    print(f'RA={T.RightAscension} DE={T.Declination}')
    print('Turning off tracking then attempting to slew...')
    T.Tracking = False
    T.SlewToCoordinatesAsync(T.SiderealTime + 2, 55)    # 5 deg slew N
    # This will fail for tracking being off
    print("... you won't get here!")
except Exception as e:              # Should catch specific InvalidOperationException
    print(f'Slew failed: {str(e)}')
finally:                            # Assure that you disconnect
    print("Disconnecting...")
    T.Connected = False

Results

Connected to Alpaca Telescope Sim
Software Telescope Simulator for ASCOM
Starting slew...
... slew completed successfully.
RA=10.939969572854931 DE=50
Turning off tracking then attempting to slew...
Slew failed: SlewToCoordinatesAsync is not allowed when tracking is False
Disconnecting...
done

Alpaca Omni Simulators

The ASCOM Alpaca Simulators are available via GitHub here.
Using the [Latest] link, scroll down the
Assets section and pick the package for your OS and CPU type. Extract all files to a directory and start via ./ascom-alpaca.simulators (or the equivalent on Windows or MacOS). A web browser should appear. This is the primary user interface to the simulator
server and simulated devices. Once you get this running you are ready to try the sample above.

ASCOM Remote

Any current ASCOM COM device that is hosted on a Windows system can have an Alpaca interface added via the ASCOM Remote Windows app. This app allows you to
expose any of your Windows-hosted astronomy devices to the Alpaca world, making them reachable from programs
using alpyca.

Wireshark

If you are interested in monitoring the HTTP/REST traffic that alpyca creates and exchanges with the
Alpaca devices, you can install the Wireshark network protocol analyzer.
One thing that trips people up is making the installation so that Wireshark has access to all of the
network interfaces without needing root privs (Linux) or running "As Administrator" on Windows. Pay close
attention to the installation steps on this. On Windows the capture driver installation will require elevation,
as it is a privileged module. For example, when installing on Linux (e.g. Debian/Raspberry Pi) you'll be asked whether non-superusers should be able to capture packets,
and be sure to answer Yes.

To watch Alpaca traffic, set this simple display filter: http and tcp.port == 32323 (with 32323 being the port of the OmniSim, see above). You'll get a nice analysis
of the Alpaca traffic, with each Alpaca request and response decoded.
|
alpyen
|
======
alpyen
======

.. image:: https://img.shields.io/pypi/v/alpyen.svg
   :target: https://pypi.python.org/pypi/alpyen

.. image:: https://readthedocs.org/projects/alpyen/badge/?version=latest
   :target: https://alpyen.readthedocs.io/en/latest/?version=latest
   :alt: Documentation Status

.. image:: https://pepy.tech/badge/alpyen
   :target: https://pepy.tech/project/alpyen

.. image:: https://img.shields.io/github/repo-size/peeeffchang/alpyen
   :alt: GitHub repo size

.. image:: https://img.shields.io/pypi/pyversions/alpyen

.. image:: https://img.shields.io/github/commit-activity/m/peeeffchang/alpyen

A lightweight backtesting and live-trading algo engine for multiple brokers:

- Interactive Brokers (IB)
- Gemini

License: GNU General Public License v3

Documentation: https://alpyen.readthedocs.io

Features
--------
Providing a trading platform for IB that includes the functions of

- Data gathering
- Algo signal calculation
- Automatic trading
- Book monitoring and portfolio management

Current Version
---------------
Able to perform backtesting and live trading.

Support This Project
--------------------
- Use and discuss us
- Report a bug
- Submit a bug fix

Installation
------------
::

    pip install alpyen

"Hello World"/Quick Start
-------------------------
For a quick demo, do the following:

1. Install alpyen
2. Create a py file that performs either backtesting (use the test_backtesting_macrossing_reshuffle test as an example) or live trading (use the test_live_trading test as an example)
3. For live trading, create a yml control file (use the test_control.yml file as an example)

Example
-------
.. code-block:: python
from alpyen import backtesting
from alpyen import utils
# Read data (assuming that BBH.csv from Yahoo Finance is in the Data folder)
data_folder = 'Data\\'
ticker_name = 'BBH'
file_path = os.path.join(os.path.dirname(__file__), data_folder)
short_lookback = 5
long_lookback = 200
short_lookback_name = ticker_name + '_MA_' + str(short_lookback)
long_lookback_name = ticker_name + '_MA_' + str(long_lookback)
ticker_names = [ticker_name]
all_input = datacontainer.DataUtils.aggregate_yahoo_data(ticker_names, file_path)
# Subscribe to signals
signal_info_dict = {}
signal_info_dict[short_lookback_name]\
= utils.SignalInfo('MA', ticker_names, [], [], short_lookback, {})
signal_info_dict[long_lookback_name]\
= utils.SignalInfo('MA', ticker_names, [], [], long_lookback, {})
# Subscribe to strategies
strategy_info_dict = {}
strategy_name = ticker_name + '_MACrossing_01'
strategy_info_dict[strategy_name] = utils.StrategyInfo(
'MACrossing',
[short_lookback_name, long_lookback_name],
1, {}, ticker_names, combo_definition={'combo1': [1.0]})
# Create backtester and run backtest
number_path = 1000
my_backtester = backtesting.Backtester(all_input, ticker_names, signal_info_dict, strategy_info_dict,
number_path)
my_backtester.run_backtest()
backtest_results = my_backtester.get_results()

The moving average signal / MA-crossing trading strategy and weighted momentum signal / VAA strategy are built into the package and are intended to serve as examples. Users can use them as references and create their custom signals/strategies by deriving from the SignalBase class within the signal module, and the StrategyBase class within the strategy module. Note that the package needs a unique signature string for each derived signal/strategy for reflective object creation, for example:

.. code-block:: python
"""
Moving average signal.
"""
_signal_signature = 'MA'
class MACrossingStrategy(StrategyBase):
"""
MA Crossing Strategy
"""
_strategy_signature = 'MACrossing'

Credits
-------
This package was created with Cookiecutter_ and the audreyr/cookiecutter-pypackage_ project template.

.. _Cookiecutter: https://github.com/audreyr/cookiecutter
.. _audreyr/cookiecutter-pypackage: https://github.com/audreyr/cookiecutter-pypackage

=======
History
=======

0.1.0 (2021-09-12)
------------------
First release on PyPI.

0.1.1 (2021-10-12)

0.1.2 (2021-10-17)

0.1.3 (2021-11-12)

0.1.4 (2021-11-19)
|
al-pyne
|
Failed to fetch description. HTTP Status Code: 404
|
alpynet
|
No description available on PyPI.
|