package (string, 1–122 characters)
package-description (string, 0–1.3M characters)
aimodelshare-nightly
The mission of the AI Model Share Platform (a Python library with an integrated website at modelshare.org) is to provide a trusted non-profit repository for machine learning model prediction APIs. A beta version of the platform is currently being used by Columbia University students, faculty, and staff to test and improve platform functionality. In a matter of seconds, data scientists can launch a model into this infrastructure, and end-users the world over will be able to engage their machine learning models.

Launch machine learning models into scalable, production-ready prediction REST APIs using a single Python function. Details about each model, how to use the model's API, and the model's author(s) are deployed simultaneously to a searchable website at modelshare.org. Each deployed model receives an individual Model Playground page listing information about the deployed model, including a fully functional prediction dashboard that allows end-users to input text, tabular, or image data and receive live predictions. Moreover, users can build on model playgrounds by 1) creating ML model competitions, 2) uploading Jupyter notebooks to share code, 3) sharing model architectures, and 4) sharing data, with all shared artifacts automatically creating a data science user portfolio.

Use the aimodelshare Python library to deploy your model, create a new ML competition, and more. Tutorials for deploying models, model playground web dashboards for generating predictions, and all deployed models can be found at modelshare.org.

Installation: you can install aimodelshare from PyPI with

    pip install aimodelshare-nightly
ai-models-panguweather
ai-models-panguweather is an ai-models plugin to run Huawei's Pangu-Weather.

Pangu-Weather: A 3D High-Resolution Model for Fast and Accurate Global Weather Forecast, arXiv preprint 2211.02556, 2022. https://arxiv.org/abs/2211.02556

Pangu-Weather was created by Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu and Qi Tian. It is released by Huawei Cloud. The trained parameters of Pangu-Weather are made available under the terms of the BY-NC-SA 4.0 license; commercial use of these models is forbidden. See https://github.com/198808xc/Pangu-Weather for further details.

Installation: to install the package, run

    pip install ai-models-panguweather

This will install the package and its dependencies, in particular the ONNX runtime. The installation script will attempt to guess which runtime to install. You can force a given runtime by setting the ONNXRUNTIME variable, e.g.:

    ONNXRUNTIME=onnxruntime-gpu pip install ai-models-panguweather
aimojicommit
No description available on PyPI.
aimol
No description available on PyPI.
aimos
Drop a star to support AimOS ⭐ Join the AimOS Discord community.

Open-source modular observability for AI systems. Easily log, connect and observe any part of your AI systems, from experiments to production, from prompts to AI system monitoring.

SEAMLESSLY INTEGRATES WITH: AimStack offers enterprise support that goes beyond core AimOS; contact [email protected]

Demos • Default logging apps • Quick Start • Examples • Community • Blog

ℹ️ About

AimOS is an open-source operating system for logs. With AimOS you can build, run and combine any kind of logging application: experiment tracking, production monitoring, AI system (LLM-based) monitoring, usage monitoring, etc. Logging applications are typically a combination of these components: the types and relationships of the data being logged; the observability UI over the logged data; and automations over the logged data.

AimOS comes installed with a number of default logging apps:
- Base App - basic, generic log exploration and the logging primitives
- AI Experiment Tracking App - log and explore your machine learning experiments; includes integrations with the majority of leading ML frameworks
- AI Systems Tracing and Debugging Apps - a combination of apps that log everything from LangChain to LlamaIndex traces, all in one place

Apart from running the logging apps, AimOS comes with explorers and reports. Explorers are advanced log-comparison tools for specific kinds of logs; they allow you to compare thousands of sessions of metrics, images, text, audio and other types of data. Reports are an embedded knowledge base that operates seamlessly with the apps and explorers, capturing the knowledge built on top of the logged data through AimOS apps and explorers. With the rise of AI systems and the challenges they bring, logging apps are going to be a crucial part of software. Our mission is to democratize developer tools for building AI.

Base App: general observability over anything logged with AimOS. Visualize all the logs ever logged with AimOS for a given project 🗺️. Base types to log common artifacts such as Images, Audio objects, Figures and Metrics. High-level overview of the logs, the types logged and the respective sessions/containers. Deep dive into each type of log.

Experiment Tracking App: log metadata across your ML pipeline 💾 and visualize & compare metadata via the UI 📊. ML experiments and any metadata tracking; integration with popular ML frameworks; easy migration from other experiment trackers; metadata visualization via AimOS Explorers; grouping and aggregation; querying using Python expressions. Run ML trainings effectively ⚡ and organize your experiments 🗂️: system info and resource usage tracking, real-time alerting on training progress (upcoming), detailed run information for easy debugging, and a centralized dashboard for a holistic view.

AI Systems Tracing Apps: log inputs, outputs and actions of executions 🤖 and visualize & compare execution steps via the UI 🔍. Track all the prompts and generations of LLMs; track all the inputs and outputs of tools; capture chain metadata; deep dive into single execution steps; compare executions side by side.

🎬 Demos

Check out live AimOS demos NOW to see it in action: tracing LangChain-based chatbot executions (View Demo | View Code), tracing LlamaIndex query executions (View Demo | View Code), tracking PyTorch-based CNN trainings (View Demo | View Code).

🌍 Default logging apps

AimOS comes pre-installed with a wide variety of apps. Here is the full list:
- base (Base): base AimOS app for general observability over anything logged with AimOS. Includes base types to log common artifacts, such as Image, Audio object, Figure, Metric. (docs, source)
- docs (Docs): use this AimOS app to access the AimOS docs. (source)
- langchain_debugger (AI Systems Tracing): debugger for LangChain that logs LLM prompts and generations, tool inputs/outputs, and chain metadata. (docs, source)
- llamaindex_observer (AI Systems Tracing): debugger and observer for LlamaIndex. Logs metadata like retrieval nodes, queries and responses, embedding chunks, etc. (docs, source)
- experiment_tracker (Experiment Tracking): app for tracking and exploring ML experiments. Integrations with various ML libraries, including Acme, CatBoost, fastai, Hugging Face Transformers, Keras, Keras Tuner, LightGBM, MXNet, Optuna, PaddlePaddle, PyTorch Ignite, SB3, and XGBoost. (docs, source)

🏁 Quick start

Follow the steps below to get started with AimOS.

1. Install AimOS on your training environment:

    pip3 install aimos

2. Integrate AimOS with your code:

    from aimstack.base import Run, Metric

    # Initialize a new run
    run = Run()

    # Log run parameters
    run["hparams"] = {
        "learning_rate": 0.001,
        "batch_size": 32,
    }

    # Init a metric
    metric = Metric(run, name='loss', context={'subset': 'training'})

    for i in range(1000):
        metric.track(i, epoch=1)

3. Start the AimOS server:

    aimos server

4. Start the AimOS UI:

    aimos ui

👥 Community

AimOS README badge: add the AimOS badge to your README if you've enjoyed using AimOS in your work:

    [![AimOS](https://img.shields.io/badge/powered%20by-AimOS-%231473E6)](https://github.com/aimhubio/aimos)

Contributing to AimOS: considering contributing to AimOS? To get started, please take a moment to read the CONTRIBUTING.md guide. Join AimOS contributors by submitting your first pull request. Happy coding! 😊 Made with contrib.rocks.

More questions? Open a feature request or report a bug, or join the Discord community server.
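As a usage sketch that builds only on the quick-start API shown above (Run and Metric from aimstack.base), here is how one might log hyperparameters plus training and validation loss as two separately-contexted metrics; the context keys and loss values are placeholder assumptions for illustration:

```python
from aimstack.base import Run, Metric

run = Run()
run["hparams"] = {"learning_rate": 0.001, "batch_size": 32}

# One Metric object per (name, context) pair, mirroring the quick-start snippet
train_loss = Metric(run, name="loss", context={"subset": "training"})
val_loss = Metric(run, name="loss", context={"subset": "validation"})

for epoch in range(10):
    # Placeholder values standing in for real losses
    train_loss.track(1.0 / (epoch + 1), epoch=epoch)
    val_loss.track(1.2 / (epoch + 1), epoch=epoch)
```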
aimos-ui
No description available on PyPI.
ai-mouse
Failed to fetch description. HTTP Status Code: 404
aimped
Aimped is a Python library that provides classes and functions exclusively for business-tailored AI-based models. In this version, we provide the following features: an API service, sound processing tools and functions, NLP tools and functions, and a pipeline class for NLP tasks.

Installation

    pip install aimped

API Usage

Configuration of the library:

    from aimped.services.api import AimpedAPI

    # Create a new Aimped instance
    user_key = ''      # user_key received from A3M
    user_secret = ''   # user_secret received from A3M
    BASE_URL = 'https://aimped.ai'  # Aimped domain url

    api_service = AimpedAPI(user_key, user_secret, {'base_url': BASE_URL})

Preparation of the model input data:

    model_id = ""    # ID of the model run. The model ID is available on the model description page under API usage.
    payload = {...}  # Model input examples (payload) are available in the API usage tab on the model description page.

Usage of the API function:

    result = api_service.run_model(model_id, payload)

Usage of the API callback function:

    # callback function
    def callback(event, message, time, data=None):
        if event == 'start':
            print(f'Start event at {time}: {message}')
        elif event == 'proccess':
            print(f'Progress event at {time}: {message}')
        elif event == 'error':
            print(f'Error event at {time}: {message}')
        elif event == 'end':
            print(f'End event at {time}: {message}. Data: {data}')

    result = api_service.run_model_callback(model_id, payload, callback)

Usage of API file upload. Some models support file inputs; these inputs are accepted as URIs. Here is how to upload a file through the API:

    input = api_service.file_upload(
        model_id,
        '/Users/joe/Downloads/xyz.pdf'  # sample file path to upload
    )

Usage of API file download. Some models produce file outputs as results; these outputs are created as URIs. Here is how to download a file through the API:

    output_file = api_service.file_download_and_save(
        'input/application/model_{{modelId}}/user_{{userId}}/file_name',  # URI of the model output file in the result
        '/Users/joe/Downloads/123_file_name'  # sample local file path to save
    )

Contributing: pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. Please make sure to update tests as appropriate.

License: MIT
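Putting the pieces above together, a minimal end-to-end sketch might look like the following. It reuses only the calls shown in this description (AimpedAPI, file_upload, run_model_callback); the credentials, model ID, file path, and payload layout are placeholder assumptions, since payloads are model-specific:

```python
from aimped.services.api import AimpedAPI

# Credentials received from A3M (placeholders)
api_service = AimpedAPI('my_user_key', 'my_user_secret', {'base_url': 'https://aimped.ai'})

MODEL_ID = '123'  # hypothetical model ID from the model description page

# Upload an input document and reference it in the payload (payload layout is an assumption)
uploaded_uri = api_service.file_upload(MODEL_ID, '/tmp/input.pdf')
payload = {'file': uploaded_uri}

def callback(event, message, time, data=None):
    # Print progress events; inspect the result data once the run ends
    print(event, message)
    if event == 'end' and data:
        print('Run finished:', data)

result = api_service.run_model_callback(MODEL_ID, payload, callback)
```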
aim-platform-sdk
An API client for the AI Maintainer Marketplace.
aimport
No description available on PyPI.
aimp-sdk
Python SDK for Onepanel - Production scale vision AI platform with fully integrated components for model building, automated labeling, data processing and model training pipelines.
aimrecords
# AimRecords - Record-oriented data storage

![GitHub Top Language](https://img.shields.io/github/languages/top/aimhubio/aimrecords) [![PyPI Package](https://img.shields.io/pypi/v/aimrecords?color=yellow)](https://pypi.org/project/aimrecords/) [![License](https://img.shields.io/badge/License-Apache%202.0-orange.svg)](https://opensource.org/licenses/Apache-2.0)

Library to effectively store tracked experiment logs. See the documentation [here](docs/README.md).

## Getting Started

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.

### Requirements

Python 3

We suggest using a [virtual environment](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) for managing local dependencies.

To start development, first install all dependencies:

```bash
pip install -r requirements.txt
```

### Project Structure

```
├── aimrecords <----------- main project code
│   ├── artifact_storage <- manage storage of artifacts
│   └── record_storage <--- manage records storage of a single artifact
├── docs <----------------- data format documentation
├── examples <------------- example usages of aimrecords
└── tests
```

## Running the tests

Run tests via the command `pytest` in the root folder.

### Code Style

We follow the [pep8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code.

## Contributing

Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct, and the process for submitting pull requests to us.
aim-records
# AimRecords

AimRecords is an event-oriented data format which utilizes Protocol Buffers (protobuf) to be flexible and highly language-neutral.
aimrobot
AIM.Robot

AIM.Robot is a Python library to run Robotic Assembly Digital Model (RADM) files directly on Universal Robots.

Requirements
- Universal Robot (tested on a UR5e and UR10e)
- Python 3.8 or newer
- A valid RADM file

Installation
The easiest way to install AIM.Robot is using pip:

    pip install aimrobot

Getting Started
First of all, you need to test the connection to the robot:

    aimrobot --networktest 192.168.1.1

If the connection is established, you can load and run a RADM file:

    aimrobot --file runme.radm

This will execute all the steps using the wait time set in the file. If you want to have manual control, you can add the flag --manual:

    aimrobot --file runme.radm --manual
aimrocks
No description available on PyPI.
aims-convert
AIMS Roster Data Extraction

This will primarily be of interest to easyJet pilots. It extracts data from a “detailed roster” as can be downloaded from AIMS. Pilots from other airlines that use AIMS may also find this useful — it has only been tested against the files from easyJet’s version of AIMS (as that is all that I have access to), so its utility will depend on whether the version being used by your airline produces sufficiently similar output.

As well as extracting the data from the AIMS roster, the registration and type of the aircraft operating the sectors are looked up, and a night flying calculation is carried out.

The three main formats that the data can be extracted to are electronic Flight Journal (eFJ), iCalendar and CSV.

An eFJ is a text file that can be used to store personal flight data in an intuitive non-tabular form. Full details of the format of this text file can be found at https://hursts.org.uk/efjdocs/format.html, and an online tool capable of converting this format to an FCL.050-compliant logbook in HTML format can be found at https://hursts.org.uk/efj.

The iCalendar format can be imported into most calendar applications. There is an option to include full-day events such as days off. This is useful for managing and sharing your future roster.

The CSV format is for keeping logbook data in a spreadsheet.

The package provides entry points for a command line interface, a Tk-based graphical interface and an AWS Lambda function. The AWS function is hooked up to a website at https://hursts.org.uk/aims, which requires no installation, just a browser. The other two interfaces require the installation of a Python interpreter.
aimsgb
Introduction

aimsgb is an efficient and open-source Python library for generating atomic coordinates in periodic grain boundary models. It is designed to construct various grain boundary structures from cubic and non-cubic initial configurations. A convenient command line tool is also provided to enable easy and fast construction of tilt and twist boundaries by assigning the degree of fit (Σ), rotation axis, grain boundary plane and initial crystal structure. aimsgb is expected to greatly accelerate the theoretical investigation of grain boundary properties and facilitate the experimental analysis of grain boundary structures as well.

A reference for the usage of the aimsgb software is: Jianli Cheng, Jian Luo, and Kesong Yang, Aimsgb: An Algorithm and Open-Source Python Library to Generate Periodic Grain Boundary Structures, Comput. Mater. Sci. 155, 92-103 (2018). DOI: 10.1016/j.commatsci.2018.08.029

Install aimsgb

Clone the latest version from GitHub:

    git clone [email protected]:ksyang2013/aimsgb.git

Navigate to the aimsgb folder:

    cd aimsgb

Type in the root of the repo:

    pip install .

or, to install the package in development mode:

    pip install -e .

How to cite aimsgb

If you use aimsgb in your research, please consider citing the following work: Jianli Cheng, Jian Luo, Kesong Yang. Aimsgb: An algorithm and open-source python library to generate periodic grain boundary structures. Computational Materials Science, 2018, 155, 92-103. doi: 10.1016/j.commatsci.2018.08.029

Copyright

Copyright (C) 2018 The Regents of the University of California. All Rights Reserved. Permission to copy, modify, and distribute this software and its documentation for educational, research and non-profit purposes, without fee, and without a written agreement is hereby granted, provided that the above copyright notice, this paragraph and the following three paragraphs appear in all copies. Permission to make commercial use of this software may be obtained by contacting: Office of Innovation and Commercialization, 9500 Gilman Drive, Mail Code 0910, University of California, La Jolla, CA 92093-0910, (858) [email protected]

This software program and documentation are copyrighted by The Regents of the University of California. The software program and documentation are supplied “as is”, without any accompanying services from The Regents. The Regents does not warrant that the operation of the program will be uninterrupted or error-free. The end-user understands that the program was developed for research purposes and is advised not to rely exclusively on the program for any reason.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN “AS IS” BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

Authors

Dr. Jianli Cheng ([email protected]) New Email: [email protected]
Prof. Kesong Yang ([email protected])

About the aimsgb Development Team: http://materials.ucsd.edu/
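The description above mentions building tilt and twist boundaries from the degree of fit (Σ), rotation axis and grain boundary plane. The following is a purely illustrative, hypothetical sketch: the class names Grain and GrainBoundary and their arguments are assumptions not confirmed by this description, so consult the aimsgb documentation for the real API.

```python
# Hypothetical sketch only; names and signatures are assumptions, not the verified aimsgb API.
from aimsgb import Grain, GrainBoundary  # assumed import path

# Load the initial (conventional) crystal structure, e.g. from a VASP POSCAR file
initial_grain = Grain.from_file("POSCAR")  # assumed constructor

# Sigma-5 boundary around the [001] rotation axis on the (1 2 0) grain boundary plane
gb = GrainBoundary(axis=[0, 0, 1], sigma=5, plane=[1, 2, 0],
                   initial_struct=initial_grain)  # assumed signature

# Inspect the resulting periodic grain-boundary supercell
print(gb)
```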
aimsim
AIMSim README

Visualizing Diversity in your Molecular Dataset

Documentation and Tutorial: view our Online Documentation or try the AIMSim comprehensive tutorial in your browser.

Purpose

Why Do We Need To Visualize Molecular Similarity / Diversity? There are several contexts where it is helpful to visualize the diversity of a molecular dataset:

Exploratory Experimental Synthesis. For a chemist, synthesizing new molecules with targeted properties is often a laborious and time-consuming task. In such a case, it becomes useful to check the similarity of a newly proposed (un-synthesized) molecule to the ones already synthesized. If the proposed molecule is too similar to the existing repertoire of molecules, it will probably not yield enough new information / property and thus need not be synthesized. Thus, a chemist can avoid spending time and effort synthesizing molecules not useful for the project.

Lead Optimization and Virtual Screening. This application is the converse of exploratory synthesis, where the interest is in finding molecules in a database which are structurally similar to an "active" molecule. In this context, "active" might refer to pharmacological activity (drug discovery campaigns) or desirable chemical properties (for example, to discover alternative chemicals and solvents for an application). In such a case, AIMSim helps to run virtual screenings over a molecular database and visualize the results.

Machine Learning Molecular Properties. In the context of machine learning, visualizing the diversity of the training set gives a good idea about its information quality. A more diverse training dataset yields a more robust model, which generalizes well to unseen data. Additionally, such a visualization can identify "clusters of similarity" indicating the need for separately trained models for each cluster.

Substrate Scope Robustness Verification. When proposing a novel reaction, it is essential for the practicing chemist to evaluate the transformation's tolerance of diverse functional groups and substrates (Glorius, 2013). Using AIMSim, one can evaluate the structural and chemical similarity across an entire substrate scope to ensure that it avoids redundant species. Below is an example similarity heatmap generated to visualize the diversity of a three-component sulfonamide coupling reaction with a substantial number of substrates (Chen, 2018). Many of the substrates appear similar to one another and thereby redundant, but in reality the core sulfone moiety and the use of the same coupling partner when evaluating functional group tolerance accounts for this apparent shortcoming. Also of note is the region of high similarity along the diagonal, where the substrates often differ by a single halide heteroatom or substitution pattern.

Installing AIMSim

It is recommended to install AIMSim in a virtual environment with conda or Python's venv.

pip: AIMSim can be installed with a single command using Python's package manager pip:

    pip install aimsim

This command also installs the required dependencies.

conda: AIMSim is also available with the conda package manager via:

    conda install -c conda-forge aimsim

This will install all dependencies from conda-forge.

Note for mordred-descriptor: AIMSim v1 provided direct support for the descriptors provided in the mordred package, but unfortunately the original mordred is now abandonware. The unofficial mordredcommunity is now used in version 2.1 and newer to deliver the same features but with support for modern Python.

Running AIMSim

AIMSim is compatible with Python 3.8 to 3.12.

Start AIMSim with a graphical user interface:

    aimsim

Start AIMSim with a prepared configuration YAML file (config.yaml):

    aimsim config.yaml

Currently Implemented Fingerprints
- Morgan Fingerprint (equivalent to the ECFP fingerprints)
- RDKit Topological Fingerprint
- RDKit Daylight Fingerprint

The following are available via command line use (config.yaml) only:
- MinHash Fingerprint (see MHFP)
- All fingerprints available from the ccbmlib package (specify 'ccbmlib:descriptorname' for command line input)
- All descriptors and fingerprints available from PaDELPy, an interface to PaDEL-Descriptor (specify 'padelpy:descriptorname' for command line input)
- All descriptors available through the Mordred library (specify 'mordred:descriptorname' for command line input). To enable this option, you must install with pip install 'aimsim[mordred]' (see the disclaimer in the Installation section above).

Currently Implemented Similarity Scores

44 commonly used similarity scores are implemented in AIMSim. Additional L0, L1 and L2 norm based similarities are also implemented. View our Online Documentation for a complete list of implemented similarity scores.

Currently Implemented Functionalities

Measure Search: Automate the search for a fingerprint and similarity metric (together called a "measure") using the following algorithm (a generic sketch of this loop follows at the end of this entry):
Step 1: Select an arbitrary featurization scheme.
Step 2: Featurize the molecule set using the selected scheme.
Step 3: Choose an arbitrary similarity measure.
Step 4: Select each molecule’s nearest and furthest neighbors in the set using the similarity measure.
Step 5: Measure the correlation between a molecule’s QoI and its nearest neighbor’s QoI.
Step 6: Measure the correlation between a molecule’s QoI and its furthest neighbor’s QoI.
Step 7: Define a score which maximizes the value in Step 5 and minimizes the value in Step 6.
Step 8: Iterate Steps 1–7 to select the featurization scheme and similarity measure which maximize the result of Step 7.

See Property Variation with Similarity: Visualize the correlation in the QoI between nearest neighbor molecules (most similar pairs in the molecule set) and between the furthest neighbor molecules (most dissimilar pairs in the molecule set). This is used to verify that the chosen measure is appropriate for the task.

Visualize Dataset: Visualize the diversity of the molecule set in the form of a pairwise similarity density and a similarity heatmap of the molecule set. Embed the molecule set in 2D space using principal component analysis (PCA)[3], multi-dimensional scaling[4], t-SNE[5], Spectral Embedding[6], or Isomap[7].

Compare Target Molecule to Molecule Set: Run a similarity search of a molecule against a database of molecules (molecule set). This task can be used to identify the most similar (useful in virtual screening operations) or most dissimilar (useful in applications that require high diversity, such as training set design for machine learning models) molecules.

Cluster Data: Cluster the molecule set. The following algorithms are implemented:
- For arbitrary molecular features or similarity metrics with defined Euclidean distances: K-Medoids[3] and Ward[8] (hierarchical clustering).
- For binary fingerprints: complete, single and average linkage hierarchical clustering[8].
The clustered data is plotted in two dimensions using principal component analysis (PCA)[3], multi-dimensional scaling[4], or t-SNE[5].

Outlier Detection: Using an isolation forest, check which molecules are potentially novel or are outliers according to the selected descriptor. Output can be sent directly to the command line by specifying output to be terminal, or to a text file by instead providing a filename.

Contributors
- Developer: Himaghna Bhattacharjee, Vlachos Research Lab. (LinkedIn)
- Developer: Jackson Burns, Don Watson Lab. (Personal Site)

AIMSim in the Literature
- Applications of Artificial Intelligence and Machine Learning Algorithms to Crystallization
- Recent Advances in Machine-Learning-Based Chemoinformatics: A Comprehensive Review

Developer Notes

Issues and Pull Requests are welcomed! To propose an addition to AIMSim, open an issue and the developers will tag it as an enhancement and start a discussion.

AIMSim includes an automated testing apparatus operated by Python's unittest built-in package. To execute tests related to the core functionality of AIMSim, run this command:

    python -m unittest discover

Full multiprocessing speedup and efficiency tests take more than 10 hours to run due to the number of replicates required. To run these tests, create a file called .speedup-test in the AIMSim directory and execute the above command as shown.

To manually build the docs, execute the following with sphinx and m2r installed and from the /docs directory:

    m2r ../README.md | mv ../README.rst . | sphinx-apidoc -f -o . .. | make html | cp _build/html/* .

Documentation should build automatically on push to the master branch via an automated GitHub action.

For packaging on PyPI:

    python -m build; twine upload dist/*

Be sure to bump the version in __init__.py.

Citation

If you use this code for scientific publications, please cite the following paper: Himaghna Bhattacharjee, Jackson Burns, Dionisios G. Vlachos, AIMSim: An accessible cheminformatics platform for similarity operations on chemicals datasets, Computer Physics Communications, Volume 283, 2023, 108579, ISSN 0010-4655, https://doi.org/10.1016/j.cpc.2022.108579.

License

This code is made available under the terms of the MIT Open License:

Copyright (c) 2020-2027 Himaghna Bhattacharjee & Jackson Burns

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Works Cited
[1] Collins, K. and Glorius, F., A robustness screen for the rapid assessment of chemical reactions. Nature Chem 5, 597–601 (2013). https://doi.org/10.1038/nchem.1669
[2] Chen, Y., Murray, P.R.D., Davies, A.T., and Willis, M.C., J. Am. Chem. Soc. 140 (28), 8781-8787 (2018). https://doi.org/10.1021/jacs.8b04532
[3] Hastie, T., Tibshirani, R. and Friedman, J., The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Ed., Springer Series in Statistics (2009).
[4] Borg, I. and Groenen, P.J.F., Modern Multidimensional Scaling: Theory and Applications, Springer Series in Statistics (2005).
[5] van der Maaten, L.J.P. and Hinton, G.E., Visualizing High-Dimensional Data Using t-SNE. Journal of Machine Learning Research 9:2579-2605 (2008).
[6] Ng, A.Y., Jordan, M.I. and Weiss, Y., On Spectral Clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, MIT Press (2001).
[7] Tenenbaum, J.B., De Silva, V. and Langford, J.C., A global geometric framework for nonlinear dimensionality reduction, Science 290 (5500), 2319-23 (2000). https://doi.org/10.1126/science.290.5500.2319
[8] Murtagh, F. and Contreras, P., Algorithms for hierarchical clustering: an overview. WIREs Data Mining Knowl Discov (2011). https://doi.org/10.1002/widm.53
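To make the Measure Search procedure above concrete, here is a generic, library-agnostic Python sketch of the loop. The featurizers, similarity functions and the quantity of interest (QoI) passed in are illustrative assumptions, not the AIMSim API:

```python
import numpy as np
from itertools import product

def measure_search(mols, qoi, featurizers, similarity_fns):
    """Pick the (featurizer, similarity) pair whose nearest neighbors best predict the QoI.

    mols: list of molecules; qoi: one QoI value per molecule.
    featurizers / similarity_fns: dicts of name -> callable (illustrative placeholders).
    """
    qoi = np.asarray(qoi, dtype=float)
    best = (None, -np.inf)
    for (f_name, featurize), (s_name, sim) in product(featurizers.items(), similarity_fns.items()):
        feats = [featurize(m) for m in mols]                       # Steps 1-2: featurize the set
        s = np.array([[sim(a, b) for b in feats] for a in feats])  # Step 3: pairwise similarity
        np.fill_diagonal(s, np.nan)
        nearest = np.nanargmax(s, axis=1)                          # Step 4: nearest neighbors
        furthest = np.nanargmin(s, axis=1)                         #         and furthest neighbors
        r_near = np.corrcoef(qoi, qoi[nearest])[0, 1]              # Step 5: QoI correlation (nearest)
        r_far = np.corrcoef(qoi, qoi[furthest])[0, 1]              # Step 6: QoI correlation (furthest)
        score = abs(r_near) - abs(r_far)                           # Step 7: reward Step 5, penalize Step 6
        if score > best[1]:
            best = ((f_name, s_name), score)                       # Step 8: keep the best measure
    return best
```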
aims-immune
AIMS - An Automated Immune Molecule Separator

Quick Start

As of AIMS v0.9, everything should be nicely wrapped up as an installable PyPI package. You can simply install the AIMS GUI, CLI, and notebook using pip:

    pip install aims-immune

You can then launch the GUI, the CLI, or the notebook from the terminal, in the directory where your data is located, using one of:

    aims-gui
    aims-cli
    aims-notebook

Lastly, you can optionally copy test data into your current directory using:

    aims-tests

Whether you are a new or returning AIMS user, it is strongly recommended that you check out the documentation (see below) to learn details about formatting and usage. For returning users especially, the way AIMS is called has changed completely.

Description

The primary goal of AIMS is to identify discriminating factors between two distinct sets of immune molecules. As of versions 0.8 and later, the software is capable of analyzing any set of sequences with general conservation and localized diversity. AIMS has specific analysis modes for immunoglobulins (Ig - T cell receptors and antibodies) and peptides (specifically those isolated from MHC), as well as a more general multi-sequence alignment analysis mode that has been used to characterize MHC molecules, MHC-like molecules, and the non-immunological Dpr-DIP proteins.

AIMS is a Python package distributed in notebook, CLI, and GUI formats. An example of an application of AIMS can be seen in this peer-reviewed article: https://elifesciences.org/articles/61393

When publishing analysis from this software, please cite:
Boughter CT, Borowska MT, Guthmiller JJ, Bendelac A, Wilson PC, Roux B, Adams EJ. Biochemical Patterns of Antibody Polyreactivity Revealed Through a Bioinformatics-Based Analysis of CDR Loops. eLife. 2020. DOI: 10.7554/eLife.61393
and
Boughter CT, Meier-Schellersheim M. An Integrated Approach to the Characterization of Immune Repertoires Using AIMS: An Automated Immune Molecule Separator. BioRxiv. 2022. DOI: 10.1101/2022.12.07.519510

Documentation

Rather than have all of the instructions on this GitHub page, all information on installation and usage (and more!) has been moved to a separate, more readable documentation page. Please follow this link for the comprehensive AIMS user guide: https://aims-doc.readthedocs.io/en/latest/

Reproduction of Published Results

As of versions 0.8 and later, the data necessary for reproducing results published thus far have been moved to a separate repository, which can be found here: https://github.com/ctboughter/AIMS_manuscripts
The underlying code remains the same and will continue to be updated. This has been done to keep the AIMS analysis software more streamlined and less cluttered with manuscript-specific analysis.

Further Reading

Now that AIMS has been out in the wild for around two years, there have been additional published peer-reviewed manuscripts or posted preprints that highlight the capabilities of AIMS! I'll try to keep this list relatively up to date, and if it ever gets lengthy it will likely move to the ReadTheDocs page. Manuscripts thus far include:
- An application of AIMS to non-immune molecules using multi-sequence alignment (MSA) encoding: https://pubs.acs.org/doi/abs/10.1021/acs.jpcb.2c02173
- The AIMS bible, with a thorough explanation of the rationale behind the AIMS analysis: https://www.biorxiv.org/content/10.1101/2022.12.07.519510v1
- An investigation of the nature of the germline interactions between TCR CDR loops and MHC: https://www.biorxiv.org/content/10.1101/2022.12.07.519507v1
aim-spacy
Aim-spaCy integration
aimsprop
aimsprop

To use aimsprop, check out the documentation here! To develop / contribute to the aimsprop code, see CONTRIBUTING.md to get your environment and branch set up. For a tutorial on the basic functionality of aimsprop, go here.

Description
A repository for the representation and manipulation of AIMS-type trajectories, particularly for use in computing time-dependent properties (bond lengths, angles, torsions, X-ray scattering, UED, photoelectron, UV-Vis, etc.) in a semi-uniform manner.

Directories
- aimsprop - the Python AIMS trajectory/property code
- docs - mkdocs documentation that is published here
- tests - unit tests for the aimsprop code using pytest
- notes - notes used in implementing
- literature - key papers/book chapters used in implementing certain properties
- scripts - utility scripts for common ops like plotting absorption spectra, generating difference densities from Molden files, etc. (no warranty on these - the usual workflow is to copy these into your own project directories and modify until they work)
- examples - examples of use cases of property computations (these are not the most helpful; better to look at the tutorial)

Todos
- H-bond coordinates
- Better UED cross sections
- OOP angles
- Update examples
- Adiabatic dynamics are weirdly done in 0.5 fs, non-adiabatic dynamics are in 20 au. These are not the same. We should fix/accommodate this.

Authors
Rob Parrish, Monika Williams, Hayley Weir, Colton Hicks, Alice Walker, Alessio Valentini
aimstack
# aim

#### Version control for AI

See the docs [here](https://docs.aimhub.io).

## Development

### Requirements

Python 3

We suggest using a [virtual environment](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) for managing local dependencies.

To start development, first install all dependencies:

```bash
pip install -r requirements.txt
```

### Project Structure

```
├── aim <---------------- main project code
│   ├── cli <------------- command line interface
│   ├── engine <---------- business logic
│   ├── sdk <------------- Python SDK
│   ├── artifacts <------- managing tracked data
│   └── version_control <- managing files and code
├── examples <------------ example usages of aim SDK
└── tests
```

### Code Style

We follow the [pep8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. We use [autopep8](https://pypi.org/project/autopep8/) and [pycodestyle](https://pypi.org/project/pycodestyle/) to enable checking and formatting of Python code.

To check code style, run `pycodestyle .` in the root folder.

To auto-format, run `autopep8 --in-place --recursive --aggressive --aggressive .` in the root folder.
aim-stack
# aim

#### Version control for AI

See the docs [here](https://docs.aimhub.io).

## Development

### Requirements

Python 3

We suggest using a [virtual environment](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) for managing local dependencies.

To start development, first install all dependencies:

```bash
pip install -r requirements.txt
```

### Project Structure

```
├── aim <---------------- main project code
│   ├── cli <------------- command line interface
│   ├── engine <---------- business logic
│   ├── sdk <------------- Python SDK
│   ├── artifacts <------- managing tracked data
│   └── version_control <- managing files and code
├── examples <------------ example usages of aim SDK
└── tests
```

### Code Style

We follow the [pep8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. We use [autopep8](https://pypi.org/project/autopep8/) and [pycodestyle](https://pypi.org/project/pycodestyle/) to enable checking and formatting of Python code.

To check code style, run `pycodestyle .` in the root folder.

To auto-format, run `autopep8 --in-place --recursive --aggressive --aggressive .` in the root folder.
aim-ui
No description available on PyPI.
aim-ui-custom
No description available on PyPI.
aim-with-auth-support
An easy-to-use & supercharged open-source experiment tracker.

Aim logs your training runs, enables a beautiful UI to compare them, and provides an API to query them programmatically.

About • Features • Demos • Examples • Quick Start • Documentation • Roadmap • Discord Community • Twitter

Integrates seamlessly with your favorite tools.

About Aim

Track and version ML runs. Visualize runs via a beautiful UI. Query run metadata via the SDK.

Aim is an open-source, self-hosted ML experiment tracking tool. It's good at tracking lots (1000s) of training runs, and it allows you to compare them with a performant and beautiful UI. You can use not only the great Aim UI but also its SDK to query your runs' metadata programmatically. That's especially useful for automations and additional analysis in a Jupyter notebook. Aim's mission is to democratize AI dev tools.

Why use Aim?

Compare 100s of runs in a few clicks - build models faster:
- Compare, group and aggregate 100s of metrics thanks to effective visualizations.
- Analyze and learn correlations and patterns between hparams and metrics.
- Easy pythonic search to query the runs you want to explore.

Deep dive into details of each run for easy debugging:
- Hyperparameters, metrics, images, distributions, audio, text - all available at hand in an intuitive UI to understand the performance of your model.
- Easily track plots built via your favourite visualisation tools, like plotly and matplotlib.
- Analyze system resource usage to effectively utilize computational resources.

Have all relevant information organised and accessible for easy governance:
- Centralized dashboard to holistically view all your runs, their hparams and results.
- Use the SDK to query/access all your runs and tracked metadata.
- You own your data - Aim is open source and self-hosted.

Demos
- Machine translation: training logs of a neural translation model (from the WMT'19 competition).
- lightweight-GAN: training logs of the 'lightweight' GAN proposed in ICLR 2021.
- FastSpeech 2: training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".
- Simple MNIST: simple MNIST training logs.

Quick Start

Follow the steps below to get started with Aim.

1. Install Aim on your training environment:

    pip3 install aim

2. Integrate Aim with your code:

    from aim import Run

    # Initialize a new run
    run = Run()

    # Log run parameters
    run["hparams"] = {
        "learning_rate": 0.001,
        "batch_size": 32,
    }

    # Log metrics
    for i in range(10):
        run.track(i, name='loss', step=i, context={"subset": "train"})
        run.track(i, name='acc', step=i, context={"subset": "train"})

See the full list of supported trackable objects (e.g. images, text, etc.) here.

3. Run the training as usual and start the Aim UI:

    aim up

4. Or query runs programmatically via the SDK:

    from aim import Repo

    my_repo = Repo('/path/to/aim/repo')

    query = "metric.name == 'loss'"  # Example query

    # Get collection of metrics
    for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
        for metric in run_metrics_collection:
            # Get run params
            params = metric.run[...]
            # Get metric values
            steps, metric_values = metric.values.sparse_numpy()

Integrations

Integrate PyTorch Lightning:

    from aim.pytorch_lightning import AimLogger

    # ...
    trainer = pl.Trainer(logger=AimLogger(experiment='experiment_name'))
    # ...

See documentation here.

Integrate Hugging Face:

    from aim.hugging_face import AimCallback

    # ...
    aim_callback = AimCallback(repo='/path/to/logs/dir', experiment='mnli')
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset if training_args.do_train else None,
        eval_dataset=eval_dataset if training_args.do_eval else None,
        callbacks=[aim_callback],
        # ...
    )
    # ...

See documentation here.

Integrate Keras & tf.keras:

    import aim

    # ...
    model.fit(x_train, y_train, epochs=epochs, callbacks=[
        aim.keras.AimCallback(repo='/path/to/logs/dir', experiment='experiment_name')
        # Use aim.tensorflow.AimCallback in case of tf.keras
    ])
    # ...

See documentation here.

Integrate KerasTuner:

    from aim.keras_tuner import AimCallback

    # ...
    tuner.search(
        train_ds,
        validation_data=test_ds,
        callbacks=[AimCallback(tuner=tuner, repo='.', experiment='keras_tuner_test')],
    )
    # ...

See documentation here.

Integrate XGBoost:

    from aim.xgboost import AimCallback

    # ...
    aim_callback = AimCallback(repo='/path/to/logs/dir', experiment='experiment_name')
    bst = xgb.train(param, xg_train, num_round, watchlist, callbacks=[aim_callback])
    # ...

See documentation here.

Integrate CatBoost:

    from aim.catboost import AimLogger

    # ...
    model.fit(train_data, train_labels,
              log_cout=AimLogger(loss_function='Logloss'),
              logging_level="Info")
    # ...

See documentation here.

Integrate fastai:

    from aim.fastai import AimCallback

    # ...
    learn = cnn_learner(dls, resnet18, pretrained=True,
                        loss_func=CrossEntropyLossFlat(),
                        metrics=accuracy, model_dir="/tmp/model/",
                        cbs=AimCallback(repo='.', experiment='fastai_test'))
    # ...

See documentation here.

Integrate LightGBM:

    from aim.lightgbm import AimCallback

    # ...
    aim_callback = AimCallback(experiment='lgb_test')
    aim_callback.experiment['hparams'] = params

    gbm = lgb.train(params,
                    lgb_train,
                    num_boost_round=20,
                    valid_sets=lgb_eval,
                    callbacks=[aim_callback, lgb.early_stopping(stopping_rounds=5)])
    # ...

See documentation here.

Integrate PyTorch Ignite:

    from aim.pytorch_ignite import AimLogger

    # ...
    aim_logger = AimLogger()

    aim_logger.log_params({
        "model": model.__class__.__name__,
        "pytorch_version": str(torch.__version__),
        "ignite_version": str(ignite.__version__),
    })

    aim_logger.attach_output_handler(
        trainer,
        event_name=Events.ITERATION_COMPLETED,
        tag="train",
        output_transform=lambda loss: {'loss': loss},
    )
    # ...

See documentation here.

Comparisons to familiar tools

TensorBoard

Training run comparison: order-of-magnitude faster training run comparison with Aim. The tracked params are first-class citizens in Aim: you can search, group and aggregate via params, and deeply explore all the tracked data (metrics, params, images) in the UI. With TensorBoard, users are forced to record those parameters in the training run name to be able to search and compare.
This causes a super-tedious comparison experience and usability issues in the UI when there are many experiments and params. TensorBoard doesn't have features to group and aggregate the metrics.

Scalability: Aim is built to handle 1000s of training runs, both on the backend and in the UI. TensorBoard becomes really slow and hard to use when a few hundred training runs are queried / compared.

Beloved TB visualizations to be added to Aim: embedding projector; neural network visualization.

MLflow

MLflow is an end-to-end ML lifecycle tool, while Aim is focused on training tracking. The main differences between Aim and MLflow are around UI scalability and run comparison features.

Run comparison: Aim treats tracked parameters as first-class citizens. Users can query runs, metrics and images, and filter using the params. MLflow does have a search by tracked config, but there are no grouping, aggregation, subplotting-by-hyperparams or other comparison features available.

UI scalability: the Aim UI can handle several thousands of metrics at the same time smoothly with 1000s of steps. It may get shaky when you explore 1000s of metrics with 10000s of steps each, but we are constantly optimizing! The MLflow UI becomes slow to use when there are a few hundred runs.

Weights and Biases

Hosted vs self-hosted: Weights and Biases is a hosted, closed-source MLOps platform. Aim is a self-hosted, free and open-source experiment tracking tool.

Roadmap

Detailed sprints: ✨ The Aim product roadmap. The Backlog contains the issues we are going to choose from and prioritize weekly. The issues are mainly prioritized by the most highly requested features.

High-level roadmap: the high-level features we are going to work on over the next few months.

Done
- Live updates (Shipped: Oct 18 2021)
- Images tracking and visualization (Start: Oct 18 2021, Shipped: Nov 19 2021)
- Distributions tracking and visualization (Start: Nov 10 2021, Shipped: Dec 3 2021)
- Jupyter integration (Start: Nov 18 2021, Shipped: Dec 3 2021)
- Audio tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Transcripts tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Plotly integration (Start: Dec 1 2021, Shipped: Dec 17 2021)
- Colab integration (Start: Nov 18 2021, Shipped: Dec 17 2021)
- Centralized tracking server (Start: Oct 18 2021, Shipped: Jan 22 2022)
- Tensorboard adaptor - visualize TensorBoard logs with Aim (Start: Dec 17 2021, Shipped: Feb 3 2022)
- Track git info, env vars, CLI arguments, dependencies (Start: Jan 17 2022, Shipped: Feb 3 2022)
- MLflow adaptor (visualize MLflow logs with Aim) (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Activeloop Hub integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- PyTorch-Ignite integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Run summary and overview info (system params, CLI args, git info, ...) (Start: Feb 14 2022, Shipped: Mar 9 2022)
- Add DVC related metadata into aim run (Start: Mar 7 2022, Shipped: Mar 26 2022)
- Ability to attach notes to Run from UI (Start: Mar 7 2022, Shipped: Apr 29 2022)
- Fairseq integration (Start: Mar 27 2022, Shipped: Mar 29 2022)
- LightGBM integration (Start: Apr 14 2022, Shipped: May 17 2022)
- CatBoost integration (Start: Apr 20 2022, Shipped: May 17 2022)
- Run execution details (display stdout/stderr logs) (Start: Apr 25 2022, Shipped: May 17 2022)
- Long sequences (up to 5M of steps) support (Start: Apr 25 2022, Shipped: Jun 22 2022)
- Figures Explorer (Start: Mar 1 2022, Shipped: Aug 21 2022)
- Notify on stuck runs (Start: Jul 22 2022, Shipped: Aug 21 2022)
- Integration with KerasTuner (Start: Aug 10 2022, Shipped: Aug 21 2022)
- Integration with WandB (Start: Aug 15 2022, Shipped: Aug 21 2022)
- Stable remote tracking server (Start: Jun 15 2022, Shipped: Aug 21 2022)
- Integration with fast.ai (Start: Aug 22 2022, Shipped: Oct 6 2022)
- Integration with MXNet (Start: Sep 20 2022, Shipped: Oct 6 2022)
- Project overview page (Start: Sep 1 2022, Shipped: Oct 6 2022)

In Progress
- Remote tracking server scaling (Start: Sep 1 2022)
- Aim SDK low-level interface (Start: Aug 22 2022)

To Do

Aim UI:
- Runs management
- Runs explorer – query and visualize runs data (images, audio, distributions, ...) in a central dashboard
- Explorers: Audio Explorer, Text Explorer, Distributions Explorer
- Dashboards – customizable layouts with embedded explorers

SDK and Storage:
- Scalability: smooth UI and SDK experience with over 10,000 runs
- Runs management
- CLI interfaces: reporting - runs summary and run details in a CLI-compatible format; manipulations – copy, move, delete runs, params and sequences

Integrations:
- ML frameworks - shortlist: MONAI, SpaCy, Raytune, PaddlePaddle
- Dataset versioning tools - shortlist: HuggingFace Datasets
- Resource management tools - shortlist: Kubeflow, Slurm
- Workflow orchestration tools
- Others: Hydra, Google MLMD, Streamlit, ...

On hold
- scikit-learn integration
- Cloud storage support – store run blob (e.g. image) data on the cloud (Start: Mar 21 2022)
- Artifact storage – store files, model checkpoints, and beyond (Start: Mar 21 2022)

Community

If you have questions: read the docs, open a feature request or report a bug, or join the Discord community server.
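The quick start above tracks only scalar metrics. As a small, hedged extension (it assumes the aim.Image object type referenced by the "supported trackable objects" link, and uses a placeholder PIL image), tracking images alongside metrics might look like this:

```python
from PIL import Image as PILImage
from aim import Run, Image  # Image is one of Aim's trackable object types

run = Run()
run["hparams"] = {"learning_rate": 0.001}

for step in range(3):
    # Placeholder image standing in for, e.g., a model prediction or a sample batch
    img = PILImage.new("RGB", (64, 64), color=(step * 80, 0, 0))
    run.track(Image(img, caption=f"step {step}"), name="samples", step=step,
              context={"subset": "train"})
```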
aimysearch
No description available on PyPI.
ain
ainDeveloper GuideSetup# create conda environment$mambaenvcreate-fenv.yml# update conda environment$mambaenvupdate-nbnx--fileenv.ymlInstallpipinstall-e.# install from pypipipinstallbnxnbdev# activate conda environment$condaactivatebnx# make sure the bnx package is installed in development mode$pipinstall-e.# make changes under nbs/ directory# ...# compile to have changes apply to the bnx package$nbdev_preparePublishing# publish to pypi$nbdev_pypi# publish to conda$nbdev_conda--build_args'-c conda-forge'$nbdev_conda--mambabuild--build_args'-c conda-forge -c dsm-72'UsageInstallationInstall latest from the GitHubrepository:$pipinstallgit+https://github.com/dsm-72/bnx.gitor fromconda$condainstall-cdsm-72bnxor frompypi$pipinstallbnxDocumentationDocumentation can be found hosted on GitHubrepositorypages. Additionally you can find package manager specific guidelines oncondaandpypirespectively.
aina
aina

Aina is a general-purpose stream processing framework. It includes a simple but powerful templating system which powers a versatile command line utility.

NOTE: This is new code. Master is in flux and docs are lacking, but it is at a point where it could be useful to someone. If it is useful to you, help us get to 1.0.0. You can start by reading the contributing guide at https://github.com/ilovetux/aina/CONTRIBUTING.rst.

Free software: GNU General Public License v3
Documentation: https://aina.readthedocs.io

Features
- Simple, powerful templating system
- Command line utilities
- All the power of Python
- No hacks or magic
- Approachable source code
- Tested
- TODO: Web UI
- TODO: Many default use cases covered
- TODO: --no-overwrite option

Installing

You can install the latest stable version with the following command:

    $ pip install aina

Alternately, to clone the latest development version:

    $ git clone https://github.com/ilovetux/aina
    $ cd aina
    # Optional
    $ python setup.py test
    $ pip install .

Concepts

The built-in templating engine is very simple: it consists of a namespace and a template. The template is rendered within the context of the namespace. Rendering involves two stages:

1. Scanning the template for strings matching the pattern {%<Source>%}, where <Source> is Python source code which is executed (exec) within the context of the namespace. During execution, stdout is captured. After execution, {%<Source>%} is replaced with a string containing the output.
2. Scanning the remaining output for strings matching the pattern {{<Expression>}}, where <Expression> is a Python expression which is replaced with the value to which it evaluates (eval).

As an example, let's look at the following template:

    {%
    me = "iLoveTux"
    name = "Bill"
    age = 35
    %}
    hello {{name}}:

    I heard that you just turned {{str(age)}}. Congratulations!

    Sincerely: {% print(me) %}

If this were rendered, the output would be as follows:

    Hello Bill,

    I heard that you just turned 35. Congratulations!

    Sincerely: iLoveTux

This concept is applied to a variety of use cases and embodied in the form of command line utilities which cover a number of common use cases.

Usage

Aina can be used directly from within Python, like so:

    from aina import render

    namespace = {"foo": "bar"}
    template = "The value of foo is {{foo}}"

    result = render(template, namespace)

This usage has first-class support, but a much handier solution is to use the provided CLI. The command line utility, aina, can be run in two modes:

- Streaming mode: data is streamed through and used to populate templates
- Document mode: render files from src and write the results to dst

Streaming mode

Streaming mode runs in the following manner:

1. Accept a list of filenames (wildcards are accepted), which defaults to stdin
2. At this point, any expressions passed to --begins are executed
3. The files specified are processed in order
4. Any expressions passed to --begin-files are executed
5. The data from the current file is read line-by-line
6. Any statements passed to --tests are evaluated
7. Iff all tests pass, the following process is performed:
   - Any expressions passed to --begin-lines are executed
   - Any templates are rendered through the Python logging system
   - Any expressions passed to --end-lines are executed
8. Any expressions passed to --end-files are executed
9. Any expressions passed to --ends are executed

Below are a few examples. See the documentation for more details:

    # Like grep
    $ aina stream --test "'error' in line.lower()" --template "{{line}}" *.log
    # Like wc -l
    $ aina stream --end-files "print(fnr, filename)" *.log
    # Like wc -wl
    $ aina stream --begins "words=0" --begin-lines "words += nf" --end-files "print(words, fnr, filename)"
    # Find all numbers "\d+" for each line
    $ aina stream --begins "import re" --begin-lines "print(re.findall(r'\d+', line))" *.log
    # Run an XPath
    $ aina stream --begins "from lxml import etree" --begin-lines "tree = etree.fromstring(line)" --templates "{{"\n".join(tree.xpath("./*"))}}"

Please see the documentation for more, as well as trying:

    $ aina stream --help

Important Note: if anything passed to any of the hooks is determined to exist by os.path.exists, then it will be read and executed as if that text was passed in on the CLI. This is useful for quickly solving character-escaping issues.

Document mode

Document mode renders one or more files and/or directories src to another location dst. It is used like this:

    $ aina doc <src> <dst>

There are options to control behavior, but the gist of it is:

- if src is a file:
  - if dst is a filename, src is rendered and written to dst
  - if dst is a directory, src is rendered and written to a file in dst with the same basename as src
- if src is a directory:
  - dst must be a directory, and every file in src is rendered into a file in dst with the same basename as the file from src
  - if --recursive is specified, the subdirectories will be reproduced in dst

Some important notes: if --interval is passed an integer value, the program will sleep for that many seconds and check for changes to your templates (based on each file's mtime), in which case they will be re-rendered.

Use Cases

Streaming mode is great for processing incoming log files with tail --follow=name or for ad-hoc analysis of text files.

Document mode is incredibly useful as a powerful configuration templating system. The --interval option is incredibly useful as it will only re-render on a file change, so it is great for developing your templates, as you can view the results in near-real-time.

Document mode is also useful for near-real-time rendering of static web resources such as charts, tables, dashboards and more.

If you find any more use cases, please open an issue or pull request to add them here and in the wiki.

Credits

Author: iLoveTux. This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History

0.1.1 (2018-07-05)
- Improved test suite
- Execution blocks are now replaced with whatever is sent to stdout
- aina doc now respects the --interval argument
- aina doc --interval is fixed and now only re-renders templates which have changed
- aina doc with --recursive now processes files in a top-down manner
- Lots of bug fixes
- Improved documentation
- Values of expressions are now automatically coerced into strings
- Drop support for Python 3.4 and add support for Python 3.7
- Improved logging

0.1.0 (2018-06-20)
- First release on PyPI.
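A slightly fuller Python usage sketch, built only on the render function and the template syntax documented above (the namespace values are placeholders for illustration):

```python
from aina import render

# Namespace available to both {% ... %} blocks and {{ ... }} expressions
namespace = {"user": "Bill", "items": ["apples", "pears", "plums"]}

template = (
    "{% count = len(items) %}"            # executed; captured stdout (none here) replaces the block
    "Hello {{user}}, you have {{str(count)}} items: "
    "{{', '.join(items)}}"
)

print(render(template, namespace))
# Hello Bill, you have 3 items: apples, pears, plums
```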
aina-gradio-theme
Aina Theme is a custom Gradio theme. Feel free to use this theme to create Gradio apps that have a visual connection to the world of cloud technology.

How to edit the Aina Gradio Theme colors and properties? If you would like to change theme properties, just edit the AinaTheme/aina_class.py file. There are custom colors that you can set in utils/custom_colors.py, and if you would like to include extra colors in the theme, set them in AinaTheme/__init__.py.

How to use this theme in my Gradio app? First install the theme package:

    pip install aina-gradio-theme

Once you have installed it, add it to the Interface/Blocks parameters:

    ...
    from AinaTheme import theme

    with gr.Blocks(theme=theme) as demo:
        ...
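For context, a minimal self-contained Gradio app using the import shown above might look like the following; the echo handler and component choices are illustrative, while the AinaTheme import is taken from this description:

```python
import gradio as gr
from AinaTheme import theme  # theme object provided by the aina-gradio-theme package

def echo(message: str) -> str:
    # Trivial placeholder handler so the demo runs end to end
    return f"You said: {message}"

with gr.Blocks(theme=theme) as demo:
    inp = gr.Textbox(label="Message")
    out = gr.Textbox(label="Reply")
    gr.Button("Send").click(echo, inputs=inp, outputs=out)

if __name__ == "__main__":
    demo.launch()
```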
ainainain
No description available on PyPI.
ainainainain
No description available on PyPI.
aina-visualiser
No description available on PyPI.
aindapy
Example usage of aindapy:

    import aindapy
    import random
    import time
    import json
    import datetime

    # Enable verbose HTTP / request logging for debugging
    import logging
    import http.client as http_client

    http_client.HTTPConnection.debuglevel = 1
    logging.basicConfig()
    logging.getLogger().setLevel(logging.DEBUG)
    requests_log = logging.getLogger("requests.packages.urllib3")
    requests_log.setLevel(logging.DEBUG)
    requests_log.propagate = True

    aindapy.config(logLevel=1)

    auth = aindapy.Auth(
        apiUrl='https://aindaanalytics.com/ainda/api/',
        userName='asdfasdfasdfsdfsdf',
        passWord='asdfasdfsdf'
    )

    # The data source now accepts only ids, so please check which id is correct for it:
    #   Demo WareHouse data: dataWareHouseId=7, dataSourceId=20
    #   Ainda Packaging Line WareHouse data: dataWareHouseId=8, dataSourceId=22
    dataSource = aindapy.DataSource(auth=auth, dataWareHouseId=7, dataSourceId=20)

    # Generate data for graphics that are not time series
    data = aindapy.Data(auth=auth, dataSource=dataSource, bufferSize=1000)

    # Generate data samples for pie charts
    data.deleteDataKeys([
        'basicdemo/pie1',
        'basicdemo/pie2',
        'basicdemo/bar1',
        'basicdemo/bar2',
        'basicdemo/bar10Columns',
        'basicdemo/bar50Columns',
        'basicdemo/scaleline250points',
        'basicdemo/scaleline500points'
    ])

    data.addToBuffer('basicdemo/pie1', random.randint(50, 150), 'Ilha 1')
    data.addToBuffer('basicdemo/pie1', random.randint(70, 180), 'Ilha 2')
    data.addToBuffer('basicdemo/pie1', random.randint(10, 75), 'Ilha 3')
    data.addToBuffer('basicdemo/pie1', random.randint(25, 45), 'Ilha 4')

    data.addToBuffer('basicdemo/pie2', random.randint(50, 150), 'Ilha 1')
    data.addToBuffer('basicdemo/pie2', random.randint(70, 180), 'Ilha 2')
    data.addToBuffer('basicdemo/pie2', random.randint(10, 75), 'Ilha 3')
    data.addToBuffer('basicdemo/pie2', random.randint(25, 45), 'Ilha 4')
    data.addToBuffer('basicdemo/pie2', random.randint(25, 45), 'Ilha 4')
    data.addToBuffer('basicdemo/pie2', random.randint(25, 45), 'Ilha 4')
    data.addToBuffer('basicdemo/pie2', random.randint(25, 45), 'Ilha 4')

    # Generate a data sample for one bar graphic with 10 columns
    for step in range(10):
        data.addToBuffer('basicdemo/bar10Columns', random.randint(50, 150), step)

    # Generate a data sample for one bar graphic with 50 columns
    for step in range(50):
        data.addToBuffer('basicdemo/bar50Columns', random.randint(50, 150), step)

    # Generate data samples for line charts
    for step in range(250):
        data.addToBuffer('basicdemo/scaleline250points', random.randint(50, 150), step)

    for step in range(500):
        data.addToBuffer('basicdemo/scaleline500points', random.randint(50, 150), step)

    data.commit()

    # Generate data for a time-series datatag. Data added earlier cannot be deleted.
    # If the tag does not exist, this code creates it inside our system.
    stag1 = aindapy.SensorTag(auth=auth, dataSource=dataSource, channel='1', datatag='XRND1',
                              tag='Random Value 1', tag_unit='KG', tag_updaterate=1000)
    stag2 = aindapy.SensorTag(auth=auth, dataSource=dataSource, channel='1', datatag='XRND2',
                              tag='Random Value 2', tag_unit='KG', tag_updaterate=1000)

    tdata = aindapy.DataTimeSeries(auth=auth, dataSource=dataSource, bufferSize=1000)

    for step in range(10000):
        tdata.addToBuffer(sensorTag=stag1, timeStamp=datetime.datetime.now(), value=random.randint(45, 90))
        # If you do not pass a timestamp, it will be generated internally
        tdata.addToBuffer(sensorTag=stag2, value=random.randint(45, 90))

    tdata.commit()
aind-behavior-curriculum
aind-behavior-curriculumUsageTo use this template, click the greenUse this templatebutton andCreate new repository.After github initially creates the new repository, please wait an extra minute for the initialization scripts to finish organizing the repo.To enable the automatic semantic version increments: in the repository go toSettingsandCollaborators and teams. Click the greenAdd peoplebutton. Addsvc-aindscicompas an admin. Modify the file in.github/workflows/tag_and_publish.ymland remove the if statement in line 10. The semantic version will now be incremented every time a code is committed into the main branch.To publish to PyPI, enable semantic versioning and uncomment the publish block in.github/workflows/tag_and_publish.yml. The code will now be published to PyPI every time the code is committed into the main branch.The.github/workflows/test_and_lint.ymlfile will run automated tests and style checks every time a Pull Request is opened. If the checks are undesired, thetest_and_lint.ymlcan be deleted. The strictness of the code coverage level, etc., can be modified by altering the configurations in thepyproject.tomlfile and the.flake8file.InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-codeocean-api
aind-codeocean-apiPython wrapper around CodeOcean's REST API.InstallationTo install fromPyPI, run:pip install aind-codeocean-apiTo install from a clone of the repository, in the root directory, runpip install -e .To install the development libraries of the code, runpip install -e .[dev]UsageExample of getting data asset metadata:from aind_codeocean_api.codeocean import CodeOceanClient domain = "https://acmecorp.codeocean.com" token = "AN_API_TOKEN" # Replace with your api token data_asset_id = "37a93748-ce90-4980-913b-2de0908d5212" co_client = CodeOceanClient(domain=domain, token=token) response = co_client.get_data_asset(data_asset_id=data_asset_id) metadata = response.json()To store credentials locally, run:python -m aind_codeocean_api.credentialsContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coverage run -m unittest discover && coverage reportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate .Useflake8to check that code is up to standards (no unused imports, etc.):flake8 .Useblackto automatically format the code into PEP standards:black .Useisortto automatically sort import statements:isort .Pull requestsFor internal members, please create a branch. For external members, please fork the repo and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect the build system or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bug fixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsDocumentationTo generate the rst files source files for documentation, runsphinx-apidoc -o doc_template/source/ srcThen to create the documentation html files, runsphinx-build -b html doc_template/source/ doc_template/build/htmlMore info on sphinx installation can be found here:https://www.sphinx-doc.org/en/master/usage/installation.html
aind-codeocean-utils
aind-codeocean-utilsLibrary to contain useful utility methods to interface with Code Ocean.InstallationTo use the package, you can install it frompypi:pipinstallaind-codeocean-utilsTo install the package from source, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]UsageThe package includes helper functions to interact with Code Ocean:CodeOceanJobThis class enables one to run a job that:Registers a new asset to Code Ocean from s3Runs a capsule/pipeline on the newly registered asset (or an existing assey)Captures the run results into a new assetSteps 1 and 3 are optional, while step 2 (running the computation) is mandatory.Here is a full example that registers a new ecephys asset, runs the spike sorting capsule with some parameters, and registers the results:importosfromaind_codeocean_api.codeoceanimportCodeOceanClientfromaind_codeocean_utils.codeocean_jobimport(CodeOceanJob,CodeOceanJobConfig)# Set up the CodeOceanClient from aind_codeocean_apiCO_TOKEN=os.environ["CO_TOKEN"]CO_DOMAIN=os.environ["CO_DOMAIN"]co_client=CodeOceanClient(domain=CO_DOMAIN,token=CO_TOKEN)# Define Job Parametersjob_config_dict=dict(register_config=dict(asset_name="test_dataset_for_codeocean_job",mount="ecephys_701305_2023-12-26_12-22-25",bucket="aind-ephys-data",prefix="ecephys_701305_2023-12-26_12-22-25",tags=["codeocean_job_test","ecephys","701305","raw"],custom_metadata={"modality":"extracellular electrophysiology","data level":"raw data",},viewable_to_everyone=True),run_capsule_config=dict(data_assets=None,# when None, the newly registered asset will be usedcapsule_id="a31e6c81-49a5-4f1c-b89c-2d47ae3e02b4",run_parameters=["--debug","--no-remove-out-channels"]),capture_result_config=dict(process_name="sorted",tags=["np-ultra"]# additional tags to the ones inherited from input))# instantiate config modeljob_config=CodeOceanJobConfig(**job_config_dict)# instantiate code ocean jobco_job=CodeOceanJob(co_client=co_client,job_config=job_config)# run and wait for resultsjob_response=co_job.run_job()This job will:Register thetest_dataset_for_codeocean_jobasset from the specified s3 bucket and prefixRun the capsulea31e6c81-49a5-4f1c-b89c-2d47ae3e02b4with the specified parametersRegister the result astest_dataset_for_codeocean_job_sorter_{date-time}To run a computation on existing data assets, do not provide theregister_configand provide thedata_assetfield in therun_capsule_config.To skip capturing the result, do not provide thecapture_result_configoption.ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. 
Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-data-access-api
aind-data-access-apiAPI to interact with a few AIND databases.UsageWe have two primary databases. A Document store to keep unstructured json documents, and a relational database to store structured tables.Document StoreWe have some convenience methods to interact with our Document Store. You can create a client by explicitly setting credentials, or downloading from AWS Secrets Manager.from aind_data_access_api.credentials import DocumentStoreCredentials from aind_data_access_api.document_store import Client # Method one assuming user, password, and host are known ds_client = Client( credentials=DocumentStoreCredentials( username="user", password="password", host="host", database="metadata", ), collection_name="data_assets", ) # Method two if you have permissions to AWS Secrets Manager ds_client = Client( credentials=DocumentStoreCredentials( aws_secrets_name="aind/data/access/api/document_store/metadata" ), collection_name="data_assets", ) # To get all records response = list(ds_client.retrieve_data_asset_records()) # To get a list of filtered records: response = list(ds_client.retrieve_data_asset_records({"subject.subject_id": "123456"}))RDS TablesWe have some convenience methods to interact with our Relational Database. You can create a client by explicitly setting credentials, or downloading from AWS Secrets Manager.from aind_data_access_api.credentials import RDSCredentials from aind_data_access_api.rds_tables import Client # Method one assuming user, password, and host are known ds_client = Client( credentials=RDSCredentials( username="user", password="password", host="host", database="metadata", ), collection_name="data_assets", ) # Method two if you have permissions to AWS Secrets Manager ds_client = Client( credentials=RDSCredentials( aws_secrets_name="aind/data/access/api/rds_tables" ), ) # To retrieve a table as a pandas dataframe df = ds_client.read_table(table_name="spike_sorting_urls") # Can also pass in a custom sql query df = ds_client.read_table(query="SELECT * FROM spike_sorting_urls") # It's also possible to save a pandas dataframe as a table. Please check internal documentation for more details. ds_client.overwrite_table_with_df(df, table_name)InstallationTo use the software, it can be installed from PyPI.pipinstallaind-data-access-apiTo develop the code, clone repo and runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. 
Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-data-schema
aind-data-schemaA library that definesAINDdata schema and validates JSON files.User documentation available onreadthedocs.OverviewThis repository contains the schemas needed to ingest and validate metadata that are essential to ensuringAINDdata collection is completely reproducible. Our general approach is to semantically version core schema classes and include those version numbers in serialized metadata so that we can flexibly evolve the schemas over time without requiring difficult data migrations. In the future, we will provide a browsable list of these classes rendered toJSONschema, including all historic versions.Be aware that this package is still under heavy preliminary development. Expect breaking changes regularly, although we will communicate these through semantic versioning.A simple example:importdatetimefromaind_data_schema.subjectimportHousing,Subjectt=datetime.datetime(2022,11,22,8,43,00)s=Subject(species="Mus musculus",subject_id="12345",sex="Male",date_of_birth=t.date(),genotype="Emx1-IRES-Cre;Camk2a-tTA;Ai93(TITL-GCaMP6f)",housing=Housing(home_cage_enrichment=["Running wheel"],cage_id="123"),background_strain="C57BL/6J",)s.write_standard_file()# writes subject.json{"describedBy":"https://raw.githubusercontent.com/AllenNeuralDynamics/aind-data-schema/main/src/aind_data_schema/subject.py","schema_version":"0.3.0","species":"Mus musculus","subject_id":"12345","sex":"Male","date_of_birth":"2022-11-22","genotype":"Emx1-IRES-Cre;Camk2a-tTA;Ai93(TITL-GCaMP6f)","mgi_allele_ids":null,"background_strain":"C57BL/6J","source":null,"rrid":null,"restrictions":null,"breeding_group":null,"maternal_id":null,"maternal_genotype":null,"paternal_id":null,"paternal_genotype":null,"wellness_reports":null,"housing":{"cage_id":"123","room_id":null,"light_cycle":null,"home_cage_enrichment":["Running wheel"],"cohoused_subjects":null},"notes":null}Installing and UpgradingTo install the latest version:pip install aind-data-schemaEvery merge to themainbranch is automatically tagged with a new major/minor/patch version and uploaded to PyPI. To upgrade to the latest version:pip install aind-data-schema --upgradeTo develop the code, check out this repo and run the following in the cloned directory:pip install -e .[dev]ContributingIf you've found a bug in the schemas or would like to make a minor change, open anIssueon this repository. If you'd like to propose a large change or addition, or generally have a question about how things work, head start a newDiscussion!Linters and testingThere are several libraries used to run linters, check documentation, and run tests.To run tests locally, navigate to AIND-DATA-SCHEMA directory in terminal and run (this will not run any on-line only tests):python -m unittestPlease test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coverage run -m unittest discover && coverage reportTo test any of the following modules, conda/pip install the relevant package (interrogate, flake8, black, isort), navigate to relevant directory, and run any of the following commands in place of [command]:[command] -v .Useinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate .Useflake8to check that code is up to standards (no unused imports, etc.):flake8 .Useblackto automatically format the code into PEP standards:black .Useisortto automatically sort import statements:isort .Pull requestsFor internal members, please create a branch. For external members, please fork the repo and open a pull request from the fork. 
We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect the build system or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bug fixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsDocumentationTo generate the rst files source files for documentation, run:sphinx-apidoc -o docs/source/ srcThen to create the documentation html files, run:sphinx-build -b html docs/source/ docs/build/htmlMore info on sphinx installation can be found here:https://www.sphinx-doc.org/en/master/usage/installation.html
aind-data-transfer
aind-data-transferTools for transferring large data to and between cloud storage providers.InstallationTo upload data to aws s3, you may need to install and configureawscli. To upload data to gcp, you may need to install and configuregsutil.Generic uploadYou may need to first installpyminizipfrom conda if getting errors on Windows:conda install -c mzh pyminizipFrom PyPI:pip install aind-data-transferFrom source:pip install -e .ImagingRunpip install -e .[imaging]Run./post_install.shEphysFrom PyPI:pip install aind-data-transfer[ephys]From sourcepip install -e .[ephys]FullRunpip install -e .[full]Run./post_install.shDevelopmentRunpip install -e .[dev]Run./post_install.shMPITo run scripts on a cluster, you need to installdask-mpi. This requires compilingmpi4pywith the MPI implementation used by your cluster (Open MPI, MPICH, etc). The following example is for the Allen Institute HPC, but should be applicable to other HPC systems.SSH into your cluster login nodessh user.name@hpc-loginOn the Allen cluster, the MPI modules are only available on compute nodes, so SSH into a compute node (n256 chosen arbitrarily).ssh user.name@n256Now load the MPI module and compiler. It is important that you use the latest MPI version and compiler, or elsedask-mpimay not function properly.module load gcc/10.1.0-centos7 mpi/mpich-3.2-x86_64Install mpi4pypython -m pip install --no-cache-dir mpi4pyNow install dask-mpipython -m pip install dask_mpi --upgradeUsageRunning one or more upload jobsThe jobs can be defined inside a csv file. The first row of the csv file needs the following headers. Some are required for the job to run, and others are optional.Requireds3_bucket: S3 Bucket name platform: One of [behavior, confocal, ecephys, exaSPIM, FIP, HCR, HSFP, mesoSPIM, merfish, MRI, multiplane-ophys, single-plane-ophys, SLAP2, smartSPIM] (pulled from the Platform.abbreviation field) modality: One of [behavior-videos, confocal, ecephys, fMOST, icephys, fib, merfish, MRI, ophys, slap, SPIM, trained-behavior] (pulled from the Modality.abbreviation field) subject_id: ID of the subject acq_datetime: Format can be either YYYY-MM-DD HH:mm:ss or MM/DD/YYYY I:MM:SS POne or more modalities need to be set. The csv headers can look like:modality0: [behavior-videos, confocal, ecephys, fMOST, icephys, fib, merfish, MRI, ophys, slap, SPIM, trained-behavior] modality0.source: path to modality0 raw data folder modality0.compress_raw_data (Optional): Override default compression behavior. True if ECEPHYS, False otherwise. modality0.skip_staging (Optional): If modality0.compress_raw_data is False and this is True, upload directly to s3. Default is False. modality0.extra_configs (Optional): path to config file to override compression defaults modality1 (Optional): [behavior-videos, confocal, ecephys, fMOST, icephys, fib, merfish, MRI, ophys, slap, SPIM, trained-behavior] modality1.source (Optional): path to modality0 raw data folder modality1.compress_raw_data (Optional): Override default compression behavior. True if ECEPHYS, False otherwise. modality1.skip_staging (Optional): If modality1.compress_raw_data is False and this is True, upload directly to s3. Default is False. modality1.extra_configs (Optional): path to config file to override compression defaults ...Somewhat Optional. 
Set the aws_param_store_name, but can define custom endpoints if desiredaws_param_store_name: Path to aws_param_store_name to retrieve common endpointsIf aws_param_store_name not set...codeocean_domain: Domain of Code Ocean platform codeocean_trigger_capsule_id: Launch a Code Ocean pipeline codeocean_trigger_capsule_version: Optional if Code Ocean pipeline is versioned metadata_service_domain: Domain name of the metadata service aind_data_transfer_repo_location: The link to this project video_encryption_password: Password with which to encrypt video files codeocean_api_token: Code Ocean token used to run a capsuleOptionaltemp_directory: The job will use your OS's file system to create a temp directory as default. You can override the location by setting this parameter. behavior_dir: Location where behavior data associated with the raw data is stored. metadata_dir: Location where metadata associated with the raw data is stored. log_level: Default log level is warning. Can be set here.Optional Flagsmetadata_dir_force: Default is false. If true, the metadata in the metadata folder will be regarded as the source of truth vs. the metadata pulled from aind_metadata_service dry_run: Default is false. If set to true, it will perform a dry-run of the upload portion and not actually upload anything. force_cloud_sync: Use with caution. If set to true, it will sync the local raw data to the cloud even if the cloud folder already exists. compress_raw_data: Override all compress_raw_data defaults and set them to True. skip_staging: For each modality, copy uncompressed data directly to s3.After creating the csv file, you can run through the jobs withpython -m aind_data_transfer.jobs.s3_upload_job --jobs-csv-file "path_to_jobs_list"Any Optional Flags attached will persist and override those set in the csv file. For example,python -m aind_data_transfer.jobs.s3_upload_job --jobs-csv-file "path_to_jobs_list" --dry-run --compress-raw-datawill compress the raw data source and run a dry run for all jobs defined in the csv file.An example csv file might look like:data-source, s3-bucket, subject-id, modality, platform, acq-datetime, aws_param_store_name dir/data_set_1, some_bucket, 123454, ecephys, ecephys, 2020-10-10 14:10:10, /aind/data/transfer/endpoints dir/data_set_2, some_bucket2, 123456, ophys, multiplane-ophys, 2020-10-11 13:10:10, /aind/data/transfer/endpointsDefining a custom processing capsule to run in code oceanRead the previous section on defining a csv file. Retrieve the capsule id from the code ocean platform. You can add an extra parameter to define a custom processing capsule that gets executed aftet the data is uploaded:codeocean_process_capsule_id, data-source, s3-bucket, subject-id, modality, platform, acq-datetime, aws_param_store_name xyz-123-456, dir/data_set_1, some_bucket, 123454, ecephys, ecephys, 2020-10-10 14:10:10, /aind/data/transfer/endpoints xyz-123-456, dir/data_set_2, some_bucket2, 123456, ophys, multiplane-ophys, 2020-10-11 13:10:10, /aind/data/transfer/endpointsContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coverage run -m unittest discover && coverage reportUseinterrogateto check that modules, methods, etc. 
have been documented thoroughly:interrogate .Useflake8to check that code is up to standards (no unused imports, etc.):flake8 .Useblackto automatically format the code into PEP standards:black .Useisortto automatically sort import statements:isort .Pull requestsFor internal members, please create a branch. For external members, please fork the repo and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect the build system or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bug fixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing tests
aind-dispim-processing
aind-dispim-processingLibrary for running dispim code ocean pipeline.InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]Linters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-distributions
No description available on PyPI.
aind-ephys-utils
aind-ephys-utilsHelpful methods for exploringin vivoelectrophysiology data.InstallationFor userspipinstallaind-ephys-utilsFor developersFirst, clone the repository. Then, from theaind-ephys-utilsdirectory, run:pipinstall-e.[dev]Note:On recent versions of macOS, you'll need to put the last argument in quotation marks:".[dev]"ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useblackto automatically format the code into PEP standards:black.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsDocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/src/aind_ephys_utilsThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.Developing in Code OceanMembers of the Allen Institute for Neural Dynamics can follow these steps to create a Code Ocean capsule from this repository:Click the⨁ New Capsulebutton and select "Clone from AllenNeuralDynamics"Type inaind-ephys-utilsand click "Clone" (this step requires that your GitHub credentials are configured properly)Select a Python base image, and optionally change the compute resourcesAttach data to the capsule and any dependencies needed to load it (e.g.pynwb,hdmf-zarr)Add plotting dependencies (e.g.ipympl,plotly)Launch a Visual Studio Code cloud workstationInside Visual Studio Code, select "New Terminal" from the "Terminal" menu and run the following commands:$pipinstall-e.[dev]$gitcheckout-b<nameoffeaturebranch>Now, you can create Jupyter notebooks in the "code" directory that can be used to test out new functions before updating the library. When prompted, install the "Python" extensions to be able to execute notebook cells.Once you've finished writing your code and tests, run the following commands:$coveragerun-munittestdiscover&&coveragereport $interrogate.$black. $flake8. $isort.Assuming all of these pass, you're ready to push your changes:$gitadd<filestoadd> $gitcommit-m"Commit message"$gitpush-uorigin<nameoffeaturebranch>After doing this, you can open a pull request on GitHub.Note thatgitwill only track files inside theaind-ephys-utilsdirectory, and will ignore everything else in the capsule. 
You will no longer be able to commit changes to the capsule itself, which is why this workflow should only be used for developing a library, and not for performing any type of data analysis.When you're done working, it's recommended to put the workstation on hold rather than shutting it down, in order to keep Visual Studio Code in the same state.
aindex
aindex: Python Library for Reusable AI functions
aind-exaspim-pipeline-utils
exaSPIM pipeline utilsCode repository to be installed in exaSPIM processing capsules.FeaturesWrapper code for ImageJ automation.n5 to zarr converter to be run in a Code Ocean capsule.ImageJ wrapper moduleThe ImageJ wrapper module contains Fiji macro templates and wrapper code to automatically run interest point detection and interest point based registration in the Code Ocean capsule. This functionality is set as the main entry point of the package if the whole package is invoked on the command line or theaind_exaspim_pipelinecommand is run.#!/usr/bin/env bashset-excd~/capsule imagej_wrapper"$@"N5 to Zarr converterThe N5 to zarr converter sets up a local dask cluster with multiple python processes as workers to read in an N5 dataset and write it out in a multiscale Zarr dataset. Both datasets may be local or directly on S3. AWS credentials must be available in the environment (Code Ocean credential assignment to environment variables).This implementation is based on dask.array (da).This command takes a manifest json file as the only command line argument or looks it up at the hard-wireddata/manifest/exaspim_manifest.jsonlocation if not specified.To set up a code ocean capsule, use the followingrun.shscript:#!/usr/bin/env bashset-excd~/capsule n5tozarr_da_converter"$@"InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]For n5tozarr and zarr multiscale conversion, install aspipinstall-e.[n5tozarr]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsDocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-metadata-mapper
aind-metadata-mapperRepository to contain code that will parse source files into aind-data-schema models.UsageInstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-metadata-service
aind-metadata-service: REST service to retrieve metadata from databases.

Installation

Server installation: can be pip installed using pip install aind-metadata-service[server]. Installing pyodbc may require unixodbc-dev; you can follow https://learn.microsoft.com/en-us/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16 for instructions depending on your OS. You may need to run docker system prune before building the docker image if you're getting errors running apt-get such as:

#10 23.69 Err:1 http://deb.debian.org/debian bullseye/main amd64 libodbc1 amd64 2.3.6-0.1+b1
#10 23.69 Could not connect to debian.map.fastlydns.net:80 (146.75.42.132). - connect (111: Connection refused) Unable to connect to deb.debian.org:http:

Client installation: can be pip installed with pip install aind-metadata-service[client].

For development: in the root directory, run pip install -e .[dev]

Contributing

Linters and testing: there are several libraries used to run linters, check documentation, and run tests. Please test your changes using the coverage library, which will run the tests and log a coverage report: coverage run -m unittest discover && coverage report. Use interrogate to check that modules, methods, etc. have been documented thoroughly: interrogate . Use flake8 to check that code is up to standards (no unused imports, etc.): flake8 . Use black to automatically format the code into PEP standards: black . Use isort to automatically sort import statements: isort .

Pull requests: for internal members, please create a branch. For external members, please fork the repo and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern <type>(<scope>): <short summary>, where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of: build (changes that affect the build system or external dependencies, e.g. pyproject.toml, setup.py), ci (changes to our CI configuration files and scripts, e.g. .github/workflows/ci.yml), docs (documentation only changes), feat (a new feature), fix (a bug fix), perf (a code change that improves performance), refactor (a code change that neither fixes a bug nor adds a feature), test (adding missing tests or correcting existing tests).

Documentation

To generate the rst source files for documentation, run sphinx-apidoc -o doc_template/source/ src. Then to create the documentation html files, run sphinx-build -b html doc_template/source/ doc_template/build/html. More info on sphinx installation can be found here: https://www.sphinx-doc.org/en/master/usage/installation.html

Responses

There are 6 possible status code responses for aind-metadata-service:
200: successfully retrieved valid data without any problems.
406: successfully retrieved some data, but failed to validate against pydantic models.
404: found no data that matches the query.
300: queried the server, but more items were returned than expected.
503: failed to connect to labtracks/sharepoint servers.
500: successfully connected to labtracks/sharepoint, but some other server error occurred.
These status codes are defined in the StatusCodes enum in response_handler.py.
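As a quick illustration of the response codes listed above, the hedged sketch below queries the service with the requests library and branches on each documented status. The host and the /subject/{subject_id} route are assumptions made purely for illustration, not documented endpoints; check the service's own documentation for the real routes.

```python
import requests

host = "http://localhost:5000"  # assumed local deployment, not a documented URL
resp = requests.get(f"{host}/subject/123456")  # hypothetical route

if resp.status_code == 200:
    data = resp.json()   # valid data retrieved without any problems
elif resp.status_code == 406:
    data = resp.json()   # some data retrieved, but it failed pydantic validation
elif resp.status_code == 404:
    data = None          # no data matches the query
elif resp.status_code == 300:
    data = None          # more items were returned than expected
else:
    resp.raise_for_status()  # 503 (upstream connection failed) or 500 (server error)
```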
aind-metadata-upgrader
aind-metadata-upgraderUsageTo use this template, click the greenUse this templatebutton andCreate new repository.After github initially creates the new repository, please wait an extra minute for the initialization scripts to finish organizing the repo.To enable the automatic semantic version increments: in the repository go toSettingsandCollaborators and teams. Click the greenAdd peoplebutton. Addsvc-aindscicompas an admin. Modify the file in.github/workflows/tag_and_publish.ymland remove the if statement in line 10. The semantic version will now be incremented every time a code is committed into the main branch.To publish to PyPI, enable semantic versioning and uncomment the publish block in.github/workflows/tag_and_publish.yml. The code will now be published to PyPI every time the code is committed into the main branch.The.github/workflows/test_and_lint.ymlfile will run automated tests and style checks every time a Pull Request is opened. If the checks are undesired, thetest_and_lint.ymlcan be deleted. The strictness of the code coverage level, etc., can be modified by altering the configurations in thepyproject.tomlfile and the.flake8file.InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-ng-link
aind-ng-link: repository to generate neuroglancer links to facilitate the visualization of the datasets generated at the Allen Institute for Neural Dynamics. (License: MIT; code style: black.)

Installation

To use the software, in the root directory, run pip install -e . To develop the code, run pip install -e .[dev]

Contributing

Linters and testing: there are several libraries used to run linters, check documentation, and run tests. Please test your changes using the coverage library, which will run the tests and log a coverage report: coverage run -m unittest discover && coverage report. Use interrogate to check that modules, methods, etc. have been documented thoroughly: interrogate . Use flake8 to check that code is up to standards (no unused imports, etc.): flake8 . Use black to automatically format the code into PEP standards: black . Use isort to automatically sort import statements: isort .

Pull requests: for internal members, please create a branch. For external members, please fork the repo and open a pull request from the fork. We'll primarily use Angular style (https://github.com/angular/angular/blob/main/CONTRIBUTING.md#commit) for commit messages. Roughly, they should follow the pattern <type>(<scope>): <short summary>, where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of: build (changes that affect the build system or external dependencies, e.g. pyproject.toml, setup.py), ci (changes to our CI configuration files and scripts, e.g. .github/workflows/ci.yml), docs (documentation only changes), feat (a new feature), fix (a bug fix), perf (a code change that improves performance), refactor (a code change that neither fixes a bug nor adds a feature), test (adding missing tests or correcting existing tests).

Documentation

To generate the rst source files for documentation, run sphinx-apidoc -o doc_template/source/ src. Then to create the documentation html files, run sphinx-build -b html doc_template/source/ doc_template/build/html. More info on sphinx installation can be found here: https://www.sphinx-doc.org/en/master/usage/installation.html
aind-ophys-pipeline-utils
aind-ophys-pipeline-utilsUsageTo use this template, click the greenUse this templatebutton andCreate new repository.After github initially creates the new repository, please wait an extra minute for the initialization scripts to finish organizing the repo.To enable the automatic semantic version increments: in the repository go toSettingsandCollaborators and teams. Click the greenAdd peoplebutton. Addsvc-aindscicompas an admin. Modify the file in.github/workflows/tag_and_publish.ymland remove the if statement in line 10. The semantic version will now be incremented every time a code is committed into the main branch.To publish to PyPI, enable semantic versioning and uncomment the publish block in.github/workflows/tag_and_publish.yml. The code will now be published to PyPI every time the code is committed into the main branch.The.github/workflows/test_and_lint.ymlfile will run automated tests and style checks every time a Pull Request is opened. If the checks are undesired, thetest_and_lint.ymlcan be deleted. The strictness of the code coverage level, etc., can be modified by altering the configurations in thepyproject.tomlfile and the.flake8file.InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-munittestdiscover&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-ophys-utils
Welcome to aind-ophys-utilsThis repository contains python utility methods for processing optical physiology data. Methods in this repository have simple, standard data interfaces and are generally applicable to optical physiology data. As much as possible think arrays and dataframes rather than complex project-specific data structures.Installationpipinstallaind-ophys-utilsTo use the software from source, clone the repository and in the root directory runpipinstall-e.To develop the code in place, runpipinstall-e.[dev]ContributingLinters and testingThere are several libraries used to run linters, check documentation, and run tests.Please test your changes using thecoveragelibrary, which will run the tests and log a coverage report:coveragerun-mpytest&&coveragereportUseinterrogateto check that modules, methods, etc. have been documented thoroughly:interrogate.Useflake8to check that code is up to standards (no unused imports, etc.):flake8.Useblackto automatically format the code into PEP standards:black.Useisortto automatically sort import statements:isort.Pull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing testsSemantic ReleaseThe table below, fromsemantic release, shows which commit message gets you which release type whensemantic-releaseruns (using the default configuration):Commit messageRelease typefix(pencil): stop graphite breaking when too much pressure appliedPatchFix Release, Default releasefeat(pencil): add 'graphiteWidth' optionMinorFeature Releaseperf(pencil): remove graphiteWidth optionBREAKING CHANGE: The graphiteWidth option has been removed.The default graphite width of 10mm is always used for performance reasons.MajorBreaking Release(Note that theBREAKING CHANGE:token must be in the footer of the commit)DocumentationTo generate the rst files source files for documentation, runsphinx-apidoc-odoc_template/source/srcThen to create the documentation HTML files, runsphinx-build-bhtmldoc_template/source/doc_template/build/htmlMore info on sphinx installation can be foundhere.
aind-segmentation-evaluation
aind-segmentation-evaluationPython package for performing a skeleton-based evaluation of a predicted segmentation of neural arbors. This tool detects topological mistakes (i.e. splits and merges) in the predicted segmentation by comparing it to the ground truth skeleton. Once this comparison is complete, several statistics (e.g. edge accuracy, split count, merge count) are computed and returned in a dictionary. There is also an optional to write either tiff or swc files that highlight each topological mistake.UsageHere is a simple example of evaluating a predicted segmentation. Note that this package supports a number of different input types, see documentation for details.importosfromaind_segmentation_evaluation.evaluateimportrun_evaluationfromaind_segmentation_evaluation.conversionsimportvolume_to_graphfromtifffileimportimreadif__name__=="__main__":# Initializationsdata_dir="./resources"target_graphs_dir=os.path.join(data_dir,"target_graphs")path_to_target_labels=os.path.join(data_dir,"target_labels.tif")pred_labels=imread(os.path.join(data_dir,"pred_labels.tif"))pred_graphs=volume_to_graph(pred_labels)# Evaluationstats=run_evaluation(target_graphs_dir,path_to_target_labels,pred_graphs,pred_labels,filetype="tif",output="tif",output_dir=data_dir,permute=[2,1,0],scale=[1.101,1.101,1.101],)# Write out resultsprint("Graph-based evaluation...")forkeyinstats.keys():print("{}:{}".format(key,stats[key])InstallationTo use the software, in the root directory, runpipinstall-e.To develop the code, runpipinstall-e.[dev]To install this package from PyPI, runpipinstallaind-segmentation-evaluationPull requestsFor internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily useAngularstyle for commit messages. Roughly, they should follow the pattern:<type>(<scope>): <short summary>where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)docs: Documentation only changesfeat: A new featurefix: A bugfixperf: A code change that improves performancerefactor: A code change that neither fixes a bug nor adds a featuretest: Adding missing tests or correcting existing tests
aind-smartspim-operator-utils
aind-smartspim-operator-utils
Purpose
Tools for use by a SmartSPIM operator may include:
Laser power calibration records
Data transfer/delete scripts
SOP descriptions and guides
Installation
To use the software, in the root directory, run
pip install -e .
To develop the code, run
pip install -e .[dev]
Contributing
Linters and testing
There are several libraries used to run linters, check documentation, and run tests.
Please test your changes using the coverage library, which will run the tests and log a coverage report:
coverage run -m unittest discover && coverage report
Use interrogate to check that modules, methods, etc. have been documented thoroughly: interrogate .
Use flake8 to check that code is up to standards (no unused imports, etc.): flake8 .
Use black to automatically format the code into PEP standards: black .
Use isort to automatically sort import statements: isort .
Pull requests
For internal members, please create a branch. For external members, please fork the repository and open a pull request from the fork. We'll primarily use Angular style for commit messages. Roughly, they should follow the pattern:
<type>(<scope>): <short summary>
where scope (optional) describes the packages affected by the code changes and type (mandatory) is one of:
build: Changes that affect build tools or external dependencies (example scopes: pyproject.toml, setup.py)
ci: Changes to our CI configuration files and scripts (examples: .github/workflows/ci.yml)
docs: Documentation only changes
feat: A new feature
fix: A bugfix
perf: A code change that improves performance
refactor: A code change that neither fixes a bug nor adds a feature
test: Adding missing tests or correcting existing tests
Semantic Release
The table below, from semantic release, shows which commit message gets you which release type when semantic-release runs (using the default configuration):
Commit message | Release type
fix(pencil): stop graphite breaking when too much pressure applied | Patch (Fix Release, Default release)
feat(pencil): add 'graphiteWidth' option | Minor (Feature Release)
perf(pencil): remove graphiteWidth option, with footer "BREAKING CHANGE: The graphiteWidth option has been removed. The default graphite width of 10mm is always used for performance reasons." | Major (Breaking Release)
(Note that the BREAKING CHANGE: token must be in the footer of the commit)
Documentation
To generate the rst source files for documentation, run
sphinx-apidoc -o doc_template/source/ src
Then to create the documentation HTML files, run
sphinx-build -b html doc_template/source/ doc_template/build/html
More info on sphinx installation can be found here.
aine-drl
No description available on PyPI.
aineko
AinekoAineko is a Python framework for building data applications.With Aineko, you seamlessly bring data into any product and iterate quickly. Whether you're an individual developer or part of a larger team, Aineko helps you rapidly build scalable, maintainable, and fast data applications.Under the hood, Aineko automatically configures tooling needed for production-ready data apps, like message brokers, distributed compute, and more. This allows you to focus on building your application instead of spending time with configuration and infrastructure.DocumentationFor complete information please refer to theAineko documentation.QuickstartGet started with Aineko in minutes by followingthis tutorial.ExamplesTo see some examples of Aineko in action visithere.Aineko DreamMore coming soon...ContributingIf you're interested in contributing to Aineko, follow thisguide.
aineko-plugins-nodes-fastapi-server
aineko-plugins-nodes-fastapi-serverTheaineko-plugins-nodes-fastapi-serverpackage is a plugin for theAinekoframework.It provides a node that hosts a FastAPI server, allowing the user to define custom endpoints using FastAPI's syntax.Detailed information can be found in thedocs.
aineko-plugins-nodes-http-poller
aineko-plugins-nodes-http-pollerTheaineko-plugins-nodes-http-pollerpackage is a plugin for theAinekoframework.It provides a node that hosts an HTTP poller, allowing to periodically consume an API response and publishing it to a dataset.Detailed information can be found in thedocs.
aineko-plugins-nodes-websocket-client
aineko-plugins-nodes-websocket-clientTheaineko-plugins-nodes-websocket-clientpackage is a plugin for theAinekoframework.It provides a node that hosts a websocket client, allowing for consuming payloads from a websocket and publishing them to a dataset.Detailed information can be found in thedocs.
aineko_style
aineko_styleA pylint checker for misc style conventionsWe adopt theGoogle Python Style Guidewith some modifications. This pylint checker is intended to enforce those conventions.InstallationInstall with pip:pip install aineko_styleOnce installed you can either run it directly from the command line:pylint --load-plugins=aineko_style.checker your_module.pyor add it to the pylint configuration file. Example:pyproject.toml:[tool.pylint.main]load-plugins=["aineko_style.checker"]pylintrc:[MAIN]load-plugins=aineko_style.checkerFeaturesWarning MessagesMessage IDDescriptionMessage symbolC0001Docstring contains types. Types should be part of the function definition.docstring-contains-typesStyle ConventionsC0001 docstring-contains-typesYes:defmessage(index:int,content:str):"""short descriptionArgs:index: The index of the message.content: The content of the message."""...No:defmessage(index:int,content:str):"""short descriptionArgs:index (int): The index of the message.content (str): The content of the message."""...
ai_nester
UNKNOWN
ai-network-envoy-sdk
No description available on PyPI.
ai-network-storage
No description available on PyPI.
ainipdf
This is the homepage of our project.
ainipdf2
This is the homepage of our project.
ainject
# ainject
Simple asynchronous dependency injector for python.
## Reasons
* No asynchronous DI with async/await support.
* Simplifying things.
## Features
* Asynchronous instance factories.
* Asynchronous __init__ and __new__ support.
* Damn simple api.
## Requirements
* Python 3.5+
* setuptools >= 30.3.0 (installation only)
## Usage
All you have to do is bind some factories with some names and inject/instance them where you want.
>>> import ainject
### Bind and instance
Simple bind with name:
>>> async def async_factory():
...     return "async_value"
...
>>> def sync_factory():
...     return "sync_value"
>>> ainject.bind(async_factory, name="async")
>>> await ainject.instance("async")
'async_value'
>>> ainject.bind(sync_factory, name="sync")
>>> await ainject.instance("sync")
'sync_value'
As you can see you should always await your result, even if the factory is actually synchronous.
Bind without name:
>>> ainject.bind(async_factory)
>>> await ainject.instance(async_factory)
'async_value'
>>> ainject.bind(sync_factory)
>>> await ainject.instance(sync_factory)
'sync_value'
In this case name is the factory itself. This is equivalent to:
>>> ainject.bind(async_factory, name=async_factory)
So, you can use any hashable value for the name, or even omit it for auto naming.
By default binding is done in «singleton» mode. This means that the first time the instance is accessed it will be cached, and for every subsequent instance request the cached version will be used:
>>> def factory():
...     return []
...
>>> ainject.bind(factory)
>>> a = await ainject.instance(factory)
>>> b = await ainject.instance(factory)
>>> a, b
([], [])
>>> a is b
True
For non-singleton usage pass singleton=False to the bind method. In this case every instantiation will actually execute the factory function:
>>> def factory():
...     return []
...
>>> ainject.bind(factory, singleton=False)
>>> a = await ainject.instance(factory)
>>> b = await ainject.instance(factory)
>>> a, b
([], [])
>>> a is b
False
### Inject
Injecting is done via the inject decorator:
>>> @ainject.inject(x=factory)
... def foo(x):
...     print(x)
...
>>> await foo()
[]
Keep in mind that the «name» should be defined before the decorator, or just use strings for names. Also, remember that everything you wrap with the inject decorator becomes awaitable:
>>> @ainject.inject(x="async")
... class A:
...     def __init__(self, x):
...         self.x = x
...
>>> a = await A()
Even class instantiation. A side-effect of this «magic» is that you can use async __init__ and async __new__:
>>> @ainject.inject()
... class A:
...     async def __new__(self, x):
...         ...
...         return super().__new__(self)
...     async def __init__(self, x):
...         self.x = x
...
>>> a = await A(3)
As you can see you can even inject nothing.
## Advanced usage
Most of the time you only need the above scenarios, but if you need low-level access to the injector, or want to use more than one injector, you should instantiate the Injector class and use its bind, inject and instance methods. The default injector is global for the ainject module and can be accessed as ainject._injector. Bindings are stored as a dictionary with name-factory pairs in Injector._bindings. Instances (singletons) are stored as name-instance pairs in Injector._instances.
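For illustration, here is a minimal sketch of using a dedicated injector rather than the module-level one, based on the Injector methods described above (the no-argument constructor is an assumption, not something the package documents):
import ainject

# A separate injector, independent of ainject._injector (assumed no-arg constructor).
my_injector = ainject.Injector()
my_injector.bind(lambda: "value", name="config")

async def main():
    # instance() is awaited even though the factory here is synchronous.
    print(await my_injector.instance("config"))  # -> "value"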
ainlp
ainlp
nlp library which supports text processing methods, text match methods, etc.
publish ainlp package method
configure the ~/.pypirc file, add the following lines:
[ainlp]
repository=https://upload.pypi.org/legacy/
username=__token__
password=pypi-AgEIcHlwaS5vcmcCJDQyNzU0YzYwLTBhNWYtNGI1Zi05OTI5LTBhMjQwMzc1YjEwYwACNnsicGVybWlzc2lvbnMiOiB7InByb2plY3RzIjogWyJhaW5scCJdfSwgInZlcnNpb24iOiAxfQAABiDDY42I73y7epx_fP7Q1MjRES8FJA3dI2b-0tF5-gO0tg
then modify the version in setup.py and run the following commands to publish the new version to pypi:
python setup.py sdist bdist_wheel
twine upload dist/ainlp-<new version>*
ainlp-skadai
No description available on PyPI.
aino-convert
aino-convert
============

aino-convert is basically a wrapper for `ImageMagick convert`_ with caching.
The main purpose is to help generate quality thumbnails simply and
efficiently. During the development of sorl-thumbnail I learned some and
eventually I took the plunge to write something using ImageMagick convert
instead of PIL.

Pros
----
- Simple thumbnail tag generating high quality output
- Remote files handling on the fly
- Usage of convert commandline syntax for infinite flexibility
- Caching mechanism
- Cleanup of unused images or conversions of images can be made
- Storage is local file storage only

Cons
----
- Requirements: convert, pyexiv2 is nice to have
- Storage is local file storage only
- Security (protecting the developer from himself)

Demo
====
There is a demo in the `demo` directory.
To run the demo, just cd into it and type: `./run`

.. _ImageMagick convert: http://www.imagemagick.org/
ainodes
No description available on PyPI.
ainodes-engine
No description available on PyPI.
aino-jstools
aino-jstools is a set of tools for working with JavaScript and Django. Primarily it compiles javascripts.Design backgroundWe wanted to make a tool that made including a bunch of JavaScripts in a template easy and clean and compiling all those JavaScripts into packed pieces in production for optimal performance. The other goal we wanted to achive was to expose urls defined inurls.py,MEDIA_URL,DEBUGsettings to JavaScript code. Our future includes making a cleaner implementation for i18n in JavaScript than the one provided by Django.RequirementsDjango 1.xPython 2.5+Java (for compiling JavaScripts)InstallIncludejstoolsinINSTALLED_APPSin your project settings. Optionally include the jstools/urls.py in yoururls.py:(r'^jstools/', include('jstools.urls'))Template usageFirst define your scripts in a template as follows:{% scripts "js/mysite-min.js" %} http://yui.yahooapis.com/3.1.0/build/yui/yui-min.js js/a.js js/b.js {% url jshelper %} {% endscripts %}Whensettings.DEBUGisTruethis will translate to:<script src="http://yui.yahooapis.com/3.1.0/build/yui/yui-min.js"></script> <script src="{{ MEDIA_URL }}js/a.js"></script> <script src="{{ MEDIA_URL }}js/b.js"></script> <script src="{% url jshelper %}"></script>Whensettings.DEBUGisFalsethis will translate to:<script src="{{ MEDIA_URL }}js/mysite-min.js?TIMESTAMP"></script>whereTIMESTAMPis based on modification date of{{ MEDIA_ROOT}}js/myste-min.jsCompilingCompiling all defined scripts is as simple as running:python manage.py buildjsIf you are using the defaultfilesystemand/orapp_directoriesthis management command will find all templates with{% scripts %}tags and compile its contents into the first argument of the tag.jshelper viewThis view will output named urls,settings.MEDIA_URL,settings.DEBUG(I suggest you override this in your template unless you want to recompile the script when you change yourDEBUGsetting) for use in your JavaScript code. You will have access to a JavaScript object namedJSTOOLSby default, you can change the name by settingJSTOOLS_NAMESPACE.JSTOOLS.settings.MEDIA_URLsettings.MEDIA_URLJSTOOLS.settings.DEBUGsettings.DEBUGJSTOOLS.get_urlThis function will get named urls defined in yoururls.py. First argument is the name of the named url, subsequent arguments are arguments passed to that pattern. Examples:JSTOOLS.get_url('jshelper'); JSTOOLS.get_url('blog_entry', 2010, 04, 25, 'aino-jstools');
aino-mutations
aino-mutations is a tool to call mutation scripts at a certain revision of a mercurial repository. Mutation scripts are typically scripts that perform database refactoring of some sort. aino-mutations is not intelligent:
Does not offer introspection
Mutation scripts are intended to use raw sql for schema migration which means you will be locked to a particular database engine, that of your own choice of course.
Why?
aino-mutations solves the problem of running a mutation in the correct environment. Often when you do mutations you want to perform some logic to insert or remove data. To perform this logic we need the environment that the mutation was written for. aino-mutations automatically updates a mercurial repository to the revision where the mutation was added and executes the mutation script.
Requirements
Django with Multi DB support, v1.2+, or trunk until released.
Mercurial, only tested with v1.5
Django project managed by a mercurial repository
Installation
Add mutations to your pythonpath
Add mutations to INSTALLED_APPS
Configuration
Mutation scripts are by default looked for in a mutations subdirectory of your mercurial repository root; you can change this by setting MUTATIONS_ROOT in your settings file. Note that MUTATIONS_ROOT should be a relative directory path to your repository root.
Usage
aino-mutations separates mutations for different databases and therefore you need to specify what database you are affecting with your mutation. To add a mutation:
Add the python file (mutation) to MUTATIONS_ROOT/alias/ where alias is the alias used in your settings file (the default is called default).
Add the file to the repository: hg add path/to/mutation
Commit: hg ci -m "my first mutation"
Now run the mutation: python manage.py mutate
Mutations
Mutations are just normal python files that do whatever you like. For convenience there are some local variables passed to the mutation scripts:
cursor: a cursor instance for the current database
commit_unless_managed: just a shortcut for django.db.transaction.commit_unless_managed
dry: this will be True if mutate is run with the --dry option, which can be useful for displaying some info to the user.
A sketch of what such a script might look like is shown after the FAQ below.
FAQ
I created a mutation that was wrong, what do I do?
All you need to do is to remove it from the repository: hg rm path/to/mutation; hg ci -m "no more bad code"
I want to try a mutation before committing, how can I do that?
run: python manage.py runmutation path/to/mutation
I have my django project in a deployment environment, can I still use aino-mutations?
Since aino-mutations may update project files to a certain revision while performing the mutations, it is best to clone the repository to another location while accessing the same databases.
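Here is a minimal sketch of a mutation script using the local variables listed above; the file name, table and column are hypothetical and this is not an official example from the project:
# mutations/default/add_slug_to_entry.py
# Runs with `cursor`, `commit_unless_managed` and `dry` injected as locals.
if dry:
    print("Would run: ALTER TABLE news_entry ADD COLUMN slug varchar(50)")
else:
    cursor.execute(
        "ALTER TABLE news_entry ADD COLUMN slug varchar(50) NOT NULL DEFAULT ''"
    )
    commit_unless_managed()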
aino-utkik
aino-utkik provides minimalistic class based views for Django focusing on common usage, readability and convienience.For Django 1.3 or earlier use 0.7.8 For Django 1.7 or later use 0.8.0 or laterExample:# urls.py from utkik.dispatch import * urlpatterns = patterns('', (r'^(?P<slug>[-\w]+)/$', 'news.NewsDetailView'), (r'^$', 'news.NewsListView'), ) # news/views.py from django.shortcuts import get_object_or_404 from news.models import News from utkik import View class NewsDetailView(View): template_name = 'news/news_detail.html' def get(self, slug): self.c.news = get_object_or_404(News.objects, slug=slug) class NewsListView(View): template_name = 'news/news_list.html' def get(self): self.c.news_list = News.objects.all()
ain-py
ain-pyA python version ofain-js.Installationpip install ain-pyRun all testtoxExamplesfromain.ainimportAinimportasyncioain=Ain('https://mainnet-api.ainetwork.ai/')asyncdefprocess():accounts=awaitain.db.ref('/accounts').getValue()print(accounts)loop=asyncio.get_event_loop()loop.run_until_complete(process())
ainshamsflow
AinShamsFlow4th CSE Neural Networks Project.Contents:Project DescriptionProject StructureInstallUsageTeam MembersTodoProject Description:AinShamsFlow (asf) is a Deep Learning Framework built by ourTeamfrom Ain Shams University during December 2020 - January 2021.The Design and Interface is inspired heavily from Keras and TensorFlow.However, we only implement everything from scratch using only simple libraries such as numpy and matplotlib.Project Structure:The Design of all parts can be seen in the following UML Diagram.This is how the Design Structure should work in our Framework.Install:You can install the latest available version as follows:$pipinstallainshamsflowUsage:you start using this project by importing the package as follows:>>>importainshamsflowasasfthen you can start creating a model:>>>model=asf.models.Sequential([...asf.layers.Dense(300,activation="relu"),...asf.layers.Dense(100,activation="relu"),...asf.layers.Dense(10,activation="softmax")...],input_shape=(784,),name="my_model")>>>model.print_summary()then compile and train the model:>>>model.compile(...optimizer=asf.optimizers.SGD(lr=0.01),...loss='sparsecategoricalcrossentropy',...metrics=['accuracy']...)>>>history=model.fit(X_train,y_train,epochs=100)finally you can evaluate, predict and show training statistics:>>>history.show()>>>model.evaluate(X_valid,y_valid)>>>y_pred=model.predict(X_test)A more elaborate example usage can be found inmain.pyor check out thisdemo notebook.Team Members:Pierre NabilAhmed TahaGirgis MichealHazzem MohammedIbrahim ShoukryJohn BahaaMichael MagdyZiad TarekTodo:Framework Design:A Data Module to read and process datasets.Dataset__init__()__bool__()__len__()__iter__()__next__()apply()numpy()batch()cardinality()concatenate()copy()enumerate()filter()map()range()shuffle()skip()split()take()unbatch()add_data()add_targets()normalize()ImageDataGenerator__init__()flow_from_directory()A NN Module to design different architectures.Activation FunctionsLinearSigmoidHard SigmoidTanhHard TanhReLULeakyReLUELUSELUSoftmaxSoftplusSoftsignSwishLayersDNN Layers:DenseBatchNormDropoutCNN Layers 1D: (optional)ConvPool (Avg and Max)GlobalPool (Avg and Max)UpsampleCNN Layers 2D:ConvPool (Avg and Max)FastConvFastPool (Avg and Max)GlobalPool (Avg and Max)UpsampleCNN Layers 3D: (optional)ConvPool (Avg and Max)GlobalPool (Avg and Max)UpsampleOther Extra Functionality:FlattenActivationReshapeInitializersConstantUniformNormalIdentityLossesMSE (Mean Squared Error)MAE (Mean Absolute Error)MAPE (Mean Absolute Percentage Error)BinaryCrossentropyCategoricalCrossentropySparseCategoricalCrossentropyHuberLossLogLossLinearActivationLogLossSigmoidActivationPerceptronCriterionLossSvmHingeLossEvaluation MetricsAccuracyTP (True Positives)TN (True Negatives)FP (False Positives)FN (False Negatives)PrecisionRecallF1ScoreRegularizersL1L2L1_L2Optimization Modules for trainingSGDMomentumAdaGradRMSPropAdaDeltaAdamA Visualization Modules to track the training and testing processesHistory Class for showing training statisticsverboseparameter in traininglive plotting of training statisticsA utils module for reading and saving modelsAdding CUDA supportPublish to PyPICreating a Documentation for the ProjectExample Usage:This part can be found in thedemo notebookmentioned above.Download and Split a dataset (MNIST or CIFAR-10) to training, validation and testingConstruct an Architecture (LeNetorAlexNet) and make sure all of its components are provided in your framework.Train and test the model until a good accuracy is reached (Evaluation Metrics will 
need to be implemented in the framework also)
Save the model into a compressed format
Change Log
0.1.0 (29/1/2021)
First Release
ainstorage
How to use ainstorage
AINstorage is a package which enables the use of storage functions in the AIN systems.
ainwater-package-test
No description available on PyPI.
ain-worker
No description available on PyPI.
ain-worker-staging
No description available on PyPI.
ainyan
Provdes two scripts:ainyan-trainingainyan-datasetDATASET PREPAREainyan-datasets--sourcejgate-lite--dspaths3://wadalabs/jgate-test--profilejpxTRAININGTRAINING=jgate.iniainyan-trainingWith jgate.ini like:[training]model=NousResearch/Llama-2-7b-chat-hfaws_profile=jpxdataset=s3://wadalabs/jgate-smallfinalname=jgate-smalllearning_rate=2e-3max_steps=100MODEL S3 HELPER# Uploadainyan-model-helper--profilejpx--modeup--modeljgate-full--bucketwadalabs# Downloadainyan-model-helper--profilejpx--modedl--modeljgate-full--bucketwadalabs
aio
UNKNOWN
aio2b2t
aio2b2t
aio2b2t is a modern, async API wrapper for 2b2t.dev and 2b2t.io.
Documentation
You can find documentation here.
FAQ
Do you play on 2b2t?
No.
Ok then why? Don't you know this doesn't even have proper library design?
Shut up.
aio2ch
Fully asynchronous read-only API wrapper for 2ch.hk (dvach, Двач)RequirementshttpxaiofilesclickInstall with pip$pip3installaio2chBuild from source$gitclonehttps://github.com/wkpn/aio2ch$cd./aio2ch$python3setup.pyinstallUsageSimple usage (in this caseclient.close()must be called when client is no longer needed)>>>fromaio2chimportApi>>>client=Api()>>>...>>>awaitclient.close()Or you can use it as a context manager>>>asyncwithApi()asclient:...boards=awaitclient.get_boards()......Get all boards>>>boards=awaitclient.get_boards()>>>boards(<Boardname='Фагготрия',id='fag'>,...)In addition we can getstatusfor each method. This is useful for debug purposes or if retries are needed>>>status,boards=awaitclient.get_boards(return_status=True)>>>status200>>>boards(<Boardname='Фагготрия',id='fag'>,...)Get all threads from a board>>>threads=awaitclient.get_board_threads(board="b")>>>threads(<Threadnum='180981319'>,...)Get top threads from a board sorted by method (views,scoreorposts_count)>>>top_threads=awaitclient.get_top_board_threads(board="b",method="views",num=3)>>>top_threads(<Threadnum='180894312'>,<Threadnum='180946622'>,<Threadnum='180963318'>)Get all thread’s posts (threadis an instance ofThread)>>>thread_posts=awaitclient.get_thread_posts(thread=thread)>>>thread_posts(<Postnum='180894312'>,...)Get all thread’s posts by url>>>thread_posts=awaitclient.get_thread_posts(thread="https://2ch.hk/test/res/30972.html")>>>thread_posts(<Postnum='30972'>,...)Get all media in all thread’s posts (images, webm and so on)>>>thread_media=awaitclient.get_thread_media(thread=thread)>>>thread_media(<Filename='15336559148500.jpg',path='/b/src/180979032/15336559148500.jpg',size='19'>,...)Get specific thread media>>>images_and_videos=awaitclient.get_thread_media(thread,media_type=(Image,Video))>>>images_and_videos(<Imagename=...>,<Videoname=...>,...)>>>just_images=awaitclient.get_thread_media(thread,media_type=Image)>>>just_images(<Imagename=...>,...)Download all thread media>>>awaitclient.download_thread_media(files=thread_media,save_to="./downloads")
aio2gis
python-aio2gis
==============
A Python library for accessing the 2gis API via asyncio interface
aio2py
UNKNOWN
aio4chan
Contents4chan API reader.Installingpython3 -m pip install aio4chanUsageimportasyncioimportaiohttpimportaio4chanloop=asyncio.get_event_loop()session=aiohttp.ClientSession(loop=loop)client=aio4chan.Client(session=session,loop=loop)asyncdefexecute():"""Traverse 4chan."""boards=awaitclient.get_boards()# short namesboard_ids=(board.boardforboardinboards)forboard_idinboard_ids:pages=awaitclient.get_threads(board_id)# list of pages, each containing threadsthread_ids=(thread.noforpageinpagesforthreadinpage.threads)forthread_idinthread_ids:# need both board_id and thread_idthread=awaitclient.get_thread(board_id,thread_id)forpostinthread:try:# might not existcomment=post.comexceptAttributeError:continue# print where we got it, and the commentprint(board_id,'>',thread_id,'>',post.no,'\n',post.com)try:loop.run_until_complete(execute())exceptKeyboardInterrupt:passfinally:loop.run_until_complete(session.close())loop.close()
aio9p
aio9pAsyncio-based bindings for the 9P protocol. Work in progress.Working examples for the 9P2000 and 9P2000.u dialects are implemented in aio9p.example .Features9P2000 client and server9P2000.u client and serverTransports: TCP, domain socketsTODODocumentationClient examples.FeaturesSupport for the 9P2000.L dialectTestingSignificantly expanded unit testingExpanded integration testsBenchmarking
aioabcpapi
AioAbcpApi
Asynchronous library for the ABCP API with asyncio and aiohttp.
Join the Telegram chat.
Installation
pip install aioabcpapi
Description
All methods follow the tree-like layout of the official documentation as closely as possible.
They are split into cp and ts, which in turn are split into client and admin; to find the method you need, start from the ABCP API documentation.
For example, in the TS.Client documentation the operation "Updating a position in the cart" is described as follows:
Operation: POST /ts/cart/update
To use this method we call await api.ts.client.cart.update()
API access
For the Administrator API
If you are a client of a shop on the ABCP platform, contact your manager. (You will need a static IP address.)
Note
All time arguments, such as create_time, update_time, date_start, date_end and others, accept str or datetime. When a datetime is passed, the object will be converted to RFC3339 or "%Y-%m-%d %H:%M:%S" depending on the method's requirements.
Example
import asyncio
from aioabcpapi import Abcp

host, login, password = 'id33333', 'api@id33333', 'md5hash'
api = Abcp(host, login, password)

async def search_some_parts(article, brand):
    search_result = await api.cp.client.search.articles(
        number=article, brand=brand,
        use_online_stocks=True,
        disable_online_filtering=True,
        with_out_analogs=True)
    for x in search_result:
        if float(x['price']) < 3000:
            print('Похоже на чудо, но скорее ошибка прайса. Отключим пока поставщика')
            await api.cp.admin.distributors.edit_status(x['distributorId'], False)
        elif float(x['price']) < 37000:
            await api.cp.client.basket.add(basket_positions={
                'number': x['article'],
                'brand': x['brand'],
                'supplierCode': x['supplierCode'],
                'itemKey': x['itemKey'],
                'quantity': 1,
                'comment': f"Да, РРЦ никто не любит"})

if __name__ == '__main__':
    asyncio.run(search_some_parts('602000600', 'LuK'))
More examples
aioaccount
aioaccountUtility for user account creation, modification & email confirmation.Installationpip3 install aioaccountDocsaioaccount.readthedocs.ioFeaturesSecurity.Easy to use.Removes common boilerplate code.SMTP support.Email template support with jinja2.Mongodb, postgresql, mysql & sqlite support.Full unit tests.Full documentation.Uses aiojobs to spawn SMTP background jobs.SecurityAll passwords are hashed using bcrypt.Password policies.Password reset code expiration.Email validation.Thanks tobcryptpassword-strengthdatabasessqlalchemymotoremail-validatoraiosmtplibaiojobsjinja2asynctestsphinxsphinx materialEveryone who helped to make these packages
aioacm-sdk-python
User GuideIntroductionPython SDK for ACM with asyncio support.FeaturesGet/Publish/Remove config from ACM server use REST API.Watch config changes from server.Auto failover on server failure.TLS supported.Address server supported.Both Alibaba Cloud ACM and Stand-alone deployment supported.Supported Python:Python 3.5Python 3.6Supported ACM versionACM 1.0Change LogsInstallationFor Python 3.5 and above:pipinstallaioacm-sdk-pythonGetting StartedimportaioacmENDPOINT="acm.aliyun.com:8080"NAMESPACE="**********"AK="**********"SK="**********"# get configclient=aioacm.ACMClient(ENDPOINT,NAMESPACE,AK,SK)data_id="com.alibaba.cloud.acm:sample-app.properties"group="group"print(asyncio.get_event_loop().run_until_complete(client.get(data_id,group)))# add watchimporttimeclient.add_watcher(data_id,group,lambdax:print("config change detected: "+x))asyncio.get_event_loop().run_until_complete(asyncio.sleep(5))# wait for config changesConfigurationclient = ACMClient(endpoint, namespace, ak, sk)endpoint-required- ACM server address.namespace- Namespace. | default:DEFAULT_TENANTak- AccessKey For Alibaba Cloud ACM. | default:Nonesk- SecretKey For Alibaba Cloud ACM. | default:NoneExtra OptionsExtra option can be set byset_options, as following:client.set_options({key}={value})Configurable options are:default_timeout- Default timeout for get config from server in seconds.tls_enabled- Whether to use https.auth_enabled- Whether to use auth features.cai_enabled- Whether to use address server.pulling_timeout- Long polling timeout in seconds.pulling_config_size- Max config items number listened by one polling process.callback_thread_num- Concurrency for invoking callback.failover_base- Dir to store failover config files.snapshot_base- Dir to store snapshot config files.app_name- Client app identifier.no_snapshot- To disable default snapshot behavior, this can be overridden by paramno_snapshotingetmethod.API ReferenceGet ConfigACMClient.get(data_id, group, timeout, no_snapshot)paramdata_idData id.paramgroupGroup, useDEFAULT_GROUPif no group specified.paramtimeoutTimeout for requesting server in seconds.paramno_snapshotWhether to use local snapshot while server is unavailable.returnW Get value of one config item following priority:Step 1 - Get from local failover dir(default:${cwd}/acm/data).Failover dir can be manually copied from snapshot dir(default:${cwd}/acm/snapshot) in advance.This helps to suppress the effect of known server failure.Step 2 - Get from one server until value is got or all servers tried.Content will be save to snapshot dir after got from server.Step 3 - Get from snapshot dir.Add WatchersACMClient.add_watchers(data_id, group, cb_list)paramdata_idData id.paramgroupGroup, useDEFAULT_GROUPif no group specified.paramcb_listList of callback functions to add.returnAdd watchers to a specified config item.Once changes or deletion of the item happened, callback functions will be invoked.If the item is already exists in server, callback functions will be invoked for once.Multiple callbacks on one item is allowed and all callback functions are invoked concurrently bythreading.Thread.Callback functions are invoked from current process.Remove WatcherACMClient.remove_watcher(data_id, group, cb, remove_all)paramdata_idData id.paramgroupGroup, use "DEFAULT_GROUP" if no group specified.paramcbCallback function to delete.paramremove_allWhether to remove all occurrence of the callback or just once.returnRemove watcher from specified key.List All ConfigACMClient.list_all(group, prefix)paramgroupOnly dataIds with group 
match shall be returned, default is None.paramgrouponly dataIds startswith prefix shall be returned, default is NoneCase sensitive.returnList of data items.Get all config items of current namespace, with dataId and group information only.Warning: If there are lots of config in namespace, this function may cost some time.Publish ConfigACMClient.publish(data_id, group, content, timeout)paramdata_idData id.paramgroupGroup, use "DEFAULT_GROUP" if no group specified.paramcontentConfig value.paramtimeoutTimeout for requesting server in seconds.returnTrue if success or an exception will be raised.Publish one data item to ACM.If the data key is not exist, create one first.If the data key is exist, update to the content specified.Content can not be set to None, if there is need to delete config item, use functionremoveinstead.Remove ConfigACMClient.remove_watcher(data_id, group, cb, remove_all)paramdata_idData id.paramgroupGroup, use "DEFAULT_GROUP" if no group specified.paramtimeoutTimeout for requesting server in seconds.returnTrue if success or an exception will be raised.Remove one data item from ACM.Debugging ModeDebugging mode if useful for getting more detailed log on console.Debugging mode can be set by:ACMClient.set_debugging() # only effective within the current processCLI ToolA CLI Tool is along with python SDK to make convenient access and management of config items in ACM server.You can useacm {subcommand}directly after installation, sub commands available are as following:addaddanamespaceuseswitchtoanamespacecurrentshowcurrentendpointandnamespaceshowshowallendpointsandnamespaceslistgetlistofdataIdspullgetoneconfigcontentpushpushoneconfigexportexportdataIdstolocalfilesimportimportfilestoACMserverUseacm -hto see the detailed manual.Data Security OptionsACM allows you to encrypt data along withKey Management Service, service provided by Alibaba Cloud (also known asKMS).To use this feature, you can follow these steps:Install KMS SDK bypip install aliyun-python-sdk-kms.Name your data_id with acipher-prefix.Get and filling all the needed configuration toACMClient, info needed are:region_id,kms_ak,kms_secret,key_id.Just make API calls and SDK will process data encrypt & decrypt automatically.Example:c = acm.ACMClient(ENDPOINT, NAMESPACE, AK, SK) c.set_options(kms_enabled=True, kms_ak=KMS_AK, kms_secret=KMS_SECRET, region_id=REGION_ID, key_id=KEY_ID) # publish an encrypted config item. await c.publish("cipher-dataId", None, "plainText") # get the content of an encrypted config item. await c.get("cipher-dataId", None)Other ResourcesAlibaba Cloud ACM homepage:https://www.aliyun.com/product/acm
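As a supplement to the API reference above, here is a minimal sketch of publishing and then removing a config item. The data id and content below are illustrative only; ENDPOINT, NAMESPACE, AK and SK are the placeholders from the Getting Started example, and the remove call follows the note under "Publish Config":
import asyncio
import aioacm

client = aioacm.ACMClient(ENDPOINT, NAMESPACE, AK, SK)

async def manage_config():
    # Create the item if it does not exist, otherwise update its content.
    await client.publish("com.example:demo.properties", "DEFAULT_GROUP", "key=value")
    # Delete the item again (content can not be set to None, so use remove).
    await client.remove("com.example:demo.properties", "DEFAULT_GROUP")

asyncio.get_event_loop().run_until_complete(manage_config())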
aioactioncable
aioactioncable: async Action Cable client library
aioactioncable is a python library for building Ruby on Rails Action Cable clients.
The library is based on websockets and asyncio.
aioactioncable is thus an async Rails Action Cable client library.
Installation
$ python3 -m pip install aioactioncable
aioactioncable requires Python 3 and therefore needs to be installed using the Python 3 version of pip.
Requirements
Python >= 3.7
websockets
Usage
In addition to managing websockets connections, Action Cable servers manage multiple channels that clients can subscribe to.
Here is a code example to connect to an Action Cable server, subscribe to a channel and receive messages on that channel:
#!/usr/bin/env python3
import aioactioncable
import asyncio
import json

def process(msg, identifier):
    msg_json = json.loads(msg)
    print(f'Message received on {json.dumps(identifier)}')
    ...

async def ac_recv(uri, identifier):
    async with aioactioncable.connect(uri) as acconnect:
        subscription = await acconnect.subscribe(identifier)
        async for msg in subscription:
            process(msg, identifier)

asyncio.run(ac_recv('wss://example.app', {'channel': 'ChatChannel'}))
All the code examples below must be run in an asyncio event loop.
Examples are built "chronologically": the object created in the Connect section is reused in the Subscribe section, and so on.
Connect to an Action Cable server
import aioactioncable

acconnect = aioactioncable.connect(uri)
The aioactioncable Connect object is an async context manager, you can thus use it in an async with statement:
import aioactioncable
import asyncio

async with aioactioncable.connect('wss://example.app') as acconnect:
    ...
Subscribe to an Action Cable channel
subscription = await acconnect.subscribe({'channel': 'ChatChannel'})
Recv messages on an Action Cable channel
Receive next message on subscription channel:
msg = await subscription.recv()
The Subscription object is an iterable, you can thus iterate over it to recv messages in an async for loop:
async for msg in subscription:
    ...
Send messages on an Action Cable channel
await subscription.send({'action': 'create', 'chatRoom': 'climbing'})
Unsubscribe from an Action Cable channel
await subscription.unsubscribe()
Close an Action Cable server connection
Explicit close of the connection is not needed if it is done in an async with statement.
Otherwise:
await acconnect.close()
License
aioactioncable is distributed under the MIT license.
Contributions
Contributions are very welcome!
Feel free to open an issue for any bug report.
Feel free to propose bug fixes or features via a Pull Request.
aioadb
No description available on PyPI.
aio-adb-shell
Documentation for this package can be found athttps://aio-adb-shell.readthedocs.io/.This Python package implements ADB shell and FileSync functionality. It originated frompython-adb.Installationpip install aio-adb-shellExample Usage(Based onandroidtv/adb_manager.py)fromaio_adb_shell.adb_deviceimportAdbDeviceTcpfromaio_adb_shell.auth.sign_pythonrsaimportPythonRSASigner# Connect (no authentication necessary)device1=AdbDeviceTcp('192.168.0.111',5555,default_timeout_s=9.)awaitdevice1.connect(auth_timeout_s=0.1)# Connect (authentication required)withopen('path/to/adbkey')asf:priv=f.read()signer=PythonRSASigner('',priv)device2=AdbDeviceTcp('192.168.0.222',5555,default_timeout_s=9.)awaitdevice2.connect(rsa_keys=[signer],auth_timeout_s=0.1)# Send a shell commandresponse1=awaitdevice1.shell('echo TEST1')response2=awaitdevice2.shell('echo TEST2')
aioaerospike
aioaerospike
This library is planned to be an async API for Aerospike. The library will be Pure-Python, Protocol based on the C Client.
Installation
Using pip
$ pip install aioaerospike
Contributing
To work on the aioaerospike codebase, you'll want to fork the project, clone it locally and install the required dependencies via poetry:
$ git clone git@github.com:{USER}/aioaerospike.git
$ make install
To run tests and linters use the command below (requires aerospike to run locally on port 3000):
$ make lint && make test
If you want to run only tests or linters you can explicitly specify which test environment you want to run, e.g.:
$ make lint-black
License
aioaerospike is licensed under the MIT license. See the license file for details.
Latest changes
0.1.6 (XXXX-XX-XX)
0.1.5 (2019-12-17)
Added TTL argument for put_key
Added operate method, enables users to interact with lower-level API to do specific actions, such as multi op (read, write, modify, etc) in same message.
Added UNDEF/AerospikeNone for the option of empty bins, when reading specific bins.
0.1.4 (2019-12-07)
Added delete key method
Added key_exists method
Changed signature of put_key to be a dict, for easy multiple bins insert.
0.1.3 (2019-12-07)
Changed all enums to uppercase
Added tests for all supported key types
Added support for dict and list as values.
0.1.2 (2019-12-07)
Fixed key digest, key type can be all supported types (int, float, str, bytes)
0.1.1 (2019-12-07)
Fixed license and metadata
This package is 3rd party, unrelated to Aerospike company
aio-aerospike-python
asyncio wrapper of aerospike python client libraryThis project is work in progress. please do not use it in production yet.This project provides a simple way to use aerospike with asyncio.This project is based onAerospike python client library docsDocsinstallationpipinstallaio-aerospike-pythonQuick startstart docker composedockercomposeup-dfromaio_aerospike_pythonimportAioAerospikeClientimportasyncioconfig={'hosts':[('0.0.0.0',3000)]}client=AioAerospikeClient(config)print(client.is_connected())asyncdefput_some_data(limit:int):foriinrange(limit):key=("test","test",i)data={"a":i}awaitclient.put(key,data)asyncdefread_data(limit:int):foriinrange(limit):key=("test","test",i)r=awaitclient.get(key)print(r)loop=asyncio.get_event_loop()loop.run_until_complete(put_some_data(33))loop.run_until_complete(read_data(33))client.close()Now lets test it with concurrencyfromaio_aerospike_pythonimportAioAerospikeClientfromaio_aerospike_pythonimportexceptionfromaerospike_helpersimportexpressionsasexpimportaerospikeimportasyncioconfig={'hosts':[('0.0.0.0',3000)]}client=AioAerospikeClient(config)print(client.is_connected())asyncdefput_some_data(limit:int):foriinrange(limit):key=("test","test",i)data={"a":i}awaitclient.put(key,data)asyncdefread_data(limit:int):keys=[("test","test",i)foriinrange(limit)]# print(keys)r=awaitclient.get_many(keys)print(r)asyncdefuse_query(mina:int,maxa:int):query=client.query("test","test")expr=expr=exp.And(exp.LT(exp.IntBin("a"),maxa),exp.GT(exp.IntBin("a"),mina)).compile()scan_policy={"expressions":expr}results=awaitquery.results(scan_policy)print("query results ===")forrinresults:print(r)asyncdefuse_scan(mina:int,maxa:int):scan=client.query("test","test")expr=exp.And(exp.LT(exp.IntBin("a"),maxa),exp.GT(exp.IntBin("a"),mina)).compile()scan_policy={"expressions":expr}results=awaitscan.results(scan_policy)print("scan results ===")forrinresults:print(r)asyncdeftest_append(key=("test","test",3),bin="a",val="test",meta=None,policy=None):awaitclient.put(key=key,bins={"vv":"test_"})awaitclient.append(key=key,bin="vv",val="append",meta=meta,policy=policy)r=awaitclient.get(key=key)key,_,bin=rprint("append")print(r)asyncdefmain():L=awaitasyncio.gather(put_some_data(700),read_data(50),use_query(10,20),use_scan(40,45),test_append())asyncio.run(main())LicenseThe AIO Aerospike Python Client is made available under the terms of the Apache License, Version 2, as stated in the file LICENSE.
aio-agents
Aio Agents
An opinionated template for building llm agents using the aiofauna framework.
aioagi
AioagiAsync agi client/server framework. The project based on “aiohttp” framework.Key FeaturesSupports both client and server side of AGI protocol.AGI-server has middlewares and pluggable routing.Getting startedServerSimple AGI server:importasynciofromaiohttp.webimportApplication,AppRunner,TCPSite,Responsefromaioagiimportrunnerfromaioagi.appimportAGIApplicationfromaioagi.logimportagi_server_loggerfromaioagi.urldispathcerimportAGIViewfromaiohttp.web_runnerimportGracefulExitasyncdefhello(request):message=awaitrequest.agi.stream_file('hello-world')awaitrequest.agi.verbose('Hello handler:{}.'.format(request.rel_url.query))agi_server_logger.debug(message)asyncdefhttp_hello(request):returnResponse(text="Hello, world")classHelloView(AGIView):asyncdefsip(self):message=awaitself.request.agi.stream_file('hello-world')awaitself.request.agi.verbose('HelloView handler:{}.'.format(self.request.rel_url.query))agi_server_logger.debug(message)if__name__=='__main__':app=AGIApplication()app.router.add_route('SIP','/',hello)runner.run_app(app)# ORif__name__=='__main__':apps=[]app=AGIApplication()app.router.add_route('SIP','/',hello)http_app=Application()http_app.router.add_route('GET','/',http_hello)loop=asyncio.get_event_loop()runners=[]sites=[]for_appin[app,http_app]:app_runner=AppRunner(_app)loop.run_until_complete(app_runner.setup())ifisinstance(_app,AGIApplication):sites.append(runner.AGISite(app_runner,port=8081))else:sites.append(TCPSite(app_runner,port=8080))runners.append(app_runner)forsiteinsites:loop.run_until_complete(site.start())uris=sorted(str(s.name)forrunnerinrunnersforsinrunner.sites)print("======== Running on{}========\n""(Press CTRL+C to quit)".format(', '.join(uris)))try:loop.run_forever()except(GracefulExit,KeyboardInterrupt):# pragma: no coverpassfinally:forrunnerinreversed(runners):loop.run_until_complete(runner.cleanup())ifhasattr(loop,'shutdown_asyncgens'):loop.run_until_complete(loop.shutdown_asyncgens())loop.close()ClientTo set AGI connection as Asterisk:importasyncioimportlogging.configfromaioagi.logimportagi_client_loggerfromaioagi.clientimportAGIClientSessionfromaioagi.parserimportAGIMessage,AGICodeasyncdeftest_request(loop):headers={'agi_channel':'SIP/100-00000001','agi_language':'ru','agi_uniqueid':'1532375920.8','agi_version':'14.0.1','agi_callerid':'100','agi_calleridname':'test','agi_callingpres':'0','agi_callingani2':'0','agi_callington':'0','agi_callingtns':'0','agi_dnid':'101','agi_rdnis':'unknown','agi_context':'from-internal','agi_extension':'101','agi_priority':'1','agi_enhanced':'0.0','agi_accountcode':'','agi_threadid':'139689736754944',}asyncwithAGIClientSession(headers=headers,loop=loop)assession:asyncwithsession.sip('agi://localhost:8080/hello/?a=test1&b=var1')asresponse:asyncformessageinresponse:client_logger.debug(message)awaitresponse.send(AGIMessage(AGICode.OK,'0',{}))asyncwithsession.sip('agi://localhost:8080/hello-view/?a=test2&b=var2')asresponse:asyncformessageinresponse:client_logger.debug(message)awaitresponse.send(AGIMessage(AGICode.OK,'0',{}))NoteSession request headers are set automatically 
forsession.sip('agi://localhost:8080/hello/?a=test1&b=var1')request:agi_type:SIPagi_network:yesagi_network_script:hello/agi_request:agi://localhost:8080/hello/AMIimportasynciofromaioagi.ami.actionimportAMIActionfromaioagi.ami.managerimportAMIManagerasyncdefcallback(manager,message):print(message)asyncdefmain(app):manager=AMIManager(app=app,title='myasterisk',host='127.0.0.1',port=5038,username='username',secret='secret',)manager.register_event('*',callback)app['manager']=managerawaitmanager.connect()awaitasyncio.sleep(2)message=awaitmanager.send_action(AMIAction({'Action':'Command','Command':'database show',}))print(message)print(message.body)asyncdefcleanup(app):app['manager'].close()if__name__=='__main__':app={}_loop=asyncio.get_event_loop()try:_loop.run_until_complete(main(app))exceptKeyboardInterrupt:_loop.run_until_complete(cleanup(app))_loop.close()Installpip install aioagiThanksGael Pasgrimaud -panoramisk
aioairctrl
# aioairctrlLibrary and commandline utilities for controlling Philips air purifiers (using encrypted CoAP)
aioairq
PyPI packageaioairqPython library for asynchronous data access to local air-Q devices.Retrieve data from air-QAt its present state,AirQrequires anaiohttpsession to be provided by the user:importasyncioimportaiohttpfromaioairqimportAirQADDRESS="123ab_air-q.local"PASSWORD="airqsetup"asyncdefmain():asyncwithaiohttp.ClientSession()assession:airq=AirQ(ADDRESS,PASSWORD,session)config=awaitairq.configprint(f"Available sensors:{config['sensors']}")data=awaitairq.dataprint(f"Momentary data:{data}")asyncio.run(main())
aioairtable
Key Features
Asyncio and aiohttp based
All airtable REST API methods supported
API rate limit support
Fully type annotated (PEP 484)
Installation
aioairtable is available on PyPI. Use pip to install it:
pip install aioairtable
Requirements
Python >= 3.8
aiohttp
multidict
backoff
aiofreqlimit
yarl
Using aioairtable
Create an Airtable client, then work with its bases, tables and records:
import asyncio
from aioairtable import Airtable, SortDirection

async def main():
    airtable = Airtable(api_key='some_key')
    base = airtable.base('base_id')
    table = base.table('table_name')
    records, offset = await table.list_records(
        fields=('field_1', 'field_2'),
        filter_by_formula='{field_3}',
        max_records=100500,
        page_size=3,
        sort=(('field_1', SortDirection.ASC),
              ('field_2', SortDirection.DESC)),
        view='table3',
        offset='record033')
    for record in records:
        print(record)
    record = await table.create_record({'field_1': 'value_1_new_001',
                                        'field_2': 'value_2_new_001',
                                        'field_3': 'value_3_new_001'})
    await record.delete()

asyncio.run(main())