package | package-description |
---|---|
aicore-tz-watch-scraper | No description available on PyPI. |
aicore-xmas | To run this package, run:python -m aicore_xmas |
aicorn | PY Study |
aicost | AI-COST |
ai-coustics | No description available on PyPI. |
aicp | AI Collaboration Platform. Install: pip install aicp. Prerequisites: Python 3.6+. Usage: log in to the platform with aicp login. Create a service with <directory> as the directory using aicp service create <directory>, or import a service from a local directory using aicp service import <directory>. |
aicq | No description available on PyPI. |
ai-creator | No description available on PyPI. |
aicrowd-api | AIcrowd APIPython client for server side API of theaicrowd.comwebapp.Free software: GNU General Public License v3Documentation:https://aicrowd-api.readthedocs.io.InstallationDeploymentpipinstallgit+https://github.com/AIcrowd/aicrowd_api.gitDevelopmentgitclonehttps://github.com/AIcrowd/aicrowd_apicdaicrowd_api
pip install -r requirements_dev.txt
pipinstall-e.UsageInstantiate API objectfromaicrowd_apiimportAPIasAICROWD_APIauth_token="<YOUR AICROWD AUTH TOKEN>"api=AICROWD_API(auth_token)Authenticate participantwithAPI_KEYapi.authenticate_participant(EXAMPLE_API_KEY)withusernameapi_key=api.authenticate_participant_with_username("spMohanty")Get all Submissionschallenge_id="test_challenge"submissions=api.get_all_submissions(challenge_id)print(submissions)Create Submissionchallenge_id="test_challenge"submission=api.create_submission(challenge_id)print(submission)# Output# ========================================# AIcrowdSubmission : 5261# challenge_id : test_challenge# round_id : False# score : False# score_secondary : False# grading_status : submitted# message :# ========================================Get submissionchallenge_id="test_challenge"submission_id=5262submission=api.get_submission(challenge_id,submission_id)Update submissionAssuming you have asubmissionobject by usingapi.create_submissionorapi.get_submission.
You can update the submission by :# Update paramssubmission.grading_status="graded"submission.score=0.98submission.score_secondary=0.98submission.update()print(submission)# Output## ========================================# AIcrowdSubmission : 5262# challenge_id : test_challenge# round_id : False# score : 0.98# score_secondary : 0.98# grading_status : graded# message :# ========================================Tests# Setup the environment varriablescpenviron.sh.exampleenviron.sh# Then modify the respective environment variablessourceenviron.sh
pytest tests/ Author: S.P. [email protected] [email protected] |
aicrowd-cli | AIcrowd CLI. aicrowd-cli is a simple CLI tool for interacting with the AIcrowd platform. It is written in Python, using click. Download datasets, make submissions and much more with a single command! Documentation. Supported versions: Python 3.6 and above are supported on all platforms. Reporting issues: kindly raise an issue here. |
aicrowd-gym | AIcrowd GymA gym wrapper for RL evaluations on AIcrowd. |
aicrowd-repo2docker | repo2dockerrepo2dockerfetches a git repository and builds a container image based on
the configuration files found in the repository.See therepo2docker documentationfor more information on using repo2docker.For support questions please search or post tohttps://discourse.jupyter.org/c/binder.See thecontributing guidefor information on contributing to
repo2docker.Please note that this repository is participating in a study into sustainability
of open source projects. Data will be gathered about this repository for
approximately the next 12 months, starting from 2021-06-11.Data collected will include number of contributors, number of PRs, time taken to
close/merge these PRs, and issues closed. For more information, please visit our informational page or download our participant information sheet. Using repo2docker. Prerequisites: Docker to build & run the repositories. The community edition is recommended. Python 3.6+. Supported on Linux and macOS. See documentation note about Windows support. Installation: This is a quick guide to installing repo2docker; see our documentation for a full guide. To install from PyPI: pip install jupyter-repo2docker. To install from source: git clone https://github.com/jupyterhub/repo2docker.git, then cd repo2docker
pipinstall-e.UsageThe core feature of repo2docker is to fetch a git repository (from GitHub or locally),
build a container image based on the specifications found in the repository &
optionally launch the container that you can use to explore the repository.Note that Docker needs to be running on your machine for this to work.Example:jupyter-repo2dockerhttps://github.com/norvig/pytudesAfter building (it might take a while!), it should output in your terminal
something like: Copy/paste this URL into your browser when you connect for the first time, to login with a token: http://0.0.0.0:36511/?token=f94f8fabb92e22f5bfab116c382b4707fc2cade56ad1ace0 If you copy paste that URL into your browser you will see a Jupyter Notebook
with the contents of the repository you had just built!For more information on how to userepo2docker, see theusage guide.Repository specificationsRepo2Docker looks for configuration files in the source repository to
determine how the Docker image should be built. For a list of the configuration
files thatrepo2dockercan use, see thecomplete list of configuration files.The philosophy of repo2docker is inspired byHeroku Build Packs.Docker ImageRepo2Docker can be run inside a Docker container if access to the Docker Daemon is provided, for example seeBinderHub. Docker images arepublished to quay.io. The oldDocker Hub imageis no longer supported. |
ai.cs | Welcome to AI.CSAI.CS is the coordinates transformation package in Python. It offers functionality for converting data between geometrical coordinates (cartesian, spherical and cylindrical) as well as between geocentric and heliocentric coordinate systems typically used in spacecraft measurements. The package currently also supports rotations of data by means ofrotation matrices. Transformations between spacecraft coordinate systems are implemented as a Python binding to theCXFORMlibrary.The full documentation is available ataics.rtfd.io.Getting startedThis tutorial will guide you through basic usage of AI.CS.InstallationAI.CS is developed for Python 3, so make sure that you have a working isntallation of it. The package is distributed together with C portion of CXFORM library, which is compiled automatically during installation. Thus make sure that you have a functioning compiler in your system, for instance, gcc.Assuming the above requirements are satisfied install the package with Python package manager:$ pip install ai.csGeometrical coordinatesAI.CS ships with functions for conversion between cartesian and spherical coordinates and between cartesian and cylindrical coordinates:importnumpyasnpfromaiimportcs# cartesian to sphericalr,theta,phi=cs.cart2sp(x=1,y=1,z=1)# spherical to cartesianx,y,z=cs.sp2cart(r=1,theta=np.pi/4,phi=np.pi/4)# cartesian to cylindricalr,phi,z=cs.cart2cyl(x=1,y=1,z=1)# cylindrical to cartesianx,y,z=cs.cyl2cart(r=1,phi=np.pi/2,z=1)Most of the functions support both scalars and numpy arrays as input:importnumpyasnpfromaiimportcs# converting spherical spiral from spherical to cartesian coordinatesx,y,z=cs.sp2cart(r=np.ones(100),theta=np.linspace(-np.pi/2,np.pi/2,100),phi=np.linspace(0,np.pi*6,100))Spacecraft coordinatesAI.CS provides Python bindings to CXFORM library for conversion between various geocentric and heliocentric cartesian coordinate systems. For example, the code below performs transformation of data from GSE to HEEQ coordinate system:fromdatetimeimportdatetimefromastropyimportunitsasufromaiimportcs# converting (0.5, 0.5, 0.5) AU location from GSE to HEEQ at current timex,y,z=cs.cxform('GSE','HEEQ',datetime.now(),x=u.au.to(u.m,0.5),y=u.au.to(u.m,0.5),z=u.au.to(u.m,0.5))Both scalars and numpy arrays are supported as input:fromdatetimeimportdatetime,timedeltafromastropyimportunitsasufromaiimportcs# converting circular orbit at 1 AU from cylindrical to cartesian coordinatesr=np.ones(365)*u.au.to(u.m,1)phi=np.linspace(0,np.pi*2,365)z=np.zeros(365)x_HEE,y_HEE,z_HEE=cs.cyl2cart(r,phi,z)# converting HEE to HEEQx_HEEQ,y_HEEQ,z_HEEQ=cs.cxform('HEE','HEEQ',[datetime(2016,1,1)+timedelta(days=d)fordinrange(365)],x=x_HEE,y=y_HEE,z=z_HEE)Geometrical transformationsCurrently AI.CS offers only one type of geometrical transformations - rotations. 
Rotation is executed by means of 3D transformation matrices for right-handed rotations around X, Y and Z axes:importnumpyasnpfromaiimportcs# get (3x3) rotation matrix for rotation by pi/4 around X axisTx=cs.mx_rot_x(gamma=np.pi/4)# get (3x3) rotation matrix for rotation by -pi/4 around Y axisTy=cs.mx_rot_y(theta=-np.pi/4)# get (3x3) rotation matrix for rotation by pi/2 around Z axisTz=cs.mx_rot_z(phi=np.pi/2)It is also possible to construct rotation matrices for compound rotations in one shot:importnumpyasnpfromaiimportcs# get matrix for right-handed rotation around X, Y and Z axes (exactly in this order)T=cs.mx_rot(theta=np.pi/4,phi=np.pi/4,gamma=np.pi/4)# get matrix for right-handed rotation around Z, Y and X axes (exactly in this order)T_reverse=cs.mx_rot_reverse(theta=-np.pi/4,phi=-np.pi/4,gamma=-np.pi/4)# T_reverse effectively reverses the transformation described by T in this caseRotation matrices can be applied to data in cartesian coordinates in the following way:importnumpyasnpfromaiimportcs# a cube with the side length 2x=np.array([1,1,1,1,-1,-1,-1,-1])y=np.array([1,1,-1,-1,1,1,-1,-1])z=np.array([1,-1,1,-1,1,-1,1,-1])# rotate cube by pi/4 around each axisT=cs.mx_rot(theta=np.pi/4,phi=np.pi/4,gamma=np.pi/4)x,y,z=cs.mx_apply(T,x,y,z) |
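Since the ai.cs description above claims that T_reverse undoes T, here is a minimal round-trip check. It is only a sketch that reuses the mx_rot, mx_rot_reverse, and mx_apply calls quoted in that description; the specific test point and the use of 1-element arrays are illustrative assumptions.

```python
import numpy as np
from ai import cs

# Build T (rotations applied around X, Y, Z) and T_reverse with negated
# angles (applied around Z, Y, X), as in the ai.cs description above.
T = cs.mx_rot(theta=np.pi / 4, phi=np.pi / 4, gamma=np.pi / 4)
T_reverse = cs.mx_rot_reverse(theta=-np.pi / 4, phi=-np.pi / 4, gamma=-np.pi / 4)

# Rotate an arbitrary point, then rotate it back.
x, y, z = np.array([1.0]), np.array([0.5]), np.array([-0.25])
xr, yr, zr = cs.mx_apply(T, x, y, z)
xb, yb, zb = cs.mx_apply(T_reverse, xr, yr, zr)

print(np.allclose([xb, yb, zb], [x, y, z]))  # expected: True
```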
aicsapi-tool-python | aicsapi-tool-python| .gitignore
| .kvvars # specifies environment variables needed by "keyvault_certgen.py" & "keyvault_utils.py"
| LICENSE
| README.md
| requirements.txt # package dependencies
| requirements_dev.txt # required for packaging
| setup.py # for wrapping into a PyPI package
|
+---aicsapi_tool_python # actual codes
| appinsight_transport.py # for logging custom events w/ correlation id to Azure Appinsights
| keyvault_certgen.py # provides utilities to generate X509v3 cert and upload to Azure KV
| keyvault_tokenCache.py # for caching credentials to save login time
| keyvault_utils.py # device-code sign in to Azure, get/import secret & cert to Azure KV
| __init__.py
|
\---tests # test_*.py performs unit test for the corresponding module
test_appinsight_transport.py
test_keyvault_certgen.py
test_keyvault_tokenCache.py
test_keyvault_utils.pyDescriptionA package to help Python API template in:Logging to Azure Application InsightsFetching secret from Azure Key VaultGenerating self-signed certificate and upload it to Azure Key VaultCaching credential token to save login timeInstallationThis package has been published to PyPI, so you can usepip install aicsapi-tool-pythonNoteWhen calling methods inkeyvault_utilsorkeyvault_certgen, be sure to have the following environment variables set:KEY_VAULT_URL
KEY_VAULT_SECRET_NAME
KEY_VAULT_CERT_NAME
AZURE_TENANT_IDExample: Generate Self-signed Certificate and Upload to Azure KVEnsure required env. variables are loaded, create .kvvars under the current working directoryKEY_VAULT_URL="https://[your keyvault name].vault.azure.net"
KEY_VAULT_CERT_NAME="certificate name"
AZURE_TENANT_ID="your azure tenant id"Run the following code snippet withpython certgen.py [your ASUS account name] [filename of generated key & cert]# certgen.pyfromaicsapi_tool_python.keyvault_certgenimportgenerate_v3cert,upload_v3cert_to_kvimportsysasus_account=sys.argv[1]cert_name=sys.argv[2]generate_v3cert(asus_account,cert_name)upload_v3cert_to_kv(cert_name+'.pfx') |
aics-bead-alignment-core | aics_bead_alignment_coreaics_bead_alignment_core==1.2.0A utility package containing functions to align bead imagesFeaturesStore values and retain the prior value in memory... some other functionalityInstallationStable Release:pip install aics_bead_alignment_coreDevelopment Head:pip install git+https://github.com/BrianWhitneyAI/aics_bead_alignment_core.gitDocumentationFor full package documentation please visitBrianWhitneyAI.github.io/aics_bead_alignment_core.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.The Commands You Need To Knowmake installThis will set up a virtual environment local to this project and install all of the
project's dependencies into it. The virtual env will be located inaics_bead_alignment_core/venv.make test,make fmt,make lint,make type-check,make import-sortQuality assurancepip install -e .[dev]This will install your package in editable mode with all the required development
dependencies.make docsThis will generate documentation using Sphinx.make publishandmake publish-snapshotRunning this command will start the process of publishing to PyPI`make bumpversion` - [release, major, minor, patch, dev]update versioning with new releasesmake cleanThis will clean up various Python and build generated files so that you can ensure
that you are working in a clean workspace.Suggested Git Branch Strategymainis for the most up-to-date development, very rarely should you directly
commit to this branch. GitHub Actions will run on every push and on a CRON to this
branch but still recommended to commit to your development branches and make pull
requests to main. If you push a tagged commit with bumpversion, this will also release to PyPI.Your day-to-day work should exist on branches separate frommain. Even if it is
just yourself working on the repository, make a PR from your working branch tomainso that you can ensure your commits don't break the development head. GitHub Actions
will run on every push to any branch or any pull request from any branch to any other
branch.It is recommended to use "Squash and Merge" commits when committing PR's. It makes
each set of changes tomainatomic and as a side effect naturally encourages small
well defined PR's. |
aicscytoparam | 3D Cell ParameterizationSpherical harmonics coefficients-based parameterization of the cytoplasm and nucleoplasm for 3D cellsInstallationStable Release:pip install aicscytoparamDevelopment Head:pip install git+https://github.com/AllenCell/aics-cytoparam.gitHow to useHere we outline an example of how to useaicscytoparamto create a parameterization of a 3D cell. In this case, the 3D cells will be represented by a cell segementation, nuclear segmentation and a fluorescent protein (FP) image representing the fluorescent signal of a tagged protein.# Import required packagesimportnumpyasnpimportmatplotlib.pyplotaspltfromaicscytoparamimportcytoparamfromskimageimportmorphologyasskmorpho# First create a cuboid cell with an off-center cuboid nucleus# and get the spherical harmonics coefficients of this cell and nucleus:w=100mem=np.zeros((w,w,w),dtype=np.uint8)mem[20:80,20:80,20:80]=1nuc=np.zeros((w,w,w),dtype=np.uint8)nuc[40:60,40:60,30:50]=1# Create an FP signal located in the top half of the cell and outside the# nucleus:gfp=np.random.rand(w**3).reshape(w,w,w)gfp[mem==0]=0gfp[:,w//2:]=0gfp[nuc>0]=0# Vizualize a center xy cross-section of our cell:plt.imshow((mem+nuc)[w//2],cmap='gray')plt.imshow(gfp[w//2],cmap='gray',alpha=0.25)plt.axis('off')# Use aicsshparam to expand both cell and nuclear shapes in terms of spherical# harmonics:coords,coeffs_centroid=cytoparam.parameterize_image_coordinates(seg_mem=mem,seg_nuc=nuc,lmax=16,# Degree of the spherical harmonics expansionnisos=[32,32]# Number of interpolation layers)coeffs_mem,centroid_mem,coeffs_nuc,centroid_nuc=coeffs_centroid# Run the cellular mapping to create a parameterized intensity representation# for the FP image:gfp_representation=cytoparam.cellular_mapping(coeffs_mem=coeffs_mem,centroid_mem=centroid_mem,coeffs_nuc=coeffs_nuc,centroid_nuc=centroid_nuc,nisos=[32,32],images_to_probe=[('gfp',gfp)]).data.squeeze()# The FP image is now encoded into a representation of its shape:print(gfp_representation.shape)(65, 8194)# Now we want to morph the FP image into a round cell.# First we create the round cell:fromskimageimportmorphologyasskmorphomem_round=skmorpho.ball(w//3)# radius of our round cellnuc_round=skmorpho.ball(w//3)# radius of our round nucleus# Erode the nucleus so it becomes smaller than the cellnuc_round=skmorpho.binary_erosion(nuc_round,selem=np.ones((20,20,20))).astype(np.uint8)# Vizualize a center xy cross-section of our round cell:plt.imshow((mem_round+nuc_round)[w//3],cmap='gray')plt.axis('off')# Next we need to parameterize the coordinates of our round# cell:coords_round,_=cytoparam.parameterize_image_coordinates(seg_mem=mem_round,seg_nuc=nuc_round,lmax=16,nisos=[32,32])# Now we are ready to morph the FP image into our round cell:gfp_morphed=cytoparam.morph_representation_on_shape(img=mem_round+nuc_round,param_img_coords=coords_round,representation=gfp_representation)# Visualize the morphed FP image:plt.imshow((mem_round+nuc_round)[w//3],cmap='gray')plt.imshow(gfp_morphed[w//3],cmap='gray',alpha=0.25)plt.axis('off')ReferenceFor an example of how this package was used to analyse a dataset of over 200k single-cell images at the Allen Institute for Cell Science, please check out our paper inbioaRxiv.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.Questions?If you have any questions, feel free to leave a comment in our Allen Cell forum:https://forum.allencell.org/.Free software: Allen Institute Software License |
aicsdaemon | aicsdaemonPython Class defining a daemon process. An implemented class that inherts from this should be runnable as a daemon.FeaturesStore values and retain the prior value in memory... some other functionalityQuick StartfromaicsdaemonimportExamplea=Example()a.get_value()# 10InstallationStable Release:pip install aicsdaemonDevelopment Head:pip install git+https://github.com/AllenCellModeling/aicsdaemon.gitDocumentationFor full package documentation please visitAllenCellModeling.github.io/aicsdaemon.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.The Four Commands You Need To Knowpip install -e .[dev]This will install your package in editable mode with all the required development dependencies (i.e.tox).make buildThis will runtoxwhich will run all your tests in both Python 3.6 and Python 3.7 as well as linting your code.make cleanThis will clean up various Python and build generated files so that you can ensure that you are working in a clean
environment.make docsThis will generate and launch a web browser to view the most up-to-date documentation for your Python package.Additional Optional Setup Steps:Turn your project into a GitHub repository:Make sure you havegitinstalled, if you don't,follow these instructionsMake an account ongithub.comGo tomake a new repositoryRecommendations:It is strongly recommended to make the repository name the same as the Python package nameA lot of the following optional steps arefreeif the repository is Public, plus open source is coolOnce you are in your newly generated cookiecutter Python project directory, rungit initAftergithas initialized locally, run the following commands:git remote add origin [email protected]:AllenCellModeling/aicsdaemon.gitgit push -u origin masterRegister aicsdaemon with Codecov:Make an account oncodecov.io(Recommended to sign in with GitHub)SelectAllenCellModelingand click:Add new repositoryCopy the token provided, go to yourGitHub repository's settings and under theSecretstab,
add a secret calledCODECOV_TOKENwith the token you just copied.
Don't worry, no one will see this token because it will be encrypted.Generate and add an access token as a secret to the repository for auto documentation generation to workGo to yourGitHub account's Personal Access Tokens pageClick:Generate new tokenRecommendations:Name the token: "Auto-Documentation Generation" or similar so you know what it is being used for laterSelect only:repo:status,repo_deployment, andpublic_repoto limit what this token has access toCopy the newly generated tokenGo to yourGitHub repository's settings and under theSecretstab,
add a secret calledACCESS_TOKENwith the personal access token you just created.
Don't worry, no one will see this password because it will be encrypted.Register your project with PyPI:Make an account onpypi.orgGo to yourGitHub repository's settings and under theSecretstab,
add a secret calledPYPI_TOKENwith your password for your PyPI account.
Don't worry, no one will see this password because it will be encrypted.Next time you push to the branch:stable, GitHub actions will build and deploy your Python package to PyPI.Recommendation: Prior to pushing tostableit is recommended to install and runbumpversionas this will,
tag a git commit for release and update thesetup.pyversion number.Add branch protections tomasterandstableTo protect from just anyone pushing tomasterorstable(the branches with more tests and deploy
configurations)Go to yourGitHub repository's settings and under theBranchestab, clickAdd ruleand select the
settings you believe best.Recommendations:Require pull request reviews before mergingRequire status checks to pass before merging (Recommended: lint and test)Suggested Git Branch Strategymasteris for the most up-to-date development, very rarely should you directly commit to this branch. GitHub
Actions will run on every push and on a CRON to this branch but still recommended to commit to your development
branches and make pull requests to master.stableis for releases only. When you want to release your project on PyPI, simply make a PR frommastertostable, this template will handle the rest as long as you have added your PyPI information described in the aboveOptional Stepssection.Your day-to-day work should exist on branches separate frommaster. Even if it is just yourself working on the
repository, make a PR from your working branch tomasterso that you can ensure your commits don't break the
development head. GitHub Actions will run on every push to any branch or any pull request from any branch to any other
branch.It is recommended to use "Squash and Merge" commits when committing PR's. It makes each set of changes tomasteratomic and as a side effect naturally encourages small well defined PR's.GitHub's UI is bad for rebasingmasterontostable, as it simply adds the commits to the other branch instead of
properly rebasing from what I can tell. You should always rebase locally on the CLI until they fix it.Free software: BSD license |
aics-dask-utils | AICS Dask UtilsDocumentation related to Dask, Distributed, and related packages.
Utility functions commonly used by AICS projects.FeaturesDistributed handler to manage various debugging or cluster configurationsDocumentation on example cluster deploymentsBasicsBefore we jump into quick starts there are some basic definitions to understand.TaskA task is a single static function to be processed. Simple enough. However, relevant to
AICS, is that when usingaicsimageio(and / ordask.array.Array), your image (ordask.array.Array) is split up intomanytasks. This is dependent on the image reader
and the size of the file you are reading. But in general it is safe to assume that each
image you read is split into many thousands of tasks. If you want to see how many tasks your
image is split into, you can either compute:Pseudo-code:sum(2 * size(channel) for channel if channel not in ["Y", "X"])Dask graph length:len(AICSImage.dask_data.__dask_graph__())MapApply a given function to the provided iterables, which are used as parameters to the function.
Givenlambda x: x + 1and[1, 2, 3], the result ofmap(func, *iterables)in this
case would be[2, 3, 4]. Usually, you are provided back an iterable offutureobjects from amapoperation. The results from the map operation are not
guaranteed to be in the order of the iterable that went in as operations are started as
resources become available and item to item variance may result in different output
ordering.FutureAn object that will become available but is currently not defined. There is no guarantee
that the object is a valid result or an error and you should handle errors once the
future's state has resolved (usually this means after agatheroperation).GatherBlock the process from moving forward until all futures are resolved. Control flow here
would mean that you could potentially generate thousands of futures and keep moving on
locally while those futures slowly resolve but if you ever want a hard stop and wait for
some set of futures to complete, you would need to gather them.Other CommentsDask tries to mirror the standard libraryconcurrent.futureswherever possible, which
is what allows for this library to have simple wrappers around Dask to allow for easy
debugging as we are simply swapping outdistributed.Client.mapwithconcurrent.futures.ThreadPoolExecutor.mapfor example. If at any point in your code
you don't want to usedaskfor some reason or another, it is equally valid to useconcurrent.futures.ThreadPoolExecutororconcurrent.futures.ProcessPoolExecutor.Basic Mapping with Distributed HandlerIf you have an iterable (or iterables) that would result in less than hundreds of
thousands of tasks, you can simply use the normalmapprovided by theDistributedHandler.client.Important Note:Notice, "... iterable that wouldresultin less than hundreds
of thousands of tasks...". This is important because what happens when you try tomapover a thousand image paths, each which spawns anAICSImageobject. Each one adds
thousands more tasks to the scheduler to complete. This will break and you should look
toLarge Iterable Batchinginstead.fromaics_dask_utilsimportDistributedHandler# `None` address provided means use local machine threadswithDistributedHandler(None)ashandler:futures=handler.client.map(lambdax:x+1,[1,2,3])results=handler.gather(futures)fromdistributedimportLocalClustercluster=LocalCluster()# Actual address provided means use the dask schedulerwithDistributedHandler(cluster.scheduler_address)ashandler:futures=handler.client.map(lambdax:x+1,[1,2,3])results=handler.gather(futures)Large Iterable BatchingIf you have an iterable (or iterables) that would result in more than hundreds of
thousands of tasks, you should usehandler.batched_mapto reduce the load on the
client. This will batch your requests rather than send them all at once.fromaics_dask_utilsimportDistributedHandler# `None` address provided means use local machine threadswithDistributedHandler(None)ashandler:results=handler.batched_map(lambdax:x+1,range(int(1e9))# 1 billion)fromdistributedimportLocalClustercluster=LocalCluster()# Actual address provided means use the dask schedulerwithDistributedHandler(cluster.scheduler_address)ashandler:results=handler.batched_map(lambdax:x+1,range(int(1e9))# 1 billion)Note:Notice that there is nohandler.gathercall afterbatched_map. This is
becausebatched_mapgathers results at the end of each batch rather than simply
returning their futures.InstallationStable Release:pip install aics_dask_utilsDevelopment Head:pip install git+https://github.com/AllenCellModeling/aics_dask_utils.gitDocumentationFor full package documentation please visitAllenCellModeling.github.io/aics_dask_utils.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.Additional CommentsThis README, provided tooling, and documentation are not meant to be all-encompassing
of the various operations you can do withdaskand other similar computing systems.
For further reading go todask.org.Free software: Allen Institute Software License |
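The Other Comments section of aics-dask-utils above notes that swapping distributed.Client.map for concurrent.futures is equally valid; below is a minimal standard-library sketch of the same map-then-gather flow (no dask or aics_dask_utils required; the input and expected output values are taken from the Map definition above).

```python
from concurrent.futures import ThreadPoolExecutor

def increment(x):
    # Same toy task as the DistributedHandler examples above.
    return x + 1

with ThreadPoolExecutor() as executor:
    # executor.map evaluates lazily; materializing the list plays the role
    # of handler.gather(...) in the Dask-backed version.
    results = list(executor.map(increment, [1, 2, 3]))

print(results)  # [2, 3, 4]
```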
aicsfeature | AICS Features Extractionfrom aicsfeatures.extractor import */feature_calcFunctions related to feature extraction. Feature extraction should be done like this:from aicsfeatures.extractor import *
features_result = xxx.get_features(args=None, seg=image_xxx)where xxx can bemem, for cell membrane-related featuresdna, for nucleus-related featuresstructure, for structure-specific features.stack, for stack-related features.Assumptions of each function about the input image should be detailed inside the function like thisAssumptions:
- Input is a ZYX 16-bit numpy array (2D images can be passed in as a 1YX)
- There is a single object of interest
- Background has value 0
- Object of interest have pixel value > 0The resultfeatures_resultshould always be a single row Pandas dataframe.exutils.py:Main routine for feature calculation. We try to re-use functions from skimage (shape analysis) and Mahotas (texture analysis). Here is also the place to implement new features that are not found in those packages. We try to be general and do not say anything specific about the type of biological structure that the input image should represent.mem.py:Wrappers for feature extraction of cell membrane images.dna.py:Wrappers for feature extraction of dna images.structure.py:Wrappers for feature extraction of structures images. Things to be defined: which type of feature we should extract for each given structure. Or do we do all to all?stack.py:Wrappers for feature extraction of whole stack images.How to BuildThe default project layout and build steps are discussed inBUILD.md. Some of the information
is related to the AICS build process.Legal documentsLicense(Before you release the project publicly please check with your manager to ensure the license is appropriate and has been run through legal review as necessary.)Contribution Agreement(Additionally update the contribution agreement to reflect
the level of contribution you are expecting, and the level of support you intend to provide.) |
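A sketch of the aicsfeature call pattern described above, following the stated input assumptions (ZYX 16-bit array, a single object of interest with pixel value > 0, zero background); the synthetic array and the choice of the dna wrapper are illustrative only.

```python
import numpy as np
from aicsfeatures.extractor import *  # exposes mem, dna, structure, stack per the description

# Synthetic ZYX 16-bit segmentation with one object of interest on a zero background.
seg = np.zeros((16, 64, 64), dtype=np.uint16)
seg[4:12, 20:44, 20:44] = 1

# Per the description, the result should be a single-row pandas DataFrame.
features_result = dna.get_features(args=None, seg=seg)
print(len(features_result))  # expected: 1
```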
aicsimageio | AICSImageIOImage Reading, Metadata Conversion, and Image Writing for Microscopy Images in Pure PythonFeaturesSupports reading metadata and imaging data for:OME-TIFFTIFFND2-- (pip install aicsimageio[nd2])DV-- (pip install aicsimageio[dv])CZI-- (pip install aicspylibczi>=3.1.1 fsspec>=2022.8.0)LIF-- (pip install readlif>=0.6.4)PNG,GIF,etc.-- (pip install aicsimageio[base-imageio])Files supported byBio-Formats-- (pip install aicsimageio bioformats_jar) (Note: requiresjavaandmaven, see below for details.)Supports writing metadata and imaging data for:OME-TIFFPNG,GIF,etc.-- (pip install aicsimageio[base-imageio])Supports reading and writing tofsspecsupported file systems
wherever possible:Local paths (i.e.my-file.png)HTTP URLs (i.e.https://my-domain.com/my-file.png)s3fs(i.e.s3://my-bucket/my-file.png)gcsfs(i.e.gcs://my-bucket/my-file.png)SeeCloud IO Supportfor more details.InstallationStable Release:pip install aicsimageioDevelopment Head:pip install git+https://github.com/AllenCellModeling/aicsimageio.gitAICSImageIO is supported on Windows, Mac, and Ubuntu.
For other platforms, you will likely need to build from source.Extra Format InstallationTIFF and OME-TIFF reading and writing is always available after
installingaicsimageio, but extra supported formats can be
optionally installed using[...]syntax.For a single additional supported format (e.g. ND2):pip install aicsimageio[nd2]For a single additional supported format (e.g. ND2), development head:pip install "aicsimageio[nd2] @ git+https://github.com/AllenCellModeling/aicsimageio.git"For a single additional supported format (e.g. ND2), specific tag (e.g.v4.0.0.dev6):pip install "aicsimageio[nd2] @ git+https://github.com/AllenCellModeling/[email protected]"For faster OME-TIFF reading with tile tags:pip install aicsimageio[bfio]For multiple additional supported formats:pip install aicsimageio[base-imageio,nd2]For all additional supported (and openly licensed) formats:pip install aicsimageio[all]Due to the GPL license, LIF support is not included with the[all]extra, and must be installed manually withpip install aicsimageio readlif>=0.6.4Due to the GPL license, CZI support is not included with the[all]extra, and must be installed manually withpip install aicsimageio aicspylibczi>=3.1.1 fsspec>=2022.8.0Due to the GPL license, Bio-Formats support is not included with the[all]extra, and must be installed manually withpip install aicsimageio bioformats_jar.Important!!Bio-Formats support also requires ajavaandmvnexecutable in the environment. The simplest method is to installbioformats_jarfrom conda:conda install -c conda-forge bioformats_jar(which will additionally bringopenjdkandmavenpackages).DocumentationFor full package documentation please visitallencellmodeling.github.io/aicsimageio.QuickstartFull Image ReadingIf your image fits in memory:fromaicsimageioimportAICSImage# Get an AICSImage objectimg=AICSImage("my_file.tiff")# selects the first scene foundimg.data# returns 5D TCZYX numpy arrayimg.xarray_data# returns 5D TCZYX xarray data array backed by numpyimg.dims# returns a Dimensions objectimg.dims.order# returns string "TCZYX"img.dims.X# returns size of X dimensionimg.shape# returns tuple of dimension sizes in TCZYX orderimg.get_image_data("CZYX",T=0)# returns 4D CZYX numpy array# Get the id of the current operating sceneimg.current_scene# Get a list valid scene idsimg.scenes# Change scene using nameimg.set_scene("Image:1")# Or by scene indeximg.set_scene(1)# Use the same operations on a different scene# ...Full Image Reading NotesThe.dataand.xarray_dataproperties will load the whole scene into memory.
The.get_image_datafunction will load the whole scene into memory and then retrieve
the specified chunk.Delayed Image ReadingIf your image doesn't fit in memory:fromaicsimageioimportAICSImage# Get an AICSImage objectimg=AICSImage("my_file.tiff")# selects the first scene foundimg.dask_data# returns 5D TCZYX dask arrayimg.xarray_dask_data# returns 5D TCZYX xarray data array backed by dask arrayimg.dims# returns a Dimensions objectimg.dims.order# returns string "TCZYX"img.dims.X# returns size of X dimensionimg.shape# returns tuple of dimension sizes in TCZYX order# Pull only a specific chunk in-memorylazy_t0=img.get_image_dask_data("CZYX",T=0)# returns out-of-memory 4D dask arrayt0=lazy_t0.compute()# returns in-memory 4D numpy array# Get the id of the current operating sceneimg.current_scene# Get a list valid scene idsimg.scenes# Change scene using nameimg.set_scene("Image:1")# Or by scene indeximg.set_scene(1)# Use the same operations on a different scene# ...Delayed Image Reading NotesThe.dask_dataand.xarray_dask_dataproperties and the.get_image_dask_datafunction will not load any piece of the imaging data into memory until you specifically
call.computeon the returned Dask array. In doing so, you will only then load the
selected chunk in-memory.Mosaic Image ReadingRead stitched data or single tiles as a dimension.Readers that support mosaic tile stitching:LifReaderCziReaderAICSImageIf the file format reader supports stitching mosaic tiles together, theAICSImageobject will default to stitching the tiles back together.img=AICSImage("very-large-mosaic.lif")img.dims.order# T, C, Z, big Y, big X, (S optional)img.dask_data# Dask chunks fall on tile boundaries, pull YX chunks out of the imageThis behavior can be manually turned off:img=AICSImage("very-large-mosaic.lif",reconstruct_mosaic=False)img.dims.order# M (tile index), T, C, Z, small Y, small X, (S optional)img.dask_data# Chunks use normal ZYXIf the reader does not support stitching tiles together the M tile index will be
available on theAICSImageobject:img=AICSImage("some-unsupported-mosaic-stitching-format.ext")img.dims.order# M (tile index), T, C, Z, small Y, small X, (S optional)img.dask_data# Chunks use normal ZYXReaderIf the file format reader detects mosaic tiles in the image, theReaderobject
will store the tiles as a dimension.If tile stitching is implemented, theReadercan also return the stitched image.reader=LifReader("ver-large-mosaic.lif")reader.dims.order# M, T, C, Z, tile size Y, tile size X, (S optional)reader.dask_data# normal operations, can use M dimension to select individual tilesreader.mosaic_dask_data# returns stitched mosaic - T, C, Z, big Y, big, X, (S optional)Single Tile Absolute PositioningThere are functions available on both theAICSImageandReaderobjects
to help with single tile positioning:img=AICSImage("very-large-mosaic.lif")img.mosaic_tile_dims# Returns a Dimensions object with just Y and X dim sizesimg.mosaic_tile_dims.Y# 512 (for example)# Get the tile start indices (top left corner of tile)y_start_index,x_start_index=img.get_mosaic_tile_position(12)Metadata ReadingfromaicsimageioimportAICSImage# Get an AICSImage objectimg=AICSImage("my_file.tiff")# selects the first scene foundimg.metadata# returns the metadata object for this file format (XML, JSON, etc.)img.channel_names# returns a list of string channel names found in the metadataimg.physical_pixel_sizes.Z# returns the Z dimension pixel size as found in the metadataimg.physical_pixel_sizes.Y# returns the Y dimension pixel size as found in the metadataimg.physical_pixel_sizes.X# returns the X dimension pixel size as found in the metadataXarray Coordinate Plane AttachmentIfaicsimageiofinds coordinate information for the spatial-temporal dimensions of
the image in metadata, you can usexarrayfor indexing by coordinates.fromaicsimageioimportAICSImage# Get an AICSImage objectimg=AICSImage("my_file.ome.tiff")# Get the first ten seconds (not frames)first_ten_seconds=img.xarray_data.loc[:10]# returns an xarray.DataArray# Get the first ten major units (usually micrometers, not indices) in Zfirst_ten_mm_in_z=img.xarray_data.loc[:,:,:10]# Get the first ten major units (usually micrometers, not indices) in Yfirst_ten_mm_in_y=img.xarray_data.loc[:,:,:,:10]# Get the first ten major units (usually micrometers, not indices) in Xfirst_ten_mm_in_x=img.xarray_data.loc[:,:,:,:,:10]Seexarray"Indexing and Selecting Data" Documentationfor more information.Cloud IO SupportFile-System Specification (fsspec)allows
for common object storage services (S3, GCS, etc.) to act like normal filesystems by
following the same base specification across them all. AICSImageIO utilizes this
standard specification to make it possible to read directly from remote resources when
the specification is installed.fromaicsimageioimportAICSImage# Get an AICSImage objectimg=AICSImage("http://my-website.com/my_file.tiff")img=AICSImage("s3://my-bucket/my_file.tiff")img=AICSImage("gcs://my-bucket/my_file.tiff")# Or read with specific filesystem creation argumentsimg=AICSImage("s3://my-bucket/my_file.tiff",fs_kwargs=dict(anon=True))img=AICSImage("gcs://my-bucket/my_file.tiff",fs_kwargs=dict(anon=True))# All other normal operations work just fineRemote reading requires that the file-system specification implementation for the
target backend is installed.Fors3:pip install s3fsForgs:pip install gcsfsSee thelist of known implementations.Saving to OME-TIFFThe simplest method to save your image as an OME-TIFF file with key pieces of
metadata is to use thesavefunction.fromaicsimageioimportAICSImageAICSImage("my_file.czi").save("my_file.ome.tiff")Note:By defaultaicsimageiowill generate only a portion of metadata to pass
along from the reader to the OME model. This function currently does not do a full
metadata translation.For finer grain customization of the metadata, scenes, or if you want to save an array
as an OME-TIFF, the writer class can also be used to customize as needed.importnumpyasnpfromaicsimageio.writersimportOmeTiffWriterimage=np.random.rand(10,3,1024,2048)OmeTiffWriter.save(image,"file.ome.tif",dim_order="ZCYX")SeeOmeTiffWriter documentationfor more details.Other WritersIn most cases,AICSImage.saveis a good default but there are other image
writers available. For more information, please refer toour writers documentation.BenchmarksAICSImageIO is benchmarked usingasv.
You can find the benchmark results for every commit tomainstarting at the 4.0
release on ourbenchmarks page.DevelopmentSee ourdeveloper resourcesfor information related to developing the code.CitationIf you findaicsimageiouseful, please cite this repository as:Eva Maxfield Brown, Dan Toloudis, Jamie Sherman, Madison Swain-Bowden, Talley Lambert, AICSImageIO Contributors (2021). AICSImageIO: Image Reading, Metadata Conversion, and Image Writing for Microscopy Images in Pure Python [Computer software]. GitHub.https://github.com/AllenCellModeling/aicsimageiobibtex:@misc{aicsimageio,author={Brown, Eva Maxfield and Toloudis, Dan and Sherman, Jamie and Swain-Bowden, Madison and Lambert, Talley and {AICSImageIO Contributors}},title={AICSImageIO: Image Reading, Metadata Conversion, and Image Writing for Microscopy Images in Pure Python},year={2021},publisher={GitHub},url={https://github.com/AllenCellModeling/aicsimageio}}Free software: BSD-3-Clause(The LIF component is licensed under GPLv3 and is not included in this package)(The Bio-Formats component is licensed under GPLv2 and is not included in this package)(The CZI component is licensed under GPLv3 and is not included in this package) |
aicsimageprocessing | aicsimageprocessingA generalized scientific image processing module from the Allen Institute for Cell Science.InstallationInstall Requires:Prior to installing this package, you must havenumpyinstalled.pipinstallnumpyStable Release:pip install aicsimageprocessingDevelopment Head:pip install git+https://github.com/AllenCellModeling/aicsimageprocessing.gitDocumentationFor full package documentation please visitAllenCellModeling.github.io/aicsimageprocessing.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.Free software: Allen Institute Software License |
aicsmlsegment | ## OverviewThe Allen Cell Structure Segmenter is a Python-based open source toolkit developed for 3D segmentation of intracellular structures in fluorescence microscope images, developed at the Allen Institute for Cell Science. This toolkit consists of two complementary elements, a classic image segmentation workflow with a restricted set of algorithms and parameters and an iterative deep learning segmentation workflow. We created a collection of 20 classic image segmentation workflows based on 20 distinct and representative intracellular structure localization patterns as a lookup table reference and starting point for users. The iterative deep learning workflow can take over when the classic segmentation workflow is insufficient. Two straightforward human-in-the-loop curation strategies convert a set of classic image segmentation workflow results into a set of 3D ground truth images for iterative model training without the need for manual painting in 3D. The Allen Cell Structure Segmenter thus leverages state of the art computer vision algorithms in an accessible way to facilitate their application by the experimental biology researcher. More details including algorithms, validations, examples, and video tutorials can be found at [allencell.org/segmenter](allencell.org/segmenter) or in our [bioRxiv paper](https://www.biorxiv.org/content/10.1101/491035v1).Note: This repository has only the code for the “Iterative Deep Learning Workflow”. The classic part can be found at [https://github.com/AllenInstitute/aics-segmentation](https://github.com/AllenInstitute/aics-segmentation)## Installation:prerequisite:To perform training/prediction of the deep learning models in this package, we assume an [NVIDIA GPU](https://www.nvidia.com/en-us/deep-learning-ai/developer/) has been set up properly on a Linux operating system, either on a local machine or on a remote computation cluster. Make sure to check if your GPU supports at least CUDA 8.0 (CUDA 9.0 and up is preferred): [NVIDIA Driver check](https://www.nvidia.com/Download/index.aspx?lang=en-us).The GPUs we used to develop and test our package are two types: (1) GeForce GTX 1080 Ti GPU (about 11GB GPU memory), (2) Titan Xp GPU (about 12GB GPU memory), (3) Tesla V100 for PCIe (with about 33GB memory). These cover common chips for personal workstations and data centers.Note 1:As remote GPU clusters could be set up differently from institute to institute, we will assume a local machine use case through out the installation and demos.Note 2:We are investigating alternative cloud computing service to deploy our package and will have updates in the next few months. Stay tuned :)create a conda environment:`bash conda create--namemlsegmenter python=3.6 `(For how to install conda, see [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html#installing-conda-on-a-system-that-has-other-python-installations-or-packages))activate your environment and do the installation within the environment:`bash conda activate mlsegmenter `(Note: always check out [conda documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#activating-an-environment) for updates. 
If you are using an older version of conda, you may need to activate the environment bysource activate mlsegmenter.)Install PytorchGo to [PyTorch website](https://pytorch.org/get-started/locally/), and find the right installation command for you.we use version 1.0 (which is the stable version at the time of our development)we use Linux (OS), Conda (package), python 3.6 (Language), CUDA=10.0 (Question about CUDA? see [setup CUDA](./docs/check_cuda.md)).*Make sure you use either the automatically generated command on PyTorch website, or the command recommended on PyTorch website for installing [older version](https://pytorch.org/get-started/previous-versions/)*Install Allen Cell Segmenter (deep learning part)`bash git clonehttps://github.com/AllenInstitute/aics-ml-segmentation.gitcd./aics-ml-segmentationpip install-e. `The-eflag when doingpip installwill allow users to modify any the source code without the need of re-installing the package afterward. You may do the installation without-e, if you don’t want any change on the code.## Level of Support
We are offering it to the community AS IS; we have used the toolkit within our organization. We are not able to provide guarantees of support. However, we welcome feedback and submission of issues. Users are encouraged to sign up on our [Allen Cell Discussion Forum](https://forum.allencell.org/) for community questions and comments.# Link to [Documentation and Tutorials](./docs/overview.md) |
aics-pipeline-uploaders | aics_pipeline_uploadersThis package contains resources for uploading pipeline data to FMSaics_pipeline_uploaders==1.2.0FeaturesStore values and retain the prior value in memory... some other functionalityInstallationStable Release:pip install aics_pipeline_uploadersDevelopment Head:pip install git+https://github.com/BrianWhitneyAI/aics_pipeline_uploaders.gitDocumentationFor full package documentation please visitBrianWhitneyAI.github.io/aics_pipeline_uploaders.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.The Four Commands You Need To Knowmake installThis will setup a virtual environment local to this project and install all of the
project's dependencies into it. The virtual env will be located incamera-alignment-core/venv.make test,make fmt,make lint,make type-check,make import-sortQuality assurancepip install -e .[dev]This will install your package in editable mode with all the required development
dependencies.make cleanThis will clean up various Python and build generated files so that you can ensure
that you are working in a clean workspace.Suggested Git Branch Strategymainis for the most up-to-date development, very rarely should you directly
commit to this branch. GitHub Actions will run on every push and on a CRON to this
branch but still recommended to commit to your development branches and make pull
requests to main. If you push a tagged commit with bumpversion, this will also release to PyPI.Your day-to-day work should exist on branches separate frommain. Even if it is
just yourself working on the repository, make a PR from your working branch tomainso that you can ensure your commits don't break the development head. GitHub Actions
will run on every push to any branch or any pull request from any branch to any other
branch.It is recommended to use "Squash and Merge" commits when committing PR's. It makes
each set of changes tomainatomic and as a side effect naturally encourages small
well defined PR's. |
aicspylibczi | aicspylibcziPython module to exposelibCZIfunctionality for reading (subset of) Zeiss
CZI files and meta-data. We only support 64bit architectures currently if you desperately need 32 bit support please make an issue or modify the source and build it for your use case.UsageThe first example show how to work with a standard CZI file (Single or Multi-Scene). The second example shows how to work with a Mosaic CZI file.Example 1: Read in a czi and select a portion of the image to displayimportnumpyasnpfromaicspylibcziimportCziFilefrompathlibimportPathimportmatplotlib.pyplotaspltpth=Path('20190610_S02-02.czi')czi=CziFile(pth)# Get the shape of the data, the coordinate pairs are (start index, size)dimensions=czi.get_dims_shape()# [{'X': (0, 1900), 'Y': (0, 1300), 'Z': (0, 60), 'C': (0, 4), 'S': (0, 40), 'B': (0, 1)}]czi.dims# BSCZYXczi.size# (1, 40, 4, 60, 1300, 1900)# Load the image slice I want from the fileimg,shp=czi.read_image(S=13,Z=16)# shp = [('B', 1), ('S', 1), ('C', 4), ('Z', 1), ('Y', 1300), ('X', 1900)] # List[(Dimension, size), ...]# img.shape = (1, 1, 4, 1, 1300, 1900) # numpy.ndarray# define helper functionsdefnorm_by(x,min_,max_):norms=np.percentile(x,[min_,max_])i2=np.clip((x-norms[0])/(norms[1]-norms[0]),0,1)returni2defrecolor(im):# transform from rgb to cyan-magenta-yellowim_shape=np.array(im.shape)color_transform=np.array([[1,1,0],[0,1,1],[1,0,1]]).Tim_reshape=im.reshape([np.prod(im_shape[0:2]),im_shape[2]]).Tim_recolored=np.matmul(color_transform.T,im_reshape).Tim_shape[2]=3im=im_recolored.reshape(im_shape)returnim# normalize, combine into RGB and transform to CMYc1=(norm_by(img[0,0,0,0,0:750,250:1000],50,99.8)*255).astype(np.uint8)c2=(norm_by(img[0,0,1,0,0:750,250:1000],50,99.8)*255).astype(np.uint8)c3=(norm_by(img[0,0,2,0,0:750,250:1000],0,100)*255).astype(np.uint8)rgb=np.stack((c1,c2,c3),axis=2)cmy=np.clip(recolor(rgb),0,255)# plot using matplotlib¶plt.figure(figsize=(10,10))plt.imshow(cmy)plt.axis('off')Example 2: Read in a mosaic fileimportnumpyasnpimportaicspylibcziimportpathlibfromPILimportImagemosaic_file=pathlib.Path('mosaic_test.czi')czi=aicspylibczi.CziFile(mosaic_file)# Get the shape of the datadimensions=czi.dims# 'STCZMYX'czi.size# (1, 1, 1, 1, 2, 624, 924)czi.get_dims_shape()# [{'X': (0, 924), 'Y': (0, 624), 'Z': (0, 1), 'C': (0, 1), 'T': (0, 1), 'M': (0, 2), 'S': (0, 1)}]czi.is_mosaic()# True# Mosaic files ignore the S dimension and use an internal mIndex to reconstruct, the scale factor allows one to generate a manageable imagemosaic_data=czi.read_mosaic(C=0,scale_factor=1)mosaic_data.shape# (1, 1, 624, 1756)# the C channel has been specified S & M are used internally for position so this is (T, Z, Y, X)normed_mosaic_data=norm_by(mosaic_data[0,0,:,:],5,98)*255img=Image.fromarray(normed_mosaic_data.astype(np.uint8))InstallationThe preferred installation method is withpip install.
This will install the aicspylibczi python module and extension binaries (hosted on PyPI):pip install aicspylibcziIf this doesn't work:Please investigate the following (generally windows issues):your OS is 64 bit - we only support 64 bit binariesyour python is a 64 bit application (not 32 bit)are your C++ runtime libraries up to date?vc_redist.exeIf you have tried this and are still having trouble please reach out to us and we will endeavor to help.DocumentationDocumentation is available atgithub.io.BuildUse these steps to build and install aicspylibczi locally:Clone the repository including submodules (--recurse-submodules).Requirements:libCZI requires a c++11 compatible compiler. Built & Tested with clang.Development requirements are those required for libCZI:libpng,zlibInstall the package:pip install .
pip install -e .[dev] # for development (-e means editable so changes take effect when made)
pip install .[all] # for everything including jupyter notebook to work with the Example_Usage abovelibCZI is automatically built as a submodule and linked statically into aicspylibczi.Note: If you get the message directly below on windows you need to set PYTHONHOME to be the folder the python.exe you are compiling against lives in.EXEC : Fatal Python error : initfsencoding: unable to load the file system codec ...
ModuleNotFoundError: No module named 'encodings'Known IssuesWith read_mosaic, if the scale_factor is not 1.0, Zeiss's libCZI will, on some files, fail to render certain subblocks
within the composite mosaic image. It is not currently known if this is an issue with the file or with libCZI.Historyaicspylibczi was originally a fork ofpylibczithat was developed by
Paul Watkins and focused on mSEM data. In attempting to extend the work, we transitioned
to pybind11, implemented c++ and python tests, added continuous integration via github actions,
and added the functionality to read individual subblocks and stacks of subblocks as a numpy.ndarray.
Metadata reading, including specific subblock metadata reading has also been added.We intend for this work to be merged back into the original project once we have the new work integrated with
the original work.Licenses & AcknowledgementsThis project was created from a fork of pylibczi as explained above in the history section and Paul Watkins
is a developer on our repo as well. Pylibczi, from
theCenter of Advanced European Studies And Researchand the core dependency libCZI, are covered by the GPLv3 license.TheGPLv3 licenseis a consequence of libCZI which imposes GPLv3. If
you wish to use libCZI or this derivative in a commercial product you may need to talk to
Zeiss and CAESAR.A discussion about GPLv3. |
aicssegmentation | aicssegmentationPart 1 of Allen Cell and Structure SegmenterThis repository only has the code for the "Classic Image Segmentation Workflow" of Segmenter. The deep learning part can be found athttps://github.com/AllenCell/aics-ml-segmentationWe welcome feedback and submission of issues. Users are encouraged to sign up on ourAllen Cell Discussion Forumfor quesitons and comments.InstallationOur package is implemented in Python 3.7. Detailed instructions as below:Installation on Linux(Ubuntu 16.04.5 LTS is the OS we used for development)Installation on MacOSInstallation on WindowsUse the packageOur package is designed (1) to provide a simple tool for cell biologists to quickly obtain intracellular structure segmentation with reasonable accuracy and robustness over a large set of images, and (2) to facilitate advanced development and implementation of more sophisticated algorithms in a unified environment by more experienced programmers.Visualization is a key component in algorithm development and validation of results (qualitatively). Right now, our toolkit utilizesitk-jupyter-widgets, which is a very powerful visualization tool, primarily for medical data, which can be used in-line in Jupyter notebooks. Some cool demo videos can be foundhere.Part 1: Quick StartAfter following the installation instructions above, users will find that the classic image segmentation workflow in the toolkit is:formulated as a simple 3-step workflow for solving 3D intracellular structure segmentation problem using restricted number of selectable algorithms and tunable parametersaccompanied by a"lookup table"with 20+ representative structure localization patterns and their results as a reference, as well as the Jupyter notebook for these workflows as a starting point.Typically, we use Jupyter notebook as a "playground" to explore different algorithms and adjust the parameters. After determining the algorithms and parameters, we use Python scritps to do batch processing/validation on a large number of data.You can find aDEMO on a real exampleon our tutorial pagePart 2: APIThe list of high-level wrappers/functions used in the package can be found atAllenCell.github.io/aics-segmentation.Object Identification: Bridging the gap between binary image (segmentation) and analysisThe current version of the Allen Cell Segmenter is primarily focusing on converting fluorescent images into binary images, i.e., the mask of the target structures separated from the background (a.k.a segmentation). But, the binary images themselves are not always useful, with perhaps the exception of visualization of the entire image, until they are converted into statistically sound numbers that are then used for downstream analysis. Often the desired numbers do not refer to all masked voxels in an entire image but instead to specific "objects" or groups of objects within the image. In our python package, we provide functions to bridge the gap between binary segmentation and downstream analysis viaobject identification.What is object identification?See a real demo in jupyter notebook to learn how to use the object identification functionsCiting SegmenterIf you find our segmenter useful in your research, please cite our bioRxiv paper:J. Chen, L. Ding, M.P. Viana, M.C. Hendershott, R. Yang, I.A. Mueller, S.M. Rafelski. The Allen Cell Structure Segmenter: a new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images. bioRxiv. 
2018 Jan 1:491035. Development: See CONTRIBUTING.md for information related to developing the code. Free software: Allen Institute Software License |
aicsshparam | AICS Spherical Harmonics ParametrizationSpherical harmonics parametrization for 3D starlike shapes.Installation:Stable Release:pip install aicsshparamBuild from source to make customization:git clone [email protected]:AllenCell/aics-shparam.gitcd aics-shparampip install -e .How to useHere we outline an example of how one could use spherical harmonics coefficients as shape descriptors on a synthetic dataset composed by 3 different shapes: spheres, cubes and octahedrons.# Import required packagesimportnumpyasnpimportpandasaspdimportmatplotlib.pyplotaspltfromsklearn.decompositionimportPCAfromaicsshparamimportshtools,shparamfromskimage.morphologyimportball,cube,octahedronnp.random.seed(42)# for reproducibility# Function that returns binary images containing one of the three# shapes: cubes, spheres octahedrons of random sizes.defget_random_3d_shape():idx=np.random.choice([0,1,2],1)[0]element=[ball,cube,octahedron][idx]label=['ball','cube','octahedron'][idx]img=element(10+int(10*np.random.rand()))img=np.pad(img,((1,1),(1,1),(1,1)))img=img.reshape(1,*img.shape)# Rotate shapes to increase dataset variability.img=shtools.rotate_image_2d(image=img,angle=360*np.random.rand()).squeeze()returnlabel,img# Compute spherical harmonics coefficients of shape and store them# in a pandas dataframe.df_coeffs=pd.DataFrame([])foriinrange(30):# Get a random shapelabel,img=get_random_3d_shape()# Parameterize with L=4, which corresponds to50 coefficients# in total(coeffs,_),_=shparam.get_shcoeffs(image=img,lmax=4)coeffs.update({'label':label})df_coeffs=df_coeffs.append(coeffs,ignore_index=True)# Vizualize the resulting dataframewithpd.option_context('display.max_rows',5,'display.max_columns',5):display(df_coeffs)# Let's use PCA to reduce the dimensionality of the coefficients# dataframe from 51 down to 2.pca=PCA(n_components=2)trans=pca.fit_transform(df_coeffs.drop(columns=['label']))df_trans=pd.DataFrame(trans)df_trans.columns=['PC1','PC2']df_trans['label']=df_coeffs.label# Vizualize the resulting dataframewithpd.option_context('display.max_rows',5,'display.max_columns',5):display(df_trans)# Scatter plot to show how similar shapes are grouped together.fig,ax=plt.subplots(1,1,figsize=(3,3))forlabel,df_labelindf_trans.groupby('label'):ax.scatter(df_label.PC1,df_label.PC2,label=label,s=50)plt.legend(loc='upper left',bbox_to_anchor=(1.05,1))plt.xlabel('PC1')plt.ylabel('PC2')plt.show()ReferenceFor an example of how this package was used to analyse a dataset of over 200k single-cell images at the Allen Institute for Cell Science, please check out our paper inbioaRxiv.DevelopmentSeeCONTRIBUTING.mdfor information related to developing the code.Questions?If you have any questions, feel free to leave a comment in our Allen Cell forum:https://forum.allencell.org/.Free software: Allen Institute Software License |
aics-tf-registration | aics_tf_registrationRigid registration algorithm for generating training/testing data for transfer function modelFeaturesRigid registration of.tiffconfocal/fluorescent microscopy images, outputting cropped images containing their mutual field of viewSupports the following registration scenariosImages at the same or different resolutions and pixel dimensions/scalesFull multichannel images based on a reference channelSpecific channels in multichannel images based on seperate reference channelMultiple pairs of images with the same registration scenario at onceConfiguration of registration settings through easy-to-read.yamlfileOutputs composite of registered image for easy evaluation of resultsQuick StartIn console (after installation):run_alignment --config_path `path/to/config/file.yaml`InstallationStable Release:pip install aics_tf_registrationDevelopment Head:pip install git+https://github.com/AllenCell/aics_tf_registration.gitDocumentationFor full package documentation please visitAllenCell.github.io/aics_tf_registration.Image Requirements for RegistrationIn order for the registration algorithm to produce accurate results, the images must have the following requirements:Images must be in.tifor.tiffformat.The source and target images must be in separate folders and images that are to be registered to each other must share the same filename.Images must be 3D or 4DThe field of view of either the source or target image must be wholly contained within the fov of the other (or cropped to be so with the settings in the config file)Rotation and mirroring of images (if necessary) to have matching orientations must either be done prior to registration or within the settings of the config file if it is consistent between different image pairsThe resolution/voxel dimensions of the images, or at least the relative scaling differences between the source and target image, should be known to within approximately 3-4 decimal unitsFree software: Allen Institute Software License |
aictl | Tool to create AI jobs |
aictoolbox | UNKNOWN |
aict-tools | aict-tools: Executables to perform machine learning tasks on FACT and CTA eventlist data.
Possibly also able to handle input of other experiments if in the same file format. All you ever wanted to do with your IACT data in one package. This project is mainly targeted at using machine learning for the following tasks: Energy Regression, Gamma/Hadron Separation, Reconstruction of origin (Mono for now). Citing: If you use the aict-tools, please cite us like this using the doi provided by
zenodo, e.g. like this if using bibtex files:
@misc{aict-tools,
  author = {Nöthe, Maximilian and Brügge, Kai Arno and Buß, Jens Björn},
  title = {aict-tools},
  subtitle = {Reproducible Artificial Intelligence for Cherenkov Telescopes},
  doi = {10.5281/zenodo.3338081},
  url = {https://github.com/fact-project/aict-tools},
}
Installation: You can then install the aict-tools by: pip install aict-tools. By default, this does not install optional dependencies for writing out
models in onnx or pmml format.
If you want to serialize models to these formats, install this using:
$ pip install aict-tools[pmml]  # for pmml support
$ pip install aict-tools[onnx]  # for onnx support
In the case of working with CTA data, you will also need to have ctapipe installed.
If this is not already the case, you can install it using:
$ pip install aict-tools[cta]  # for DISP use on CTA data
To install all optional dependencies, use:
$ pip install aict-tools[all]  # for all
Alternatively you can clone the repo, cd into the folder and do the usual pip install . dance. Usage: For each task, there are two executables, installed to your PATH.
Each takes yaml configuration files and h5py-style hdf5 files as input.
The models are saved as pickle using joblib and/or pmml using sklearn2pmml. aict_train_<...>: This script is used to train a model on events with known truth
values for the target variable, usually Monte Carlo simulations. aict_apply_<...>: This script takes a model previously trained with aict_train_<...> and applies it to data, either a test data set or data with unknown truth values for the target variable. The apply scripts can iterate through the data files in chunks using
the --chunksize=<N> option; this can be handy for very large files (> 1 million events). Energy Regression: Energy regression for gamma-rays requires a yaml configuration file
and simulated gamma-rays in the event list format. The two scripts to perform energy regression are called aict_train_energy_regressor and aict_apply_energy_regressor. An example configuration can be found in examples/config_energy.yaml. To apply a model, use aict_apply_energy_regressor. Separation: Binary classification or separation requires a yaml configuration file,
one data file for the signal class and one data file for the background class. The two scripts to perform separation are called aict_train_separation_model and aict_apply_separation_model. An example configuration can be found in examples/config_separator.yaml. Reconstruction of gamma-ray origin using the disp method: To estimate the origin of the gamma-rays in camera coordinates, the disp method can be used. Here it is implemented as a two-step regression/classification task.
One regression model is trained to estimate abs(disp) and a
classification model is trained to estimate sgn(disp). Training requires simulated diffuse gamma-ray events. The two scripts are aict_train_disp_regressor and aict_apply_disp_regressor. An example configuration can be found in examples/config_source.yaml.
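A training and application run might then look roughly like the following shell sketch (the positional-argument layout of config, input data and model files is an assumption for illustration, not taken from the project's documentation; check each command's --help for the actual interface):
$ aict_train_disp_regressor examples/config_source.yaml gamma_diffuse.hdf5 cv_predictions.hdf5 disp_model.pkl sign_model.pkl
$ aict_apply_disp_regressor examples/config_source.yaml gamma_test.hdf5 disp_model.pkl sign_model.pkl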
Currently supported experiments: FACT, CTA. Note: By applying the disp regressor, Theta will be deleted from the feature set. Theta has to be calculated from the source prediction, e.g. by using fact_calculate_theta from pyfact. Utility scripts: Applying straight cuts: For data selection, e.g. to get rid of poorly reconstructable events,
it is customary to apply so-called pre- or quality cuts before applying machine learning models. This can be done with aict_apply_cuts and a yaml configuration file of the cuts to apply. See examples/quality_cuts.yaml for an example configuration file. Split data into training/test sets: Using aict_split_data, a dataset can be randomly split into sets,
e.g. to split a Monte Carlo simulation dataset into train and test sets. |
aicurious | Hi! My name is Viet Anh. I'm a software developer from Vietnam.This is my personal utility library for software development. It contains
a lot of useful stuff that I use in my projects. It is not intended to be
a general purpose library, but it may be useful for you too.My blog:https://aicurious.io.My LinkedIn:https://www.linkedin.com/in/vietanhdev/. |
ai-cv-utils | AI-CV-UTILSThe best tool for generating test dataBecome a test data wizard, generating data quickly and easily.FeaturesSeparate images into different folders according to percentagesDelete background in imagesGenerate scenarios with test data✨ Magic ✨InstallationUsing pippip install ai-cv-utilsUsageSeparate images into different folders according to percentagesaicv split <source of files> -t <train_folder> -v <validation_folder> -x <test_folder> -T <train_percent> -V <validation_percent> -X <test_percent>Delete background in imagesaicv rmbg -s <source of files> -t <target>Generate scenarios with test dataaicv yologen -b <source of background> -s <samples_folder> -o <outout_folder> -f <format_of_images> -z <percent_samples_size> -d <sample_degree_to_rotate> -q <qty_stages>Licenceai-cv-utils is provided under a AGPL3+ license that can be found in theLICENSEfile. By using, distributing, or contributing to this project, you agree to the terms and conditions of this license. |
aid | UNKNOWN |
aida | Failed to fetch description. HTTP Status Code: 404 |
aida-interchange | AIF Python APITo use theaida_interchangepackage within your project, you must first install it. It is recommended that you installaida_interchangeinto a python3 virtual environment. SeePython Virtual Environmentfor more details on creating and using a virtual environment.InstallTo installaida_interchange, make sure aPython Virtual Environmentis activated and run the following command:$pipinstallaida-interchangeTheaida_interchangemodules can now be imported into your project.API DocumentationThe python project usesSphinxfor generating documentation. To generate the documentation, make sure aPython Virtual Environmentis activated, navigate to theAIDA-Interchange-Format/python/docsdirectory, and run theupdate_documentation.shscript.$cddocs
$./update_documentation.shThis script will generate documentation in the form of HTML and place it within theAIDA-Interchange-Format/python/docs/build/htmlfolder.Python Virtual EnvironmentIt is recommended that Python development be done in an isolated environment called a virtual environment. There are multiple ways to set up and use Python virtual environments. This README describes one of those ways.The basic steps are:Install virtualenv (done once)Create a virtual environment (done once per development effort)Repeat as needed:Activate a virtual environmentInstall libraries and develop codeDeactivate a virtual environmentFollow the instructions below to set up your virtual environment. It is important to note that you should never install any project specific python dependencies outside of your virtual environment. Also, ensure that your virtual environment has been activated before running python scripts within this repository.Install virtualenvIf you haven't installedvirtualenvyet, follow these steps. This only needs to be once.$cd~
$mkdir.virtualenvs
$pipinstallvirtualenvVerify virtualenv is installed$whichvirtualenvCreate virtual environmentWhen you are starting development of a python project, you first need to create a virtual environment. The name of the virtual environment in the example below,aida-interchange-format, assumes you are making changes or testing the AIF library. Feel free to use a name specific to your application if you are just using the AIF library.To create the virtual environment and install the latest AIF, run the following:$virtualenv-ppython3~/.virtualenvs/aida-interchange-format
$source~/.virtualenvs/aida-interchange-format/bin/activate
$pipinstallaida-interchangeYour virtual environment is now activated. The following sections describe deactivating and re-activating the virtual environment.Deactivate virtual environmentTo deactivate your current virtual environment, run the following command.$deactivateActivate virtual environmentTo re-activate your virtual environment, run the following command.$source~/.virtualenvs/aida-interchange-format/bin/activate |
aidalib | Aida LibAida is a language agnostic library for text generation.UsageA simple hello world script would look like this:fromaidaimportrender,Empty,Var# create a variable to hold a namename_var=Var('name')# create a simple phrasenode=(Empty+'hello,'|name_var).to_phrase()# assign a value to the variablename_var.assign('World')# render the nodeprint(render(node))# 'Hello, World.'InstallDownload and install with pip:pipinstallaidalibCore ConceptsWhen using Aida, first you compose a tree of operations on your text that include conditions via branches and other control flow. Later, you fill the tree with data and render the text.A building block is the variable class:Var. Use it to represent a value that you want to control later. A variable can hold numbers (e.g.float,int) or strings.You can create branches and complex logic withBranch. In the example below, ifxis greater than 1, it will rendermany, otherwisesingle.x=Var('x')Branch(x>1,'many','single')ContextThe context, represented by the classCtx, is useful to create rules that depends on what has been written before. Each object or literal that is passed to Aida is remembered by the context.name=Const('Bob')alt_name=Const('He')bob=Branch(~name.in_ctx(),name,alt_name)ctx=Ctx()render(bob|'is a cool guy.'|bob|'doesn\'t mind.',ctx)# Bob is a cool guy. He doesn't mind.Creating a reference expression is a common use-case, so we have a helper function calledcreate_ref.bob=create_ref('Bob','He')OperatorsYou can compose operations on your text with some handy operators.Concatenation (+and|)'the'|'quick'|'brown'|'fox'# 'the quick brown fox''the'+'quick'+'brown'+'fox'# 'thequickbrownfox'Check context (in_ctx)Check if the current node is in the context.Const('something').in_ctx()Create a sentence (sentence)Formats a line into a sentence, capitalizing the first word and adding a period.Const('hello world').sentence()# 'Hello world.'Logical and numeric operatorsoperatorexamplenegation!xgreater thanx > ygreater or equal thanx >= yless thanx < yless or equal thanx <= yequalx == ynot equalx != yor`xandx & yplusx + yRandom choiceRandomly draws one node from a list of possibilities.Choice('Alice','Bob','Chris')# either 'Alice', 'Bob', or 'Chris'InjectorTheInjectorclass assigns values to variables from a list each time it is rendered. Very useful to automatically fill values based on data.animal=Var('animal')sound=Var('sound')node=animal|'makes'|soundnode=Injector([animal,sound],node)# assign multiple valuesnode.assign([{'animal':'cat','sound':'meaw'},{'animal':'dog','sound':'roof'},])render(node)# 'cat makes meaw'render(node)# 'dog makes roof'For-loops withRepeatUseRepeatto render a node multiple times. 
At the simplest level, you have this:render(Repeat('buffalo').assign(8))# 'buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo'Repeatis very useful when used withInjector, like this:animal=Var('animal')sound=Var('sound')node=animal|'makes'|soundnode=Injector([animal,sound],node)repeat=Repeat(node)# assign multiple valuesdata=[{'animal':'cat','sound':'meaw'},{'animal':'dog','sound':'roof'},]node.assign(data)repeat.assign(len(data))# renders text based on datarender(node)# cat makes meaw dog makes roofLanguage ConceptsThere are some experimental features that allows you to create text that adapts to common language features, like grammaticalnumberandperson.Enumerate itemsUseLangConfigto setup language features and then callcreate_enumeration().fromaidaimportcreate_enumeration,LangConfig,Lang,renderrender(create_enumeration(LangConfig(lang=Lang.ENGLISH),'Alice','Bob','Chris'))# 'Alice, Bob, and Chris'render(create_enumeration(LangConfig(lang=Lang.PORTUGUESE),'Alice','Bob','Chris'))# 'Alice, Bob e Chris'Sentence StructureYou can compose sentences using special structures:NP(noun phrase) andVP(verb phrase) along withLangConfig.fromaidaimportNP,VP,LangConfigsubj=NP('the dog')verb=VP('barked')s=(subj|verb).sentence()render(LangConfig(s))# The dog barked.What really makes this different from just usingConstis that we can create rules that change the output ofNPandVPbased on various language features. The system will try to use the rule that matches most features from the givenLangConfig.fromaidaimportNP,VP,LangConfig,GNumber,GPersonsubj=(NP('I').add_mapping('I',GNumber.SINGULAR,GPerson.FIRST).add_mapping('he',GNumber.SINGULAR,GPerson.THIRD)).add_mapping('we',GNumber.PLURAL,GPerson.FIRST))verb=(VP('drive').add_mapping('drive',GPerson.FIRST).add_mapping('drives',GPerson.THIRD))s=(subj|verb|'a nice car').sentence()render(LangConfig(s,number=GNumber.SINGULAR,person=GPerson.FIRST))# I drive a nice car.render(LangConfig(s,number=GNumber.SINGULAR,person=GPerson.THIRD))# He drives a nice car.render(LangConfig(s,number=GNumber.PLURAL,person=GPerson.FIRST))# We drive a nice car. |
aida-lib | Aida LibAida is a language agnostic library for text generation.UsageA simple hello world script would look like this:fromaidaimportrender,Empty,Var# create a variable to hold a namename_var=Var('name')# create a simple phrasenode=(Empty+'hello,'|name_var).to_phrase()# assign a value to the variablename_var.assign('World')# render the nodeprint(render(node))# 'Hello, World.'InstallDownload and install with pip:pipinstallaidalibCore ConceptsWhen using Aida, first you compose a tree of operations on your text that include conditions via branches and other control flow. Later, you fill the tree with data and render the text.A building block is the variable class:Var. Use it to represent a value that you want to control later. A variable can hold numbers (e.g.float,int) or strings.You can create branches and complex logic withBranch. In the example below, ifxis greater than 1, it will rendermany, otherwisesingle.x=Var('x')Branch(x>1,'many','single')ContextThe context, represented by the classCtx, is useful to create rules that depends on what has been written before. Each object or literal that is passed to Aida is remembered by the context.name=Const('Bob')alt_name=Const('He')bob=Branch(~name.in_ctx(),name,alt_name)ctx=Ctx()render(bob|'is a cool guy.'|bob|'doesn\'t mind.',ctx)# Bob is a cool guy. He doesn't mind.Creating a reference expression is a common use-case, so we have a helper function calledcreate_ref.bob=create_ref('Bob','He')OperatorsYou can compose operations on your text with some handy operators.Concatenation (+and|)'the'|'quick'|'brown'|'fox'# 'the quick brown fox''the'+'quick'+'brown'+'fox'# 'thequickbrownfox'Check context (in_ctx)Check if the current node is in the context.Const('something').in_ctx()Create a sentence (sentence)Formats a line into a sentence, capitalizing the first word and adding a period.Const('hello world').sentence()# 'Hello world.'Logical and numeric operatorsoperatorexamplenegation!xgreater thanx > ygreater or equal thanx >= yless thanx < yless or equal thanx <= yequalx == ynot equalx != yor`xandx & yplusx + yRandom choiceRandomly draws one node from a list of possibilities.Choice('Alice','Bob','Chris')# either 'Alice', 'Bob', or 'Chris'InjectorTheInjectorclass assigns values to variables from a list each time it is rendered. Very useful to automatically fill values based on data.animal=Var('animal')sound=Var('sound')node=animal|'makes'|soundnode=Injector([animal,sound],node)# assign multiple valuesnode.assign([{'animal':'cat','sound':'meaw'},{'animal':'dog','sound':'roof'},])render(node)# 'cat makes meaw'render(node)# 'dog makes roof'For-loops withRepeatUseRepeatto render a node multiple times. 
At the simplest level, you have this:render(Repeat('buffalo').assign(8))# 'buffalo buffalo buffalo buffalo buffalo buffalo buffalo buffalo'Repeatis very useful when used withInjector, like this:animal=Var('animal')sound=Var('sound')node=animal|'makes'|soundnode=Injector([animal,sound],node)repeat=Repeat(node)# assign multiple valuesdata=[{'animal':'cat','sound':'meaw'},{'animal':'dog','sound':'roof'},]node.assign(data)repeat.assign(len(data))# renders text based on datarender(node)# cat makes meaw dog makes roofLanguage ConceptsThere are some experimental features that allows you to create text that adapts to common language features, like grammaticalnumberandperson.Enumerate itemsUseLangConfigto setup language features and then callcreate_enumeration().fromaidaimportcreate_enumeration,LangConfig,Lang,renderrender(create_enumeration(LangConfig(lang=Lang.ENGLISH),'Alice','Bob','Chris'))# 'Alice, Bob, and Chris'render(create_enumeration(LangConfig(lang=Lang.PORTUGUESE),'Alice','Bob','Chris'))# 'Alice, Bob e Chris'Sentence StructureYou can compose sentences using special structures:NP(noun phrase) andVP(verb phrase) along withLangConfig.fromaidaimportNP,VP,LangConfigsubj=NP('the dog')verb=VP('barked')s=(subj|verb).sentence()render(LangConfig(s))# The dog barked.What really makes this different from just usingConstis that we can create rules that change the output ofNPandVPbased on various language features. The system will try to use the rule that matches most features from the givenLangConfig.fromaidaimportNP,VP,LangConfig,GNumber,GPersonsubj=(NP('I').add_mapping('I',GNumber.SINGULAR,GPerson.FIRST).add_mapping('he',GNumber.SINGULAR,GPerson.THIRD)).add_mapping('we',GNumber.PLURAL,GPerson.FIRST))verb=(VP('drive').add_mapping('drive',GPerson.FIRST).add_mapping('drives',GPerson.THIRD))s=(subj|verb|'a nice car').sentence()render(LangConfig(s,number=GNumber.SINGULAR,person=GPerson.FIRST))# I drive a nice car.render(LangConfig(s,number=GNumber.SINGULAR,person=GPerson.THIRD))# He drives a nice car.render(LangConfig(s,number=GNumber.PLURAL,person=GPerson.FIRST))# We drive a nice car.ExamplesCheck more complex uses at theexamples folder.LicenseLicensed under theMIT License. |
aidan | This is a living package that I’ll update with information about me, helper functions, and other fun stuff.At the moment this is my “hello world” of a pypi package so my apologies for how barren it is. |
aidanpdf | This is the homepage of our project. |
aidans-common-functions | No description available on PyPI. |
aidapy | The Python packageaidapycentralizes and simplifies access to:Spacecraft data from heliospheric missionsSpace physics simulationsAdvanced statistical toolsMachine Learning, Deep Learning algorithms, and applicationsTheaidapypackage is part of the project AIDA (Artificial Intelligence Data Analysis) in Heliophysics funded from
the European Union's Horizon 2020 research and innovation programme under grant agreement No 776262.
It is distributed under the open-source MIT license.Full documentation can be foundhereInstallationThe package aidapy has been tested only for Linux.Using PyPiaidapyis available for pip.pipinstallaidapyFrom sourcesThe sources are located onGitLab:https://gitlab.com/aidaspace/aidapyClone the GitLab repo:Open a terminal and write the below command to clone in your PC the
AIDApy repo:gitclonehttps://gitlab.com/aidaspace/aidapy.gitcdaidapyCreate a virtual envAIDApy needs a significant number of dependencies. The easiest
way to get everything installed is to use a virtual environment.AnacondaYou can create a virtual environment and install all the dependencies usingcondawith the following commands:pipinstallcondacondacreate-naidapysourceactivateaidapyVirtual EnvVirtualenvcan also be used:pipinstallvirtualenvvirtualenvAIDApysourceAIDApy/bin/activateInstall the version you want via the commands:For the “basic” version:pythonsetup.pyinstallFor the version with the ML use cases:pipinstallaidapy[ml]Test the installation in your PC by running. (Install both versions before running the tests)pythonsetup.pytest5) (Optional) Generate the docs: install the extra dependencies of doc and run
thesetup.pyfile:pipinstallaidapy[doc]pythonsetup.pybuild_sphinxOnce installed, the doc can be generated with:cddocmakehtmlDependenciesThe required dependencies are:Python>= 3.6scikit-learn>= 0.21numpy>= 1.18scipy>= 1.4.1matplotlib>= 3.2.1pandas>= 1.0.3heliopy>= 0.12sunpy>= 1.1.2astropy>=4.0.1xarray>=0.15bottleneck>= 1.3.2heliopy-multid>= 0.0.2Optional dependencies are:pytorch>= 1.4skorch>= 0.8.0Testing dependencies are:pytest>= 2.8Extra testing dependencies:coverage>= 4.4pylint>= 1.6.0UsageAIDApy’s high level interface has been created in order to combine
simplicity with workability. In the example below, the end user
downloads data from the MMS space mission for a specific time range and
afterwards extracts the mean of these. Finally, the time series are
plotted on the screen.
from datetime import datetime
# AIDApy Modules
from aidapy import load_data

###############################################################################
# Define data parameters
###############################################################################
# Time Interval
start_time = datetime(2018, 4, 8, 0, 0, 0)
end_time = datetime(2018, 4, 8, 0, 1, 0)

# Dictionary of data settings: mission, product, probe, coordinates
# Currently available products: 'dc_mag', 'i_dens', and 'all'
settings = {'prod': ['dc_mag'], 'probes': ['1', '2'], 'coords': 'gse'}

###############################################################################
# Download and load desired data as aidapy timeseries
###############################################################################
xr_mms = load_data(mission='mms', start_time=start_time, end_time=end_time, **settings)

###############################################################################
# Extract a Statistical Measurement of the data
###############################################################################
xr_mms['dc_mag1'].statistics.mean()

###############################################################################
# Plot the loaded aidapy timeseries
###############################################################################
xr_mms['dc_mag1'].graphical.peek()
Contributing: Pull requests are welcome. For major changes, please open an issue first
to discuss what you would like to change.All the code must follow the instructions of STYLEGUIDE.rst. Please make sure to update tests as
appropriate.LicensesThis software (AIDApy) and the database of the AIDA project (AIDAdb) are
distributed under theMITlicense.The data collections included in the AIDAdb are distributed under the
Creative CommonsCC BT
4.0license. |
ai-dashboard | No description available on PyPI. |
aidatafactory | Variational Autoencoder used as a generative model for generating synthetic data. |
ai-data-preprocessing-queue | ai-data-preprocessing-queue: What it does: This tool is intended for preparing data for further processing.
It contains different text processing steps that can be enabled or disabled dynamically. Installation: pip install ai-data-preprocessing-queue. How to use:
from ai_data_preprocessing_queue import Pipeline

state = {}
pre_processor_dict = {'to_lower': None, 'spellcheck': 'test\r\ntesting'}
pipeline = Pipeline(pre_processor_dict)
value = pipeline.consume('Input text', state)
state is optional here and can be used to cache preprocessing data between pipeline calls. The preprocessors that the pipeline should use have to be transmitted as keys within a dictionary.
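For instance, a minimal pipeline that only lower-cases text and strips numbers (two of the built-in steps listed below, neither of which needs additional data) could look like this; the exact whitespace left in the output is an assumption:
from ai_data_preprocessing_queue import Pipeline

# two steps that require no additional data, so their values stay None
pipeline = Pipeline({'to_lower': None, 'remove_numbers': None})
print(pipeline.consume('Call ME at 12345', {}))  # expected: something like 'call me at '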
Some preprocessors also require additional data to function.
The data must be converted into string form and assigned to its preprocessor within the dictionary.This dictionary then needs to be transmitted to the pipeline through its constructor.Note: Pipeline has to be instantiated only once and can be reused.Existing preprocessorsTo Lower CaseName: to_lowerRequired additional data: -Converts the text to lower case characters.Remove NumbersName: remove_numbersRequired additional data: -Removes all numbers from the text.Remove PunctuationName: remove_punctuationRequired additional data: -Removes all special characters from the text.Text onlyName: text_onlyRequired additional data: -Removes all special characters and numbers from the text.Spellcheck (Levenshtein)Name: spellcheckRequired additional data: A string containing words, separated by newline, i.e. "word1\r\nword2"Takes a list of words representing the correct spelling. Words within the given text that are close to a word from this list will be replaced with the listed word.Regex replacementName: regex_replacementRequired additional data: CSV data in string form with the following line format: <pattern>,<replacement>,<order>pattern: a regex pattern that is to be found within the textreplacement: the word/text by which any match should be replacedorder: the order in which the regex entries are supposed to be applied (lowest number will be applied first!)This preprocessor will search for occurrences of specific entities in your text and replace them by a specified pattern.Token ReplacementName: token_replacementRequired additional data: CSV data in string form with the following line format: <text>,<replacement>,<order>text: one or multiple words to search within the textreplacement: the word/text by which any match should be replacedorder: the order in which the entries are supposed to be applied (largest number will be applied first!)With this preprocessor you can replace specific words and abbreviations within the text with specified tokens. It is also possible to replace abbreviations ending with a dot. Other special characters are not supported, though.How to start developingWith VS CodeJust install VS Code with the Dev Containers extension. All required extensions and configurations are prepared automatically.With PyCharmInstall the latest PyCharm versionInstall PyCharm plugin BlackConnectInstall PyCharm plugin MypyConfigure the Python interpreter/venvpip install requirements-dev.txtpip install black[d]Ctl+Alt+S => Check Tools => BlackConnect => Trigger when saving changed filesCtl+Alt+S => Check Tools => BlackConnect => Trigger on code reformatCtl+Alt+S => Click Tools => BlackConnect => "Load from pyproject.yaml" (ensure line length is 120)Ctl+Alt+S => Click Tools => BlackConnect => Configure path to the blackd.exe at the "local instance" config (e.g. C:\Python310\Scripts\blackd.exe)Ctl+Alt+S => Click Tools => Actions on save => Reformat codeRestart PyCharmHow to publishUpdate the version in setup.py and commit your changeCreate a tag with the same version numberLet GitHub do the rest |
ai-dataproc | No description available on PyPI. |
aidate | Failed to fetch description. HTTP Status Code: 404 |
aida-tools | READMEThis README would normally document whatever steps are necessary to get your application up and running.What is this repository for?Quick summaryVersionHow do I get set up?Summary of set upConfigurationDependenciesDatabase configurationHow to run testsDeployment instructions |
aida-ua | This is a Python script called AIDA that fetches exchange rates for fiat currencies (USD & EUR) and cryptocurrencies using the PrivatBank and Binance APIs. It lets the user enter the number of days for which they want to see exchange rates, and also select a specific cryptocurrency to view its rate. AIDA uses the aiohttp library to perform asynchronous HTTP requests. After launch, the script asks the user to enter the number of days for which they want to view exchange rates, as well as the code of the cryptocurrency whose rate they want to view (if any). The program then prints the exchange rates of the selected currencies and cryptocurrency for each of the past days, up to the specified number of days (for a cryptocurrency, only the rate for the most recent available day can be shown). List of supported cryptocurrencies: "BTC", "ETH", "SAND", "SOL", "DOGE". Note that AIDA requires the aiohttp library to be installed.
This can be done with pip by running the command pip install aiohttp in a terminal. |
ai-db | AIDB: Analyze unstructured data blazingly fast with machine learning. Connect your own ML models to your own data sources and query away! Quick Start: In order to start using AIDB, all you need to do is install the requirements, specify a configuration, and query!
Setting up the environment is as simple as
git clone https://github.com/ddkang/aidb.git
cd aidb
pip install -r requirements.txt
# Optional if you'd like to run the examples below
gdown https://drive.google.com/uc?id=1SyHRaJNvVa7V08mw-4_Vqj7tCynRRA3x
unzip data.zip -d tests/
Text Example (in CSV): We've set up an example of analyzing product reviews with HuggingFace. Set your HuggingFace API key. After this, all you need to do is run
python launch.py --config=config.sentiment --setup-blob-table --setup-output-table
As an example query, you can run
SELECT AVG(score) FROM sentiment WHERE label = '5 stars' ERROR_TARGET 10% CONFIDENCE 95%;
You can see the mappings here. We use the HuggingFace API to generate sentiments from the reviews. Image Example (local directory): We've also set up another example of analyzing whether or not user-generated content is adult content for filtering.
In order to run this example, all you need to do is run
python launch.py --config=config.nsfw_detect --setup-blob-table --setup-output-table
As an example query, you can run
SELECT * FROM nsfw WHERE racy LIKE 'POSSIBLE';
You can see the mappings here. We use the Google Vision API to generate the safety labels. Key Features: AIDB focuses on keeping cost down and interoperability high. We reduce costs with our optimizations: first-class support for approximate queries, reducing the cost of aggregations by up to 350x; caching, which speeds up multiple queries over the same data. We keep interoperability high by allowing you to bring your own data source, ML models, and vector databases! Approximate Querying: One key feature of AIDB is first-class support for approximate queries.
Currently, we support approximate AVG, COUNT, and SUM.
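For instance, an approximate SUM uses exactly the same clause (this particular query is purely illustrative, reusing the objects table from the COUNT example further down):
SELECT SUM(xmin) FROM objects ERROR_TARGET 5% CONFIDENCE 95%;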
We don't currently support GROUP BY or JOIN for approximate aggregations, but it's on our roadmap.
Please reach out if you'd like us to support your queries! In order to execute an approximate aggregation query, simply append ERROR_TARGET <error percent>% CONFIDENCE <confidence>% to your normal aggregation.
As a full example, you can compute an approximate count by doing:
SELECT COUNT(xmin) FROM objects ERROR_TARGET 5% CONFIDENCE 95%;
The ERROR_TARGET specifies the percent error compared to running the query exactly. For example, if the true answer is 100, you will get answers between 95 and 105 (95% of the time). Useful Links: How to connect ML APIs; How to define configuration file; Connecting to Data Store. Contribute: We have many improvements we'd like to implement. Please help us! For the time being, please email us if you'd like to help contribute. Contact Us: Need help in setting up AIDB for your specific dataset or want a new feature? Please fill this form. |
aidbox | This package was renamed to aidboxpySeehttps://pypi.org/project/aidboxpypip install aidboxpy |
aidboxpy | aidbox-pyAidbox client for python.
This package provides an API for CRUD operations over Aidbox resources.The library is based onfhir-pyand the main difference between libraries in our case is the way they represent resource references (read more aboutdifferences).Aidbox-py also going to support some Aidbox features like _assoc operation, AidboxQuery and so on.Most examples fromfhir-py readmealso work for aidbox-py (but you need to replace FHIR client with AsyncAidboxClient/SyncAidboxClient). See base aidbox-py example below.Getting startedInstallMost recent version:pip install git+https://github.com/beda-software/aidbox-py.gitPyPi:pip install aidboxpyAsync exampleimportasynciofromaidboxpyimportAsyncAidboxClientfromfhirpy.base.exceptionsimport(OperationOutcome,ResourceNotFound,MultipleResourcesFound)asyncdefmain():# Create an instanceclient=AsyncAidboxClient('http://localhost:8080',authorization='Bearer TOKEN')# Search for patientsresources=client.resources('Patient')# Return lazy search setresources=resources.search(name='John').limit(10).page(2).sort('name')patients=awaitresources.fetch()# Returns a list of AsyncAidboxResource# Get exactly one resourcetry:patient=awaitclient.resources('Practitioner')\.search(id='id').get()exceptResourceNotFound:passexceptMultipleResourcesFound:pass# Validate resourcetry:awaitclient.resource('Person',custom_prop='123',telecom=True).is_valid()exceptOperationOutcomease:print('Error:{}'.format(e))# Create Organization resourceorganization=client.resource('Organization',name='beda.software',active=False)awaitorganization.save()# Get patient resource by reference and deletepatient_ref=client.reference('Patient','new_patient')patient_res=awaitpatient_ref.to_resource()awaitpatient_res.delete()# Iterate over search set and change organizationorg_resources=client.resources('Organization').search(active=False)asyncfororg_resourceinorg_resources:org_resource['active']=Trueawaitorg_resource.save()if__name__=='__main__':loop=asyncio.get_event_loop()loop.run_until_complete(main())APIImport library:from aidboxpy import SyncAidboxClientorfrom aidboxpy import AsyncAidboxClientTo create AidboxClient instance use:SyncAidboxClient(url, authorization='', extra_headers={})orAsyncAidboxClient(url, authorization='', extra_headers={})Returns an instance of the connection to the server which provides:.reference(resource_type, id, reference, **kwargs) - returnsSyncAidboxReference/AsyncAidboxReferenceto the resource.resource(resource_type, **kwargs) - returnsSyncAidboxResource/AsyncAidboxResourcewhich described below.resources(resource_type) - returnsSyncAidboxSearchSet/AsyncAidboxSearchSetSyncAidboxResource/AsyncAidboxResourceprovides:.serialize() - serializes resource.get_by_path(path, default=None) – gets the value at path of resource.save() - creates or updates resource instance.delete() - deletes resource instance.to_reference(**kwargs) - returnsSyncAidboxReference/AsyncAidboxReferencefor this resourceSyncAidboxReference/AsyncAidboxReferenceprovides:.to_resource() - returnsSyncAidboxResource/AsyncAidboxResourcefor this referenceSyncAidboxSearchSet/AsyncAidboxSearchSetprovides:.search(param=value).limit(count).page(page).sort(*args).elements(*args, exclude=False).include(resource_type, attr=None, recursive=False, iterate=False).revinclude(resource_type, attr=None, recursive=False, iterate=False).has(*args, **kwargs).assoc(elements)async.fetch() - makes query to the server and returns a list ofResourcefiltered by resource typeasync.fetch_all() - makes query to the server and returns a full list ofResourcefiltered 
by resource typeasync.fetch_raw() - makes query to the server and returns a raw BundleResourceasync.first() - returnsResourceor Noneasync.get(id=None) - returnsResourceor raisesResourceNotFoundwhen no resource found or MultipleResourcesFound when more than one resource found (parameter 'id' is deprecated)async.count() - makes query to the server and returns the total number of resources that match the SearchSet |
aidbox-python-sdk | aidbox-python-sdkCreate a python 3.8+ environmentpyenvSet env variables and activate virtual environmentsource activate_settings.shInstall the required packages withpipenv install --devMake sure the app's settings are configured correctly (seeactivate_settings.shandaidbox_python_sdk/settings.py). You can also
use environment variables to define sensitive settings, eg. DB connection variables (see example.env-ptl)You can then run example withpython example.py.AddAPP_FAST_START_MODE=TRUEto env_tests for fast start mode.Getting startedMinimal applicationfromaidbox_python_sdk.mainimportcreate_appas_create_appfromaidbox_python_sdk.settingsimportSettingsfromaidbox_python_sdk.sdkimportSDKsettings=Settings(**{})sdk=SDK(settings,resources=resources,seeds=seeds)asyncdefcreate_app():returnawait_create_app(sdk)Register handler for operationimportloggingfromaiohttpimportwebfromyourappfolderimportsdk@sdk.operation(methods=["POST","PATCH"],path=["signup","register",{"name":"date"},{"name":"test"}],timeout=60000## Optional parameter to set a custom timeout for operation in milliseconds)defsignup_register_op(operation,request):"""POST /signup/register/21.02.19/testvaluePATCH /signup/register/22.02.19/patchtestvalue"""logging.debug("`signup_register_op` operation handler")logging.debug("Operation data:%s",operation)logging.debug("Request:%s",request)returnweb.json_response({"success":"Ok","request":request["route-params"]})Validate requestschema={"required":["params","resource"],"properties":{"params":{"type":"object","required":["abc","location"],"properties":{"abc":{"type":"string"},"location":{"type":"string"}},"additionalProperties":False,},"resource":{"type":"object","required":["organizationType","employeesCount"],"properties":{"organizationType":{"type":"string","enum":["profit","non-profit"]},"employeesCount":{"type":"number"},},"additionalProperties":False,},},}@sdk.operation(["POST"],["Organization",{"name":"id"},"$update"],request_schema=schema)asyncdefupdate_organization_handler(operation,request):location=request["params"]["location"]returnweb.json_response({"location":location})Valid request examplePOST/Organization/org-1/$update?abc=xyz&location=us
organizationType:non-profit
employeesCount:10 |
aidcraftbr | Aidcraft Bot Resources: to use the bot resources, import bot_resources. Functions are: stopWatch: converts seconds into days, hours, minutes and seconds |
aidd-codebase | AIDD CodebaseA high-level codebase for deep learning development in drug discovery applications using PyTorch-Lightning.DependenciesThe codebase requires the following additional dependenciesCUDA >= 11.4PyTorch >= 1.9Pytorch-Lightning >= 1.5RDKitOptionally supports: tensorboard and/or wandbInstallationThe codebase can be installed from PyPI usingpip, or your package manager of choice, with$pipinstallaidd-codebaseUsageConfiguration: The coding framework has a number of argument dataclasses in the filearguments.py. This file contains all standard arguments for each of the models. Because they are dataclasses, you can easily adapt them to your own needs.Does your Seq2Seq adaptation need an extra argument? Import the Seq2SeqArguments from arguments.py, create your own dataclass which inherits it and add your extra argument.*It is important to note that the order of supplying arguments to a script goes as follows:*- --flags override config.yaml- config.yaml overrides default values in arguments.py- default values from arguments.py are used when no other values are suppliedAt the end, it stores all arguments in config.yamlUse: The coding framework has four main parts:utilsdata_utilsmodelsinterpretationThese parts should be usedFile Setup: The setup of the files in the system is best used as followed:coding_framework|-- ..ESR X|-- project 1|-- data|-- ..|-- Arguments.py|-- config.yaml|-- main.py|-- datamodule.py|-- pl_framework.pyContributorsAll fellows of the AIDD consortium have contributed to the packaged.Code of ConductEveryone interacting in the codebase, issue trackers, chat rooms, and mailing lists is expected to follow thePyPA Code of Conduct. |
aiddl-core | The AIDDL Core LibraryThis is the Python implementation of the AIDDL Core Library.It provides everything needed forParsing AIDDL filesWorking with containers and modulesCreating terms and typesEvaluating terms and typesDefault functions for evaluationChanges0.3.4AIDDL files now packaged in sdist and bdist0.3.2Refactored parserAdded aiddl resource module containing .aiddl files to remove dependency on AIDDL_PATH environment variableModules loaded from symbolic terms now need to be in a path that follows their nameExample:x.y.zmust be in a filex/y/z.aiddlin any known pathPython modules can be added as path sources for .aiddl files to the newParserclass.For example:parser = Parser(c, aiddl_modules=[a, b, c])assumesa,b, andcare python modules |
aiddl-external-grpc | A GRPC Library for AIDDLInterfaces to connect the AIDDL framework to other components via Protobuf and gRPC.Proxy container functionalityServer hosts a containerClient is a proxy for a containerSupported services:Call function registered on server via its URIProxy FunctionSingle AIDDL function offered by a serverClient is a proxy of an AIDDL function f: Term -> TermActor abstractionImplement actor serversUse actor servers via gRPCSender abstractionSend AIDDL messages to a serverReceiver abstractionRead queued up AIDDL messages form a receiver serverQuery determines how messages are retrievedServer collects messages and sends them to client when queriedCan be used to collect sensor data occasionally queried by a sensor abstractionVersions0.2.0Sensor client0.1.0Actor clientActor server (abstract)Container proxyFunction call serviceContainer proxy clientFunction proxy clientFunction proxy server (abstract)Receiver clientReceiver server (abstract)Sender clientSender server (abstract) |
aidea | AIdea CLIOfficial AIdea command line tool forhttps://aidea-web.tw.InstallationMake sure you have installed both Python 3 andpippackage manager.Runpip install aideato install AIdea CLI.CommandsThe command line tool supports the following commands:aidea login Login CLI
aidea topics list List available topics
aidea topics files List downloadable files
aidea topics download Download topic files
aidea topics submit Make a new submissionLicenseThe AIdea CLI is released under theApache 2.0 license. |
aidebug | AIDebug ConsoleAIDebug Console is a Python-based command line application that leverages the power of OpenAI's GPT models to assist with debugging and developing software projects. It provides a user-friendly interface for interacting with your codebase, running your project, and even debugging your code with the help of AI.FeaturesProject Management: Select and deselect project files and directories.Project Configuration: Configure specific project details such as language, type, framework, and run command.Code Execution: Run your project directly from the console. (Automatically catches errors and asks user if they want to debug)AI Debugging: Debug your project with the help of OpenAI's GPT Models.AI Feature Request: Request a feature for your project from OpenAI's GPT Models.AI Code Documentation: Get a README.md file for your project's GitHub repository.InstallationInstall With PippipinstallaidebugManual Build and InstallClone the repository to your local machine.Navigate to the project directory.Install the required Python packages using pip:pipinstallsetuptoolswheelBuild the project with setup.py:pythonsetup.pysdistbdist_wheelChange directory to the built project:cddistInstall the built .whl file with pip:pipinstallaidebug-0.0.8-py3-none-any.whlUsageSet the necessary environment variables. You need to provide your OpenAI API key:exportOPENAI_API_KEY=your_openai_api_keyRun the project:aidebugUse thehelpcommand to see a list of available commands.Environment VariablesOPENAI_API_HOST: The API host for OpenAI. Default ishttps://api.openai.com.OPENAI_API_KEY: Your OpenAI API key.CommandsHere is a brief explanation of the commands available in the AIDebug Console:cd <directory>: Change the current working directory.exit: Exit the AIDebug Console.project select: Select project files and directories.project deselect: Unselect files and directories by id.project run: Run the project.project files paths: Prints selected files paths.project files contents: Prints selected files path and contents.config project language <language>: Set the programming language of your project.config project type: Set the type of your project.config project framework: Set the framework that your project is using.config project run <command>: Set the command used to run your project.config openai model <model>: Set the OpenAI model to be used.config openai temperature <temperature>: Set the OpenAI model's temperature.debug <error>: Debug your project with the help of OpenAI's GPT Models.feature <feature_request>: Request a feature for your project from OpenAI's GPT Models.readme <optional_message>: Request a README.md file for your project's GitHub repository from OpenAI's GPT Models.Remember to replace<directory>,<language>,<command>,<model>,<temperature>,<error>,<feature_request>, and<optional_message>with your actual values.Running System CommandsAIDebug Console allows you to run native system commands directly from the shell. Simply input the desired command, and it will be executed in the console.For example, to list the files in the current directory, you can use the commandls:> lsThis feature provides flexibility and convenience for running various system tasks alongside your project debugging and development.CreditsThis project has borrowed code fromTheR1D's shell_gpt project. I would like to express my gratitude for the contribution to the open-source community which has greatly aided the development of this project.ContributingPull requests are welcome. 
For major changes, please open an issue first to discuss what you would like to change. License: This project is licensed under the GNU GPL-3.0 License. See the LICENSE file for details. |
aide-core | Quick Links:Homepage|PlatformIO IDE|Registry|Project Examples|Docs|Donate|Contact UsSocial:LinkedIn|Twitter|Facebook|Community ForumsPlatformIOis a professional collaborative platform for embedded development.A place where Developers and Teams have true Freedom! No more vendor lock-in!Open source, maximum permissive Apache 2.0 licenseCross-platform IDE and Unified DebuggerStatic Code Analyzer and Remote Unit TestingMulti-platform and Multi-architecture Build SystemFirmware File Explorer and Memory InspectionGet StartedWhat is PlatformIO?PlatformIO IDEPlatformIO Core (CLI)Project ExamplesSolutionsLibrary ManagementDesktop IDEs IntegrationContinuous IntegrationAdvancedDebuggingUnit TestingStatic Code AnalysisRemote DevelopmentRegistryLibrariesDevelopment PlatformsDevelopment ToolsContributingSeecontributing guidelines.Telemetry / Privacy PolicyShare minimal diagnostics and usage information to help us make PlatformIO better.
It is enabled by default. For more information see:Telemetry SettingLicenseCopyright (c) 2014-present PlatformIO <[email protected]>The PlatformIO is licensed under the permissive Apache 2.0 license,
so you can use it in both commercial and personal projects with confidence. |
aide_design | No description available on PyPI. |
aidedkit | DroneKit PythonDroneKit-Python helps you create powerful apps for UAVs.OverviewDroneKit-Python (formerly DroneAPI-Python) contains the python language implementation of DroneKit.The API allows developers to create Python apps that communicate with vehicles over MAVLink. It provides programmatic access to a connected vehicle's telemetry, state and parameter information, and enables both mission management and direct control over vehicle movement and operations.The API is primarily intended for use in onboard companion computers (to support advanced use cases including computer vision, path planning, 3D modelling etc). It can also be used for ground station apps, communicating with vehicles over a higher latency RF-link.Getting StartedTheQuick Startguide explains how to set up DroneKit on each of the supported platforms (Linux, Mac OSX, Windows) and how to write a script to connect to a vehicle (real or simulated).A basic script looks like this:fromdronekitimportconnect# Connect to UDP endpoint.vehicle=connect('127.0.0.1:14550',wait_ready=True)# Use returned Vehicle object to query device state - e.g. to get the mode:print("Mode:%s"%vehicle.mode.name)Once you've got DroneKit set up, theguideexplains how to perform operations like taking off and flying the vehicle. You can also try out most of the tasks by running theexamples.ResourcesThe project documentation is available athttps://readthedocs.org/projects/dronekit-python/. This includesguide,exampleandAPI Referencematerial.The example source code is hosted here on Github as sub-folders of/dronekit-python/examples.TheDroneKit Forumsare the best place to ask for technical support on how to use the library. You can also check out ourGitter channelthough we prefer posts on the forums where possible.Documentation:https://dronekit-python.readthedocs.io/en/latest/about/index.htmlGuides:[https://dronekit-python.readthedocs.io/en/latest/guide/index.html)API Reference:[https://dronekit-python.readthedocs.io/en/latest/automodule.html)Examples:/dronekit-python/examplesForums:http://discuss.dronekit.io/Gitter:https://gitter.im/dronekit/dronekit-pythonthough we prefer posts on the forums where possible.Users and contributors wanted!We'd love yourfeedback and suggestionsabout this API and are eager to evolve it to meet your needs, please feel free to create an issue to report bugs or feature requests.If you've created some awesome software that uses this project,let us know on the forums here!If you want to contribute, see ourContributingguidelines, we welcome all types of contributions but mostly contributions that would help us shrink ourissues list.LicenceDroneKit-Python is made available under the permissive open sourceApache 2.0 License.Copyright 2015 3D Robotics, Inc. |
aide-document | No description available on PyPI. |
aidee-living-docs | Aidee Living Documentation HelpersFeaturesHelper functions to generate living documentation with Sphinx and BehaveType safe code usingmypyfor type checkingRequirementsPython 3.9-3.11InstallationYou can installAidee Living Documentationviapip:$pipinstallaidee-living-docsThis addsaidee-living-docsas a library, but also provides the CLI application with the same name.Using the application from the command lineThe application also provides a CLI application that is automatically added to the path when installing via pip.Once installed with pip, type:aidee-living-docs --helpTo see which options are available.ContributingContributions are very welcome.
To learn more, see the Contributor Guide. Issues: If you encounter any problems,
please file an issue along with a detailed description. Credits: This project has been heavily inspired by Bluefruit |
aide-infra | No description available on PyPI. |
aideml | Weco PackageThis is a Python package for using Weco AI service.Installationpipinstallweco |
ai-demos | No description available on PyPI. |
aiden | No description available on PyPI. |
aidenbots | No description available on PyPI. |
aidentified-matching-api | No description available on PyPI. |
aidenva | No description available on PyPI. |
aider | # Utilsgeneric utilities for developer to use in reguler programming. |
aider-chat | aider is AI pair programming in your terminal. Aider is a command line tool that lets you pair program with GPT-3.5/GPT-4,
to edit code stored in your local git repository.
You can start a new project or work with an existing repo.
Aider makes sure edits from GPT are committed to git with sensible commit messages.
Aider is unique in that it lets you ask for changes to pre-existing, larger codebases. GPT-4 Turbo with 128k context and unified diffs: Aider supports OpenAI's new GPT-4 model that has the massive 128k context window.
Benchmark results indicate that it is very fast,
and a bit better at coding than previous GPT-4 models. Aider now supports a unified diff editing format, which reduces GPT-4 Turbo's "lazy" coding. To use it, run aider like this: aider --4turbo. Getting started: See the installation instructions for more details, but you can
get started quickly like this:$ pip install aider-chat
$ export OPENAI_API_KEY=your-key-goes-here
$ aider hello.js
Using git repo: .git
Added hello.js to the chat.
hello.js> write a js script that prints hello worldExample chat transcriptsHere are some example transcripts that show how you can chat withaiderto write and edit code with GPT-4.Hello World Flask App: Start from scratch and have GPT create a simple Flask app with various endpoints, such as adding two numbers and calculating the Fibonacci sequence.Javascript Game Modification: Dive into an existing open-source repo, and get GPT's help to understand it and make modifications.Complex Multi-file Change with Debugging: GPT makes a complex code change that is coordinated across multiple source files, and resolves bugs by reviewing error output and doc snippets.Create a Black Box Test Case: GPT creates a "black box" test case without access to the source of the method being tested, using only ahigh level map of the repository based on tree-sitter.You can find more chat transcripts on theexamples page.FeaturesChat with GPT about your code by launchingaiderfrom the command line with set of source files to discuss and edit together. Aider lets GPT see and edit the content of those files.GPT can write and edit code in most popular languages: python, javascript, typescript, html, css, etc.Request new features, changes, improvements, or bug fixes to your code. Ask for new test cases, updated documentation or code refactors.Aider will apply the edits suggested by GPT directly to your source files.Aider willautomatically commit each changeset to your local git repowith a descriptive commit message. These frequent, automatic commits provide a safety net. It's easy to undo changes or use standard git workflows to manage longer sequences of changes.You can use aider with multiple source files at once, so GPT can make coordinated code changes across all of them in a single changeset/commit.Aider cangiveGPT-4a map of your entire git repo, which helps it understand and modify large codebases.You can also edit files by hand using your editor while chatting with aider. Aider will notice these out-of-band edits and keep GPT up to date with the latest versions of your files. This lets you bounce back and forth between the aider chat and your editor, to collaboratively code with GPT.If you are using gpt-4 through openai directly, you can add image files to your context which will automatically switch you to the gpt-4-vision-preview modelUsageRun theaidertool by executing the following command:aider <file1> <file2> ...If your pip install did not place theaiderexecutable on your path, you can invoke aider like this:python -m aider.main <file1> <file2>Replace<file1>,<file2>, etc., with the paths to the source code files you want to work on.
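For example, to work on a couple of related files in one session (the file names here are just placeholders):
$ aider src/calculator.py tests/test_calculator.py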
These files will be "added to the chat session", so that GPT can see their contents and edit them according to your instructions.You can also just launchaideranywhere in a git repo without naming
files on the command line. It will discover all the files in the
repo. You can then add and remove individual files in the chat
session with the /add and /drop chat commands described below.
If you or GPT mention one of the repo's filenames in the conversation,
aider will ask if you'd like to add it to the chat. Think about the change you want to make and which files will need
to be edited -- add those files to the chat.
Don't add all the files in your repo to the chat.
Be selective, and just add the files that GPT will need to edit.
If you add a bunch of unrelated files, GPT can get overwhelmed
and confused (and it costs more tokens).
Aider will automatically
share snippets from other, related files with GPT so it can understand the rest of your code base. Aider also has many
additional command-line options, environment variables and configuration-file settings
to set many options. See aider --help for details.
In-chat commands
Aider supports commands from within the chat, which all start with /. Here are some of the most useful in-chat commands: /add <file>: Add matching files to the chat session. /drop <file>: Remove matching files from the chat session. /undo: Undo the last git commit if it was done by aider. /diff: Display the diff of the last aider commit. /run <command>: Run a shell command and optionally add the output to the chat. /voice: Speak to aider to request code changes with your voice. /help: Show help about all commands. See the full command docs for more information.
Tips
Think about which files need to be edited to make your change and add them to the chat.
Aider has some ability to help GPT figure out which files to edit all by itself, but the most effective approach is to explicitly add the needed files to the chat yourself.
Large changes are best performed as a sequence of thoughtful bite sized steps, where you plan out the approach and overall design. Walk GPT through changes like you might with a junior dev. Ask for a refactor to prepare, then ask for the actual change. Spend the time to ask for code quality/structure improvements.
Use Control-C to safely interrupt GPT if it isn't providing a useful response. The partial response remains in the conversation, so you can refer to it when you reply to GPT with more information or direction.
Use the /run command to run tests, linters, etc and show the output to GPT so it can fix any issues.
Use Meta-ENTER (Esc+ENTER in some environments) to enter multiline chat messages. Or enter { alone on the first line to start a multiline message and } alone on the last line to end it.
If your code is throwing an error, share the error output with GPT using /run or by pasting it into the chat. Let GPT figure out and fix the bug.
GPT knows about a lot of standard tools and libraries, but may get some of the fine details wrong about APIs and function arguments. You can paste doc snippets into the chat to resolve these issues.
GPT can only see the content of the files you specifically "add to the chat". Aider also sends GPT-4 a map of your entire git repo. So GPT may ask to see additional files if it feels that's needed for your requests.
I also shared some general GPT coding tips on Hacker News.
GPT-4 vs GPT-3.5
Aider supports all of OpenAI's chat models.
You can choose a model with the--modelcommand line argument.You should probably use GPT-4 if you can. For more details see theFAQ entry that compares GPT-4 vs GPT-3.5.For a discussion of using other non-OpenAI models, see theFAQ about other LLMs.InstallationSee theinstallation instructions.FAQFor more information, see theFAQ.Kind words from usersThe best AI coding assistant so far.--Matthew BermanHands down, this is the best AI coding assistant tool so far.--IndyDevDanAider ... has easily quadrupled my coding productivity.--SOLAR_FIELDSIt's a cool workflow... Aider's ergonomics are perfect for me.--qupIt's really like having your senior developer live right in your Git repo - truly amazing!--rappsterWhat an amazing tool. It's incredible.--valyagolevAider is such an astounding thing!--cgrothausIt was WAY faster than I would be getting off the ground and making the first few working versions.--Daniel FeldmanTHANK YOU for Aider! It really feels like a glimpse into the future of coding.--derwikiIt's just amazing. It is freeing me to do things I felt were out my comfort zone before.--DougieThis project is stellar.--funkytacoAmazing project, definitely the best AI coding assistant I've used.--joshuavialI am an aider addict. I'm getting so much more work done, but in less time.--dandandanAfter wasting $100 on tokens trying to find something better, I'm back to Aider. It blows everything else out of the water hands down, there's no competition whatsoever.--SystemSculptBest agent for actual dev work in existing codebases.--Nick Dobos |
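Tying the usage notes above together, here is a hypothetical session sketch. It only uses commands documented above; the model name and file names are placeholders, and the echoed output lines mirror the getting-started transcript rather than real program output.

$ aider --model gpt-4-1106-preview app.py
Using git repo: .git
Added app.py to the chat.
app.py> /add tests/test_app.py
app.py> add a --verbose flag to the command line interface
app.py> /run pytest
app.py> /diff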
aide_render | No description available on PyPI. |
aide-sdk | AIDE SDKIntroductionThis library allows you to build AI inference models to be run on the AIDE platform.Table of contentsQuickstartPackaging a modelThe Manifest fileAccessing study dataSaving outputLogging & Model FailuresQuickstartTo get started:fromaide_sdk.applicationimportAideApplicationfromaide_sdk.inference.aideoperatorimportAideOperatorfromaide_sdk.model.operatorcontextimportOperatorContextfromaide_sdk.model.resourceimportResourcefromaide_sdk.utils.file_storageimportFileStorageclassMyModel(AideOperator):defprocess(self,context:OperatorContext):origin_dicom=context.originresult=my_cool_stuff(origin_dicom)# Your magic goes herefile_manager=FileStorage(context)path=file_manager.save_dicom("my_results",result)result_dicom=Resource(format="dicom",content_type="result",file_path=path)context.add_resource(result_dicom)returncontextAideApplication.start(operator=MyModel())What we just didThe main application class isAideApplication. Once started, it will connect to the model's input queue and listen for new messages.
The single parameter required by Aide is theoperator- this can be any object that implements the following method:process(context: OperatorContext)- This is the operation method, it receives an OperatorContext as input, and should return it as output. The context object is a special object which allows access to input resources.Packaging & PublishingOnce your model is ready, it will need to be published onto the platform.
In order to do that, it'll need to be Dockerized.Docker image requirementsThe SDK needs to be installed on the image (usingpip install aide-sdkor similar)The entrypoint to the container needs to runAideApplication.start.The following environment variables need to be set:MANIFEST_PATH - the path to amanifestfile.Manifest FileThe manifest file provides the AIDE platform with the details it needs to use the model. It includes the following information:model_name- stringmodel_version- stringmodel_description- string, The description of your model.predicate- string, a validpredicate string.mode- string, a validmode string.Model modesThe model mode determines how it'll be used by AIDE.
The mode string can have one of the following values:QA- QA mode, when the model is still being tested.R- Research mode.CU- Clinical use.Predicate StringThe predicate string determines which inputs will be sent to the model.
It's a logical expression, evaluating to a boolean,It's possible to use any comparison operator (<,>,==,>=,<=,!=) and combine usingANDorOR.The predicate supports evaluation against DICOM image metadata tags. Any DICOM tags that are wished to be evaluated against should be prefixed with the following:DICOM_.For example:DICOM_Modality=="MR"ANDDICOM_SliceThickness<=10The above string will evaluate to true if the input DICOM "Modality" tag value is "MR" and the "SliceThickness" tag value is 10 or lower.It is also possible to request specific resource types. For example:DICOM_Modality=="MR"ANDDICOM_SliceThickness<=10ANDresource.type=="nifty/origin"Resource types are defined as format/content-type.Manifest example{"model_name":"test_model","model_version":"1.0.0","model_description":"This is a test model","mode":"QA","predicate":"tetststs"}Accessing ResourcesTheprocessmethod is called with an instance ofOperatorContext, the reference for that object is shown below.Object ReferenceOperatorContextPropertiesPropertyTypeDescriptionoriginOriginThe origin object contains the initial input information to this pipeline.resourcesList[Resource]The resources added by previous operators in the pipeline.MethodsMethodReturn typeDescriptionget_resources_by_type(format: str, content_type: str)List[Resource]Returns the resources of a specific type.add_resource(resource: Resource)NoneAdd a newResourceto the resources list. This resource will be available to the next operators.set_error(error: str)NoneSets an error message in case the operator can't complete its operation. The execution will be marked as a failure.ResourcePropertiesPropertyTypeDescriptionformatstrThe file format (e.g. nifti/dicom/etc)content_typestrThe content within this resource (eg "brain_scan", "white_matter")file_pathstrThe file path of this resource. Returned by the file manager when saving.namespacestrThe UID of the operator that created this resource. Added automatically when saving resources.Origin(Resource)The origin object is a special resource. It contains everything any resource contains, and additional information.PropertiesPropertyTypeDescriptionformatstrThe file format (e.g. "dicom")content_typestrThe content within this resource (eg "origin")file_pathstrThe path of this object.namespacestrThe UID of the operator that created this resource. Added automatically when saving resources.received_timestampdatetimeThe time and date on which the origin object was first received by AIDE.patient_idstrThe patient ID this data refers to.DicomOrigin(Origin)This origin object is used when the original input data is a DICOM study.PropertiesPropertyTypeDescriptionformatstrThe file format (e.g. "dicom")content_typestrThe content within this resource (eg "origin")file_pathstrThe path of this object.namespacestrThe UID of the operator that created this resource. 
Added automatically when saving resources.received_timestampdatetimeThe time and date on which the origin object was first received by AIDE.patient_idstrThe patient ID this data refers to.study_uidstrThe DICOM Study ID.seriesList[DicomSeries]The DICOM series in this study.MethodsMethodReturn typeDescriptionget_series_by_id()DicomSeriesReads the dicom file and instantiates a pydicomDatasetfrom it.DicomSeriesA DicomSeries object refers to a specific series of images.PropertiesPropertyTypeDescriptionseries_idstrThe UID of this series.metadatadictA dictionary containing series metadata.imagesList[DicomImage]A list of dicom images included in this series.DicomImageThis is a wrapper object around a PyDicomDatasetobject.PropertiesPropertyTypeDescriptioncontext_metadatadictA dictionary containing the image metadata.image_pathstrThe path to the .dcm file.MethodsMethodReturn typeDescriptionload_dataset()pydicom.DatasetReads the dicom file and instantiates a pydicomDatasetfrom it.get_filename()strReturns the dicom filename (e.g. "filename.dcm")get_context_metadata()strReturns the image metadata, loading it if hasn't been loaded.reload_context_metadata()strReloads the context metadata from the file.Saving output DataFileStorageThis helper class allows you to save files, with convenience methods to help save DICOM images and PDF files.
To use it, instantiate it with anOperatorContextobject.It is recommended to include the source DICOM study/series id in the output/final report, this helps the end user to validate that the output was produced using the expected source dataMethodsMethodReturn typeDescriptionsave_file(file_bytes: bytes, file_name: str)strSaves binary data to disk, and returns a path string with its location on disk. Requires binary data and a file name.load_file(file_path: str)bytesLoads a file from disk, using a path string.save_dicom(folder_name: str, dataset: pydicom.Dataset)strSaves a PyDicomDatasetto disk, and returns a path string with its location on disk. Requires a container folder name and the pydicomDataset.save_encapsulated_pdf(folder_name: str, dataset: Dataset, pdf_file_path: str)strSave a PDF file, encapsulated within a DICOM file. This function require a folder name, theDatasetthe PDF relates to, and the pdf file path. Returns the dicom path.LoggingLogging is possible using theaide_sdk.logger.logger.LogManagerclass:fromaide_sdk.logger.loggerimportLogManagerlogger=LogManager.get_logger()logger.info("info message")logger.warn("warn message")logger.error("error message")logger.exception("exception message")Failures vs ErrorsThere are two ways in which operators can fail - either a response can't be reached, for example because of a lack of statistical significance, or an error occurred while attempting to run the operator.Failures are still a valid result. To log an error response, use theOperatorContextset_failuremethod:context.set_failure("Couldn't reach conclusion")returncontextHowever, unexpected errors should raise an exception. It is possible to use theModelErrorexception for this:fromaide_sdk.utils.exceptionsimportModelErrortry:something()exceptException:LogManager.get_logger().exception("Failed")raiseModelError("Unknown error while running model") |
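Putting the resource, file-storage and logging pieces above together, here is a minimal hedged sketch of an operator that walks an incoming DICOM study and stores a derived series. It assumes the pipeline input is a DICOM study (so context.origin is the DicomOrigin described above) and uses set_failure as in the failure example; process_pixels is a hypothetical placeholder for your own processing step.

from aide_sdk.application import AideApplication
from aide_sdk.inference.aideoperator import AideOperator
from aide_sdk.model.operatorcontext import OperatorContext
from aide_sdk.model.resource import Resource
from aide_sdk.utils.file_storage import FileStorage
from aide_sdk.logger.logger import LogManager

logger = LogManager.get_logger()

class SeriesProcessor(AideOperator):
    def process(self, context: OperatorContext):
        origin = context.origin  # DicomOrigin when the input is a DICOM study
        if not origin.series:
            # No usable input: report a failure result instead of raising
            context.set_failure("Study contains no series")
            return context
        file_manager = FileStorage(context)
        for series in origin.series:
            for image in series.images:
                dataset = image.load_dataset()          # pydicom.Dataset
                dataset = process_pixels(dataset)       # hypothetical processing step
                path = file_manager.save_dicom("processed", dataset)
                context.add_resource(Resource(format="dicom", content_type="processed", file_path=path))
        logger.info("Finished processing study")
        return context

AideApplication.start(operator=SeriesProcessor())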
aidetector | AI Detector: Detecting AI Generated TextOverviewAI Detector is a Python module, based on PyTorch, that simplifies the process of training and deploying a classification model to detect whether a given text has been generated by AI. It is designed to be platform-agnostic, making AI detection capabilities accessible to users across different work environments.InstallationThere are two methods available for installing the AI Detector module:Using pip:You can install AI Detector directly from PyPI using pip by running the following command:pip3 install aidetectorFrom this repository:Alternatively, you can clone this repository and install it locally:git clone https://github.com/baileytec-labs/aidetector.git
cd aidetector
pip3 install .
Usage
AI Detector can be operated in two modes: training and inference.
Training
To train a new model, you need a CSV dataset with a classification column (labels: 0 for human-written and 1 for AI-generated text) and a text column (the text data). The script takes the following command-line arguments:
aidetector train --datafile [path_to_data] --modeloutputfile [path_to_model] --vocaboutputfile [path_to_vocab] --tokenmodel [SpaCy model] --percentsplit [percentage_for_test_split] --classificationlabel [classification_label_in_data] --textlabel [text_label_in_data] --download --lowerbound [lower_bound_for_early_stopping] --upperbound [upper_bound_for_early_stopping] --epochs [number_of_epochs]
Inference
To make predictions with a trained model, you need to provide the text you want to classify. The script takes the following command-line arguments:
aidetector infer --modelfile [path_to_trained_model] --vocabfile [path_to_vocab] --text [text_to_classify] --tokenmodel [SpaCy_model] --threshold [probability_threshold_for_classification] --download [flag_to_download_SpaCy_model]
The prediction will be printed to the console: "This was written by AI" or "This was written by a human."
Python API
You can use all the functionality of AiDetector in your python programs, it's as simple as starting with
from aidetector.aidetectorclass import *
from aidetector.inference import *
from aidetector.training import *
from aidetector.tokenization import *
#or
import aidetector as ad
From there, you have access to all of the training, inference, and tokenization capabilities. For example:
#Getting inference of an AI model in python
from aidetector.tokenization import *
from aidetector.inference import *
from aidetector.aidetectorclass import *
tokenizer=get_tokenizer()
vocab=load_vocab("./myvocabfile.vocab")
model = AiDetector(len(vocab))
testtext="Is this written by AI?"
model.load_state_dict(torch.load("./mymodelfile.model"))
isai=check_input(
model,
vocab,
testtext,
tokenizer=tokenizer,
)
#returns 0 if human, 1 if AI.
Dependencies
The main dependencies for this project include:
PyTorch
SpaCy
Torchtext
scikit-learn
pandas
argparse
Halo
Note: For tokenization, the project uses SpaCy models. By default, it uses the multi-language model xx_ent_wiki_sm, but other models can be specified using the --tokenmodel argument. If the model is not already downloaded, you can use the --download flag to download the model.
Contributing
Contributions to the AI Detector project are welcome.
Please review CONTRIBUTION.md for further instructions. |
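As a concrete illustration of the command-line flags documented above, a hypothetical end-to-end run might look like the following. All paths, column names and numeric values are placeholders chosen for the example, not defaults of the tool.

aidetector train --datafile data/samples.csv --modeloutputfile models/detector.model --vocaboutputfile models/detector.vocab --classificationlabel label --textlabel text --percentsplit 20 --epochs 10 --download
aidetector infer --modelfile models/detector.model --vocabfile models/detector.vocab --text "Is this written by AI?" --threshold 0.5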
aide_tutorial | No description available on PyPI. |
aidev | No description available on PyPI. |
aide-validation | Validation Tool for AIDE
Validation tool for the outputs in Onshape.
Installation
Stable Release: pip install aide_validation
Development Head: pip install git+https://github.com/AguaClara/aide_validation.git
Quick Start
from aide_validation import link_input
link_input.main()
Development
See CONTRIBUTING.md for information related to developing the code.
aidevkit | aikit
Some utility modules used during AI development.
aikit installation and usage
aos
Install:
git clone [email protected]:cpcgskill/aikit.git
Using aos:
import os
# Set the following environment variables to your Tencent Cloud COS configuration
os.environ['COS_Region'] = 'ap-hongkong'
os.environ['COS_SecretId'] = ''
os.environ['COS_SecretKey'] = ' '
os.environ['COS_Bucket'] = ' '
import torch
from aidevkit.aos import Saver

class MyModule(torch.nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linear = torch.nn.Linear(10, 10)
        self.s = 1

    def forward(self, x):
        return self.linear(x)

saver = Saver(lambda: MyModule(), 'test.pt', 3)
for i in range(10):
    saver.step()
aidevlog | templete
template of AIDevLog
You need to modify .travis.yml:
gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
gem sources -l
[sudo] gem install travis
travis login
travis encrypt <PyPI password> --add deploy.password
Note on escaping certain characters: please note that if your PyPI password contains special characters, you need to escape them before encrypting the password. https://2019.iosdevlog.com/2019/06/05/travis/
Code style
.vscode/settings.json
{"python.pythonPath":"/Users/iosdevlog/.Envs/aidevlog/bin/python","python.linting.flake8Enabled":true,"python.formatting.provider":"yapf","python.linting.flake8Args":["--max-line-length=248"],"python.linting.pylintEnabled":false}
Use double quotes for strings: ""
Installation
pip install aidevlog
Development
Local development: pip3 install -e .
Testing: pytest
Publishing:
python3 setup.py sdist bdist_wheel
twine upload dist/*
Generate CHANGELOG:
npm install -g conventional-changelog-cli
./version.sh
Contact
Website: http://2019.iosdevlog.com/
WeChat official account: AI DevLog (AI 开发日志)
License
AIDevLog is released under the MIT license. See LICENSE for details.
aid-hash | AID
This package provides a modified Python implementation of the open Community ID flow hashing standard that takes into consideration the flow timestamp. It supports Python versions 2.7+ (for not much longer) and 3+.
Installation
This package is available on PyPI, therefore: pip install aid_hash
To install locally from a git clone, you can also use pip, e.g. by saying pip install -U .
Usage
The API breaks the computation into two steps: (1) creation of a flow tuple object, (2) computation of the Community ID string on this object. It supports various input types in order to accommodate network byte order representations of flow endpoints, high-level ASCII, and ipaddress objects. Here's what it looks like:
import aid_hash
tpl = aid_hash.FlowTuple.make_tcp('14.125487','127.0.0.1', '10.0.0.1', 1234, 80)
aid = aid_hash.AID()
print(aid.calc(tpl))This will print 2:7RJA0SqvF3nbfatPoP1dkZnVvWw=.CommunityID vs AIDAID is a modified version of Zeek's CommunityID.
The main purpose of it is to avoid the collisions
caused when CID groups flows that happened in different days together because they have common
srcip/dstip, srcport/dstport and proto. For example: if there's a flow from IP1:port1 -> IP2:port2 at 1AM 10/10/2023,
it will have the same community ID as a flow from IP1:port1 -> IP2:port2 at 9PM 10/10/2023 or even a week after,
which means that entirely different flows will have the same cid. The major difference is that AID
takes into consideration the timestamp of the flow.
AID matches exact flows across different monitors instead of correlating them.
Testing
The package includes a unittest testsuite in the tests directory
that runs without installation of the module. After changing into that
folder you can invoke it e.g. via python3 -m unittest tests/aid_test.py
Acknowledgments
This is a fork of Zeek's Community ID, the code was originally written by Christian Kreibich @ckreibich
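To make the timestamp behaviour described above concrete, here is a small sketch. It assumes, as in the usage example above, that the first argument of FlowTuple.make_tcp is the flow timestamp; the timestamp strings below are placeholders.

import aid_hash

aid = aid_hash.AID()

# Same 5-tuple, different flow timestamps
flow_a = aid_hash.FlowTuple.make_tcp('1696910400.0', '127.0.0.1', '10.0.0.1', 1234, 80)
flow_b = aid_hash.FlowTuple.make_tcp('1697256000.0', '127.0.0.1', '10.0.0.1', 1234, 80)

# Unlike a plain Community ID, the two AIDs should differ because the timestamps differ
print(aid.calc(flow_a))
print(aid.calc(flow_b))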
ai-dialog-model | nlg-deeppavlov-apiset of NLG experimentation using deep pavlov libraryFree software: Apache Software License 2.0Documentation:https://nlg-deeppavlov-api.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2019-02-06)First release on PyPI. |
aidial-sdk | AI DIAL Python SDKOverviewFramework to create applications and model adapters forAI DIAL.Applications and model adapters implemented using this framework will be compatible withAI DIAL APIthat was designed based onAzure OpenAI API.UsageInstall the library usingpip:pip install aidial-sdkEcho application exampleThe echo application example replies to the user by repeating their last message:# Save this as app.pyimportuvicornfromaidial_sdkimportDIALAppfromaidial_sdk.chat_completionimportChatCompletion,Request,Response# ChatCompletion is an abstract class for applications and model adaptersclassEchoApplication(ChatCompletion):asyncdefchat_completion(self,request:Request,response:Response)->None:# Get last message (the newest) from the historylast_user_message=request.messages[-1]# Generate response with a single choicewithresponse.create_single_choice()aschoice:# Fill the content of the response with the last user's contentchoice.append_content(last_user_message.contentor"")# DIALApp extends FastAPI to provide a user-friendly interface for routing requests to your applicationsapp=DIALApp()app.add_chat_completion("echo",EchoApplication())# Run built appif__name__=="__main__":uvicorn.run(app,port=5000)Runpython3 app.pyCheckSend the next request:curlhttp://127.0.0.1:5000/openai/deployments/echo/chat/completions\-H"Content-Type: application/json"\-H"Api-Key: DIAL_API_KEY"\-d'{"messages": [{"role": "user", "content": "Repeat me!"}]}'You will see the JSON response as:{"choices":[{"index":0,"finish_reason":"stop","message":{"role":"assistant","content":"Repeat me!"}}],"usage":null,"id":"d08cfda2-d7c8-476f-8b95-424195fcdafe","created":1695298034,"object":"chat.completion"}Developer environmentThis project usesPython>=3.8andPoetry>=1.6.1as a dependency manager.Check out Poetry'sdocumentation on how to install iton your system before proceeding.To install requirements:poetry installThis will install all requirements for running the package, linting, formatting and tests.IDE configurationThe recommended IDE isVSCode.
Open the project in VSCode and install the recommended extensions.The VSCode is configured to use PEP-8 compatible formatterBlack.Alternatively you can usePyCharm.Set-up the Black formatter for PyCharmmanuallyor
install PyCharm>=2023.2 withbuilt-in Black support.LintRun the linting before committing:makelintTo auto-fix formatting issues run:makeformatTestRun unit tests locally for available python versions:maketestRun unit tests for the specific python version:maketestPYTHON=3.11CleanTo remove the virtual environment and build artifacts run:makecleanBuildTo build the package run:makebuildPublishTo publish the package to PyPI run:makepublish |
aidios-sdk | ######Aidios SDK for Python#######
This SDK for the AIDIOS API provides an easy to use Python interface. It allows you to retrieve and store data, get digest information, and confirm data transactions.
#Installation
Using Pip:
pip install aidios_sdk#Dependencies
requests
json
(note json, and requests will be downloaded when you install aidios_sdk. 'requests' is only used in the example_usage.py#Usage
#Importing the SDK
First, import the SDK in your Python script:from aidios_sdk import AidiosAPI##Initializing the API Client
You can initialize the API client as follows:api = AidiosAPI()###Retrieve Data
To retrieve data, use the retrieve method and pass in the data ID:data_id = "some_data_id"
response = api.retrieve(data_id)
print("Retrieve Response:", response)####Store Data
Use the store method and pass a file:with open("path/to/file", "rb") as file:
file_content = file.read()data = {"file_content": file_content}
response = api.store(data)
print("Store Response:", response)#####Get Digest
To get digest information, use the digest method:data_id = "some_data_id"
response = api.digest(data_id)
print("Digest Response:", response)######Get Confirmations
Check this number of transactions and associated stats for a given (root)txid:transaction_id = "some_transaction_id"
response = api.confirmations(transaction_id)
print("Confirmations Response:", response)#######Example Script
An example script demonstrating these functionalities is included in the examples/ directory.To run the example:python examples/example_usage.pyThe test script will prompt you to perform each of the functions( store, retrieve, digest and confirmations.Have fun with aidios! |
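Putting the calls above together, a minimal hedged sketch of a round trip might look like this. It only uses the methods shown above; the file path and the IDs are placeholders, since the structure of the store response is not documented here.

from aidios_sdk import AidiosAPI

api = AidiosAPI()

# Store a local file (path is a placeholder)
with open("path/to/file", "rb") as file:
    file_content = file.read()
store_response = api.store({"file_content": file_content})
print("Store Response:", store_response)

# The calls below assume you already know a data ID and a root txid,
# e.g. taken from an earlier store response or from the digest output.
data_id = "some_data_id"
print("Digest Response:", api.digest(data_id))
print("Retrieve Response:", api.retrieve(data_id))

transaction_id = "some_transaction_id"
print("Confirmations Response:", api.confirmations(transaction_id))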
aidirectory | No description available on PyPI. |
ai-distributions | No description available on PyPI. |
ai-dive | 🤿 AI DiveAIDataIngestor,Verifier andEncoder.This library is meant to be practical examples of how to run real world AI Models given data.Installationpip install ai-diveWhy build AI-DiveIn the age of theAI Engineer, it is more likely that you will start by grabbing an off the shelf model as a starting point than training your own from scratch. That is not to say you will never train a model. It is just to say, let's verify state of the art before we go building.A wide range of AI tasks that used to take 5 years and a research team to accomplish in 2013, now just require API docs and a spare afternoon in 2023.🤿 AI-Dive let's you easily dive into the results of a model to decide whether it is worth building upon. It also gives a simple and consistent interface to run in your app or implement new models.ModelTODO: Breakup below into each partModelDatasetDiverSaverDatasetTODODiveTODOSaveTODOAll Together NowTODOModel & DatasetThere are only a two interfaces to implement to get up and running on any model or dataset.Dataset- How to iterate over dataModel- How to predict given each data pointDive & SaveThere are two helper classes to run your model given a datasetDiver- How to run each datapoint from your dataset through the model.Saver- How to save off the results of the run. Running the model and not saving the results can cost time and money.ModelsAI-Dive provides a wrapper around existing models to make them easy to run on your own data. We are not writing models from scratch, we are simply wrapping them with a consistent interface so they can be evaluated in a consistent way.fromai.dive.models.vitimportViTmodel=ViT()data={"full_path":"images/shiba_inu_1.jpg"}output=model.predict(data)print(output)There are a few models implemented already, we are looking to extend this list to new models as the come out, or allow this interface to be implemented in your package to save you time evaluating.HELP US BUILD OUT OUR MODEL LIBRARY OR IMPLEMENT YOUR OWN
TODO: Show how to do eitherVision Transformer (ViT)Llama-2Mistral-7bDalle-3Stable DiffusionMagic AnimateDatasetsModels are worthless without the data to run and evaluate them on. Sure you can poke your model with a stick by running on a single example, but the real insights come from running your model given a dataset.fromai.dive.models.vitimportViTfromai.dive.data.directory_classificationimportDirectoryClassification# Instantiate the model and datasetmodel=ViT()dataset=DirectoryClassification(data_dir="/path/to/images")# Use a Saver to write the results to a csvsaver=Saver("output.csv",output_keys=['filename','class_name','prediction','probability'],save_every=10)# Run the model on the dataset, and save the results as we godiver=Diver(model,dataset,saver=saver)results=diver.run()# The output will be a list of all the predictionsprint(results)TheDiverobject saves you the work of processing each row in the dataframe and theSavertakes care of writing all the results to disk so you can compare them across runs.With plug and play models and datasets, the hope is anyone can evaluate a model against any dataset and share the results quickly and effectively.Model InterfaceTODODataset InterfaceA dataset has to implement two methods__len__and__getitem__so that we can iterate over it. If it implements_build, you can load everything into memory to make the other calls faster.Here is an example dataset that iterates over a directory of images with the folder names as classnames.Example directory structure:images/
cat/
1.jpg
2.jpg
dog/
1.jpg
2.jpg
3.jpg
Example data loader:
from ai.dive.data.dataset import Dataset
import os

class DirImageClassification(Dataset):
    def __init__(self, data_dir):
        super().__init__()
        self.data_dir = data_dir

    # For iterating over the dataset
    def __len__(self):
        return len(self.filepaths)

    # For iterating over the dataset
    def __getitem__(self, idx):
        return {"filepath": self.filepaths[idx], "class_name": self.labels[idx]}

    # Override this function to load the dataset into memory for fast access
    def _build(self):
        # iterate over files in directory, taking the directory name as the label
        labels = []
        filepaths = []
        for root, dirs, files in os.walk(self.data_dir):
            for file in files:
                if file.endswith(".jpg") or file.endswith(".png"):
                    labels.append(os.path.basename(root))
                    filepaths.append(os.path.join(root, file))
        self.labels = labels
        self.filepaths = filepaths
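The Model interface section above is still marked TODO. As a rough, hedged sketch: based on the ViT usage shown earlier, the Diver appears to only need an object exposing a predict(data) method that returns a dict, so a duck-typed stand-in might look like the following. The output keys mirror the ones listed in the Saver example above; a real implementation would presumably subclass the package's Model base class, whose import path is not documented here.

class KeywordClassifier:
    # Duck-typed stand-in for a Model: anything with a predict(data) method
    def predict(self, data):
        text = data.get("text", "")
        is_match = "dive" in text.lower()
        return {
            "prediction": "dive" if is_match else "other",
            "probability": 1.0 if is_match else 0.0,
        }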
ai-django-core | Disclaimer: Package was superseded by ambient-toolbox!This package was renamed, moved and deprecated under the old name. The successor is the "Ambient Toolbox".PyPIMigration docsOverviewThis package contains various useful helper functions. You can read up on all the fancy things atreadthedocs.io.InstallationInstall the package via pip:pip install ai-django-coreor via pipenv:pipenv install ai-django-coreAdd module toINSTALLED_APPSwithin the main djangosettings.py:INSTALLED_APPS = (
...
'ai_django_core',
)ContributeSetup package for developmentCreate a Python 3.11 virtualenvActivate the virtualenv (take care that you needScripts/activate.ps1for Windows users instead ofScripts/activate)Install dependencies withpip install .[dev,docs,view-layer,drf,graphql]Add functionalityClone the project locallyCreate a new branch for your featureChange the dependency in your requirements.txt to a local (editable) one that points to your local file system:-e /Users/workspace/ai-django-coreor via pippip install -e /Users/workspace/ai-django-coreEnsure the code passes the testsCreate a pull requestRun testsCheck coveragepytest --cov=.Run testspytestGit hooks (via pre-commit)We use pre-push hooks to ensure that only linted code reaches our remote repository and pipelines aren't triggered in
vain.To enable the configured pre-push hooks, you need toinstallpre-commit and run once:pre-commit install -t pre-push -t pre-commit --install-hooksThis will permanently install the git hooks for both, frontend and backend, in your local.git/hooksfolder.
The hooks are configured in the.pre-commit-config.yaml.You can check whether hooks work as intended using theruncommand:pre-commit run [hook-id] [options]Example: run single hookpre-commit run ruff --all-files --hook-stage pushExample: run all hooks of pre-push stagepre-commit run --all-files --hook-stage pushUpdate documentationTo generate new auto-docs for new modules run:sphinx-apidoc -o ./docs/modules/ ./ai_django_core/(in the current
set up an auto doc for the antivirus module is not supported due to installation and import problems. Since it might
be removed in the future, that should be fine for now).To build the documentation run:sphinx-build docs/ docs/_build/html/or go into thedocsfolder and
run:make html. Opendocs/_build/html/index.htmlto see the documentation.Translation filesIf you have added custom text, make sure to wrap it in_()where_is
gettext_lazy (from django.utils.translation import gettext_lazy as _).How to create translation file:Navigate toai_django_core/ai_django_core(the inner directory!)python manage.py makemessages -l deHave a look at the new/changed files withinai_django_core/ai_django_core/localeHow to compile translation files:Navigate toai_django_core/ai_django_core(the inner directory!)python manage.py compilemessagesHave a look at the new/changed files withinai_django_core/ai_django_core/localePublish to ReadTheDocs.ioFetch the latest changes in GitHub mirror and push themTrigger new build at ReadTheDocs.io (follow instructions in admin panel at RTD) if the GitHub webhook is not yet set
up.Publish to PyPiUpdate documentation about new/changed functionalityUpdate theChangelogIncrement version in main__init__.pyCreate pull request / merge to masterThis project uses the flit package to publish to PyPI. Thus publishing should be as easy as running:flit publishTo publish to TestPyPI use the following ensure that you have set up your .pypirc as
shownhereand use the following command:flit publish --repository testpypi |
ai-django-fileupload | Overview:This package provides an easy way to implement thejQuery Fileuploader, in your Django project.The uploaded files are stored through the Attachment model. Attachments could be linked to any model.Additionally, a list of attachments is rendered along with the uploader button. These attachments have a convenient delete feature.Installation:Add a requirement to your requirements.txt:ai-django-fileuploadAdd module toINSTALLED_APPS:fileupload.apps.FileuploadConfigAdd module's urls to your url file:url(r'^upload/', include('fileupload.urls')),Add static files. They are not included in this package, though a convenientnpm packageis provided.npm install ai-django-fileuploadRun migrationsSettingsDefault thumbnail imageThe uploader comes with a default thumbnail image, in case it couldn't be generated.You can set your own one, adding its location to the settings file:UPLOADER_DEFAULT_THUMBNAIL = '/static/img/default-thumbnail.png'Otherwise it'd be fetched from "static/node_modules/ai-django-fileupload/img/default-thumbnail.png". You may find useful to copy this image wherever your static content is stored.Login requiredIf your uploader needs the user be authenticated, you can enable this restriction adding this to the settings file:UPLOADER_LOGIN_REQUIRED = TruePersist filenamesThe uploader prepends a UUID to the filename. You can disable this by adding this to the settings file:UPLOADER_PERSIST_FILENAME = TrueUsage:Include the upload_file template tag in your template:{% load upload_file %}Call it with the object that the uploaded files will be attached to:{% upload_file obj=object %}Make sure to put the template tag outside any other form tags you have since it will render a new form.For a minimal setup, please load the following files. Scripts order is important.<link rel="stylesheet" type="text/css" href="node_modules/bootstrap/dist/css/bootstrap.min.css"/>
<link rel="stylesheet" type="text/css" href="node_modules/blueimp-file-upload/css/jquery.fileupload.css">
<!-- jQuery -->
<script src="node_modules/jquery/dist/jquery.js"></script>
<!-- The jQuery UI widget factory, can be omitted if jQuery UI is already included -->
<script src="node_modules/blueimp-file-upload/js/vendor/jquery.ui.widget.js"></script>
<!-- The Templates plugin is included to render the upload/download listings -->
<script src="node_modules/blueimp-tmpl/js/tmpl.min.js"></script>
<!-- The Load Image plugin is included for the preview images and image resizing functionality -->
<script src="node_modules/blueimp-load-image/js/load-image.all.min.js"></script>
<!-- The Canvas to Blob plugin is included for image resizing functionality -->
<script src="node_modules/blueimp-canvas-to-blob/js/canvas-to-blob.min.js"></script>
<!-- The basic File Upload plugin and components-->
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-process.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-image.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-audio.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-video.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-validate.js"></script>
<script src="node_modules/blueimp-file-upload/js/jquery.fileupload-ui.js"></script>
<!-- Locale -->
<script src="node_modules/ai-django-fileupload/locale.js"></script>
<!-- CSRF token -->
<script src="node_modules/ai-django-fileupload/csrf.js"></script>
<!-- The main application script -->
<script src="node_modules/ai-django-fileupload/index.js"></script>
<!-- The XDomainRequest Transport is included for cross-domain file deletion for IE8+ -->
<!--[if gte IE 8]>
<script src="node_modules/blueimp-file-upload/js/cors/jquery.xdr-transport.js"></script>
<![endif]-->ContributeClone the project locallyCreate a new branch for your featureChange the dependency in your requirements.txt to a local (editable) one that points to your local file system:-e /Users/felix/Documents/workspace/ai-django-fileuploadEnsure the code passes the testsRun:python setup.py developCreate a pull requestPublish to PyPiIncrement version insetup.pyUpdateChangeloginReadme.mdCreate pull request / merge to masterRun:Make sure you have all the required packages installedpip install twine wheelCreate a file in your home directory:~/.pypirc[distutils]
index-servers=
pypi
testpypi
[pypi]
repository: https://upload.pypi.org/legacy/
username: ambient-innovation
[testpypi]
repository: https://test.pypi.org/legacy/
username: ambient-innovationEmptydistdirectoryCreate distributionpython setup.py sdist bdist_wheelUpload to Test-PyPitwine upload --repository testpypi dist/*Check at Test-PyPi if it looks niceUpload to real PyPitwine upload dist/*TestsInstall requirementspip install -r requirements.pipCheck coveragepytest --cov=fileupload fileuploadRun testspytestInternationalizationActivate a virtualenv with django installedGo to fileupload appRundjango-admin makemessages -l <language_code>Translate the .po fileRundjango-admin compilemessagesChangelog0.1.5(2021-12-07)Addedexcludefunctionality where attachments can be excluded from the response by ids0.1.4(2021-03-01)Fixed a bug where the upload_to method would cause a RecursionError when using the persist-filename optionAdded file type restriction configurable with UPLOADER_ALLOW_FILETYPE option0.1.3(2021-02-22)Added option to persist filename0.1.2(2019-12-11)Description altered0.1.1(2019-12-11)Updated Readme.md0.1.0(2019-11-19)Added support for django 2.2.* |
aidkit | aidkit is the quality gate between machine learning models and the deployment of those models.InstallationActivate your virtual environment with python 3.6, e.g.source venv/bin/activatepip install aidkitExample UsageAuthenticateThe only requirement for using aidkit is having a license for it.To authenticate, you need to run the following once:python-maidkit.authenticate--url<subdomain>.aidkit.ai--token<yourauthtoken>ModelYou can upload a model to aidkit, or list the names of all models
uploaded.For uploading, you need a keras .h5 file, that contains a LSTM
architecture. Do the following to upload it:python-maidkit.model--file<pathtoyourh5file>To list all uploaded models type:python-maidkit.modelDataYou can upload a data set to aidkit, or list the names of all datasets
uploaded.For uploading, you need a zip file.
We expect a zip, containing a folder, that is named like the dataset
should be called. This subfolder contains INPUT and OUTPUT folders
that each contain csv files. Do the following to upload it:python-maidkit.data--file<pathtoyourzipfile>To list all uploaded datasets type:python-maidkit.dataAnalysisYou can start a new quality analysis. For doing so, you need a toml file.
This file will follow a specified toml standard. Do the following to upload it:python-maidkit.analysis--file<pathtoyourtomlfile>To list all uploaded datasets type:python-maidkit.analysisVisualizationAfter running an analysis you can observe the results in our web-GUI. to get the link type:python-maidkit.urlJust follow the link and authorize yourself with your credentials. |
aidkitcli | aidkit is the quality gate between machine learning models and the deployment of those models.InstallationActivate your virtual environment with python 3.6, e.g.source venv/bin/activatepip install aidkitExample UsageAuthenticateThe only requirement for using aidkit is having a license for it.To authenticate, you need to run the following once:python-maidkitcli.authenticate--url<subdomain>.aidkitcli.ai--token<yourauthtoken>ModelYou can upload a model to aidkit, or list the names of all models
uploaded.For uploading, you need a keras .h5 file, that contains a LSTM
architecture. Do the following to upload it:python-maidkitcli.model--file<pathtoyourh5file>To list all uploaded models type:python-maidkitcli.modelDataYou can upload a data set to aidkit, or list the names of all datasets
uploaded.For uploading, you need a zip file.
We expect a zip, containing a folder, that is named like the dataset
should be called. This subfolder contains INPUT and OUTPUT folders
that each contain csv files. Do the following to upload it:python-maidkitcli.data--file<pathtoyourzipfile>To list all uploaded datasets type:python-maidkitcli.dataAnalysisYou can start a new quality analysis. For doing so, you need a toml file.
This file will follow a specified toml standard. Do the following to upload it:python-maidkitcli.analysis--file<pathtoyourtomlfile>To list all uploaded datasets type:python-maidkitcli.analysisVisualizationAfter running an analysis you can observe the results in our web-GUI. to get the link type:python-maidkitcli.urlJust follow the link and authorize yourself with your credentials. |
aidkit-client | aidkit is an MLOps platform that allows you to assess and defend against threats
and vulnerabilities of AI models before they are deployed to production.
aidkit-client is a companion python client library to seamlessly integrate with
aidkit in python projects. |
aidkitHW | aidkitHWThis repo is for a coding challenge to interface various frameworks eg pytorch (fastai), tensorflow, onnx and inferencing onnx-runtime. The work is best followed on Linux OS with jupyter notebooks.Some prerequisites to follow along:pip install jupyter notebookIn case there's problems installing onnx:sudo apt-get install protobuf-compiler libprotoc-devTo useonnx-tensorflowfor tensorflow-1.x install it using:git clone https://github.com/onnx/onnx-tensorflow.git && cd onnx-tensorflow
git checkout tf-1.x
pip install -e .The package can be installed via pip:pip install aidkitHW |
aidkitmlevaluate | No description available on PyPI. |
aido | aido |
aido-agents-daffy | No description available on PyPI. |
aido-agents-daffy-aido4 | No description available on PyPI. |
aido-analyze-daffy | No description available on PyPI. |
aido-analyze-daffy-aido4 | No description available on PyPI. |
aidoc | aidocaidoc is a command line interface (CLI) tool that uses AI to automatically generate documentation for your code.RequirementsOpenAI API key:https://beta.openai.com/Python 3.6 or higherInstallationglobally install the packagepython3 -m pip install aidoc(recommended) create a virtual environment and install the packagepipinstallaidocUsageTo configure the API key and model for aidoc, run the following command:aidoc configureTo generate documentation for a source file or directory, run the following command:aidoc gen <source_file>You can also specify the following optional arguments:-oor--overwrite: Overwrite existing docstrings-for--format: Format the entire source file using black (default=True)-pror--pull-request: Create a pull request with the changesExamplesGenerate docstrings for the main.py file:aidoc gen main.pyGenerate docstrings for all Python files in the src directory and its subdirectories:aidoc gen srcGenerate docstrings and create a pull request with the changes:aidoc gen main.py -prContributingPull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.LicenseThis project is licensed under the GNU General Public License v3.0 - see theLICENSEfile for details.AcknowledgmentsOpenAIDisclaimerThis project is not affiliated with OpenAI. The OpenAI API and GPT-3 language model are not free. You will need to sign up for a freeOpenAIaccount and create an API key to use this tool. |