aitk.robots
A lightweight Python robot simulator for JupyterLab, Notebooks, and other Python environments.

Goals
- A lightweight mobile robotics simulator
- Usable in the classroom, research, or exploration
- Explore wheeled robots with range, cameras, smell, and light sensors
- Operate quickly without a huge amount of resources
- Create reproducible experiments
- Designed for exposition, experimentation, and analysis
- Sensors designed for somewhat realistic problems (such as image recognition)
- Especially designed to work easily with Machine Learning and Artificial Intelligence systems

Installation
For the core operations, you will need to install just aitk.robots:

pip install aitk.robots

To use the Jupyter enhancements, you'll also need the browser-based extensions. You can install those with:

jupyter labextension install @jupyter-widgets/jupyterlab-manager

If not in a conda environment, then you will also need to:

jupyter nbextension enable --py widgetsnbextension

For additional information, please see: aitk, aitk.robots, aitk.networks, aitk.utils
aitk.utils
Utils for AI
ait-learners
Learners Environment
Web interface for accessing CR exercises.

Build
python3 -m build

Install
# pip
pip install ait-learners
# docker
docker pull ghcr.io/ait-cs-iaas/learners

Run
gunicorn --bind 127.0.0.1:5000 learners:app
# or
flask run
# or
docker-compose up -d

Develop
pip install -e .
aitlTest
No description available on PyPI.
aitl-test-pkg
No description available on PyPI.
aitoai
Info: Check Github for the latest source code and documentation for more information.
Maintainer: Aito <[email protected]>

About
The Aito Python SDK is an open-source library that helps you integrate your Python application with Aito more quickly and efficiently. The SDK also includes the Aito Command Line Interface (CLI) that enables you to interact with Aito using commands in your command-line shell, e.g. infer a table schema from a file or upload a file to Aito.

Support / Feedback
For issues, questions, or feedback, please join our support channels.

Installation
The Aito Python SDK can be installed with pip:

$ pip install aitoai

Check our installation guide for more information.

Documentation
You will need sphinx installed to generate the documentation.

$ pip install sphinx
$ pip install sphinx_rtd_theme

Documentation can be generated by:

$ cd docs
$ make clean html

Generated documentation can be found in the docs/build/html/* directory.

Dependencies
The Python SDK SQL integration and the tests depend on the ODBC database drivers and the Postgres and MySQL functionality. The main drivers can be installed on Debian-based Linux distributions with:

sudo apt install unixodbc-dev odbc-postgresql postgresql-client default-mysql-client
pip install pyodbc

You can find more information about installing the MySQL ODBC drivers here: https://dev.mysql.com/doc/connector-odbc/en/connector-odbc-installation.html
Also note that the database integration tests require local Postgres / MySQL servers.

Environment
To run the tests, you need to have the following environment variables defined:
- AITO_GROCERY_DEMO_INSTANCE_URL
- AITO_GROCERY_DEMO_API_KEY
- AITO_INSTANCE_URL
- AITO_API_KEY

Testing
Install the required packages:

$ pip install -r requirements/build.txt
$ pip install -r requirements/test.txt

You can use our test CLI to run tests:

$ python -m tests -h

Note: Some tests require some environment variables to be set up. Use the test CLI list command to discover and display the test suites and cases.

There are multiple test suites:

CLI suite:
$ python -m tests -v suite cli
Note: You need an Aito instance and to set up the credentials with AITO_INSTANCE_URL and AITO_API_KEY to test some functions.

SDK suite:
$ python -m tests -v suite sdk
Note: You need an Aito instance and to set up the credentials with AITO_INSTANCE_URL and AITO_API_KEY to test some functions.

SQL functions tests:
Test against Postgres:
$ python -m tests -v case sql_functions.test_connection.TestPostgresConnection
$ python -m tests -v case sql_functions.test_cli_sql_functions.TestPostgresFunctions

Test against MySQL:
$ python -m tests -v case sql_functions.test_connection.TestMySQLConnection
$ python -m tests -v case sql_functions.test_cli_sql_functions.TestMySQLFunctions

Note: To test the SQL functions, you need to install pyodbc and the specific database ODBC driver.

Build and test the built package:
To build the package:

$ pip install -r requirements/deploy.txt
$ ./scripts/deploy test.pypi --no-publish -b dev

WARNING: This will update the version in the 'aito/__init__' file. DO NOT commit this to Git!

The built wheel should be at dist/. To install:

$ pip install dist/aitoai-<version>.whl

To test the built package, create an environment variable TEST_BUILT_PACKAGE and run the above tests.

Test the documentation:
To test the inline documentation:

$ python -m tests -v suite inline_docs

To test the code blocks in rst files:

$ cd docs
$ curl -o reddit_sample.csv https://raw.githubusercontent.com/AitoDotAI/kickstart/master/reddit_sample.csv
$ export SPHINX_DISABLE_MOCK_REQUIREMENTS=TRUE
$ make doctest

Contributing
Making changes:
- Add a unit test to the appropriate test suite (e.g. a test case for the Aito Schema: tests/sdk/test_aito_schema.py) or create a new test suite inside the tests folder.
- Add documentation:
  - Inline documentation if applicable.
  - Update the .rst file in the docs/source folder (e.g. add a new Client function to docs/source/sdk.rst).
  - Add an inline doc test if applicable (e.g. the inline doc test for the Aito Client: tests/inline_docs/test_client_inline_docs.py).
  - Note: Remember to update autodoc_mock_imports in the conf.py file if there are additional requirements.
- Update the version in aito/__init__ to e.g. '1.2.3rc1' and write release notes in docs/changelog.rst.
- Check CircleCI and issue a PR.
- Deploy to production (scripts/deploy) with the appropriate version after the PR is reviewed.
aitomatic
WebModel Library User Manual

The WebModel library is a tool for building, tuning, and running inference with models built with the Aitomatic system. The target users of this library are AI engineers who use the Aitomatic system.

Requirements
- Python 3.9 or higher
- requests library
- pandas library
- numpy library
- tqdm library

Installation
The WebModel library can be installed using pip:

pip install 'aitomatic>=1.2.0' --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple

Quick Start
To get started, you can create a WebModel object by passing in the model name and API token:

from aitomatic.api.web_model import WebModel

# load model
API_ACCESS_TOKEN = '<API_ACCESS_TOKEN>'
project_name = "<project name>"
model_name = "<model name>"
model = WebModel(api_token=API_ACCESS_TOKEN, project_name=project_name, model_name=model_name)
model.load()

# view model training statistics and info
print(model.stats)

# run model inference
data = {'X': my_dataframe}
response = model.predict({'X': data})
print(response['predictions'])

Methods
The WebModel class provides several methods for working with the model:

Constructor
model_names = WebModel.get_model_names(api_token="YOUR_API_TOKEN", project_name="MyProject")

Load model
load sets up the model by loading all parameters from the Aitomatic model repo.

model.load()

Return: the model with loaded params.

Predict
The predict method takes a dictionary as input with the data you want to make predictions on. The input data should be a pandas DataFrame, Series, or numpy array with the key "X". The method returns a dictionary with the predictions, with the key "predictions".

response = model.predict(input_data={'X': df})

- input_data: input data for prediction, a dictionary with the data under key 'X'.
- Return: result of the prediction call in a dictionary where the actual result is under the prediction key.

Tuning
tune_model is a static method to generate multiple versions of a given model with the set of input params.

tune_model(
    project_name=PROJECT_NAME,
    base_model=BASE_MODEL_NAME,
    conclusion_tuning_range=conclusion_threshold_ranges,
    ml_tuning_params=ML_MODELS_PARAMS,
    output_model_df_path='tuning.parquet',
    wait_for_tuning_to_complete=True,
    prefix="[HUNG7]",
)

- project_name: A string containing the name of the Aitomatic project to use.
- base_model: A string containing the name of the base model to use.
- conclusion_tuning_range: A dictionary specifying the range of values to use for the final layer of the tuned model.
- ml_tuning_params: A dictionary specifying the AutoML tuning parameters to use.
- output_model_df_path: A string specifying the path to save the resulting DataFrame containing the tuned model's hyperparameters and performance.
- wait_for_tuning_to_complete: A boolean specifying whether to wait for the tuning process to complete before returning. Default is True.
- prefix: A string containing a prefix to add to the name of the new model. Default is "finetune".
- Return: A pandas DataFrame containing the hyperparameters and performance of the tuned model.

Log model metrics
log_metrics saves a model metric after evaluation.

model.log_metrics("accuracy", 0.95)

Get models in project (static)
model_names = WebModel.get_model_names(api_token="YOUR_API_TOKEN", project_name="MyProject")

- api_token: A string containing the access token for the Aitomatic API. If not provided, the AITOMATIC_API_TOKEN environment variable will be used.
- project_name: A string containing the name of the Aitomatic project to use. If not provided, the AITOMATIC_PROJECT_ID environment variable will be used.
- Return: a list of the names of all models in the specified project.
aitool
Note: the task_customized module in this BytedAITool branch contains business code and is not open source. Inside the company, please install with pip install bytedaitool to get additional capability interfaces; outside the company, only pip install aitool is available, which supports only the basic algorithm interfaces.

AITool is designed to improve the efficiency of algorithm development:
- Hundreds of utility functions (data processing, multiprocessing, timers, etc.)
- Hundreds of algorithms (including dynamic programming, statistical models, deep learning, and more)
- Every algorithm comes with its own data, making it easy to verify and use.

aitool - official homepage - documentation

Contents: Motivation, Installation, Getting Started, Communication, Releases and Contributing, The Team

Motivation

Installation
Note: requires Python version >= 3.5

pip install aitool --upgrade

Getting Started
# Todo

Communication

Releases and Contributing

The Team

License
This project is released under the MIT open-source license; the full license text is in the LICENSE file.
ai-tool
ai_tool: AI tools

slice picture

import cv2
from ai_tool.img_slide import yield_sub_img

# yield the sub images from the jpg
for bbox, sub_img in yield_sub_img("test.jpg", 0, 0, 180, 60):
    clip = "-".join([str(x) for x in bbox])
    print("sub img: {}".format(clip))
    cv2.imshow(clip, sub_img)
    cv2.waitKey(0)

IoU
Compute the IoU for two boxes. For example, box1 is 1, 2, 101, 102: location (1, 2) is the upper-left corner and location (101, 102) is the lower-right corner.

from ai_tool.bbox import BBox

bbox1 = BBox([1, 2, 101, 102])
bbox2 = BBox([11, 12, 121, 122])
iou = bbox1 / bbox2
print("iou", iou)
assert iou > 0.5
print('box1 S is', bbox1.S)
print('box1 & box2', bbox1 & bbox2)
print('box1 == box2', bbox1 == bbox2)
print('merge box1 + box2', bbox1 + bbox2)
print('merge box1 | box2', bbox1 | bbox2)

The result is:

iou 0.5785714285714286
box1 S is 10000
box1 & box2 [11, 12, 101, 102]
box1 == box2 True
merge box1 + box2 [1, 2, 121, 122]
merge box1 | box2 [1, 2, 121, 122]

multi bbox operation

from ai_tool.bbox import BBoxes, BBox

bb1 = BBoxes(iou_thresh=0.6)
bb2 = BBoxes()
bb1.append([1, 2, 101, 102])
bb1.append([1000, 2, 1101, 102])
bb2.append([11, 12, 111, 112])
bb2.append([1, 1002, 101, 1102])

# judge whether a bbox is in bb1
print("[5, 5, 100, 100] in bb1", BBox([5, 5, 100, 100]) in bb1)
print("[100, 5, 200, 100] in bb1", BBox([100, 5, 200, 100]) in bb1)

# bb1 & bb2
print("bb1 & bb2", bb1 & bb2)
print("bb1 - bb2", bb1 - bb2)
print("bb2 - bb1", bb2 - bb1)

The result is:

[5, 5, 100, 100] in bb1 True
[100, 5, 200, 100] in bb1 False
bb1 & bb2 [[1, 2, 101, 102]]
bb1 - bb2 [[1000, 2, 1101, 102]]
bb2 - bb1 [[1, 1002, 101, 1102]]
aitoolbox
AI Toolbox

Documentation

AIToolbox is a framework which helps you train deep learning models in PyTorch and quickly iterate experiments. It hides the repetitive technicalities of training the neural nets and frees you to focus on the interesting part of devising new models. In essence, it offers a keras-style train loop abstraction which can be used for the higher-level training process while still allowing manual control on the lower level when desired.

In addition to orchestrating the model training loop, the framework also helps you keep track of different experiments by automatically saving models in a structured, traceable way and creating performance reports. These can be stored both locally or on AWS S3 (Google Cloud Storage in beta), which makes the library very useful when training on a GPU instance on AWS. The instance can be automatically shut down when training is finished and all the results are safely stored on S3.

Installation
To install the AIToolbox package execute:

pip install aitoolbox

If you want to install the most recent version from the github repository, first clone the package repository and then install via the pip command:

git clone https://github.com/mv1388/aitoolbox.git
pip install ./aitoolbox

The AIToolbox package can also be provided as a dependency in the requirements.txt file. This can be done by just specifying the aitoolbox dependency. On the other hand, to automatically download the current master branch from github, include the following dependency specification in the requirements.txt:

git+https://github.com/mv1388/aitoolbox#egg=aitoolbox

TrainLoop
TrainLoop is the main abstraction for PyTorch neural net training. At its core it handles the batch feeding of data into the model, calculating loss and updating parameters for a specified number of epochs. To learn how to define a TrainLoop-supported PyTorch model please look at the Model section below.

After the model is created, the simplest way to train it via the TrainLoop abstraction is by doing the following:

from aitoolbox.torchtrain.train_loop import *

tl = TrainLoop(model, train_loader, val_loader, test_loader, optimizer, criterion)
model = tl.fit(num_epochs=10)

AIToolbox includes a few more advanced derivations of the basic TrainLoop which automatically handle experiment tracking by creating model checkpoints, performance reports, example predictions, etc. All of this can be saved just on the local drive or can also be automatically stored on AWS S3. Currently implemented advanced TrainLoops are TrainLoopCheckpoint, TrainLoopEndSave and TrainLoopCheckpointEndSave. Here, 'Checkpoint' stands for checkpointing after each epoch, while 'EndSave' will only persist and evaluate at the very end of the training.

For the most complete experiment tracking it is recommended to use the TrainLoopCheckpointEndSave option. The optional use of the result packages needed for the neural net performance evaluation is explained in the experiment section below.

from aitoolbox.torchtrain.train_loop import *

TrainLoopCheckpointEndSave(
    model,
    train_loader, validation_loader, test_loader,
    optimizer, criterion,
    project_name, experiment_name, local_model_result_folder_path,
    hyperparams,
    val_result_package=None, test_result_package=None,
    cloud_save_mode='s3', bucket_name='models', cloud_dir_prefix='',
)

Check out a full TrainLoop training & experiment tracking example.

Multi-GPU training
All TrainLoop versions, in addition to single-GPU, also support multi-GPU training to achieve even faster training. Following the core PyTorch setup, two multi-GPU training approaches are available: DataParallel and DistributedDataParallel.

DataParallel (DP)
To use DataParallel-like multi-GPU training with TrainLoop, just set the TrainLoop's gpu_mode parameter to 'dp':

from aitoolbox.torchtrain.train_loop import *

model = ...  # TTModel

TrainLoop(
    model,
    train_loader, val_loader, test_loader,
    optimizer, criterion,
    gpu_mode='dp',
).fit(num_epochs=10)

Check out a full DataParallel training example.

DistributedDataParallel (DDP)
Distributed training on multiple GPUs via DistributedDataParallel is enabled by the TrainLoop itself under the hood by wrapping the model (TTModel, more in the Model section) into DistributedDataParallel. TrainLoop also automatically spawns multiple processes and initializes them. Inside each spawned process the model and all other necessary training components are moved to the correct GPU belonging to a specific process. Lastly, TrainLoop also automatically adds the PyTorch DistributedSampler to each of the provided data loaders in order to ensure different data batches go to different GPUs and there is no overlap.

To enable distributed training via DistributedDataParallel, the user has to set the TrainLoop's gpu_mode parameter to 'ddp'.

from aitoolbox.torchtrain.train_loop import *

model = ...  # TTModel

TrainLoop(
    model,
    train_loader, val_loader, test_loader,
    optimizer, criterion,
    gpu_mode='ddp',
).fit(num_epochs=10, callbacks=None, ddp_model_args=None,
      num_nodes=1, node_rank=0, num_gpus=torch.cuda.device_count())

Check out a full DistributedDataParallel training example.

Automatic Mixed Precision training (AMP)
All the TrainLoop versions also support training with Automatic Mixed Precision (AMP). In the past this required using the Nvidia apex extension, but from PyTorch 1.6 onwards AMP functionality is built into core PyTorch and no separate installation is needed. The current version of AIToolbox already supports the use of built-in PyTorch AMP.

The user only has to set the TrainLoop parameter use_amp to use_amp=True in order to use the default AMP initialization and start training the model in mixed precision mode. If the user wants to specify custom AMP GradScaler initialization parameters, these should be provided as a dict parameter use_amp={'init_scale': 2.**16, 'growth_factor': 2.0, ...} to the TrainLoop. All AMP initializations and training-related steps are then handled automatically by the TrainLoop.

You can read more about different AMP details in the PyTorch AMP documentation.

Single-GPU mixed precision training
Example of single-GPU AMP setup:

from aitoolbox.torchtrain.train_loop import *

model = ...  # TTModel

TrainLoop(model, ..., optimizer, criterion, use_amp=True).fit(num_epochs=10)

Check out a full AMP single-GPU training example.

Multi-GPU DDP mixed precision training
When training in the multi-GPU setting, the setup is mostly the same as in the single-GPU case. All the user has to do is set the use_amp parameter of the TrainLoop accordingly and switch its gpu_mode parameter to 'ddp'. Under the hood, TrainLoop will initialize the model and the optimizer for AMP and start training using the DistributedDataParallel approach.

Example of multi-GPU AMP setup:

from aitoolbox.torchtrain.train_loop import *

model = ...  # TTModel

TrainLoop(model, ..., optimizer, criterion, gpu_mode='ddp', use_amp=True).fit(num_epochs=10)

Check out a full AMP multi-GPU DistributedDataParallel training example.

Model
To take advantage of the TrainLoop abstraction, the user has to define their model as a class, which is the standard way in core PyTorch as well. The only difference is that for TrainLoop-supported training the model class has to inherit from the AIToolbox-specific TTModel base class instead of PyTorch nn.Module. TTModel itself inherits from the normally used nn.Module class, thus our models still retain all the expected PyTorch functionality. The reason for using the TTModel super class is that TrainLoop requires users to implement two additional methods which describe how each batch of data is fed into the model when calculating the loss in training mode and when making predictions in evaluation mode.

The code below shows the general skeleton all TTModels have to follow to enable them to be trained with the TrainLoop:

from aitoolbox.torchtrain.model import TTModel

class MyNeuralModel(TTModel):
    def __init__(self):
        # model layers, etc.

    def forward(self, x_data_batch):
        # The same method as required in the base PyTorch nn.Module
        ...
        # return prediction

    def get_loss(self, batch_data, criterion, device):
        # Get loss during training stage, called from fit() in TrainLoop
        ...
        # return batch loss

    def get_predictions(self, batch_data, device):
        # Get predictions during evaluation stage
        # + return any metadata potentially needed for evaluation
        ...
        # return predictions, true_targets, metadata

Callbacks
For advanced applications the basic logic offered in the different default TrainLoops might not be enough. Additional needed logic can be injected into the training procedure by using callbacks and providing them as a parameter list to TrainLoop's fit(callbacks=[callback_1, callback_2, ...]) function.

AIToolbox by default already offers a wide selection of different useful callbacks. However, when some completely new functionality is desired, the user can also implement their own callbacks by inheriting from the base callback object AbstractCallback. All that the user has to do is implement the corresponding methods to execute the new callback at the desired point in the train loop, such as: start/end of batch, epoch, training.

experiment

Result Package
This is the definition of the model evaluation procedure on the task we are experimenting with. Result packages available out of the box can be found in the result_package module, where we have implemented several basic, general result packages. Furthermore, for those dealing with NLP, result packages for several widely researched NLP tasks such as translation and QA can be found as part of the NLP module. Last but not least, as the framework was built with extensibility in mind, if needed the users can easily define their own result packages with custom evaluations by extending the base AbstractResultPackage.

Under the hood the result package executes one or more metrics objects which actually perform the performance metric calculation. The result package object is thus used as a wrapper around potentially multiple performance calculations which are needed for our task. The metrics which are part of the specified result package are calculated by calling the prepare_result_package() method of the result package which we are using to evaluate the model's performance.

Experiment Saver
The experiment saver saves the model architecture as well as model performance evaluation results and training history. This can be done at the end of each epoch as model checkpointing or at the end of training.

Normally this is not really a point of great interest when using the TrainLoop interface, as it is hidden under the hood. However, as AIToolbox was designed to be modular, one can decide to write their own training loop logic and just use the provided experiment saver module to help with the experiment tracking and model saving. For PyTorch users we recommend using the FullPyTorchExperimentS3Saver, which has also been most thoroughly tested. The experiment is saved by calling the save_experiment() function from the selected experiment saver and providing the trained model and the evaluated result package containing the calculated performance results.

cloud
All of these modules are mainly hidden under the hood when using the different experiment tracking abstractions. However, if desired and only the cloud saving functionality is needed, it is easy to use them as standalone modules in some desired downstream application.

AWS
Functionality for saving model architecture and training results to S3 either during training or at the training end. On the other hand, the module also offers dataset downloading from the S3-based dataset store. This is useful when we are experimenting with datasets and have only a slow local connection, thus scp/FTP is out of the picture.

Google Cloud
Same functionality as for AWS S3 but for Google Cloud Storage. Implemented, however, not yet tested in practice.

nlp
Currently mainly used for the performance evaluation result packages needed for different NLP tasks, such as Q&A, summarization, machine translation.

For the case of e.g. NMT, the module also provides attention heatmap plotting, which is often helpful for gaining additional insights into the seq2seq model. The heatmap plotter creates attention heatmap plots for every validation example and saves them as pictures to disk (potentially also to cloud).

Lastly, the nlp module also provides several rudimentary NLP data processing functions.

AWS GPU instance prep and management bash scripts
As some of the tasks when training models on an AWS cloud GPU are quite repetitive, the package also includes several useful bash scripts to automate tasks such as instance initialization and bootstrapping, experiment file updating, remote AIToolbox installation updating, etc. For further information look into the /bin/AWS folder and read the provided README.

Examples of package usage
Look into the /examples folder for starters. Will be adding more examples of different training scenarios.
ai-toolbox
No description available on PyPI.
ai-toolkit
Motivation
When working on ML projects, especially supervised learning, there tends to be a lot of repeated code, because in every project we always want a way to checkpoint our work, visualize loss curves in tensorboard, add additional metrics, and see example output. In some projects we are able to do this better than in others. Ideally, we want to have some way to consolidate all of this code into a single place.

The problem is that PyTorch examples are generally not very similar. Like most data exploration, we want the ability to modify every part of the codebase to handle different loss metrics, different types of data, or different visualizations based on our data dimensions. Combining everything into a single repository often overcomplicates the underlying logic (making the training loop extremely unreadable, for example). We want to strike a balance between extremely minimalistic / readable code that makes it easy to add extra functionality when needed.

This project is for developers or ML scientists who want the features of a fully-functioning ML pipeline from the beginning. Each project comes with consistent styling and an opinionated way of handling logging, metrics, and checkpointing / resuming training from checkpoints. It also integrates seamlessly with Google Colab and AWS/Google Cloud GPUs.

Try It Out!
The first thing you should do is go into one of the output_*/ folders and try training a model. We currently have the following models:
- MNIST CNN (Source)
- Character-Level RNN+LSTM (Source)
- Object Detection (Source)

Notable Features
- In train.py, the code performs some verification checks on all models to make sure you aren't mixing up your batch dimensions.
- Try stopping it and starting it after a couple of epochs - it should resume training from the same place.
- On tensorboard, loss curves should already be plotting seamlessly across runs.
- All checkpoints should be available in checkpoints/, which contains activation layers, input data, and best models.
- Scheduling runs is easy by specifying a file in the configs/ folder.

Evaluation Criteria
The goal is for this repository to contain a series of clean ML examples of different levels of understanding that I can draw from and use as examples, test models, etc. I essentially want to gather all of the best-practice code gists I find or have used in the past, and make them modular and easily imported or exported for later use.

The goal is not for this to be some ML framework built on PyTorch, but to focus on a single researcher/developer workflow and make it very easy to begin working. Great for Kaggle competitions, simple data exploration, or experimenting with different models.

The rough evaluation metric for this repo's success is how fast I can start working on a Kaggle challenge after downloading the data: getting insights on the data and its distributions, running baseline and finetuning models, getting loss curves and plots.

Current Workflow
1. Add data to your data/ folder and edit the corresponding DatasetLoader in datasets/.
2. Add your config and model to configs/ and models/.
3. Run train.py, which saves model checkpoints, output predictions, and tensorboards in the same folder.
4. Start tensorboard using the checkpoints/ folder with tensorboard --logdir=checkpoints/
5. Start and stop training using python train.py --checkpoint=<checkpoint name>. The code should automatically resume training at the previous epoch and continue logging to the previous tensorboard.
6. Run python test.py --checkpoint=<checkpoint name> to get final predictions.

Directory Structure
checkpoints/ (only created once you run train.py)
data/
configs/
ai_toolkit/
datasets/
losses/
metrics/
models/
layers/
...
visualizations/
args.py (modify default hyperparameters manually)
metric_tracker.py
test.py
train.py
util.py
verify.py
viz.py (stub, create more visualizations if necessary)
tests/

Goal Workflow
1. Move data into data/.
2. Fill in preprocess.py and dataset.py (explore data by running python viz.py).
3. Change args.py to specify input/output dimensions, batch size, etc.
4. Run train.py, which saves model checkpoints, output predictions, and tensorboards in the same folder. It also automatically starts a tensorboard server in a tmux session. Resume training at any point.
5. Run test.py to get final predictions.
aitools
AITools: Artificial Intelligence Tools Kit

- Free software: Apache Software License 2.0
- Documentation: https://aitools.readthedocs.io

Features
- TODO

Credits
This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.

History
1.0.0 (2021-05-11)
- First release on PyPI.
ai-tools
Info
ai_tool 2018-05-25
Author: Zhao Mingming <[email protected]>
Copyright: This module has been placed in the public domain.

version: 0.0.6
version: 0.1.0
- add func roc
- add func photo_stiching
- add func flv2img
version: 0.1.1
- add func microsoft_demo
version: 0.1.3
- add func luminance
version: 0.1.4
- add func dbop, operate the database
version: 0.1.5
- add func util, make the print include the time
version: 0.1.6
- add func txt2html, turn txt into an html table
version: 0.1.8
- modify file microsoft_demo, add zprint into it
version: 0.1.9
- modify roc_yolo
version: 0.2.1
- modify insert_image2db, can insert data into 2 databases
version: 0.2.3
- add class2info.py, function is get_class_info
version: 0.2.4
- add darkchannel.py to decrease the haze of an image
version: 0.2.7
- add lumi_classfy to luminance.py
version: 0.3.1
- modify video2img
version: 0.3.3
- add wjdc
- add video2gif
version: 0.3.6
- add get_histfeature_from_one_img

Functions:
- draw_curve: draw a curve in an image and return the image
- image2text: translate an image into text style
- save2server: save an image on the local server
- image2bw: turn a gray image into a binary weights image

How To Use This Module
Example code:

import numpy as np
import cv2

x = np.array([-0.2, 0.3, 0.4, 0.5])
y = np.array([0.2, 0.4, 0.1, -0.4])
norm(x, np.array([0, 500]))
img = draw_curve(x, y)
img = draw_curve(x, y, title='my title', xlabel='my x label', ylabel='my y lable')
img = cv2.imread('../examples/faces/2007_007763.jpg')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# img2bw
img_bw = image2bw(img_gray)
print(img.shape)
image2text(img, (80, 40))
image2text(img, (80, 40))

Refresh
20180821
aitools-aimfeld
No description available on PyPI.
aitoolsAPI
This is a simple async API client library for aitoolsAPI.

Installation
You can install the library using pip:

pip install aitoolsAPI

Using
Since this library is asynchronous, you should write the code like this:

GPT request:

import asyncio
import aitoolsAPI

async def main(prompt):
    print(await aitoolsAPI.gpt(prompt))  # Will send a response as text

asyncio.run(main("What package of sanctions was imposed on Russia last time?"))

SDXL request:

import asyncio
import aitoolsAPI

async def main(prompt):
    print(await aitoolsAPI.sdxl(prompt))  # Will send the answer in the form of a link

asyncio.run(main("Red ball flies among the clouds, 4K, realistic, no blur"))
aitoolz
Various Python tools, by Alex Ioannides (AI). Some of them might be useful for artificial intelligence, some of them might not.

Installing
You can install aitoolz from PyPI using

pip install aitoolz

Alternatively, you can install directly from the main branch of this repo via

pip install git+https://github.com/alexioannides/aitoolz.git@main

Where the @XXXX component of the URI can be substituted for any branch, tag or commit hash. See the pip docs for more info.

Features
A brief overview of the core tools:

Template Python Package Projects
The aitoolz.make_project module exposes the create_python_pkg_project function that can create empty Python package projects to speed up development. This includes:
- Executable tests via PyTest.
- Fully configured code formatting and checking using Ruff and Black.
- Fully configured static type checking using MyPy.
- Dev task automation using Nox.
- Fully configured CICD using GitHub Actions.

This is an opinionated setup that reflects how I like to develop projects. This can also be called from the command line using the Make Empty Project (MEP) command - e.g.,

mep my_package

Where my_package can be replaced with any valid Python module name. Either of these commands will create a directory structure and skeleton files,

my_package
├── .github
│   └── workflows
│       ├── python-package-ci.yml
│       └── python-package-cd.yml
├── .gitignore
├── README.md
├── noxfile.py
├── pyproject.toml
├── src
│   └── my_package
│       ├── __init__.py
│       └── hello_world.py
└── tests
    └── test_hello_world.py

This has been tested to be installable and for all dev tasks automated with Nox to pass - use nox --list to see them all.

Find External Dependencies in a Python Module or Source Folder
The aitoolz.find_imports module exposes the find_imports function that returns a list of all package dependencies imported into a Python module or source folder - i.e., all dependencies that are not in the Python standard library.

This can also be called from the command line - e.g.,

find-imports src/my_package

Or,

find-imports my_module.py
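The description names the aitoolz.find_imports module and its find_imports function but only shows the command-line entry point. As a minimal sketch only, calling it from Python might look like the snippet below; the argument type and the exact shape of the return value are assumptions, not documented above.

# Hypothetical sketch of calling find_imports from Python instead of the CLI.
# The module and function names come from the description above; the path
# argument and the list return value are assumptions.
from aitoolz.find_imports import find_imports

external_deps = find_imports("src/my_package")  # a module file or source folder
print(external_deps)  # expected: list of non-standard-library dependencies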
aitpi
AITPI
Arbitrary Input for Terminal or a Pi, or Aitpi (pronounced 'eight pi')

Goal
The goal of this project is to provide a simple, but arbitrary, input mechanism for use with a raspberry pi, or a terminal keyboard (maybe more SBCs in the future?!).

This program can be configured with two simple json files.

Supported
The project supports:
- Simple 'buttons'
  - '1 to 1' gpio to button setup on a raspberry pi
  - Non interrupt based key input
  - Interrupt based key input (using pynput)
- Encoders
  - '2 to 1' gpio to encoder setup on a raspberry pi
  - Non interrupt based 2 to 1 key input
  - Interrupt based 2 to 1 key input (using pynput)

Examples
To configure your setup, you can create up to three types of json files:

Command Registry
A registry of commands that will interact directly with your user program

[
    { "type": "normal", "input_type": "button", "id": "1", "name": "command0" },
    { "id": "1", "input_type": "button", "path": "../temp/", "type": "presets", "name": "howdy" },
    { "id": "1", "input_type": "button", "path": "../temp/", "type": "presets", "name": "test" },
    { "id": "1", "input_type": "button", "path": "../temp/", "type": "presets", "name": "another.txt" }
]

- name: A UNIQUE identifier that is presented.
- id: The message id sent with each command
- input_type: The abstract functional representation, i.e. (for now) a button or an encoder
- type: Category for each command. Must be defined, but is only used to sort commands usefully
- path: Only used for foldered commands. Tells the file path of the represented file.

Input list
The list of all 'input units' that your system uses

[
    { "name": "Button0", "type": "button", "mechanism": "rpi_gpio", "trigger": "5", "reg_link": "commandName0" },
    { "name": "Encoder0", "type": "encoder", "mechanism": "rpi_gpio", "left_trigger": "17", "right_trigger": "24", "reg_link": "commandName2" }
]

This is an array of depth 1, with all your 'input units' listed as dictionaries inside.
- "name": specifies the name of the input unit
  - Valid names: Any string, must be unique among all input units
- "type": specifies what type of input this unit is
  - Valid types: 'button', 'encoder'
- "mechanism": This tells Aitpi by what mechanism the input will be watched
  - Valid mechanisms: 'key_interrupt', 'key_input', 'rpi_gpio'
  - key_interrupt: Uses pynput to set interrupts on your keyboard itself
  - key_input: Manual in-code input through the function 'aitpi.takeInput'
  - rpi_gpio: Raspberry pi GPIO input, all input units are assumed to be active low
- "trigger": The key string or gpio number that will trigger input for a button
  - NOTE: This is only needed if 'type' equals 'button'
  - Valid triggers: Any string, or any valid unused gpio number on a raspberry pi
  - Note: strings of more than one char will not work with key_interrupt (pynput)
- "left_trigger" and "right_trigger": The key strings or gpio numbers that will act as a left or right for an encoder
  - NOTE: These are only needed if 'type' equals 'encoder'
  - Valid left_triggers and right_triggers: Any string, or any valid unused gpio number on a raspberry pi
  - Note: strings of more than one char will not work with key_interrupt (pynput)
- "reg_link": This corresponds to a command from the command registry and will determine what message is sent to your user program

Foldered Commands
Foldered commands allow you to consider all the files in a folder as a 'command' in the registry. This uses the watchdog python package to monitor folders and update on the fly. All commands added will be deleted and reloaded upon program startup.

[
    { "name": "Folder0", "path": "/path/to/your/folder", "type": "<registry_type>", "id": "3", "input_type": "button" },
    { "name": "Folder1", "path": "/another/path", "type": "<registry_type>", "id": "4", "input_type": "encoder" }
]

This is an array of depth 1 that lists all the folders you want to add.
- "name": Gives a name that you can use to access the json using 'getFolderedCommands'
  - Valid names: Any string
- "path": Specifies the folder that will be watched
  - Valid paths: Any valid folder on your system
- "type": This will tell Aitpi where to insert the commands from the folder into your command registry
  - Valid types: Any string
- "id": When a command is added from the folder, this id will be the command registry 'id' value
  - Valid ids: Any positive int; negative ints are reserved for Aitpi and could produce bad side effects
- "input_type": When a command is added from the folder, this directly corresponds to the command registry's 'input_type'

Example usage:

# import the base aitpi
import aitpi
from aitpi import router

# In order to receive messages you can either make an object with a consume(message) function
# or just provide a function `def consume(message)`
class Watcher():
    def consume(self, message):
        print("Got command: %s" % message.name)
        print("On event: %s" % message.event)
        print("All attributes: %s" % message.attributes)

watcher = Watcher()

# Here we add a consumer that will receive commands with ids 0,1,2,3,4; these ids are the same
# as defined in your registry json file
router.addConsumer([0, 1, 2, 3, 4], watcher)

# We must first initialize our command registry before we can start getting input
aitpi.addRegistry("<path_to_json>/command_reg.json", "<path_to_json>/foldered_commands.json")

# We can add multiple registries, and do not need the foldered commands
aitpi.addRegistry("<path_to_json>/another_reg.json")

# Once we initialize our system, all interrupt based commands can be sent immediately.
# Therefore, make sure you are ready to handle any input in your functions before calling this.
aitpi.initInput("<path_to_json>/example_input.json")

# For synchronous input (not interrupt based) using the 'key_input' input mechanism is desirable
# You can set up a custom programmatic form of input using this (If it is good enough, add it to AITPI!)
while (True):
    aitpi.takeInput(input())
aitpi-c3n3
AITPI
Arbitrary Input for Terminal or a PI

Goal
The goal of this project is to provide a simple, but arbitrary, input mechanism for use with a raspberry pi, or a terminal keyboard.
This program can be configured with two simple json files.

Supported
The project supports:
- Simple 'buttons'
  - '1 to 1' gpio to button setup on a raspberry pi
  - Non interrupt based key input
  - Interrupt based key input (using pynput)
- Encoders
  - '2 to 1' gpio to encoder setup on a raspberry pi
  - Non interrupt based 2 to 1 key input
  - Interrupt based 2 to 1 key input (using pynput)

Examples
To configure your setup, see the two example json files:
- example_input.json
- example_command_registry.json
aitr
aitur - Artificial Intelligence for Turkish
ai-traffic-light-simulator
TrafficLightAI
A python traffic simulation serving as a playground to create traffic light A.I. systems. The traffic simulation uses a cellular automata approach to simulate large traffic grids. The simulation is optimized with Numba.

Installation

pip install ai-traffic-light-simulator

Example

from traffic_simulation_numba import TrafficSimulation
# OR from traffic_simulation import TrafficSimulation
import random

NORTH_SOUTH_GREEN = 0
EAST_WEST_GREEN = 1

# A basic A.I. which randomly determines light timings
# Inputs: [North waiting, East waiting, South waiting, West Waiting, Previous Light Direction]
def my_ai(inputs):
    if inputs[-1] == NORTH_SOUTH_GREEN:
        return EAST_WEST_GREEN, random.randint(1, 30)
    if inputs[-1] == EAST_WEST_GREEN:
        return NORTH_SOUTH_GREEN, random.randint(1, 30)

# Make traffic simulation object with our naive A.I.
sim = TrafficSimulation(my_ai, grid_size_x=8, grid_size_y=8, lane_length=10, max_speed=5, in_rate=0.2, initial_density=0.1, seed=42)

results = sim.run_simulation(1000)  # Runs the simulation for 1000 ticks
print(results)  # Returns { 'cars_stopped': 131680, 'carbon_emissions': 672824 }

# Render a frame of the simulation after 1000 ticks
sim.render_frame("Small.png")
ai-trainer
Documentation

Installation for User
Open an anaconda powershell, activate an environment with anaconda, navigate into the trainer repo and execute the following to install trainer using pip, including its dependencies:

pip install ai-trainer

For Online Learning you have to install PyTorch:

conda install pytorch torchvision cudatoolkit=10.1 -c pytorch

AI-Trainer helps with building a data generator and it relies on imgaug for it:

conda install imgaug -c conda-forge

Getting started with training models
Trainer currently supports annotating images and videos. First, create a dataset using

trainer init-ds
cd YOUR_DATASET

Getting started with using trainer in python
For using the annotated data, you can use trainer as a python package. After activating the environment containing the trainer and its dependencies, feel free to inspect some of the tutorials in ./tutorials/.

Development Setup
Execute the user installation, but instead of using pip install ai-trainer, clone the repo locally:

git clone https://github.com/Telcrome/ai-trainer

Both vsc and pycharm are used for development, with their configurations provided in .vscode and .idea

Recommended environments
For development we recommend installing the conda environment into a subfolder of the repo. This allows for easier experimentation and the IDE expects it this way.

conda env create --prefix ./envs -f environment.yml
conda activate .\envs\.

Now install a deep learning backend. PyTorch provides well-working conda install commands.

For Tensorflow with GPU:

conda install cudatoolkit=10.0 cudnn=7.6.0=cuda10.0_0
pip install tensorflow-gpu

Testing Development for pip and cli tools
Installing the folder directly using pip does not work due to the large amount of files inside the local development folder, especially because in the local development setup the environment is expected to be a subfolder of the repo.

pip install -e .

Uploading to PyPi by hand

python setup.py sdist bdist_wheel
twine upload dist/*  # The asterisk is important

Using Docker
Docker and the provided DOCKERFILE support is currently experimental, as it proved to slow down the annotation GUI too much. When the transition to a web GUI is completed, docker will be supported again.

Contribution

Docs
Currently, Read the Docs is used for CI of the docs. Before submitting changes, test the make command in the environment:

conda env create -f environment.yml
conda activate trainer_env
make html

If this throws warnings or errors, Read the Docs won't publish them.

Tutorials inside the repo
- Do not use jupyter notebooks
- Should be testable without preparing data by hand where possible.
ai-traineree
The intention is to have a zoo of Deep Reinforcement Learning methods and to showcase their application on some environments.

Read more in the doc: ReadTheDocs AI-Traineree.

Why another?
The main reason is the implementation philosophy. We strongly believe that agents should be immersed in the environment and not the other way round. The majority of popular implementations pass the environment instance to the agent as if the agent was the focus point. This might ease implementation of some algorithms but it isn't representative of the world; agents want to control the environment but that doesn't mean they can/should.

That, and using PyTorch instead of Tensorflow or JAX.

Quick start
To get started with training your RL agent you need three things: an agent, an environment and a runner. Let's say you want to train a DQN agent on OpenAI CartPole-v1:

from ai_traineree.agents.dqn import DQNAgent
from ai_traineree.runners.env_runner import EnvRunner
from ai_traineree.tasks import GymTask

task = GymTask('CartPole-v1')
agent = DQNAgent(task.obs_space, task.action_space)
env_runner = EnvRunner(task, agent)
scores = env_runner.run()

or execute one of the provided examples

$ python -m examples.cart_dqn

That's it.

Installation

PyPi (recommended)
The quickest way to install the package is through pip.

$ pip install ai-traineree

Git repository clone
As usual with Python, the expectation is to have your own virtual environment and then pip install requirements. For example,

> python -m venv .venv
> git clone [email protected]:laszukdawid/ai-traineree.git
> source .venv/bin/activate
> python setup.py install

Current state

Playing gym
One way to improve learning speed is to simply show them how to play or, more researchy/creepy, provide a proper seed. This isn't a general rule, since some algorithms train better without any human interaction, but since you're on GitHub... that's unlikely your case. Currently there's a script interact.py which uses OpenAI Gym's play API to record moves and AI Traineree to store them in a buffer. Such buffers can be loaded by agents on initiation.

This is just a beginning and there will be more work on these interactions.

Requirement: Install pygame.

Agents
Short | Progress | Link | Full name | Doc
DQN | Implemented | DeepMind | Deep Q-learning Network | Doc
DDPG | Implemented | arXiv | Deep Deterministic Policy Gradient | Doc
D4PG | Implemented | arXiv | Distributed Distributional Deterministic Policy Gradients | Doc
TD3 | Implemented | arXiv | Twin Delayed Deep Deterministic policy gradient | Doc
PPO | Implemented | arXiv | Proximal Policy Optimization | Doc
SAC | Implemented | arXiv | Soft Actor Critic | Doc
TRPO | | arXiv | Trust Region Policy Optimization |
RAINBOW | Implemented | arXiv | DQN with a few improvements | Doc

Multi agents
We provide both Multi Agents agents entities and means to execute them against the supported (below) environments. However, that doesn't mean one can be used without the other.

Short | Progress | Link | Full name | Doc
IQL | Implemented | | Independent Q-Learners | Doc
MADDPG | Implemented | arXiv | Multi agent DDPG | Doc

Loggers
Supports using Tensorboard (via PyTorch's SummaryWriter) and Neptune to display metrics. Wrappers are provided as TensorboardLogger and NeptuneLogger.
Note: In order to use Neptune one needs to install neptune-client (pip install neptune-client).

Environments
Name | Progress | Link
OpenAI Gym - Classic | Done |
OpenAI Gym - Atari | Done |
OpenAI Gym - MuJoCo | Not interested. |
PettingZoo | Initial support | Page/GitHub
Unity ML | Somehow supported. | Page
MAME Linux emulator | Interested. | Official page

Development
We are open to any contributions. If you want to contribute but don't know what, then feel free to reach out (see Contact below). The best way to start is through updating documentation and adding tutorials. In addition there are many other things that we know of which need improvement but also plenty that we don't know of.

Setting up the development environment requires installing the dev and test extra packages. The dev extras are mainly for linting and formatting, and the test extras are for running tests. We recommend using pip, so to install everything required for development run

$ pip install -e .[dev,test]

Once installed, please configure your IDE to use black as the formatter, pycodestyle as the linter, and isort for sorting imports. All these are included in the dev extra packages.

Contact
Should we focus on something specifically? Let us know by opening a feature request GitHub issue or contacting [email protected]

Citing the project

@misc{ai-traineree,
  author = {Laszuk, Dawid},
  title = {AI Traineree: Reinforcement learning toolset},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/laszukdawid/ai-traineree}},
}
ai-training-utils
AI Training utils
This package contains utils which can be used while training AI models:
- Path Helper: Makes use of argparse to obtain the right directories for the dataset, output artifacts and logging output.
- Logging Helper: Logger with a file handler which writes the logging output to the given path in the arguments.
- Singleton: Helper class to make it possible to use classes as Singletons.
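The three helpers are described but not shown. As an illustration only, here is a minimal sketch of what a path helper, logging helper, and singleton of the kind described might look like in plain Python; the names and signatures below are hypothetical and are not taken from this package's API.

# Hypothetical illustration of the helpers described above; class, function and
# argument names are assumptions, not the package's documented API.
import argparse
import logging


class Singleton:
    # Subclasses always resolve to a single shared instance.
    _instances = {}

    def __new__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__new__(cls)
        return cls._instances[cls]


def parse_training_paths():
    # Use argparse to obtain the dataset, output-artifact and logging directories.
    parser = argparse.ArgumentParser()
    parser.add_argument("--dataset-dir", required=True)
    parser.add_argument("--output-dir", required=True)
    parser.add_argument("--log-dir", required=True)
    return parser.parse_args()


def make_file_logger(log_path):
    # Logger with a file handler that writes the logging output to the given path.
    logger = logging.getLogger("training")
    logger.addHandler(logging.FileHandler(log_path))
    logger.setLevel(logging.INFO)
    return logger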
ai-transform
No description available on PyPI.
ai-transformersx
Transformersx

Introduction
🤗 Transformers is a very useful library for Transformer-based NLP deep learning models built on PyTorch. It organizes and catalogs nearly all of the current best Transformer-based natural language models and public pre-trained models, all converted to PyTorch.
(1) With it you can conveniently train current state-of-the-art pre-trained natural language models such as Bert/Albert/GPT2/XLNET and develop and train models for downstream tasks.
(2) Models implemented in PyTorch have a clearer code structure, which makes these natural language models easier to learn and understand.
(3) It provides a place to collect and store public pre-trained natural language models for researchers to use. Researchers can also publish their own pre-trained models on Transformers for others to study and use.

However, Transformers also has some problems:
(1) In the current implementation, all public Transformer pre-trained models are stored on AWS S3. That is no problem for researchers outside China, but researchers in China have to go through some trouble to download these models. There are currently two ways to download Transformers models: (1) when a model is referenced in code, it is automatically downloaded from the corresponding S3 location; (2) download the model files directly through a browser from the Transformers website. Either way, getting the download to succeed is something you have to figure out yourself. Enough said.
(2) Although Transformers already provides a fairly convenient way to use the various Transformer models, it is still not good enough. In terms of design, the model implementations are quite good, but they are deeply coupled with model storage and downloading. That design is questionable: in terms of responsibilities, model implementation and model storage/downloading should be separated.
(3) Transformers added a Trainer to make it easier for researchers to train Transformer models. Likewise, the design and implementation quality of this Trainer falls somewhat short of the design and implementation of the models.

Purpose
The purpose of this project is to further extend Transformers to address some of these problems and make it more convenient for researchers to use. Of course, it does not solve all of the problems mentioned above.

(1) First, for the download problem: this project provides, in the docker directory, several Docker definitions for training and running environments for Transformers-related models. Using Alibaba Cloud's overseas Docker build machines, the specified pre-trained models are downloaded while the Docker image is being built. Currently these are mainly Chinese language models, including bert, albert, robert and electra. They can be pulled directly from the Alibaba Cloud image registry. The models are placed in the /app/models directory of the image.

docker pull registry.cn-beijing.aliyuncs.com/modoso/transformersx-bert
docker pull registry.cn-beijing.aliyuncs.com/modoso/transformersx-albert
docker pull registry.cn-beijing.aliyuncs.com/modoso/transformersx-robert
docker pull registry.cn-beijing.aliyuncs.com/modoso/transformersx-electra

It is best to use the download-models.sh script to download and copy the models out of the image:

sh download-models.sh [target directory for the models]

(2) For easier use (of course, you first have to download the Docker images or models as described above), you can implement sentiment recognition simply, like examples.task.sentiment.sentiment_task: you only need to implement a DataProcessor and a Task.

from ai_transformersx import DataProcessor, DataArguments, join_path, InputExample, log, TaskArguments
from ai_transformersx.examples import ExampleTaskBase
import pandas as pd


class SentimentDataProcessor(DataProcessor):
    def __init__(self, config: DataArguments):
        self._config = config

    def _get_example(self, file_name, type):
        pd_all = pd.read_csv(join_path(self._config.data_dir, file_name))
        log.info("Read data from {}, length={}".format(join_path(self._config.data_dir, file_name), len(pd_all)))
        examples = []
        for i, d in enumerate(pd_all.values):
            examples.append(InputExample(guid=type + '_' + str(i), text_a=d[1], label=str(d[0])))
        return examples

    def get_train_examples(self):
        return self._get_example('train.csv', 'train')

    def get_dev_examples(self):
        return self._get_example('dev.csv', 'dev')

    def get_labels(self):
        return ['0', '1', '2', '3']

    def data_dir(self):
        return self._config.data_dir


class SentimentTask(ExampleTaskBase):
    def __init__(self, taskArgs: TaskArguments = None):
        super().__init__('sentiment', taskArgs)
        self.task_args.model_args.num_labels = 4

    def _data_processor(self):
        return SentimentDataProcessor(self.task_args.data_args)

Then, implement a startup entry point like examples.task.main:

from ai_transformersx.examples import ExampleManagement
from ai_transformersx.examples.tasks import SentimentTask

task_manager = ExampleManagement()
task_manager.register_tasks([('sentiment', SentimentTask)])

if __name__ == "__main__":
    task_manager.start_example_task()

After that, you can train your sentiment recognition model. You should do this inside the downloaded image with the models.

View all parameters of the sentiment task:
python main.py sentiment -h

Train the model (you can set the relevant parameters by referring to the help listing above):
python main.py sentiment

(3) Examples for common Chinese natural language tasks

TODO:
- Use pytorch-lightning and fastai to implement the trainer
- Add more natural language task examples
aitree
No description available on PyPI.
aitsisui
AIT UI
aitui
ChatGPT TUI
Terminal-based chat window with the ChatGPT API

Install
pip install aitui

Development
- clone this repo
- pip install -e .

You can now start the chat from any directory by typing the command

$ ai

Upon startup, you will be asked for your OpenAI organization id and api-key.

Running the application

Start up
When the application starts up, it will ask you what agent you want to speak with. You can save common personas with a preconditioned prompt here, e.g.:
- Personal travel agent: a ChatGPT assistant that recommends things to do within your interests
- Python docstring writer
- Personal car mechanic advisor
- etc.

Key Bindings
esc puts you into vim cmd mode. i or a puts you back into insert mode. Pressing v in cmd mode will open a vim editor so you can write multi-line prompts with full key bindings.
Note: There are some limitations with the vim key bindings. I've found I need to press esc twice or there is a long delay before entering cmd mode.

Roadmap
- Add vim key bindings to prompt input
- Open vim for multi-line prompts
- Support multi-key vim keybindings, e.g. dd, ciw
- Initialize conversations from different common personas, e.g. travel agent
- Save conversations to a database
- Output running cost of the conversation
- Publish to PyPi

Credits
- The TUI application is powered by textualize
- OpenAI (of course)
aitur
Artificial Intelligence for Turkish
aitv
No description available on PyPI.
aitviewer
A set of tools to visualize and interact with sequences of 3D data with cross-platform support on Windows, Linux, and macOS. See the official page at https://eth-ait.github.io/aitviewer for all the details.

Installation
Basic installation:

pip install aitviewer

Note that this does not install the GPU-version of PyTorch automatically. If your environment already contains it, you should be good to go, otherwise install it manually.

Or install locally (if you need to extend or modify code):

git clone [email protected]:eth-ait/aitviewer.git
cd aitviewer
pip install -e .

For more advanced installation and for installing SMPL body models, please refer to the documentation.

Features
- Native Python interface, easy to use and hack.
- Load SMPL[-H/-X] / MANO / FLAME / STAR / SUPR sequences and display them in an interactive viewer.
- Headless mode for server rendering of videos/images.
- Remote mode for non-blocking integration of visualization code.
- Render 3D data on top of images via weak-perspective or OpenCV camera models.
- Animatable camera paths.
- Edit SMPL sequences and poses manually.
- Prebuilt renderable primitives (cylinders, spheres, point clouds, etc).
- Built-in extensible GUI (based on Dear ImGui).
- Export screenshots, videos and turntable views (as mp4/gif).
- High-performance ModernGL-based rendering pipeline (running at 100fps+ on most laptops).

Quickstart
Display an SMPL T-pose (requires SMPL models):

from aitviewer.renderables.smpl import SMPLSequence
from aitviewer.viewer import Viewer

if __name__ == '__main__':
    v = Viewer()
    v.scene.add(SMPLSequence.t_pose())
    v.run()

Projects using the aitviewer
A sampling of projects using the aitviewer. Let us know if you want to be added to this list!
- Kaufmann et al., EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in the Wild, ICCV 2023
- Shen and Guo et al., X-Avatar: Expressive Human Avatars, CVPR 2023
- Sun et al., TRACE: 5D Temporal Regression of Avatars with Dynamic Cameras in 3D Environments, CVPR 2023
- Fan et al., ARCTIC: A Dataset for Dexterous Bimanual Hand-Object Manipulation, CVPR 2023
- Dong and Guo et al., PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence, CVPR 2022
- Dong et al., Shape-aware Multi-Person Pose Estimation from Multi-view Images, ICCV 2021
- Kaufmann et al., EM-POSE: 3D Human Pose Estimation from Sparse Electromagnetic Trackers, ICCV 2021
- Vechev et al., Computational Design of Kinesthetic Garments, Eurographics 2021
- Guo et al., Human Performance Capture from Monocular Video in the Wild, 3DV 2021

Citation
If you use this software, please cite it as below.

@software{Kaufmann_Vechev_aitviewer_2022,
  author = {Kaufmann, Manuel and Vechev, Velko and Mylonopoulos, Dario},
  doi = {10.5281/zenodo.1234},
  month = {7},
  title = {{aitviewer}},
  url = {https://github.com/eth-ait/aitviewer},
  year = {2022}
}

Contact & Contributions
This software was developed by Manuel Kaufmann, Velko Vechev and Dario Mylonopoulos. For questions please create an issue. We welcome and encourage module and feature contributions from the community.
aitx
# aitx

[![](https://img.shields.io/travis/jeroyang/aitx.svg)](https://travis-ci.org/jeroyang/aitx) [![](https://img.shields.io/pypi/v/aitx.svg)](https://pypi.python.org/pypi/aitx)

The toolbox for general ML projects

## Features

TODO

## Installation

```bash
$ python setup.py install
```

## Usage

```python
from aitx import *
```

## License

* Free software: MIT license
aityping
No description available on PyPI.
aityz-chess
# Aityz Chess

## Features

Aityz Chess, at its core, is a chess.com API wrapper. The main parts of the program are the analysis and stockfish sub-modules that can be used to analyse games using a custom PGNGenerator class.

## Documentation

Aityz Chess uses PDoc3 auto-generated documentation. It is hosted [here](https://chess.aityz.repl.co).

## Installation

You can install Aityz Chess using Pip! Just type `pip install aityz_chess` or `pip3 install aityz_chess` depending on your operating system to install!

## Contributing

More analysis functions and scraping methods would be appreciated, and definitely better documentation and docstrings. I am currently migrating some of the code into sub-modules, and it still isn't complete (using the right types etc.).
aiu
AIUAIU is a Python library for extracting information from web archive collections. The work is done through different classes, each specific to a different web archive collection host. Each class performs screen-scraping and API analysis (if available) in order to acquire general collection metadata, seed lists, and seed metadata.InstallationThis package requires Python 3 and is calledaiuon PyPI. Installation is handled viapip:pip install aiuUsing theArchiveItCollectionclassThe class namedArchiveItCollectionhas many methods for extracting information about an Archive-It collection using its collection identifier.For example, to use iPython to get information about Archive-It collection number 5728, one can execute the following:In [1]: from aiu import ArchiveItCollection In [2]: aic = ArchiveItCollection(5728) In [3]: aic.get_collection_name() Out[3]: 'Social Media' In [4]: aic.get_collectedby() Out[4]: 'Willamette University' In [5]: aic.get_description() Out[5]: 'Social media content created by Willamette University.' In [6]: aic.get_collection_uri() Out[6]: 'https://archive-it.org/collections/5728' In [7]: aic.get_archived_since() Out[7]: 'Apr, 2015' In [8]: aic.is_private() Out[8]: False In [9]: len(aic.list_seed_uris()) Out[9]: 113 In [10]: aic.list_seed_uris()[0] Out[10]: 'http://blog.willamette.edu/mba/' In [11]: seed_url = aic.list_seed_uris()[0] In [12]: aic.get_seed_metadata(seed_url) Out[12]: {'collection_web_pages': [{'title': 'Willamette MBA Blog', 'description': ['Blog for the Willamette University Atkinson Graduate School of Management']}]}From this session we now know that the collection's name isSocial Media, it was collected byWillamette University, it has been archived sinceApril 2015, it is not private, and it has 113 seeds.Examine the source inaiu/archiveit_collection.pyfor a full list of methods to use with this class.Using theTroveCollectionclassThe class namedTroveCollectionhas many methods for extracting information about aNational Library of Australia (NLA)Trovecollection using its collection identifier.Note: Because NLA has different collection policies than Archive-It, not all methods, or their outputs, are mirrored betweenTroveCollectionandArchiveItCollection.For example, to use iPython to get information about Trove collection number 13742, one can execute the following:In [1]: from aiu import TroveCollection In [2]: tc = TroveCollection(13742) In [3]: tc.get_collection_name() Out[3]: 'Iconic Australian Brands' In [4]: tc.get_collectedby() Out[4]: {'National Library of Australia': 'http://www.nla.gov.au/', 'State Library of Queensland': 'http://www.slq.qld.gov.au/'} In [5]: tc.get_archived_since() Out[5]: 'Feb 2000' In [6]: tc.get_archived_until() Out[6]: 'Mar 2021' In [7]: len(tc.list_seed_uris()) Out[7]: 64 In [8]: tc.get_breadcrumbs() Out[8]: [0, 15023]From this session we now know that the collection's name isIconic Australian Brands, it was collected byNational Library of AustraliaandState Library of Queensland, has been archived sinceFeb 2000, and contains mementos up toMar 2021, it has 63 seeds, and is a subcollection of collections with identifiers of 0 and 15023 -- the breadcrumbs that lead to this collection.Examine the source inaiu/trove_collection.pyfor a full list of methods to use with this class.Using thePandoraCollectionclassThe class namedPandoraCollectionhas many methods for extracting information about aNational Library of Australia (NLA)Pandoracollection using its collection identifier.Note: Because NLA has different collection policies 
than Archive-It, not all methods, or their outputs, are mirrored betweenTroveCollectionandArchiveItCollectionandPandoraCollection.For example, to use iPython to get information about Pandora collection number 12022, one can execute the following:In [1]: from aiu import PandoraCollection In [2]: pc = PandoraCollection(12022) In [3]: pc.get_collection_name() Out[3]: 'Fact sheets (Victoria. EPA Victoria) - Australian Internet Sites' In [4]: pc.get_title_pages() Out[4]: {'136318': ('https://webarchive.nla.gov.au/tep/136318', 'Air'), '136347': ('https://webarchive.nla.gov.au/tep/136347', 'How to reduce noise from your business'), '136317': ('https://webarchive.nla.gov.au/tep/136317', 'Land'), '136346': ('https://webarchive.nla.gov.au/tep/136346', 'Landfill gas'), '136314': ('https://webarchive.nla.gov.au/tep/136314', 'Litter'), '136316': ('https://webarchive.nla.gov.au/tep/136316', 'Noise (EPA fact sheet)'), '136319': ('https://webarchive.nla.gov.au/tep/136319', 'Odour'), '136312': ('https://webarchive.nla.gov.au/tep/136312', 'Waste'), '136313': ('https://webarchive.nla.gov.au/tep/136313', 'Water')} In [5]: len(pc.list_memento_urims()) Out[5]: 10 In [6]: pc.list_seed_uris() Out[6]: ['http://www.epa.vic.gov.au/~/media/Publications/1465.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1481.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1466.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1479.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1486%201.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1467.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1468.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1469.pdf', 'http://www.epa.vic.gov.au/~/media/Publications/1470.pdf'] In [7]: pc.get_collectedby() Out[7]: {'State Library of Victoria': 'http://www.slv.vic.gov.au/'}Examine the source inaiu/pandora_collection.pyfor a full list of methods to use with this class.Using thePandoraSubjectclassThe class namedPandoraSubjecthas many methods for extracting information about aNational Library of Australia (NLA)Pandorasubject using its subject identifier.Note: Because NLA has different collection policies than Archive-It, not all methods, or their outputs, are mirrored betweenTroveCollectionandArchiveItCollectionandPandoraCollectionandPandoraSubject.For example, to use iPython to get information about Pandora subject number 83, one can execute the following:In [1]: from aiu import PandoraSubject In [2]: ps = PandoraSubject(83) In [3]: ps.get_subject_name() Out[3]: 'Humanities' In [4]: len(ps.get_title_pages()) Out[4]: 71 In [5]: len(ps.list_memento_urims()) Out[5]: 246 In [6]: len(ps.list_seed_uris()) Out[6]: 71 In [7]: ps.subject_id Out[7]: '83' In [8]: ps.get_collectedby() Out[8]: {'National Library of Australia': 'http://www.nla.gov.au/', 'Australian Institute of Aboriginal and Torres Strait Islander Studies': 'http://www.aiatsis.gov.au', 'State Library of New South Wales': 'http://www.sl.nsw.gov.au/', 'State Library of Victoria': 'http://www.slv.vic.gov.au/', 'State Library of Western Australia': 'http://www.slwa.wa.gov.au/', 'State Library of South Australia': 'http://www.slsa.sa.gov.au/'} In [9]: ps.list_subcategories() Out[9]: ['84', '85', '86']Examine the source inaiu/pandora_collection.pyfor a full list of methods to use with this class.
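The interactive sessions above can also be collected into a plain script. The sketch below uses only the classes, methods, and collection identifiers already shown in this entry; it is an illustration, not additional API.
from aiu import ArchiveItCollection, TroveCollection

# Archive-It collection 5728, as in the session above
aic = ArchiveItCollection(5728)
print(aic.get_collection_name())   # 'Social Media'
print(aic.is_private())            # False
print(len(aic.list_seed_uris()))   # 113 seeds in the example session

# Trove collection 13742, as in the session above
tc = TroveCollection(13742)
print(tc.get_collection_name())    # 'Iconic Australian Brands'
print(tc.get_breadcrumbs())        # parent collections, e.g. [0, 15023]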
aiub-notes-dl
Download notes from AIUB Portal. Originally created by Niaz Ahmed. Installation: the package is available on PyPI as aiub-notes-dl, so it can be easily installed using pip: pip install aiub-notes-dl. Usage: cd to your desired directory and run aiubnotesdl, then follow the instructions on the CLI. License: MIT
aiudate
No description available on PyPI.
aiuna
aiuna scientific data for the classroomWARNING: This project is still subject to major changes, e.g., in the next rewrite.InstallationExamplesCreating data from ARFF filefromaiunaimport*d=file("iris.arff").dataprint(d.Xd)"""['sepallength', 'sepalwidth', 'petallength', 'petalwidth']"""print(d.X[:5])"""[[5.1 3.5 1.4 0.2][4.9 3. 1.4 0.2][4.7 3.2 1.3 0.2][4.6 3.1 1.5 0.2][5. 3.6 1.4 0.2]]"""print(d.y[:5])"""['Iris-setosa' 'Iris-setosa' 'Iris-setosa' 'Iris-setosa' 'Iris-setosa']"""frompandasimportDataFrameprint(DataFrame(d.y).value_counts())"""Iris-setosa 50Iris-versicolor 50Iris-virginica 50dtype: int64"""cessing a data field as a pandas DataFrame#from aiuna import *#d = dataset.data # 'iris' is the default dataset#df = d.X_pd#print(df.head())#...#mycol = d.X_pd["petal length (cm)"]#print(mycol[:5])#...Creating data from numpy arraysfromaiunaimport*importnumpyasnpX=np.array([[1,2,3],[4,5,6],[7,8,9]])y=np.array([0,1,1])d=new(X=X,y=y)print(d)"""{"uuid": "06NLDM4mLEMrHPOaJvEBqdo","uuids": {"changed": "3Sc2JjUPMlnNtlq3qdx9Afy","X": "13zbQMwRwU3WB8IjMGaXbtf","Y": "1IkmDz3ATFmgzeYnzygvwDu"},"step": {"id": "06NLDM4mLEMrHPOT2pd5lzo","desc": {"name": "New","path": "aiuna.step.new","config": {"hashes": {"X": "586962852295d584ec08e7214393f8b2","Y": "f043eb8b1ab0a9618ad1dc53a00d759e"}}}},"changed": ["X","Y"],"X": ["[[1 2 3]"," [4 5 6]"," [7 8 9]]"],"Y": ["[[0]"," [1]"," [1]]"]}"""Checking historyfromaiunaimport*d=dataset.data# 'iris' is the default datasetprint(d.history)"""{"02o8BsNH0fhOYFF6JqxwaLF": {"name": "New","path": "aiuna.step.new","config": {"hashes": {"X": "19b2d27779bc2d2444c11f5cc24c98ee","Y": "8baa54c6c205d73f99bc1215b7d46c9c","Xd": "0af9062dccbecaa0524ac71978aa79d3","Yd": "04ceed329f7c3eb43f93efd981fde313","Xt": "60d4f429fcd642bbaf1d976002479ea2","Yt": "4660adc31e2c25d02cb751dcb96ecfd3"}}}}"""deld["X"]print(d.history)"""{"02o8BsNH0fhOYFF6JqxwaLF": {"name": "New","path": "aiuna.step.new","config": {"hashes": {"X": "19b2d27779bc2d2444c11f5cc24c98ee","Y": "8baa54c6c205d73f99bc1215b7d46c9c","Xd": "0af9062dccbecaa0524ac71978aa79d3","Yd": "04ceed329f7c3eb43f93efd981fde313","Xt": "60d4f429fcd642bbaf1d976002479ea2","Yt": "4660adc31e2c25d02cb751dcb96ecfd3"}}},"06fV1rbQVC1WfPelDNTxEPI": {"name": "Del","path": "aiuna.step.delete","config": {"field": "X"}}}"""d["Z"]=42print(d.Z,type(d.Z))"""[[42]] <class 'numpy.ndarray'>"""print(d.history)"""{"02o8BsNH0fhOYFF6JqxwaLF": {"name": "New","path": "aiuna.step.new","config": {"hashes": {"X": "19b2d27779bc2d2444c11f5cc24c98ee","Y": "8baa54c6c205d73f99bc1215b7d46c9c","Xd": "0af9062dccbecaa0524ac71978aa79d3","Yd": "04ceed329f7c3eb43f93efd981fde313","Xt": "60d4f429fcd642bbaf1d976002479ea2","Yt": "4660adc31e2c25d02cb751dcb96ecfd3"}}},"06fV1rbQVC1WfPelDNTxEPI": {"name": "Del","path": "aiuna.step.delete","config": {"field": "X"}},"05eIWbfCJS7vWJsXBXjoUAh": {"name": "Let","path": "aiuna.step.let","config": {"field": "Z","value": 42}}}"""GrantsPart of the effort spent in the present code was kindly supported by Fapesp under supervision of Prof. André C. P. L. F. de Carvalho at CEPID-CeMEAI (Grants 2013/07375-0 – 2019/01735-0).HistoryThe novel ideias presented here are a result of a years-long process of drafts, thinking, trial/error and rewrittings from scratch in several languages from Delphi, passing through Haskell, Java and Scala to Python - including frustration with well stablished libraries at the time. 
The fundamental concepts were lightly borrowed from basic category theory concepts like algebraic data structures that permeate many recent tendencies, e.g., in programming language design.For more details, refer tohttps://github.com/davips/kururu
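The commented-out "accessing a data field as a pandas DataFrame" snippet in the examples above can be written out as a short runnable sketch. It assumes nothing beyond what that commented code itself shows: the default 'iris' dataset and the X_pd accessor.
from aiuna import *

d = dataset.data                     # 'iris' is the default dataset
df = d.X_pd                          # the X field as a pandas DataFrame
print(df.head())

mycol = d.X_pd["petal length (cm)"]  # column access, as in the commented example
print(mycol[:5])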
aiunify
No description available on PyPI.
aiuniv
No description available on PyPI.
ai-univ
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
aiuniver
No description available on PyPI.
aiuniversity
No description available on PyPI.
ai-university
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
ai-university-AIU
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
ai-university-ALKER
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
ai-university-bio
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
ai-university-PACKAGE
Example Package. This is a simple example package. You can use Github-flavored Markdown to write your content.
aiur
Example Package. This is a placeholder package for canoe. Stay tuned.
aiutare
AIUTARE: Automated Analysis, Regression, and Evaluation. Setup: see the Setup wiki page for creating the config and other necessary files, then run bin/prepare.sh (currently written only for Ubuntu 16.04 and 18.04). Usage: bin/run.py [absolute path to config.json file] [number of runs; 1 if omitted]
aiutare-finnbarroc
AIUTARE: Automated Analysis, Regression, and Evaluation. Setup: see the Setup wiki page for creating the config and other necessary files, then run bin/prepare.sh (currently written only for Ubuntu 16.04 and 18.04). Usage: bin/run.py [absolute path to config.json file] [number of runs; 1 if omitted]
aiuti
This Python package is my personal collection of helpers and utilities that I have written for various projects but don't have to be maintained with those projects. It has no required dependencies and is MIT licensed in order to make it as portable and easy to use as possible. Installation: this package can be installed from the official PyPI: $ pip install aiuti. Documentation: the documentation can be found on Read the Docs.
aiutil
AI/ML Utils | @GitHub | @PyPI
This is a Python package that contains misc utils for AI/ML.
Misc enhancement of Python's built-in functionalities: string, collections, pandas DataFrame, datetime.
Misc other tools:
aiutil.filesystem: misc tools for querying and manipulating filesystems; convenient tools for manipulating text files
aiutil.url: URL formatting for HTML, Excel, etc.
aiutil.sql: SQL formatting
aiutil.cv: some more tools (in addition to OpenCV) for image processing
aiutil.shell: parse command-line output to a pandas DataFrame
aiutil.shebang: auto-correct the shebang of scripts
aiutil.poetry: tools for making it even easier to manage Python projects using Poetry
aiutil.pdf: easy and flexible extraction of PDF pages
aiutil.memory: query and consume memory to a specified range
aiutil.notebook: Jupyter/Lab notebook related tools
aiutil.dockerhub: managing Docker images on DockerHub in batch mode using Python
aiutil.hadoop: a Spark application log analyzing tool for identifying root causes of failed Spark applications; Pythonic wrappers to the hdfs command; an auto-authentication tool for Kerberos; an improved version of spark_submit; other misc PySpark functions
Supported Operating Systems and Python Versions: Python 3.10.x on Linux and macOS. It might work on Windows but is not tested on Windows.
Installation: pip3 install --user -U aiutil
Use the following command if you want to install all components of aiutil. Available additional components are cv, docker, pdf, jupyter, admin and all: pip3 install --user -U aiutil[all]
aiutils
mlutils: A Python library to automate different sets of tedious tasks used daily in machine learning.
ai-utils
No description available on PyPI.
aiuv
No description available on PyPI.
aiv
AIV: Annotation of Identified Variants. Annotation of identified variants to create reports for clinicians to assist therapeutic decisions.
Prerequisites: It requires three main modules: pandas, myvariant and reportlab.
pip install pandas
pip install myvariant
pip install reportlab
Installation: pip install aiv
Upgrade: pip install aiv --upgrade
Usage:
import aiv
# Get variant info
aiv.getvariant('chr1', 69635, 'G', 'C')
# Annotate variants, reference genome: hg38
aiv.annotate_mutations('variant_calls.tsv', assembly='hg38')
# Annotate variants, reference genome: hg19
aiv.annotate_mutations('bwa_mutect2_nb09_50_lines.txt', assembly='hg19')
Tests: You can test your installation with sample variant call files. Input test files can be found at: ./tests/test_annotate_variants.tsv, ./tests/bwa_mutect2_nb09_50_lines.txt, ./tests/my_data.txt
Input File Format / Report Preview
Future Work: Performance can be determined by calculating the running time for a given input file with 6000+ mutations.
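A small batch sketch built only from the calls documented above (aiv.getvariant and aiv.annotate_mutations). The first variant is the one from this entry's example; the second variant and the loop are purely illustrative placeholders.
import aiv

variants = [
    ("chr1", 69635, "G", "C"),   # example variant from this entry
    ("chr1", 69869, "T", "A"),   # hypothetical extra variant, for illustration only
]
for chrom, pos, ref, alt in variants:
    info = aiv.getvariant(chrom, pos, ref, alt)   # annotation for a single variant
    print(chrom, pos, ref, alt, info)

# Annotate a whole variant call file against hg38, exactly as shown above
aiv.annotate_mutations("variant_calls.tsv", assembly="hg38")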
aiva-core
aiva-core: A repository for AIVA, the AI Virtual Assistant written in Python.
aivanou
No description available on PyPI.
aivanou-test
No description available on PyPI.
aiv-api
aiv_api: API to interact with the Aktieinvest website. Development: while in the root of this project, run pip install -e . This will install the package in editable mode.
aivatar-project-api
No description available on PyPI.
aivatar-project-widgets
No description available on PyPI.
aiven-client
Aiven is a next-generation managed cloud services platform. Its focus is in ease of adoption, high fault resilience, customer’s peace of mind and advanced features at competitive price points. Seehttps://aiven.io/for more information about the backend service.aiven-client (avn) is the official command-line client for Aiven.ContentsGetting StartedInstall from PyPiBuild an RPM PackageCheck InstallationLog InUsageAuthenticate: Logins and TokensChoose your CloudWorking with ProjectsExplore Existing ServicesLaunch ServicesManaging service usersService IntegrationsWorking with TeamsConfiguring OAuth2 ClientsExtra FeaturesAutocompleteAuth HelpersContributingKeep ReadingGetting StartedRequirements:Python 3.8 or laterRequestsFor Windows and OSX,certifiis also neededInstall from PyPiPypi installation is the recommended route for most users:$ python3 -m pip install aiven-clientBuild an RPM PackageIt is also possible to build an RPM:$ make rpmCheck InstallationTo check that the tool is installed and working, run it without arguments:$ avnIf you see usage output, you’re all set.Note:On Windows you may need to usepython3-maiven.clientinstead ofavn.Log InThe simplest way to use Aiven CLI is to authenticate with the username and password you use on Aiven:$ avn user login <[email protected]>The command will prompt you for your password.You can also use an access token generated in the Aiven Console:$ avn user login <[email protected]> --tokenYou will be prompted for your access token as above.If you are registered on Aiven through the AWS or GCP marketplace, then you need to specify an additional argument--tenant. Currently the supported value areawsandgcp, for example:$ avn user login <[email protected]> --tenant awsUsageSome handy hints that work with all commands:Theavn helpcommand shows all commands and cansearchfor a command, so for exampleavn help kafka topicshows commands with kafkaandtopic in their description.Passing-hor--helpgives help output for any command. Examples:avn--helporavn service--help.All commands will output the raw REST API JSON response with--json, we use this extensively ourselves in conjunction withjq.Authenticate: Logins and TokensLogin:$ avn user login <[email protected]>Logout (revokes current access token, other sessions remain valid):$ avn user logoutExpire all authentication tokens for your user, logs out all web console sessions, etc. You will need to login again after this:$ avn user tokens-expireManage individual access tokens:$ avn user access-token list $ avn user access-token create --description <usage_description> [--max-age-seconds <secs>] [--extend-when-used] $ avn user access-token update <token|token_prefix> --description <new_description> $ avn user access-token revoke <token|token_prefix>Note that the system has hard limits for the number of tokens you can create. 
If you’re permanently done using a token you should always useuseraccess-tokenrevokeoperation to revoke the token so that it does not count towards the quota.Alternatively, you can add 2 JSON files, first create a default config in~/.config/aiven/aiven-credentials.jsoncontaining the JSON with anauth_token:{ "auth_token": "ABC1+123...TOKEN==", "user_email": "[email protected]" }Second create a default config in~/.config/aiven/aiven-client.jsoncontaining the json with thedefault_project:{"default_project": "yourproject-abcd"}Choose your CloudList available cloud regions:$ avn cloud listWorking with ProjectsList projects you are a member of:$ avn project listProject commands operate on the currently active project or the project specified with the--projectNAMEswitch. The active project cab be changed with theproject switchcommand:$ avn project switch <projectname>Show active project’s details:$ avn project detailsCreate a project and set the default cloud region for it:$ avn project create myproject --cloud aws-us-east-1Delete an empty project:$ avn project delete myprojectList authorized users in a project:$ avn project user-listInvite an existing Aiven user to a project:$ avn project user-invite [email protected] a user from the project:$ avn project user-remove [email protected] project management event log:$ avn eventsExplore Existing ServicesList services (of the active project):$ avn service listList services in a specific project:$ avn service list --project proj2List only a specific service:$ avn service list db1Verbose list (includes connection information, etc.):$ avn service list db1 -vFull service information in JSON, as it is returned by the Aiven REST API:$ avn service list db1 --jsonOnly a specific field in the output, custom formatting:$ avn service list db1 --format "The service is at {service_uri}"View service log entries (most recent entries and keep on following logs, other options can be used to get history):$ avn service logs db1 -fLaunch ServicesView available service plans:$ avn service plansLaunch a PostgreSQL service:$ avn service create mydb -t pg --plan hobbyistView service type specific options, including examples on how to set them:$ avn service types -vLaunch a PostgreSQL service of a specific version (see above command):$ avn service create mydb96 -t pg --plan hobbyist -c pg_version=9.6Update a service’s list of allowed client IP addresses. Note that a list of multiple values is provided as a comma separated list:$ avn service update mydb96 -c ip_filter=10.0.1.0/24,10.0.2.0/24,1.2.3.4/32Open psql client and connect to the PostgreSQL service (also available for InfluxDB):$ avn service cli mydb96Update a service to a different plan AND move it to another cloud region:$ avn service update mydb --plan startup-4 --cloud aws-us-east-1Power off a service:$ avn service update mydb --power-offPower on a service:$ avn service update mydb --power-onTerminate a service (all data will be gone!):$ avn service terminate mydbManaging service usersSome service types support multiple users (e.g. 
PostgreSQL database users).List, add and delete service users:$ avn service user-list $ avn service user-create $ avn service user-deleteFor Redis services running version 6 or above, it’s possible to create users withACLs:$ avn service user-create --username new_user --redis-acl-keys="prefix* another_key" --redis-acl-commands="+set" --redis-acl-categories="-@all +@admin" --redis-acl-channels="prefix* some_chan" my-redis-serviceService users are created with strong random passwords.Service IntegrationsService integrationsallow to link Aiven services to other Aiven services or to services offered by other companies for example for logging. Some examples for various diffenent integrations:Google cloud logging,AWS Cloudwatch logging,Remote syslog integrationandGetting started with Datadog.List service integration endpoints:$ avn service integration-endpoint-listList all available integration endpoint types for given project:$ avn service integration-endpoint-types-list --project <project>Create a service integration endpoint:$ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> --user-config-json <user configuration as json> $ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> -c <KEY=VALUE type user configuration>Update a service integration endpoint:$ avn service integration-endpoint-update --project <project> --user-config-json <user configuration as json> <endpoint id> $ avn service integration-endpoint-update --project <project> -c <KEY=VALUE type user configuration> <endpoint id>Delete a service integration endpoint:$ avn service integration-endpoint-delete --project <project> <endpoint_id>List service integrations:$ avn service integration-list <service name>List all available integration types for given project:$ avn service integration-types-list --project <project>Create a service integration:$ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> --user-config-json <user configuration as json> $ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> -c <KEY=VALUE type user configuration>Update a service integration:$ avn service integration-update --project <project> --user-config-json <user configuration as json> <integration_id> $ avn service integration-update --project <project> -c <KEY=VALUE type user configuration> <integration_id>Delete a service integration:$ avn service integration-delete --project <project> <integration_id>Working with TeamsList account teams:$ avn account team list <account_id>Create a team:$ avn account team create --team-name <team_name> <account_id>Delete a team:$ avn account team delete --team-id <team_id> <account_id>Attach team to a project:$ avn account team project-attach --team-id <team_id> --project <project_name> <account_id> --team-type <admin|developer|operator|read_only>Detach team from project:$ avn account team project-detach --team-id <team_id> --project <project_name> <account_id>List projects associated to the team:$ avn account team project-list --team-id <team_id> <account_id>List members of the team:$ avn account team user-list --team-id <team_id> <account_id>Invite a new member to the team:$ avn account team user-invite --team-id <team_id> <account_id> <[email protected]>See 
the list of pending invitations:$ avn account team user-list-pending --team-id <team_id> <account_id>Remove user from the team:$ avn account team user-delete --team-id <team_id> --user-id <user_id> <account_id>Configuring OAuth2 ClientsList configured OAuth2 clients:$ avn account oauth2-client list <account_id>Get a configured OAuth2 client’s configuration:$ avn account oauth2-client list <account_id> --oauth2-client-id <client_id>Create a new OAuth2 client information:$ avn account oauth2-client create <account_id> --name <app_name> -d <app_description> --redirect-uri <redirect_uri>Delete an OAuth2 client:$ avn account oauth2-client delete <account_id> --oauth2-client-id <client_id>List an OAuth2 client’s redirect URIs:$ avn account oauth2-client redirect-list <account_id> --oauth2-client-id <client_id>Create a new OAuth2 client redirect URI:$ avn account oauth2-client redirect-create <account_id> --oauth2-client-id <client_id> --redirect-uri <redirect_uri>Delete an OAuth2 client redirect URI:$ avn account oauth2-client redirect-delete <account_id> --oauth2-client-id <client_id> --redirect-uri-id <redirect_uri_id>List an OAuth2 client’s secrets:$ avn account oauth2-client secret-list <account_id> --oauth2-client-id <client_id>Create a new OAUth2 client secret:$ avn account oauth2-client secret-create <account_id> --oauth2-client-id <client_id>Delete an OAuth2 client’s secret:$ avn account oauth2-client secret-delete <account_id> --oauth2-client-id <client_id> --secret-id <secret_id>Extra FeaturesAutocompleteavn supports shell completions. It requires an optional dependency: argcomplete. Install it:$ python3 -m pip install argcompleteTo use completions in bash, add following line to~/.bashrc:eval "$(register-python-argcomplete avn)"For more information (including completions usage in other shells) seehttps://kislyuk.github.io/argcomplete/.Auth HelpersWhen you spin up a new service, you’ll want to connect to it. The--jsonoption combined with thejqutility is a good way to grab the fields you need for your specific service. Try this to get the connection string:$ avn service get --json <service> | jq ".service_uri"Each project has its own CA cert, and other services (notably Kafka) use mutualTLS so you will also need theservice.keyandservice.certfiles too for those. Download all three files to the local directory:$ avn service user-creds-download --username avnadmin <service>For working withkcat(see also ourhelp article) or the command-line tools that ship with Kafka itself, a keystore and trustore are needed. By specifying which user’s creds to use, and a secret, you can generate these viaavntoo:$ avn service user-kafka-java-creds --username avnadmin -p t0pS3cr3t <service>ContributingCheck theCONTRIBUTINGguide for details on how to contribute to this repository.Keep ReadingWe maintain some other resources that you may also find useful:Command Line Magic with avnManaging Billing Groups via CLI
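The two JSON configuration files described in the authentication section above (~/.config/aiven/aiven-credentials.json and ~/.config/aiven/aiven-client.json) can also be created from Python with just the standard library. This is only a convenience sketch: the token, email, and project values are the placeholders from the documentation and must be replaced with your own.
import json
from pathlib import Path

config_dir = Path.home() / ".config" / "aiven"
config_dir.mkdir(parents=True, exist_ok=True)

# Credentials file with the auth token (placeholder values from the docs)
credentials = {"auth_token": "ABC1+123...TOKEN==", "user_email": "[email protected]"}
(config_dir / "aiven-credentials.json").write_text(json.dumps(credentials))

# Client config with the default project (placeholder value from the docs)
client_config = {"default_project": "yourproject-abcd"}
(config_dir / "aiven-client.json").write_text(json.dumps(client_config))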
aivika-modeler
Using Aivika Modeler, you can create quite fast discrete event simulation models that are translated into native code. Also you can run the simulation experiments by the Monte Carlo method, specifying that how the results should be processed. It can plot Time Series, Deviation chart by the confidence interval, plot histograms, save the results in the CSV files for the further analysis and more. All is defined in just a few lines of code written in Python. Then the report of the simulation experiment with charts, statistics summary and links to the saved CSV files is automatically opened in your Web browser.ExampleTo take a taste of Aivika Modeler, here is a complete simulation model and the corresponding experiment that define a simple queue network. The model contains a transact generator, two bounded queues, two servers and the arrival timer that measures the processing of transacts. The experiment launches 1000 simulation runs in parallel, plots charts and then opens a report with the results of simulation in the Web browser. The compilation, simulation and chart plotting took about 1 minute on my laptop.Example:Work Stations in SeriesThis is a model of two work stations connected in a series and separated by finite queues. It is described in different sources [1, 2]. So, this is chapter 7 of [2] and section 5.14 of [1].[1] A. Alan B. Pritsker, Simulation with Visual SLAM and AweSim, 2nd ed.[2] Труб И.И., Объектно-ориентированное моделирование на C++: Учебный курс. - СПб.: Питер, 2006The maintenance facility of a large manufacturer performs two operations. These operations must be performed in series; operation 2 always follows operation 1. The units that are maintained are bulky, and space is available for only eight units including the units being worked on. A proposed design leaves space for two units between the work stations, and space for four units before work station 1. [..] Current company policy is to subcontract the maintenance of a unit if it cannot gain access to the in-house facility.Historical data indicates that the time interval between requests for maintenance is exponentially distributed with a mean of 0.4 time units. Service times are also exponentially distributed with the first station requiring on the average 0.25 time units and the second station, 0.5 time units. Units are transported automatically from work station 1 to work station 2 in a negligible amount of time. If the queue of work station 2 is full, that is, if there are two units awaiting for work station 2, the first station is blocked and a unit cannot leave the station. A blocked work station cannot server other units.#!/usr/local/bin/python3fromsimulation.aivika.modelerimport*model=MainModel()# the transacts can have assignable and updatable fields, but it is not used heredata_type=TransactType(model,'Transact')# it will help us to measure the processing time of transactstimer=create_arrival_timer(model,name='timer',descr='Measures the processing time')timer_source=timer.add_result_source()# this is a generator of transactsinput_stream=exponential_random_stream(data_type,0.4)# a queue before the first workstationqueue1=create_queue(model,data_type,4,name='queue1',descr='Queue no. 1')queue1_source=queue1.add_result_source()# another queue before the second workstationqueue2=create_queue(model,data_type,2,name='queue2',descr='Queue no. 
2')queue2_source=queue2.add_result_source()# the first workstation activity is modeled by the serverworkstation1=exponential_random_server(data_type,0.25,name='workstation1',descr='Workstation no. 1')workstation1_source=workstation1.add_result_source()# this is the second workstationworkstation2=exponential_random_server(data_type,0.5,name='workstation2',descr='Workstation no. 2')workstation2_source=workstation2.add_result_source()# try to enqueue the arrivals; otherwise, count them as lostenqueue_stream_or_remove_item(queue1,input_stream)# a chain of streams originated from the first queuestream2=dequeue_stream(queue1)stream3=server_stream(workstation1,stream2)enqueue_stream(queue2,stream3)# another chain of streams, which must be terminated alreadystream4=dequeue_stream(queue2)stream5=server_stream(workstation2,stream4)stream5=arrival_timer_stream(timer,stream5)terminate_stream(stream5)# reset the statistics after 30 time unitsreset_time=30reset_queue(queue1,reset_time)reset_queue(queue2,reset_time)reset_server(workstation1,reset_time)reset_server(workstation2,reset_time)reset_arrival_timer(timer,reset_time)# it defines the simulation specsspecs=Specs(0,300,0.1)processing_factors=[workstation1_source.processing_factor,workstation2_source.processing_factor]# define what to display in the reportviews=[ExperimentSpecsView(),InfoView(),FinalStatsView(title='Processing Time (Statistics Summary)',series=[timer_source.processing_time]),DeviationChartView(title='Processing Factor (Chart)',right_y_series=processing_factors),FinalHistogramView(title='Processing Factor (Histogram)',series=processing_factors),FinalStatsView(title='Processing Factor (Statistics Summary)',series=processing_factors),FinalStatsView(title='Lost Items (Statistics Summary)',series=[queue1_source.enqueue_lost_count]),DeviationChartView(title='Queue Size (Chart)',right_y_series=[queue1_source.count,queue2_source.count]),FinalStatsView(title='Queue Size (Statistics Summary)',series=[queue1_source.count_stats,queue2_source.count_stats]),DeviationChartView(title='Queue Wait Time (Chart)',right_y_series=[queue1_source.wait_time,queue2_source.wait_time]),FinalStatsView(title='Queue Wait Time (Statistics Summary)',series=[queue1_source.wait_time,queue2_source.wait_time])]# it will render the reportrenderer=ExperimentRendererUsingDiagrams(views)# it defines the simulation experiment with 1000 runsexperiment=Experiment(renderer,run_count=1000)# it compiles the model and runs the simulation experimentmodel.run(specs,experiment)After running the simulation experiment, you will see the Deviation charts that will show the confidence intervals by rule 3 sigma. Also you will see a general information about the experiment as well as histograms and summary statistics sections for some properties such as the queue size, queue wait time, the processing time of transacts and the server processing factor in the final time point.How It WorksThe model written in Python is translated into its Haskell representation based on using the Aivika simulation libraries, namelyaivikaandaivika-transformers. Then the translated model is compiled by GHC into native code and executed. The simulation itself should be quite fast and efficient.For the first time, the process of compiling and preparing the model for running may take a few minutes. On next time, it may take just a few seconds.InstallationThere is one prerequisite, though. To use Aivika Modeler, you must haveStackinstalled on your computer. 
The main operating systems are supported: Windows, Linux and macOS.Then you can install theaivika-modelerpackage usingpipin usual way.LicenseAivika Modeler is licensed under the open-source BSD3 license like that how the main libraries of Aivika itself are licensed under this license.Combining Haskell and PythonIn most cases you do not need to know the Haskell programming language. The knowledge of Python will be sufficient to create and run many simulation models. But if you will need a non-standard component, for example, to simulate the TCP/IP protocol, then you or somebody else will have to write its implementation in Haskell and then create the corresponding wrapper in Python so that it would be possible to use the component from Python.There is a separation of concerns. Python is used as a high-level glue for combining components to build the complete simulation model, while Haskell is used as a high-level modeling language for writing such components.GPSSAivika itself also supports a DSL, which is very similar to the popular GPSS modeling language but not fully equivalent, though. This DSL is implemented in packageaivika-gpss. There are plans to add the corresponding support to Aivika Modeler too. Please stay tuned.WebsiteYou can find a more full information on websitewww.aivikasoft.com.
aivirtualassistant
This Desktop Assistant is a complete package for a virtual assistant of your own. It can do all basic tasks such as sending e-mails, greeting the user, taking spoken commands from the user, and speaking the text provided to it.
ai-virtual-assistant
AI virtual assitantIt is a terminal-based virtual assistant especially made for competitive programming. It has a lot of features, including running python or c++ file, parsing problem set with test cases and test against all the cases in one click, test with brute force solution, and many more. It will help you to boost your programming skill and help you to do a good performance in the programming contest.It can give voice reply and take your voice command. You can turn off or on these features. Basic settings can be easily changed from config option.For installing write the given commands,pip3 install wheelandpip3 install ai-virtual-assistantI recommand after installing checkout the config file. The config can be open by the given command,jarvis -configProgramming FeaturesRun c++ or python programCompetitive Companion Supportparse problemsetgenerate file with templatetest code against testcasesadd testcasebruteforce test solutionGenerate-testcase-genarator-automaticallylogin online judgesubmit codeParse contestCf tool modeOther FeaturesSpeaking Capabilitytaking voice commandSpeech RecognitionAi to answer quesiongoto any websitesolving mathwiki searchgoogle searchYouTube search & play videosinstall python modulelearn from answerdownload filesaccess from anywheresetup competitive companionInstallationRun python or cpp programAny python or c++ files from the current directory can be run using one command. The command is given below,jarvis -r "file_name"or,Cp -r "file_name"If you don't specify the file_name, it will list all the available python and c++ files in the current directory and you have to choose.You can run in debug mode. Debug mode is running C++ file with custom flags. The command for running in debug mode is given below,Cp -r -dIf you want to run the program more than one time you can do that. I thing this is one of the useful command because it helps to check mulitple tests in just one command and one compilation , that saves time. The command for running more than one time is given below,Cp -r -'number of times'If you want to keep executable file after running, you can use '-c' command,An example is given below,Cp -r -c -3 'file_name'with debug,Cp -r -cd -3 'file_name'it will run jarvis in debug mode and it will run 3 times. And after execution, it will keep the executable file.Parsing Problem from online judgeCompetitive Companion support makes parsing problems really very easy. Just give the command,jarvis -cp parseor,Cp parseor,jarvis -cp listenor,Cp listenHere -cp represent competitive programming,It will start listening, then you can just click the competitive companion browser extension. It will parse the problem.After parsing there will create a new folder according to the contest name and in that folder will be another folder according to the problem name. And it will contain all the sample test cases of that problem.Also, the problem can be parsed without competitive companion though I don't recommend this. the command is given below,jarvis -cp parse linkorCp parse linkThere is another way possible for parsing problem using id, which only works for codeforces. The command is,Cp parse idAfter giving the command it will ask for the problem URL. Just give the URL,it will parse the problem. There will be created a folder according to the problem name. And it will contain all the sample test cases of that problem.If you want to automtically open in editor after parsing you need to specify your editor from config option. 
By default it is set as None.Generate File with TemplateYou can easily generate your file with the template by the given command,jarvis -cp -t "file_name"orCp -t "file_name"If you don't specify the file_name it will be automatically created as "sol.cpp". You can create a python or c++ file.You have to specify your template path. Just open config file and find template_path and give your path for c++ and python.You can use variables in your template file which you will be replaced,variable available,$%CODER%$$%DATE_TIME%$$%PROBLEM_NAME%$$%PROBLEM_URL%$$%TIMELIMIT%$$%MEMORYLIMIT%$$%CODER%$ will be replaced by your name. It can be specified in coder_name in config file. Otherwise just change boss name from config. Boss name will be automatically mirrored to the coder_name.$%DATE_TIME%$ will be replaced by your file creating time and date.Example :Template file,/*** author: $%CODER%$* created: $%DATE_TIME%$**/#include<bits/stdc++.h>usingnamespacestd;intmain(){return0;}Genarated file,/*** author: Saurav Paul* created: Jun 06 2020 9:05 PM**/#include<bits/stdc++.h>usingnamespacestd;intmain(){return0;}Test solutionAfter parsing problem set, the solution can be tested by the given command,jarvis -cp test "filename"orCp test "filename"Giving filename is optionalIt will run all the sample and custom cases from the test folder(Test folder contains all the sample cases after parsing problem set) and check whether your solution is passed. It will show the taken time for running each case. If your code failed any test cases it will show the differences between the correct answer and your output. If every case passed then it will show passed.It is not necessary to have a parsed problem set for using this command. You can make a test folder and add input(.in) and output(.out) case into that folder and then run this command.Cp test --showThis command will show full datails even solution passed against testcases.Add TestcaseAdding testcase is really very easy. Just give the command,jarvis -cp addorCp addYes, that simple :sunglasses: .It will ask for input and output for your new case. Then it will add this case.Test solution with bruteforceIf you have any doubts about your optimal solution, then you can write a brute-force solution and write a random test case generator. You can test your optimal solution with a brute force solution using a random test case.For that, you need three files.1. Main solution 2. Bruteforce solution 3. Testcase Generator (My AI can generate it automatically)Then run this command,jarvis -cp bruteorCp bruteIt will ask for the number of times you want to generate random test cases and test solutions (Stress).It will match output with the brute-force solution's output. If it failed, it will show the differences and ask you to add this to your test case so that you can test this later. Otherwise, it will show Accepted :smile: .Generate testcase generator automaticallyTest case generator can be generated using the given command,jarvis -cp genorCp genIt will analyze all the sample cases and generate gen.py(Test case generator) automatically. Yes, sometimes it might fail (In case of complex test cases). In this case, you have to write a generator manually (You can write in python or c++).There is also one command, to generate gen.py, brute.cpp(empty file) and sol.cpp(with your template). The command is given below,jarvis -cp setuporCp setupLogin and submit to online JudgeFor login write the given command,jarvis -cp loginorCp loginIt will open login page in browser. 
You need webdriver for that purpose. Install webdriver for your browser. It's really very easy.For submitting code just write the given command,jarvis -cp submitorCp submitN.B.: I have used online-judge-api-client for login and submitting codes.Parsing contestParsing contest is the same as parsing problems using the competitive companion. Just write command,jarvis -cp parseorCp parsethen it will start listening, then just open the contest link and click the browser extension, it will parse all the problems and create a folder for each contest with their test cases.Also, the contest can be parsed without competitive companion though I don't recommend this. the command is given below,jarvis -cp parse contestorCp parse contestIt will ask for the contest link. Then it will parse all the problems.cf tool modeIf you use cf tool for submiting and racing contest. You can use enable this mode. If cf tool mode is enable it will use cf-tool for submitting problem in codeforces.You can enable this mode from config option.Open problem page in browserYou can open problem in browser by the given command,Cp openYou have to be in problem folder.Open standing page in browserYou can open standing page in browser by the given command,Cp standorCp standYou have to be in problem folder.Speaking and voice commandThis ai can speak with you. It will reply in voice and text both. You can toggle them from config.You can also give voice commands. But you have turned this feature on from config.For opening config option just write the following command,jarvis -configSpeech RecognitionJarvis can recognize the speech using google voice recognition API.AI to answer questionAs the name suggests you can have chat with it. You can ask jarvis a question, it will reply to you with his intelligence.goto websiteTo be honest this is one of my favorite features. You can ask Jarvis to go to any websites as with wish. It will open that in your browser.The command is given below,Jarvis goto "website name"It will open codeforces contest page for you. Basically, you can ask him to go to any website you want.It is okay to make some typing mistakes while writing a website name. It will still find it out.Solve MathThis ai can solve simple math. Just ask him to solve it will solve it for you.The command is given below,jarvis solve ( "math" )wiki searchFor searching something on wikipedia the command is given below,jarvis search wikipedia "your text"Search googleFor searching something on google the command is given below,jarvis search google "your text"Search youtubeFor searching something on youtube the command is given below,jarvis search youtube "your text"Play video on youtubejarvis play youtube "song name"Download FilesFor downloading the command is given below,jarvis downloadThen it will ask for the download link.ConfigIf you want to change settings, write the given command,jarvis -configCompetitive companionCompetitive companion is a browser extension that helps to parse problems from various online judges in just one click.For setting the competitive companion, you have to install the extension to your browser. Just search google, you will find the extension.By default this project listens on port 10043, which is a port Competitive Companion already sends parsed problems to by default. If you change the port this project listens on for problems received from Competitive Companion, you'll need to make sure Competitive Companion is properly configured to send problems to that port. 
To do this, right-click on the extension icon and click "Manage Extension". Then go to "Preferences" and add the port to the "Custom ports" field.To open config write the following command,jarvis -configInstallationPre-requirements :Python-3.5+Pip3For installing write the given commands,pip3 install wheelandpip3 install ai-virtual-assistantI recommand after installing checkout the config file. The config can be open by the given command,jarvis -configN.B : It works fine on Linux. It also should work on Mac os. Unfortunately, it has some problems with windows. If you want to install on windows, you have to install it via WSL(Windows Subsystem for Linux).If you want to contribute on this project you are welcome.
aivision
aivision is a computer vision package that helps you build a variety of computer vision projects. For more info see the GitHub page: https://github.com/chinmay18030/aivision
aivision.pvt
Has the timetable of Wisdom High International Grade 9.
aivisiontools
No description available on PyPI.
aiv-lib
AIV Library for Python
aiv-logging
A common logging package used within AI Academy Viet Nam.
aivm
No description available on PyPI.
aiv-ml
No description available on PyPI.
ai-vocabulary-builder
AI 生词本AI 生词本(“AI Vocabulary Builder” 简称 aivoc)是一个利用了 AI 技术的智能生词本工具,它能帮你快速构建起自己的生词库,学习起来事半功倍。核心功能:提供高质量的整句翻译能力由 AI 自动提取生词及释义独创的故事模式助记生词支持 CSV 等格式导出生词本工具截图:↑ Web App “笔记本”模式↑ 交互式翻译,自动提取生词↑ 通过阅读故事,牢固掌握生词快速开始本工具基于 Python 开发,请使用 pip 来安装本工具:#需要Python版本3.7及以上pip install ai-vocabulary-builder安装完成后,请在环境变量中设置你的OpenAI API key:#使用你在OpenAI官网上申请到的key替换该内容export OPENAI_API_KEY='your_api_key'之后执行aivoc run启动工具,进入交互式命令行模式。或者执行aivoc notebook,在浏览器中打开可交互式 Web App(推荐)。除环境变量外,你也可以通过--api-key参数完成设置:aivoc run --api-key "your_api_key"使用指南使用 Web App执行aivoc notebook命令,使用可交互式 Web App。交互式命令行执行aivoc run命令,会进入交互式命令行模式,在该模式下,你可以快速完成添加生词、阅读故事等操作。添加生词默认情况下,命令行处于“添加生词”模式,此时你可以直接粘贴一小段英文:Enter text> It depicted simply an enormous face, more than a metre wide按下回车后,工具会开始翻译工作。它首先会将你所输入内容的中文翻译打印到屏幕上。然后,它会从原文中提取出一个你最有可能不认识的单词,将其加入到生词本中。Translation Result ┌───────────────┬─────────────────────────────────────────────────────────────┐ │ Original Text │ It depicted simply an enormous face, more than a metre wide │ │ Translation │ 它只是简单地描绘了一个巨大的面孔,超过一米宽。 │ └───────────────┴─────────────────────────────────────────────────────────────┘ ⠴ Extracting word > The new word AI has chosen is "depicted". ┏━━━━━━━━━━┳━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Word ┃ Pronunciation ┃ Definition ┃ ┡━━━━━━━━━━╇━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ depicted │ /dɪˈpɪkt/ │ 描述,描绘(原词:depict) │ └──────────┴───────────────┴────────────────────────────┘ "depicted" was added to your vocabulary book (78 in total), well done!重选生词某些情况下,工具所挑选的生词可能并非你所想的那个。此时,通过输入no命令,你可以启动一次重选:Enter text> no上一次被添加到生词本的单词会被丢弃,工具将尝试重新返回 4 个新生词(可能包含刚被丢弃的词),如下所示:"depicted" has been discarded from your vocabulary book. ⠋ Extracting multiple new words ? Choose the word(s) you don't know (Use arrow keys to move, <space> to select, <a> to toggle, <i> to invert) » ○ depicted / (原词:depict) / dɪˈpɪkt / 描绘,描述 ○ metre / (原词:meter) / ˈmiːtə(r) / 米 ○ simply / ˈsɪmpli / 简而言之,仅仅 ○ enormous / ɪˈnɔːməs / 巨大的,庞大的 ○ None of above, skip for now.请按↑↓方向键移动游标,按空格选中你想要的词(支持多选),按下回车确认。选中的单词会被添加到你的生词本中。? Choose the word(s) you don't know done (2 selections) New word(s) added to your vocabulary book: "metre,enormous" (79 in total), well done!假如你所想的单词仍然没有出现在选项中,请选择None of above, skip for now.,跳过本次添加。别气馁,祝你下次好运。😁查看生词使用list命令可以查看生词本中最近添加的生词,默认展示 10 条:Enter text> list该命令接收一个可选参数:limit,用来指定生词的数量。常见用法:# 查看最近 5 条 Enter text> list 5 # 查看所有生词 Enter text> list all删除生词使用remove命令可以进入“删除生词”模式。在该模式下,你可以输入单词(按↑↓方向键选择自动补全),再按回车键将其从生词本中删除。除手动输入外,你还用可以用鼠标选择单词。要退出“删除生词”模式,输入 q (或不输入任何内容)按下回车,工具将退回到“翻译模式”。阅读故事来助记生词为了快速并牢固掌握生词本里的单词,本工具提供了一个创新的故事模式。在交互式命令行模式下,输入story开始故事模式:Enter text> story工具将从生词本里挑选出 6 个单词,请求 AI 用这些词写一个小故事。输入如下所示:Words for generating story: prudent, extraneous, serendipitously, onus, aphorisms, cater ⠼ Querying OpenAI API to write the story... ╭─────────────────────────────────────────── Enjoy your reading ────────────────────────────────────────────╮ │ Once there was a prudent young girl named Alice who always carried a small notebook with her. She wrote │ │ down aphorisms and wise sayings that she heard from her elders or from books. It was an extraneous task, │ │ but Alice believed that it helped her to be wise and joyful. │ │ │ │ One day, Alice went for a walk in the park and serendipitously met an old man. He was reading a book, and │ │ Alice noticed that he had marked some phrases with a pencil. She greeted him and asked about the book. │ │ They started to chat about literature, and the man shared some of his favorite aphorisms. 
│ │ │ │ Alice was delighted, and she wrote down the new sayings in her notebook. After their conversation, the │ │ man thanked Alice and said that he felt as if a heavy onus had been lifted from his chest. Alice smiled │ │ and said that it was her pleasure to cater to his needs. │ │ │ │ From then on, Alice and the old man often met in the park to exchange knowledge and wisdom. They learned │ │ that serendipity could bring unexpected blessings to life. │ ╰───────────────────────────────────────────────────────────────────────────────────────────────────────────╯阅读结束后,按下回车键,你可以继续查看在故事中出现的所有生词的详细信息。其他功能导出生词本你可以使用export命令来导出你的生词本。以下是一些示例:# 直接往屏幕输出文本格式 aivoc export # 直接往屏幕输出 CSV 格式 aivoc export --format csv # 往 ./voc.csv 写入 CSV 格式的生词本 aivoc export --format csv --file-path ./voc.csv删除生词如果你觉得你已经牢牢掌握了某个生词,你可以将它从生词本里删除。执行remove命令来完成这个任务:#enormous和depicted为需要删除的单词,多个单词使用空格分隔aivoc remove enormous depicted常用配置此处列举了本工具的所有全局配置项。目前仅支持通过环境变量来完成配置,未来将增加对配置文件的支持。如果你想了解各子命令支持哪些个性化参数,比如“导出”支持哪些格式和参数,请使用--help参数,比如:aivoc export --help。OPENAI_API_KEY工具调用 OpenAI 的 API 时所使用的API Key,必须设置。示例:export OPENAI_API_KEY='your_api_key'OPENAI_API_BASE工具所使用的 OpenAI 的 API 地址,可选设置。仅当默认 API 地址(https://api.openai.com/v1)无法正常访问时指定。示例:# 将 www.example.com 替换为你的域名exportOPENAI_API_BASE="https://www.my-openai-proxy.com/v1"💡 请关注地址配置中的/v1部分。是否添加它,取决于你的代理配置如何。不确定的话可以先写上,如果无法成功调用,再去掉/v1试试看。AIVOC_DATA_DIR指定生词本储存数据文件的路径, 默认路径为当前登录用户的 home 目录:~/示例:export AIVOC_DATA_DIR="$HOME/Documents"为什么开发这个工具?学习一门语言,生词本是一个非常重要的工具。一个内容优秀的生词本,至少需要包含:生词、释义、例句、例句释义这些内容。但是,手动维护这些内容非常麻烦,因此大部分人都没有自己的生词本。阅读时碰见生词,常常查过词典,转头就忘。“AI 生词本”尝试着使用 ChatGPT 的能力,将生词本的维护成本降到最低,让每人都可以拥有自己的生词本。TODO支持bob-plugin-openai-translator插件,实现划词自动扩充生词本。
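The aivoc export command and the environment variables documented in the configuration section above can also be driven from a short Python wrapper. This is only a convenience sketch; the API key and data directory values are placeholders taken from the documentation.
import os
import subprocess

env = dict(os.environ)
env["OPENAI_API_KEY"] = "your_api_key"                      # required, as documented above
env["AIVOC_DATA_DIR"] = os.path.expanduser("~/Documents")   # optional, as documented above

# Export the vocabulary book to a CSV file, exactly the command shown above
subprocess.run(
    ["aivoc", "export", "--format", "csv", "--file-path", "./voc.csv"],
    env=env,
    check=True,
)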
ai-voice-sdk
AI VOICE SDK
Contents: Introduction, Requirements, Getting started, Installation, Tutorials
Introduction: AI Voice is ATEN's speech synthesis service (優聲學). Using this SDK requires a subscription to the AI Voice service; to subscribe, leave your contact details at https://www.aivoice.com.tw. ATEN AI Voice offers a limited enterprise trial edition of its speech synthesis service with six high-quality voices and large-volume synthesis. Enterprise users are welcome to fill in the form to learn more about the enterprise trial plans.
Requirements: Python: python >= 3.7. Supported version: API == v1.x
Getting started / Installation:
Install the SDK via pip: pip install ai-voice-sdk
Install the SDK manually:
git clone https://github.com/ATEN-International/ai-voice-sdk-python.git
cd ai-voice-sdk-python
python -m pip install wheel
python setup.py sdist bdist_wheel  # build the SDK installation package
pip install dist/ai_voice_sdk-x.x.x-py3-none-any.whl  # install the SDK; 'x.x.x' is the current version number
Tutorials: tutorial
ai-voice-sdk-standard
AI VOICE SDK簡介AI Voice是宏正自動科技的語音合成服務優聲學,使用本SDK是必須租用優聲學服務。租用服務請至https://www.aivoice.com.tw/business/enterprise上留下聯絡資料。宏正優聲學,推出限量企業標準版之語音合成服務,提供多個優質美聲,大量語音合成,歡迎企業用戶填寫表格連繫, 了解更多企業標準版方案細節!需求Windows需要安裝Microsoft C++ Build Tools,不然下載相依套件時會報error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/錯誤,相關資訊https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-instPythonpython >= 3.7支援SSML版本version == v1.2安裝方式pip安裝SDKpip install ai-voice-sdk-standard手動安裝SDKpython-mpipinstallwheel pythonsetup.pysdistbdist_wheel# 建立SDK安裝檔pipinstalldist\ai_voice_sdk_standard-x.x.x-py3-none-any.whl# 安裝SDK,其中 'x.x.x' 填入現在的版本號使用方式我們目前支援10個不同的聲優,而他們支援2種語言,包括中文和英文。以下範例程式是如何使用AI-Voice-SDK執行方式有分為一般和即時聲音播放模式使用一般模式,執行後文章送出到AI Voice server處理完以後,聲音資料送回來並合成一個.wav檔案# 設定一般模式# RunMode.NORMAL為default值converter.config.set_run_model(aivoice.RunMode.NORMAL)使用即時聲音播放模式,執行後文章送出到AI Voice server,將會開始即時播放聲音# 設定即時聲音播放模式converter.config.set_run_model(aivoice.RunMode.LIVE_PLAY_AUDIO)文字加入方式:文字,SSML格式,宏正優聲學RTF格式,文字檔,SSML格式檔案# 加入文字converter.text.add_text(text="歡迎體驗宏正優聲學,讓好聲音為您的應用提供加值服務。",position=-1)# 加入SSML格式converter.text.add_ssml_text(text="""<speak xmlns="http://www.w3.org/2001/10/synthesis" version="1.2" xml:lang="zh-TW"><voice name="Aaron">宏正自動科技的人工智慧語音合成技術,帶來超逼真<phoneme alphabet="bopomo" lang="TW" ph="ㄉㄜ˙">的</phoneme>合成語音<break time="300ms"/>:自然、真實,讓您拉近與客戶的距離,提高滿意度,帶來轉換率。</voice></speak>""",position=-1)# 加入宏正優聲學RTF格式converter.text.add_webpage_text(text="""按下合成鍵之前,我們[:ㄇㄣˊ]建議您先確認2個[:ㄍㄜ˙]問題:您的文章轉成語音之後,是好聽流暢的嗎?[:1.2秒]您有[:ㄧㄡˇ]將閱讀文,轉為聆聽文嗎?""",rate=1.01,pitch=0,volume=2.45,position=-1)# 讀取純文字檔加入converter.text.open_text_file(file_path="./textfile.txt",encode="utf-8",position=-1)# 讀取SSML格式的檔案converter.text.open_text_file(file_path="./ssmlfile.ssml",encode="utf-8",position=-1)合成聲音教學使用環境變數設定Token和AI Voice Server URL使用Command Prompt環境變數設定Token和AI Voice Server URL@rem 改為AI Voice網頁上的 API_ACCESS_TOKENsetx AI_VOICE_SDK_TOKEN your-token@rem Aten AI Voice Server URLsetx AI_VOICE_URL https://www.aivoice.com.tw/business/enterprise完整程式#coding:utf-8importosimportai_voice_sdkasaivoice# token = "API_ACCESS_TOKEN"token=os.environ.get('AI_VOICE_SDK_TOKEN')server=os.environ.get('AI_VOICE_URL')# 加入tokens內tokens=[token]# 建立轉換器設定檔# server_url 預設為 https://www.aivoice.com.tw/business/enterprise,可不填config=aivoice.ConverterConfig(tokens=tokens,server_url=server)# 選擇設定檔內選用的語音config.set_voice(aivoice.Voice.CALM_HANNAH)# 建立轉換器converter=aivoice.VoiceConverter(config=config)# 設定執行模式# RunMode.NORMAL為default值converter.config.set_run_mode(aivoice.RunMode.NORMAL)converter.text.add_text(text="歡迎體驗宏正優聲學,讓好聲音為您的應用提供加值服務。",position=-1)converter.text.add_ssml_text(text="""<speak xmlns='http://www.w3.org/2001/10/synthesis' version='1.2' xml:lang='zh-TW'><voice name='Aurora'>歡迎體驗宏正優聲學,讓好聲音為您的應用提供加值服務。</voice><voice name='Jason'>歡迎體驗宏正優聲學,讓好聲音為您的應用提供加值服務。</voice></speak>""",position=-1)converter.text.show()# 執行合成語音,且取得語音內容result=converter.run(interval_time=0,is_wait_speech=True)ifresult.status==aivoice.ConverterStatus.GetSpeechSuccess:print("Get speech data success.")# 將語音另存為"aivoice.wav",且當語音數量超過一個時,將語音檔各別存為單一檔案result.save("aivoice",is_merge=True)else:ifresult.status==aivoice.ConverterStatus.GetSpeechFail:print(f"Error message:{result.error_message}")elifresult.status==aivoice.ConverterStatus.ConverVoiceFail:print(f"Error message:{result.error_message}")else:print(f"Converter status:{result.status.name}, 
Detail:{result.detail}")詳細教學:Tutorial
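For the live playback mode described above (RunMode.LIVE_PLAY_AUDIO), the same documented calls apply. This sketch only swaps the run mode used in the full example above and abbreviates the token handling, so treat it as an illustration rather than a second reference example.
import os
import ai_voice_sdk as aivoice

tokens = [os.environ.get('AI_VOICE_SDK_TOKEN')]
config = aivoice.ConverterConfig(tokens=tokens, server_url=os.environ.get('AI_VOICE_URL'))
config.set_voice(aivoice.Voice.CALM_HANNAH)

converter = aivoice.VoiceConverter(config=config)
converter.config.set_run_mode(aivoice.RunMode.LIVE_PLAY_AUDIO)   # live playback instead of NORMAL
converter.text.add_text(text="歡迎體驗宏正優聲學,讓好聲音為您的應用提供加值服務。", position=-1)

# In live mode the speech starts playing as soon as conversion begins
result = converter.run(interval_time=0, is_wait_speech=True)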
aivoifu
AIwaifu Vocal Pipelinean implementation of AIwaifu Vocal Pipeline to make it easier to create a fast and easy-to-use Cute Waifu Voice Text -> TTS -> Voice Conversion -> DoneUsage# pip install aivoifupoetryinstallaivoifu# Recommended using poetryfromAIvoifuimportclient_pipelinemodel=client_pipeline.tts_pipeline(tts_model_selection='gtts',vc_model_selection='ayaka-jp',hubert_model='zomehwh-hubert-base',language='en')model.tts('Hello This Is A Test Text Anyway',save_path='./test.wav')How to add your own TTS/Voice Conversion pipelineFirst of all we need to understand how AIwaifu Vocal Pipeline was design There's two componentsTTS: This is a typical text to speech model which you can add your ownVoice_Conversion: This is our Heroine on making TTS sound like a cute girl by converting TTS speech using Voice Conversion which you can train easilyFirst of all we get TTS voice from TTS pipeline, adn the we do voice conversion on itCustom TTSin the TTS folder you'll foundtts_base_model: foldertts.pyif you wish to add your own TTS model please create a class in tts.py and in the function constructor please download the weight and cache it under the tts_base_model along with other cache file# Example codeclassCustomTTS:def__init__(self)->None:fromsomethingimporttts_libraryself.model_link='Huggingface_link'root=os.path.dirname(os.path.abspath(__file__))self.model_root_path=f'{root}/base_tts_model/'self.model_name='custom_tts'model_path=f'{self.model_root_path}/{self.model_name}'ifnotos.path.exist(model_path)os.mkdir(model_path)weigh_path=f'{model_path}/{self.model_name}.pth'ifnotos.path.exist(weigh_path):wget.download(self.model_link,weigh_path)self.model=tts_library.load(weigh_path)print(f'model{self.model_name}initialized')deftts(self,text:str,save:boolean=True,your_args:any):# some preprocessingoutput=self.model.tts(text)ifsave:output.save('save_path')returnoutputCustom Voice Conversionif you're not looking to add new language but just want to custom the model voice easilyOur recommendation is to just Train Voice Conversion pipeline on your own samples and added it to the zoo
aivp
AI and Voice Processing Library.
aivtu
VTU lab programs.
ai-watchdog
Watchdog. About the project: Watchdog is a simple class to store and manage configurations. Getting started / Installation: pip install ai-watchdog
aiway
This is the aiway package.
ai-web
An AI algorithm API generation module. For details, see GitHub: https://github.com/CLANNADHH/ai_web
ai-win
No description available on PyPI.
ai-workbench
No description available on PyPI.
aiworkflow
aiflow: Navigator Client
This is the Navigator client package used to connect to and interact with Navigator.
aiworkflows
No description available on PyPI.
ai-workshop
No description available on PyPI.
aiworld
No description available on PyPI.
aiworlds
No description available on PyPI.
aix
aix
Artificial Intelligence eXtensions
Fast access to your favorite A.I. tools.
To install: pip install aix
Examples
Want all your faves at your fingertips? Never remember where to import that learner from? Say LinearDiscriminantAnalysis? ... was it from sklearn? ... was it from sklearn.linear_model? ... ah no! It was from sklearn.discriminant_analysis import LinearDiscriminantAnalysis.
Sure, you can do that. Or you can simply type from aix.Lin..., hit tab, and there it is! Select, enter, and move on with real work.
Note: This is meant to get you off the ground quickly -- once your code is stable, you should probably import your stuff directly from its origin.
Coming up
Now that the AI revolution is on its way, we'll add the ability to find, and one day use, the right AI tool -- until the day that AI will do even that for us...
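A minimal sketch of the idea, under the assumption (not verified against the package) that aix re-exports common scikit-learn estimators at the top level; the long-form scikit-learn import is the one the prose above spells out.
# Assumption: aix exposes sklearn estimators for quick, tab-completed imports.
from aix import LinearDiscriminantAnalysis   # hypothetical shortcut import
# Equivalent long-form import, as spelled out above:
# from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

clf = LinearDiscriminantAnalysis()
# clf.fit(X, y) on your own data, exactly as with the scikit-learn class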
aix360
AI Explainability 360 (v0.3.0)
The AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics. The AI Explainability 360 toolkit supports tabular, text, images, and time series data.
The AI Explainability 360 interactive experience provides a gentle introduction to the concepts and capabilities by walking through an example use case for different consumer personas. The tutorials and example notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
There is no single approach to explainability that works best. There are many ways to explain: data vs. model, directly interpretable vs. post hoc explanation, local vs. global, etc. It may therefore be confusing to figure out which algorithms are most appropriate for a given use case. To help, we have created some guidance material and a taxonomy tree that can be consulted.
We have developed the package with extensibility in mind. This library is still in development. We encourage you to contribute your explainability algorithms, metrics, and use cases. To get started as a contributor, please join the AI Explainability 360 Community on Slack by requesting an invitation here. Please review the instructions to contribute code and python notebooks here.
Supported explainability algorithms
Data explanations
- ProtoDash (Gurumoorthy et al., 2019)
- Disentangled Inferred Prior VAE (Kumar et al., 2018)
Local post-hoc explanations
- ProtoDash (Gurumoorthy et al., 2019)
- Contrastive Explanations Method (Dhurandhar et al., 2018)
- Contrastive Explanations Method with Monotonic Attribute Functions (Luss et al., 2019)
- Exemplar based Contrastive Explanations Method
- Grouped Conditional Expectation (adaptation of Individual Conditional Expectation Plots by Goldstein et al. to higher dimensions)
- LIME (Ribeiro et al. 2016, Github)
- SHAP (Lundberg, et al. 2017, Github)
Time-Series local post-hoc explanations
- Time Series Saliency Maps using Integrated Gradients (inspired by Sundararajan et al.)
- Time Series LIME (time series adaptation of the classic paper by Ribeiro et al. 2016)
- Time Series Individual Conditional Expectation (time series adaptation of Individual Conditional Expectation Plots, Goldstein et al.)
Local direct explanations
- Teaching AI to Explain its Decisions (Hind et al., 2019)
- Order Constraints in Optimal Transport (Lim et al., 2022, Github)
Global direct explanations
- Interpretable Model Differencing (IMD) (Haldar et al., 2023)
- CoFrNets (Continued Fraction Nets) (Puri et al., 2021)
- Boolean Decision Rules via Column Generation (Light Edition) (Dash et al., 2018)
- Generalized Linear Rule Models (Wei et al., 2019)
- Fast Effective Rule Induction (Ripper) (William W Cohen, 1995)
Global post-hoc explanations
- ProfWeight (Dhurandhar et al., 2018)
Supported explainability metrics
- Faithfulness (Alvarez-Melis and Jaakkola, 2018)
- Monotonicity (Luss et al., 2019)
Setup
Supported Configurations:
| Installation keyword | Explainer(s) | OS | Python version |
| --- | --- | --- | --- |
| cofrnet | cofrnet | macOS, Ubuntu, Windows | 3.10 |
| contrastive | cem, cem_maf | macOS, Ubuntu, Windows | 3.6 |
| dipvae | dipvae | macOS, Ubuntu, Windows | 3.10 |
| gce | gce | macOS, Ubuntu, Windows | 3.10 |
| imd | imd | macOS, Ubuntu | 3.10 |
| lime | lime | macOS, Ubuntu, Windows | 3.10 |
| matching | matching | macOS, Ubuntu, Windows | 3.10 |
| nncontrastive | nncontrastive | macOS, Ubuntu, Windows | 3.10 |
| profwt | profwt | macOS, Ubuntu, Windows | 3.6 |
| protodash | protodash | macOS, Ubuntu, Windows | 3.10 |
| rbm | brcg, glrm | macOS, Ubuntu, Windows | 3.10 |
| rule_induction | ripper | macOS, Ubuntu, Windows | 3.10 |
| shap | shap | macOS, Ubuntu, Windows | 3.6 |
| ted | ted | macOS, Ubuntu, Windows | 3.10 |
| tsice | tsice | macOS, Ubuntu, Windows | 3.10 |
| tslime | tslime | macOS, Ubuntu, Windows | 3.10 |
| tssaliency | tssaliency | macOS, Ubuntu, Windows | 3.10 |
(Optional) Create a virtual environment
AI Explainability 360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended to ensure dependencies may be installed safely. If you have trouble installing the toolkit, try this first.
Conda
Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious) and can be installed from here if you do not already have it.
Then, create a new python environment based on the explainability algorithms you wish to use by referring to the table above. For example, for python 3.10, use the following command:
conda create --name aix360 python=3.10
conda activate aix360
The shell should now look like (aix360) $. To deactivate the environment, run:
(aix360)$ conda deactivate
The prompt will return back to $ or (base)$.
Note: Older versions of conda may use source activate aix360 and source deactivate (activate aix360 and deactivate on Windows).
Installation
Clone the latest version of this repository:
(aix360)$ git clone https://github.com/Trusted-AI/AIX360
If you'd like to run the examples and tutorial notebooks, download the datasets now and place them in their respective folders as described in aix360/data/README.md.
Then, navigate to the root directory of the project, which contains the setup.py file, and run:
(aix360)$ pip install -e .[<algo1>,<algo2>,...]
The above command installs packages required by specific algorithms. Here <algo> refers to the installation keyword in the table above. For instance, to install packages needed by the BRCG, DIPVAE, and TSICE algorithms, one could use
(aix360)$ pip install -e .[rbm,dipvae,tsice]
The default command pip install . installs default dependencies alone.
Note that you may not be able to install two algorithms that require different versions of python in the same environment (for instance contrastive along with rbm).
If you face any issues, please try upgrading pip and setuptools and uninstall any previous versions of aix360 before attempting the above step again.
(aix360)$ pip install --upgrade pip setuptools
(aix360)$ pip uninstall aix360
PIP Installation of AI Explainability 360
If you would like to quickly start using the AI Explainability 360 toolkit without explicitly cloning this repository, you can use one of these options:
Install v0.3.0 via repository link
(your environment)$ pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[<algo1>,<algo2>,...]
For example, use pip install -e git+https://github.com/Trusted-AI/AIX360.git#egg=aix360[rbm,dipvae,tsice] to install BRCG, DIPVAE, and TSICE. You may need to install cmake if it is not already installed in your environment, using conda install cmake.
Install previous version v0.2.1 via pypi
(your environment)$ pip install aix360
v0.2.1 includes fewer explainability algorithms. The pip installable package of v0.3.0 will be made available on pypi soon.
If you follow either of these two options, you will need to download the notebooks available in the examples folder separately.
Dealing with installation errors
The AI Explainability 360 toolkit is tested on Windows, MacOS, and Linux. However, if you still face installation issues due to package dependencies, please try installing the corresponding package via conda (e.g. conda install package-name) and then install the toolkit by following the usual steps. For example, if you face issues related to pygraphviz during installation, use conda install pygraphviz and then install the toolkit.
Please use the right python environment based on the table above.
Running in Docker
Under the AIX360 directory, build the container image from the Dockerfile using docker build -t aix360_docker .
Start the container image using the command docker run -it -p 8888:8888 aix360_docker:latest bash, assuming port 8888 is free on your machine.
Inside the container, start jupyter lab using the command jupyter lab --allow-root --ip 0.0.0.0 --port 8888 --no-browser
Access the sample tutorials on your machine using the URL localhost:8888
Using AI Explainability 360
The examples directory contains a diverse collection of jupyter notebooks that use AI Explainability 360 in various ways. Both examples and tutorial notebooks illustrate working code using the toolkit. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and examples here.
Citing AI Explainability 360
If you are using AI Explainability 360 for your work, we encourage you to:
- Cite the following paper. The bibtex entry is as follows:
@misc{aix360-sept-2019,
  title = "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques",
  author = {Vijay Arya and Rachel K. E. Bellamy and Pin-Yu Chen and Amit Dhurandhar and Michael Hind and Samuel C. Hoffman and Stephanie Houde and Q. Vera Liao and Ronny Luss and Aleksandra Mojsilovi\'c and Sami Mourad and Pablo Pedemonte and Ramya Raghavendra and John Richards and Prasanna Sattigeri and Karthikeyan Shanmugam and Moninder Singh and Kush R. Varshney and Dennis Wei and Yunfeng Zhang},
  month = sept,
  year = {2019},
  url = {https://arxiv.org/abs/1909.03012}
}
- Put a star on this repository.
- Share your success stories with us and others in the AI Explainability 360 Community.
AIX360 Videos
Introductory video to AI Explainability 360 by Vijay Arya and Amit Dhurandhar, September 5, 2019 (35 mins)
Acknowledgements
AIX360 is built with the help of several open source packages. All of these are listed in setup.py and some of these include:
Tensorflow https://www.tensorflow.org/about/bib
Pytorch https://github.com/pytorch/pytorch
scikit-learn https://scikit-learn.org/stable/about.html
License Information
Please view both the LICENSE file and the folder supplementary license present in the root directory for license information.
aixapi
AIx GPT API
Free tier uses a CPU. GPU access costs just $8 / month. This speeds up responses by 10 seconds or more. Upgrade here for faster AI responses.
Submit issues and feature requests for our API here.
See https://apps.aixsolutionsgroup.com for more info.
Python Quick Start
pip install aixapi
pip install requests
Get an API key for free at https://apps.aixsolutionsgroup.com.
from aixapi import AIxResource
result = AIxResource("MY_API_KEY").compose("hello!")
print(result)
Python Documentation
To get documentation in python, run help(AIxResource) after importing AIxResource as shown above.
Documentation
For full documentation go to https://apps.aixsolutionsgroup.com and click on the Documentation tab.
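A small usage note grounded in the quick start above: help() is the documented way to inspect the client, and keeping the key in an environment variable avoids hard-coding it. The variable name AIX_API_KEY below is only an illustration, not part of the package.
import os
from aixapi import AIxResource

help(AIxResource)  # prints the Python documentation, as described above

# AIX_API_KEY is an illustrative environment variable name, not a package convention
resource = AIxResource(os.environ.get("AIX_API_KEY", "MY_API_KEY"))
print(resource.compose("hello!"))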
aixcalibuha
AixCaliBuHA
Aix (from French Aix-la-Chapelle) Calibration for Building and HVAC Systems
This framework attempts to make the process of calibrating models used in Building and HVAC Systems easier.
Key features
- Performing a Sensitivity Analysis to discover tuner parameters for the calibration
- Calibration of a given model based on the tuner parameters, the calibration classes and specified goals to evaluate the objective function of the underlying optimization
Installation
To install, simply run
pip install aixcalibuha
If you encounter an error with the installation of scikit-learn, first install scikit-learn separately and then install ebcpy:
pip install scikit-learn
pip install aixcalibuha
If this still does not work, we refer to the troubleshooting section of scikit-learn: https://scikit-learn.org/stable/install.html#troubleshooting. Also check issue 23 for updates.
In order to help development, install it as an egg:
git clone --recurse-submodules https://github.com/RWTH-EBC/AixCaliBuHA
pip install -e AixCaliBuHA
Framework structure
The core idea and motivation of AixCaliBuHA is described in the paper. The following image illustrates the overall toolchain automated by AixCaliBuHA.
At the core of AixCaliBuHA lies the definition of data types that link the Python data types to the underlying optimization problem and are used for all subsequent steps. The image below illustrates this. For more information, check the paper and the subsequent section on how to get started.
How to get started?
We split this section into two parts: how to get started with the theory of calibration, and how to get started with using this repo.
How can I calibrate my model?
While we aim at automating most parts of a calibration process, you still have to specify the inputs and the methods you want to use. We therefore recommend to:
- Analyze the physical system and theoretical model you want to calibrate
- Identify inputs and outputs of the system and model
- Get to know your tuner parameters and how they affect your model
- Plan your experiments and perform them
- Learn about the methods provided for calibration (statistical measures (RMSE, etc.), optimization, ...); a minimal RMSE sketch is given at the end of this entry
- Always be critical about the results of the process. If the model approach or the experiment is faulty, the calibration will perform accordingly.
How to start with AixCaliBuHA?
We have three services in place to help you with the setup of AixCaliBuHA. For the basics on using this repo, we recommend the Jupyter Notebook. If you want to set up your calibration models (in Modelica) and quickly start your first calibration, we provide a guided setup.
Jupyter Notebook
We recommend running our jupyter-notebook to be guided through a helpful tutorial. For this, run the following code:
# If jupyter is not already installed: pip install jupyter
# Go into your ebcpy-folder (cd \path_to_\AixCaliBuHA) or change to the absolute path of the tutorial.ipynb and run:
jupyter notebook AixCaliBuHA\examples\tutorial.ipynb
Examples
Clone this repo and look at the examples\README.md file. Here you will find several examples to execute.
Visualization
We provide different plots to make the process of calibration clearer to you. We go into detail on the different plots, what they tell you, and how you can enable/disable them. We refer to the plots by the file names they get.
objective_plot:
What do we see? The solver in use was "scipy_differential_evolution" using the "best1bin" method. After around 200 iterations, the solver begins to converge. The last 150 iterations don't yield a far better solution, so it is ok to stop the calibration here. You can do this using a KeyboardInterrupt (STRG/Ctrl + C).
How can we enable/disable the plot? Using the show_plot=True keyword argument (default is True).
tuner_parameter_plot:
What do we see? The variation of the tuner parameter values together with their specified boundaries (red lines). The tuner parameters vary significantly in the first 200 iterations. At convergence the values obviously also converge.
How can we enable/disable the plot? Using the show_plot=True keyword argument (default is True).
tsd_plot: created for two different classes - "stationary" and "Heat up"
What do we see? The measured and simulated trajectories of our selected goals. The grey part is not used for the evaluation of the objective function. As these values are NaN, matplotlib may interpolate linearly between the points, so don't worry if the trajectory is not logical in the grey area. Note that the initial values for the class "stationary" do not match the initial values of the measured data. Even if the parameters are set properly, the objective would yield a bad result. In this case you have to adapt the initial values of your model directly in the Modelica code (see section "Best practices").
How can we enable/disable the plot? Using the create_tsd_plot=True keyword argument for showing it each iteration, and save_tsd_plot=True for saving each of these plots. (Defaults are True and False, respectively.)
tuner_parameter_intersection_plot:
What do we see? This plot is generated if you calibrate multiple classes AND different classes partially share the same tuner parameters (an intersection of tuner_paras). In this case multiple "best" values arise for one tuner parameter. The plot shows the distribution of the tuner parameters if an intersection is present. You will also be notified in the log file. In the case this plot appears, you have to decide which value to choose. If they differ greatly, you may want to either perform a sensitivity analysis to check which parameter has the biggest impact OR re-evaluate your modelling decisions.
How can we enable/disable the plot? Using the show_plot=True keyword argument (default is True).
Documentation
Visit our official Documentation.
Problems?
Please raise an issue here.
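To make the "statistical measures (RMSE, etc.)" step above concrete, here is a minimal, library-agnostic sketch of the root-mean-square error that an objective function typically evaluates between measured and simulated trajectories. It is an illustration only, not AixCaliBuHA's own API, and the sample values are invented.
import numpy as np

def rmse(measured, simulated) -> float:
    """Root-mean-square error between a measured and a simulated trajectory."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return float(np.sqrt(np.mean((measured - simulated) ** 2)))

# Illustrative trajectories only; in a real calibration these come from
# measurement data and the simulated model output.
print(rmse([293.15, 294.0, 295.2], [293.0, 294.5, 295.0]))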
aix-caller
No description available on PyPI.
aixd
AI-eXtended Design (AIXD)
Introduction
In the current repository we collect the code for the general methodology for AI-augmented generative design. This methodology inverts the standard paradigm of parametric modelling, where the designer needs to tweak and tune the input parameters, iteratively or through trial and error, to achieve some desired performance values. Instead, this method lets the user obtain a range of designs that closely approximate the desired values simply by specifying the requirements. In addition, the methodology allows the user to explore the design space, understand how different parameters relate to each other, identify areas of feasible and unfeasible designs, etc.
Documentation
A detailed documentation of the aixd library is provided here. The documentation includes detailed installation instructions, API references, a user guide, application examples and more.
Installation
Install using conda:
conda env create -f environment.yml
This creates a conda environment called aixd with python 3.9 and all the dependencies defined in requirements.txt, as well as installing the aixd package itself in editable mode.
Development
If you are going to develop on this repository, also install the development requirements:
pip install -e ".[examples, dev]"
Check the contribution guidelines for more details.
Folders and structure
The structure we follow in the current repo is as follows:
examples: all example applications of the aixd toolbox
src: all source code, structured as follows
src/aixd: source code of the aixd toolbox
Known issues
Plotly image export can cause the system to hang. This is due to a bug in Kaleido (the library used by Plotly for image export) reported here. A workaround is to downgrade Kaleido to version 0.1.0.post1, which can be done by running pip install kaleido==0.1.0.post1.
aixlab.cn
No description available on PyPI.
aixlab.cn-pre
No description available on PyPI.
aixm
Author: zhangxinhao
Main features (Python): load balancing, multi-threaded video reading, logging, common utility tools, image processing components