aiida-phonopy
This is the official AiiDA plugin for Phonopy.

## Compatibility

From v0.7.0 this plugin is not backwards-compatible with previous versions, due to a restructuring of the package.

| Plugin | AiiDA | Phonopy |
|-|-|-|
| >=v1.0.0 <v2.0.0 | >=v2.0.0 <v3.0.0 | >=v2.14.0 <v3.0.0 |
| >=v0.7.0 <v1.0.0 | >=v1.6.0 <v2.0.0 | >=v2.14.0 <v3.0.0 |

## Installation

To install from PyPI, simply execute:

```shell
pip install aiida-phonopy
```

or when installing from source:

```shell
git clone https://github.com/aiida-phonopy/aiida-phonopy
pip install .
```

## License

The `aiida-phonopy` plugin package is released under the MIT license. See the `LICENSE.txt` file for more details.

## Acknowledgements

We acknowledge support from:

- the U Bremen Excellence Chairs program funded within the scope of the Excellence Strategy of Germany's federal and state governments;
- the MAPEX Center for Materials and Processes.
aiida-phtools
AiiDA plugin for persistence homology tools, used to analyze nanoporous materials.

## Installation

```shell
git clone https://github.com/ltalirz/aiida-phtools
cd aiida-phtools
pip install -e .        # also installs aiida, if missing (but not postgres)
# pip install -e .[precommit,testing]  # install extras for more features
verdi quicksetup        # better to set up a new profile
verdi calculation plugins  # should now show your calculation plugins
```

## Usage

Here goes a complete example of how to submit a test calculation using this plugin.

[email protected]
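The usage section above is still the plugin-cutter placeholder. As a rough, untested sketch of the generic AiiDA submission pattern: the entry point name `phtools.dmatrix` and the code label below are hypothetical placeholders, not taken from this plugin.

```python
# Generic shape of an AiiDA calculation submission, shown without a running
# profile. The entry point 'phtools.dmatrix' and code label 'phtools@localhost'
# are hypothetical -- check `verdi calculation plugins` for the names this
# plugin actually registers.

def build_inputs(code_label: str, num_machines: int = 1, walltime: int = 600) -> dict:
    """Assemble the generic inputs dictionary for a CalcJob submission."""
    return {
        'code': code_label,  # in a real profile: aiida.orm.load_code(code_label)
        'metadata': {
            'options': {
                'resources': {'num_machines': num_machines},
                'max_wallclock_seconds': walltime,
            }
        },
    }

inputs = build_inputs('phtools@localhost')
# With a configured profile one would continue with:
#   from aiida.plugins import CalculationFactory
#   from aiida.engine import submit
#   submit(CalculationFactory('phtools.dmatrix'), **inputs)
```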
aiida-plugin-template
UNKNOWN
aiida-plumed
AiiDA plugin providing support for Plumed2.

This plugin is the default output of the AiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.

## Features

Add input files using `SinglefileData`:

```python
SinglefileData = DataFactory('singlefile')
inputs['file1'] = SinglefileData(file='/path/to/file1')
inputs['file2'] = SinglefileData(file='/path/to/file2')
```

Specify command line options via a python dictionary and `DiffParameters`:

```python
d = {'ignore-case': True}
DiffParameters = DataFactory('plumed')
inputs['parameters'] = DiffParameters(dict=d)
```

`DiffParameters` dictionaries are validated using voluptuous. Find out about supported options:

```python
DiffParameters = DataFactory('plumed')
print(DiffParameters.schema.schema)
```

## Installation

```shell
pip install aiida-plumed
verdi quicksetup  # better to set up a new profile
verdi plugin list aiida.calculations  # should now show your calculation plugins
```

## Usage

Here goes a complete example of how to submit a test calculation using this plugin.

A quick demo of how to submit a calculation:

```shell
verdi daemon start     # make sure the daemon is running
cd examples
verdi run submit.py    # submit test calculation
verdi process list -a  # check status of calculation
```

The plugin also includes verdi commands to inspect its data types:

```shell
verdi data plumed list
verdi data plumed export <PK>
```

## Development

```shell
git clone https://github.com/ConradJohnston/aiida-plumed
cd aiida-plumed
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install                    # install pre-commit hooks
pytest -v                             # discover and run all tests
```

See the developer guide for more information.

[email protected]
aiida-porousmaterials
AiiDA plugin for the PorousMaterials package.

## Installation

```shell
git clone https://github.com/pzarabadip/aiida-porousmaterials
pip install -e .
```

**NOTE** Currently this is a minimal plugin for an ongoing project. It will be updated to support a wider range of calculations.

## License

MIT

## Contact

pzarabadip@gmail.com

## Acknowledgment

I would like to thank the funding received from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions and co-financing by the South Moravian Region under agreement 665860. This software reflects only the authors' view and the EU is not responsible for any use that may be made of the information it contains.
aiida-project
# AiiDA-Project

Tool for managing AiiDA "projects": Python environments tailored to AiiDA with separated project directories.

❗️ This package is still in the early stages of development and we will most likely break the API regularly in new 0.X versions. Be sure to pin the version when installing this package in scripts.

## Installation

The package can be installed globally with pipx:

```console
$ pipx install aiida-project
  installed package aiida-project 0.4.0, installed using Python 3.9.16
  These apps are now globally available
    - aiida-project
done! ✨ 🌟 ✨
```

See the pipx installation instructions if you haven't already installed pipx.

## Usage

After installing aiida-project, run the `init` command to get started:

```console
$ aiida-project init
👋 Hello there! Which shell are you using? [bash/zsh/fish] (zsh):
✨🚀 AiiDA-project has been initialised! 🚀✨

Info: For the changes to take effect, run the following command:

  source /Users/mbercx/.zshrc

or simply open a new terminal.
```

This will also add the `cda` function to your shell startup file, so you can easily switch projects. Note that you'll have to source your e.g. `.zshrc` file for this function to be accessible!

### create

After initialising, you can create new projects with their own virtual environment and project directory using the `create` command. The latest version of aiida-core will also be installed automatically, along with any plugins you specify with the `--plugin` option:

```console
$ aiida-project create firstproject --plugin aiida-quantumespresso
✨ Creating the project directory and environment using the Python binary:
/opt/homebrew/Cellar/[email protected]/3.11.3/Frameworks/Python.framework/Versions/3.11/bin/python3.11
🔧 Adding the AiiDA environment variables to the activate script.
✅ Success: Project created.
💾 Installing the latest release of the AiiDA core module.
💾 Installing `aiida-quantumespresso` from the PyPI.
```

You can then activate the project using the `cda` command described above:

```console
$ cda firstproject
```

Next to activating the Python virtual environment, this will also change the directory to the one for the project. aiida-project automatically sets up a directory structure, which we intend to be made configurable globally:

```console
(firstproject) ~/project/firstproject$ tree -a
.
├── .aiida
│   ├── access
│   ├── config.json
│   └── daemon
│       └── log
├── code
└── setup
    ├── code
    ├── computer
    └── profile

9 directories, 1 file
```

Note: You may not have the `tree` command installed on your system.

### destroy

Projects can be cleaned up by using `aiida-project destroy`. First `deactivate` the environment:

```console
$ deactivate firstproject
```

Next you can run the `destroy` command:

```console
$ aiida-project destroy firstproject
❗️ Are you sure you want to delete the entire firstproject project? This cannot be undone! [y/N]: y
Succes: Project with name firstproject has been destroyed.
```

This will remove both the virtual environment, as well as the whole project folder:

```console
~/project$ tree -a
.
└── .aiida_projects
    ├── conda
    └── virtualenv

3 directories, 0 files
```

## Other features

### virtualenvwrapper integration

If you are already using virtualenvwrapper, the virtual environments will be installed in the same directory as the one used by virtualenvwrapper (i.e. `$WORKON_HOME`). So you can then also use the `workon` command:

```console
aiida@prnmarvelsrv3:~$ workon firstproject
```

### Environment configuration

Automatically sets some typical AiiDA UNIX environment variables, like `AIIDA_PATH`, and the shell completion (bash/zsh for now, fish support coming soon!):

```console
$ echo $AIIDA_PATH
/Users/mbercx/project/firstproject
```

## Future goals

For now it just installs AiiDA and plugins, but in the future we want it to be able to also automatically set up the AiiDA database, repository and default profile.

```console
(firstproject) aiida@prnmarvelsrv3:~/project/firstproject$ verdi status
✔ version: AiiDA v2.3.0
✔ config:  /home/aiida/project/firstproject/.aiida
⏺ profile: no profile configured yet
Report: Configure a profile by running `verdi quicksetup` or `verdi setup`.
```

Projects are pydantic data models, and are stored as JSON in the `.aiida_projects` directory. Over time it should be possible to completely regenerate a project based on this file, but that's still a work in progress:

```console
(firstproject) aiida@prnmarvelsrv3:~/project/firstproject$ cd ..
(firstproject) aiida@prnmarvelsrv3:~/project$ tree -a .aiida_projects/
.aiida_projects/
├── conda
└── virtualenv
    └── firstproject.json

2 directories, 1 file
```
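The idea of regenerating a project from its JSON file can be pictured with a plain dataclass round-trip. Note that aiida-project itself uses pydantic models, and the field names below are invented for demonstration only.

```python
import json
from dataclasses import asdict, dataclass

# Rough illustration of "a project is a data model persisted as JSON".
# The real package uses pydantic; these fields are NOT its actual schema.
@dataclass
class Project:
    name: str
    project_path: str
    engine: str  # e.g. 'virtualenv' or 'conda'

project = Project('firstproject', '~/project/firstproject', 'virtualenv')
serialized = json.dumps(asdict(project))       # what would land in firstproject.json
restored = Project(**json.loads(serialized))   # regenerate the project from the file
```

The round-trip property (`restored == project`) is exactly what makes the stored JSON sufficient to rebuild a project definition later.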
aiida-pseudo
AiiDA plugin that simplifies working with pseudopotentials. For more information on how to install and use the package, please consult the documentation.

## Compatibility matrix

The following table shows which versions of aiida-pseudo are compatible with which versions of AiiDA and Python.

| Plugin | AiiDA | Python |
|-|-|-|
| v1.5.0 < v2.0.0 | | |
| v0.8.0 < v1.5.0 | | |
| v0.7.0 < v0.8.0 | | |
| v0.1.0 < v0.7.0 | | |

## License

The aiida-pseudo plugin package is released under the MIT license. See the LICENSE.txt file for more details.

## Acknowledgements

We acknowledge support from:

- NCCR MARVEL funded by the Swiss National Science Foundation
- EU Centre of Excellence MaX (Horizon 2020 EINFRA-5, Grant No. 676598)
- swissuniversities P-5 project "Materials Cloud"
aiida-pyscf
An AiiDA plugin for the Python-based Simulations of Chemistry Framework (PySCF).

- Installation
- Requirements
- Setup
- Examples
  - Mean-field calculation
  - Customizing the structure
  - Optimizing geometry
  - Writing Hamiltonian to FCIDUMP files
  - Writing orbitals to CUBE files
  - Restarting unconverged calculations
  - Automatic error recovery
  - Pickled model

## Installation

The recommended method of installation is through pip:

```shell
pip install aiida-pyscf
```

## Requirements

To use aiida-pyscf a configured AiiDA profile is required. Please refer to the documentation of aiida-core for detailed instructions.

## Setup

To run a PySCF calculation through AiiDA using the aiida-pyscf plugin, the computer needs to be configured where PySCF should be run. Please refer to the documentation of aiida-core for detailed instructions.

Then the PySCF code needs to be configured. The following YAML configuration file can be taken as a starting point:

```yaml
label: pyscf
description: PySCF
computer: localhost
filepath_executable: python
default_calc_job_plugin: pyscf.base
use_double_quotes: false
with_mpi: false
prepend_text: ''
append_text: ''
```

Write the contents to a file named `pyscf.yml`, making sure to update the value of `computer` to the label of the computer configured in the previous step. To configure the code, execute:

```shell
verdi code create core.code.installed --config pyscf.yml -n
```

This should now have created the code with the label `pyscf` that will be used in the following examples.

## Examples

### Mean-field calculation

The default calculation is to perform a mean-field calculation.
At a very minimum, the structure and the mean-field method should be defined:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('H2O'))
builder.parameters = Dict({'mean_field': {'method': 'RHF'}})
results, node = run.get_node(builder)
```

This runs a Hartree-Fock calculation on the geometry of a water molecule. The main results are stored in the `parameters` output, which by default contains the computed `total_energy` and `forces`, details on the molecular orbitals, as well as some timing information:

```python
print(results['parameters'].get_dict())
{
    'mean_field': {
        'forces': [
            [-6.4898366104394e-16, 3.0329042995656e-15, 2.2269765466236],
            [1.122487932593e-14, 0.64803103141326, -1.1134882733107],
            [-1.0575895664886e-14, -0.64803103141331, -1.1134882733108]
        ],
        'forces_units': 'eV/Å',
        'molecular_orbitals': {
            'labels': ['0 O 1s', '0 O 2s', '0 O 2px', '0 O 2py', '0 O 2pz', '1 H 1s', '2 H 1s'],
            'energies': [-550.86280025028, -34.375426862456, -16.629598134599, -12.323304634736, -10.637428057751, 16.200273277782, 19.796075801491],
            'occupations': [2.0, 2.0, 2.0, 2.0, 2.0, 0.0, 0.0]
        },
        'total_energy': -2039.8853743664,
        'total_energy_units': 'eV',
    },
    'timings': {'total': 1.3238215579768, 'mean_field': 0.47364449803717},
}
```

### Customizing the structure

The geometry of the structure is fully defined through the `structure` input, which is provided by a `StructureData` node.
Any other properties, e.g., the charge and what basis set to use, can be specified through the `structure` dictionary in the `parameters` input:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('H2O'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'structure': {
        'basis': 'sto-3g',
        'charge': 0,
    }
})
results, node = run.get_node(builder)
```

Any attribute of the `pyscf.gto.Mole` class which is used to define the structure can be set through the `structure` dictionary, with the exception of the `atom` and `unit` attributes, which are set automatically by the plugin based on the `StructureData` input.

### Optimizing geometry

The geometry can be optimized by specifying the `optimizer` dictionary in the input `parameters`. The `solver` has to be specified; currently the solvers `geometric` and `berny` are supported. The `convergence_parameters` accepts the parameters for the selected solver (see the PySCF documentation for details):

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('H2O'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'optimizer': {
        'solver': 'geometric',
        'convergence_parameters': {
            'convergence_energy': 1e-6,  # Eh
            'convergence_grms': 3e-4,    # Eh/Bohr
            'convergence_gmax': 4.5e-4,  # Eh/Bohr
            'convergence_drms': 1.2e-3,  # Angstrom
            'convergence_dmax': 1.8e-3,  # Angstrom
        }
    }
})
results, node = run.get_node(builder)
```

The optimized structure is returned in the form of a `StructureData` under the `structure` output label. The structure and energy of each frame in the geometry optimization trajectory are stored in the form of a `TrajectoryData` under the `trajectory` output label.
The total energies can be retrieved as follows:

```python
results['trajectory'].get_array('energies')
```

### Localizing orbitals

To compute localized orbitals, specify the desired method in the `parameters.localize_orbitals.method` input:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('H2O'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'localize_orbitals': {'method': 'ibo'}
})
results, node = run.get_node(builder)
```

The following methods are supported: `boys`, `cholesky`, `edmiston`, `iao`, `ibo`, `lowdin`, `nao`, `orth`, `pipek`, `vvo`. For more information, please refer to the PySCF documentation.

### Computing the Hessian

In order to compute the Hessian, specify an empty dictionary for the `hessian` key in the `parameters` input:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('H2O'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'hessian': {}
})
results, node = run.get_node(builder)
```

The computed Hessian will be attached as an `ArrayData` node with the link label `hessian`. Use `node.outputs.hessian.get_array('hessian')` to retrieve the computed Hessian as a numpy array for further processing.

### Writing Hamiltonian to FCIDUMP files

To instruct the calculation to dump a representation of the Hamiltonian to FCIDUMP files, add the `fcidump` dictionary to the `parameters` input:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('N2'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'fcidump': {
        'active_spaces': [[5, 6, 8, 9]],
        'occupations': [[1, 1, 1, 1]]
    }
})
results, node = run.get_node(builder)
```

The `active_spaces` and `occupations` keys are required and each takes a list of lists of integers.
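Since a mismatch between the two lists of lists only surfaces at runtime, a small client-side sanity check can validate the shapes before building the inputs. This helper is an illustrative sketch, not part of aiida-pyscf:

```python
def check_fcidump_shapes(active_spaces, occupations):
    """Validate that `active_spaces` and `occupations` have identical shapes.

    Both are expected to be lists of lists of integers, as required by the
    `fcidump` input. Illustrative helper only; not part of the plugin.
    """
    if len(active_spaces) != len(occupations):
        raise ValueError('active_spaces and occupations must have the same length')
    for index, (spaces, occs) in enumerate(zip(active_spaces, occupations)):
        if len(spaces) != len(occs):
            raise ValueError(f'shape mismatch at index {index}: {spaces} vs {occs}')

# The example above passes: one active space of four orbitals, four occupations.
check_fcidump_shapes([[5, 6, 8, 9]], [[1, 1, 1, 1]])
```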
For each element in the list, a FCIDUMP file is generated for the corresponding active space and orbital occupations. The shapes of the `active_spaces` and `occupations` arrays have to be identical.

The generated FCIDUMP files are attached as `SinglefileData` output nodes in the `fcidump` namespace, where the label is determined by the index of the corresponding active space in the list:

```python
print(results['fcidump']['active_space_0'].get_content())
```

```
&FCI NORB=4, NELEC=4, MS2=0,
ORBSYM=1,1,1,1,
ISYM=1,
&END
 0.5832127121682998     1 1 1 1
 0.5359642500498074     1 1 2 2
-2.942091015256668e-15  1 1 3 2
 0.5381290185905914     1 1 3 3
-3.782672959584676e-15  1 1 4 1
...
```

### Generating CUBE files

The `pyscf.tools.cubegen` module provides functions to compute various properties of the system and write them as CUBE files. The `PyscfCalculation` plugin currently supports computing the following:

- molecular orbitals
- charge density
- molecular electrostatic potential

To instruct the calculation to dump a representation of any of these quantities to CUBE files, add the `cubegen` dictionary to the `parameters` input:

```python
from ase.build import molecule
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code

builder = load_code('pyscf').get_builder()
builder.structure = StructureData(ase=molecule('N2'))
builder.parameters = Dict({
    'mean_field': {'method': 'RHF'},
    'cubegen': {
        'orbitals': {
            'indices': [5, 6],
            'parameters': {
                'nx': 40,
                'ny': 40,
                'nz': 40,
            }
        },
        'density': {
            'parameters': {
                'resolution': 300,
            }
        },
        'mep': {
            'parameters': {
                'resolution': 300,
            }
        }
    }
})
results, node = run.get_node(builder)
```

The `indices` key has to be specified for the `orbitals` subdictionary and takes a list of integers, indicating the indices of the molecular orbitals that should be written to file. Additional parameters can be provided in the `parameters` subdictionary (see the PySCF documentation for details). The `parameters` subdictionaries for the `density` and `mep` dictionaries are optional.
To compute the charge density and molecular electrostatic potential, an empty dictionary for the `density` and `mep` keys, respectively, is sufficient.

The generated CUBE files are attached as `SinglefileData` output nodes in the `cubegen` namespace, with the `orbitals`, `density` and `mep` subnamespaces. For the `orbitals` subnamespace, the label is determined by the corresponding molecular orbital index:

```python
print(results['cubegen']['orbitals']['mo_5'].get_content())
```

```
Orbital value in real space (1/Bohr^3)
PySCF Version: 2.1.1  Date: Sun Apr  2 15:59:19 2023
    2   -3.000000   -3.000000   -4.067676
   40    0.153846    0.000000    0.000000
   40    0.000000    0.153846    0.000000
   40    0.000000    0.000000    0.208599
    7    0.000000    0.000000    0.000000    1.067676
    7    0.000000    0.000000    0.000000   -1.067676
 -1.10860E-04 -1.56874E-04 -2.16660E-04 -2.92099E-04 -3.84499E-04 -4.94299E-04
 -6.20809E-04 -7.62048E-04 -9.14724E-04 -1.07439E-03 -1.23579E-03 -1.39331E-03
...
```

**Warning**: PySCF is known to fail when computing the MEP with DHF, DKS, GHF and GKS references.

### Restarting unconverged calculations

The plugin will automatically instruct PySCF to write a checkpoint file. If the calculation did not converge, it will finish with exit status `410` and the checkpoint file is attached as a `SinglefileData` as the `checkpoint` output node. This node can then be passed as input to a new calculation to restart from the checkpoint:

```python
failed_calculation = load_node(IDENTIFIER)
builder = failed_calculation.get_builder_restart()
builder.checkpoint = failed_calculation.outputs.checkpoint
submit(builder)
```

The plugin will write the checkpoint file of the failed calculation to the working directory such that PySCF can start from there.

### Post-processing

The `PyscfCalculation` plugin does not support all PySCF functionality; it aims to support most functionality that is computationally intensive, as in this case it is important to be able to offload these calculations as a calcjob to a remote compute resource.
Most post-processing utilities are computationally inexpensive, and since the API is in Python, they can be called directly in AiiDA workflows as calcfunctions. Many PySCF utilities require the model of the system as an argument, where the model is the main object used in PySCF, i.e. the object assigned to the `mean_field` variable in the following:

```python
from pyscf import scf
mean_field = scf.RHF(..)
mean_field.kernel()
```

The `kernel` method is often computationally expensive, but its results (stored on the model object) are lost when the `PyscfCalculation` finishes, as the Python interpreter of the calcjob shuts down and the `mean_field` object no longer exists. This would force post-processing code to reconstruct the model from scratch and rerun the expensive kernel. Therefore, the `PyscfCalculation` serializes the PySCF model that was computed and stores it as a `PickledData` output node with the link label `model` in the provenance graph. This allows recreating the model in another Python interpreter and having it ready to be used for post-processing:

```python
from pyscf.hessian import thermo

node = load_node()  # Load the completed `PyscfCalculation`
mean_field = node.outputs.model.load()  # Reconstruct the model by calling the `load()` method
hessian = mean_field.Hessian().kernel()
freq_info = thermo.harmonic_analysis(mean_field.mol, hessian)
```

### Automatic error recovery

There are a variety of reasons why a PySCF calculation may not finish with the intended result. Examples are the self-consistent field cycle not converging, or the job getting killed by the scheduler because it ran out of the requested walltime. The `PyscfBaseWorkChain` is designed to try to automatically recover from these kinds of errors whenever they can potentially be handled. It is a simple wrapper around the `PyscfCalculation` plugin that automatically restarts a new `PyscfCalculation` if the previous iteration failed.
Launching a `PyscfBaseWorkChain` is almost identical to launching a `PyscfCalculation` directly; the inputs just have to be "nested" inside the `pyscf` namespace:

```python
from aiida.engine import run
from aiida.orm import Dict, StructureData, load_code, load_node
from aiida_pyscf.workflows.base import PyscfBaseWorkChain
from ase.build import molecule

builder = PyscfBaseWorkChain.get_builder()
builder.pyscf.code = load_code('pyscf')
builder.pyscf.structure = StructureData(ase=molecule('H2O'))
builder.pyscf.parameters = Dict({
    'mean_field': {
        'method': 'RHF',
        'max_cycle': 3,
    }
})
results, node = run.get_node(builder)
```

In this example, we purposefully set the maximum number of iterations in the self-consistent field cycle to 3 (`'mean_field.max_cycle' = 3`), which will cause the first iteration to fail to reach convergence. The `PyscfBaseWorkChain` detects the error, indicated by exit status `410` on the `PyscfCalculation`, and automatically restarts the calculation from the saved checkpoint. After three iterations, the calculation converges:

```console
$ verdi process status IDENTIFIER
PyscfBaseWorkChain<30126> Finished [0] [2:results]
    ├── PyscfCalculation<30127> Finished [410]
    ├── PyscfCalculation<30132> Finished [410]
    └── PyscfCalculation<30137> Finished [0]
```

The following error modes are currently handled by the `PyscfBaseWorkChain`:

- `120`: Out of walltime: the calculation will be restarted from the last checkpoint if available, otherwise the work chain is aborted
- `140`: Node failure: the calculation will be restarted from the last checkpoint
- `410`: Electronic convergence not achieved: the calculation will be restarted from the last checkpoint
- `500`: Ionic convergence not achieved: the geometry optimization did not converge; the calculation will be restarted from the last checkpoint and structure

### Pickled model

The main objective of a `PyscfCalculation` is to solve the mean-field problem for a given structure.
The results of this, often computationally expensive, step are stored in the `mean_field_run` variable in the main script:

```python
mean_field = scf.RHF(structure)
density_matrix = mean_field.from_chk('restart.chk')
mean_field_run = mean_field.run(density_matrix)
```

The `mean_field_run` object can be used for a number of further post-processing operations implemented in PySCF. To keep the `PyscfCalculation` interface simple, not all of this functionality is supported. However, as soon as the calculation job finishes, the `mean_field_run` variable is lost and can no longer be accessed to be used for further processing.

As a workaround, the `PyscfCalculation` will "pickle" the `mean_field_run` object and attach it as the `model` output to the calculation. The `model` output node can be "unpickled" to restore the original `mean_field_run` object such that it can be used for further processing:

```python
from aiida.engine import run

inputs = {}
results, node = run.get_node(PyscfCalculation, **inputs)
mean_field = node.outputs.model.load()
print(mean_field.e_tot)
```

**Warning**: For certain cases, the calculation may fail to pickle the model and will except. In this case, one can set the `pickle_model` input of the `PyscfCalculation` to `False`.

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct.
For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
aiida-pytest
**NOTE**: The aiida-pytest package is in maintenance mode: bugs will continue to be fixed for the foreseeable future, but no new features will be implemented. Users considering aiida-pytest are **strongly** encouraged to use the built-in AiiDA fixtures instead.

# aiida-pytest

This is a helper to enable testing AiiDA plugins with pytest. The main purpose is to create a fixture which sets up a temporary AiiDA database and user, and to simplify setting up the computers and calculations.

To get started, create a `tests` folder where your pytest tests will be located. In `conftest.py`, you need to write

```python
from aiida_pytest import *
```

This defines the `configure` and `configure_with_daemon` fixtures. To set up computers and codes for the test run, you need a `config.yml` file in the `tests` folder (and run pytest from there). The following example config file sets up `localhost` and the `bands-inspect` code:

```yaml
computers:
  localhost:
    hostname: localhost
    description: localhost
    transport: local
    scheduler: direct
    work_directory: /tmp/test_aiida
    prepend_text: 'unset PYTHONPATH'

codes:
  bands_inspect:
    description: bands_inspect code
    default_plugin: bands_inspect.difference
    remote_computer: localhost
    remote_abspath: /home/a-dogres/.virtualenvs/bands-inspect/bin/bands-inspect
```

Note: aiida-pytest is not compatible with the pytest-xdist plugin, since the fixtures with `scope=session` are then called for each running worker.

## Defining and running tests

Tests with aiida-pytest are defined and run exactly like "regular" pytest tests. If a test needs the AiiDA database, it should use the `configure` fixture. If the test also requires the daemon to run, it should use the `configure_with_daemon` fixture. Note that, since certain AiiDA import statements require the database backend to be set, these imports should be done *inside* the test function.

After the tests have run, the code will wait for you to press Enter before deleting the testing database and repository. This gives you the opportunity to manually inspect the final state. If you want to avoid this step (for example in a CI system), pass the `--quiet-wipe` flag to py.test.
aiida-python
# AiiDA Python

This package is an AiiDA plugin allowing you to run Python code as a `CalcJob` on a remote computer. Usage is easy: one has to inherit from the `CalcJobPython` class and, instead of the `prepare_for_submission` method, overload `run_python`. The parser is generated automatically; one does not have to write one's own.

```python
from aiida.orm import ArrayData, Float, Int
from aiida.plugins import CalculationFactory

CalcJobPython = CalculationFactory("aiida_python.calc")


class ClassThatCannotStartWithTestExample(CalcJobPython):

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input('inputarray', valid_type=ArrayData)
        spec.input('repeats', valid_type=Int)
        spec.output('value', valid_type=Float)

    def run_python(self):
        import numpy as np

        a = self.inputs.inputarray
        repeats = self.inputs.repeats
        a_inv = np.linalg.inv(a)
        for ii in range(repeats):
            a_inv = np.matmul(a_inv, a_inv)
            a_inv = a_inv / sum(a_inv)
        a = np.linalg.inv(a)
        c = float(np.sum(a))
        self.outputs.value = c
```

Here is a test case example:

```python
def test_example(aiida_local_code_factory, clear_database):
    from aiida.plugins import CalculationFactory
    from aiida.engine import run
    from aiida.orm import ArrayData, Int
    import numpy as np

    executable = 'python3'
    entry_point = 'test.calc_example'
    code = aiida_local_code_factory(entry_point=entry_point, executable=executable)
    calculation = CalculationFactory(entry_point)
    np_a = np.array([[1, 2, 1], [3, 4, 3], [0, 1, 1]])
    a = ArrayData()
    a.set_array("only_one", np_a)
    inputs = {'code': code, 'inputarray': a, 'repeats': Int(10)}
    result = run(calculation, **inputs)
```

For more information look at this link.
aiida-QECpWorkChain
Car-Parrinello Work Chain. This code was used to perform the simulation work of this PhD thesis.

## Usage

1. Set up your AiiDA PostgreSQL database, RabbitMQ, and AiiDA. This can be done following the official documentation.
2. Install the package via `pip install .`
3. Set up aiida-quantumespresso by configuring the remote computers and the remote codes (`cp.x` and `pw.x`) as described in the official documentation. Load the pseudopotentials into the database.
4. As a starting point you can have a look at the example in `examples/example1.py`, and change it according to your needs. In particular you will need to modify the remote computer configuration and the pseudopotential family, and of course the starting configuration.
aiida-qeq
AiiDA plugin for computing electronic charges on atoms using equilibration-type models (QEq, EQEq, ...).

Templated using the AiiDA plugin cutter.

## Installation

```shell
git clone https://github.com/ltalirz/aiida-qeq
cd aiida-qeq
pip install -e .        # also installs aiida, if missing (but not postgres)
# pip install -e .[pre-commit,testing]  # install extras for more features
verdi quicksetup        # better to set up a new profile
verdi calculation plugins  # should now show your calculation plugins
```

## Usage

Here goes a complete example of how to submit a test calculation using this plugin.

A quick demo of how to submit a calculation:

```shell
verdi daemon start          # make sure the daemon is running
cd examples
verdi run submit_qeq.py     # submit qeq test calculation
verdi run submit_eqeq.py    # submit eqeq test calculation
verdi calculation list -a   # check status of calculation
```

## Tests

The following will discover and run all unit tests:

```shell
pip install -e .[testing]
pytest
```

[email protected]
aiida-qp2
AiiDA plugin for Quantum Package 2.0 (QP2).

This plugin is the modified output of the AiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.

## Repository contents

- `qp2/`: The main source code of the plugin package
  - `calculations.py`: A new `QP2Calculation` `CalcJob` class
  - `parsers.py`: A new `Parser` for the `QP2Calculation`
- `docs/`: A documentation template. The ReadTheDocs documentation is built and deployed on the `gh-pages` branch.
- `examples/`: Examples of how to submit a calculation using this plugin
- `LICENSE`: License for your plugin
- `MANIFEST.in`: Configure non-Python files to be included for publication on PyPI
- `README.md`: This file
- `setup.json`: Plugin metadata for registration on PyPI and the AiiDA plugin registry (including entry points)
- `setup.py`: Installation script for pip / PyPI
- `.pre-commit-config.yaml`: Configuration of pre-commit hooks that sanitize coding style and check for syntax errors. Enable via `pip install -e .[pre-commit] && pre-commit install`
- `.github/`: GitHub Actions configuration
  - `ci.yml`: runs tests and builds documentation at every new commit
  - `publish-on-pypi.yml`: automatically deploy git tags to PyPI

## Features

- Initialize a wave function file (EZFIO) based on a `StructureData` instance and a `qp_create_ezfio` dictionary. This step can optionally use `BasisSet` and/or `Pseudopotential` nodes produced by the aiida-gaussian-datatypes plugin.
- Run calculations (e.g. HF, CIPSI) in a given order according to the `qp_commands` list. Some pre- or post-processing (e.g. shell scripting) is also possible by providing a list of commands in the `qp_prepend` or `qp_append` keys of the `parameters` Dict, respectively.
- Export a TREXIO file from the QP-native EZFIO format.

## Installation

```shell
pip install aiida-qp2
verdi quicksetup  # better to set up a new profile
verdi plugin list aiida.calculations  # should now show your calculation plugins
```

## Usage

See `Demo-aiida-qp.md` and `.py` files in the `examples/` directory.

For instance, `example_trexio_from_xyz.py` is a 3-step workflow using the plugin:

```shell
verdi daemon start  # make sure the daemon is running
cd examples
python example_trexio_from_xyz.py  # prepare and submit the calculation
verdi process list -a  # check record of calculation
```

1. Create the EZFIO wave function file from the `hcn.xyz` file using a given basis set.
2. Run an SCF calculation using the previously created wave function and parse the output file looking for the Hartree-Fock energy.
3. Export the TREXIO wave function file by converting the EZFIO format using the `TREXIO_TEXT` back end.

## Development

```shell
git clone https://github.com/TREX-CoE/aiida-qp2
cd aiida-qp2
pip install -e .[pre-commit]  # install extra dependencies
pre-commit install            # install pre-commit hooks
```

[email protected]@irsamc.ups-tlse.fr
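The plugin's inputs revolve around plain dictionaries. A sketch of what the payloads might look like: the key names (`qp_create_ezfio`, `qp_commands`, `qp_prepend`, `qp_append`) come from the feature list above, while the concrete values are illustrative guesses and have not been tested against the plugin.

```python
# Illustrative parameter dictionaries for a create-then-run QP2 workflow.
# Key names follow the README; the values are made up for demonstration.
create_parameters = {
    'qp_create_ezfio': {
        'basis': 'sto-3g',  # hypothetical basis-set label
        'charge': 0,
    },
}
run_parameters = {
    'qp_commands': ['run scf'],                # QP2 commands executed in order
    'qp_prepend': ['echo "pre-processing"'],   # shell commands run before QP2
    'qp_append': ['echo "post-processing"'],   # shell commands run after QP2
}
# In a configured profile these dicts would be wrapped in aiida.orm.Dict
# nodes and passed to the QP2Calculation builder.
```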
aiida-quantumespresso
# aiida-quantumespresso

This is the official AiiDA plugin for Quantum ESPRESSO.

## Compatibility matrix

The matrix below assumes the user always installs the latest patch release of the specified minor version, which is recommended. The plugin releases cover the following version ranges:

- v4.3 < v5.0
- v4.0 < v4.3
- v3.5 < v4.0
- v3.4 < v3.5
- v3.3 < v3.4
- v3.1 < v3.3
- v3.0 < v3.1
- v2.0 < v3.0

Starting from `aiida-quantumespresso==4.0`, the last three minor versions of Quantum ESPRESSO are supported. Older versions are supported up to a maximum of two years.

## Installation

To install from PyPI, simply execute:

```
pip install aiida-quantumespresso
```

or when installing from source:

```
git clone https://github.com/aiidateam/aiida-quantumespresso
pip install aiida-quantumespresso
```

## Command line interface tool

The plugin comes with a built-in CLI tool: `aiida-quantumespresso`. This tool is built using the `click` library and supports tab-completion. To enable it, add the following to your shell loading script, e.g. the `.bashrc` or virtual environment activate script:

```
eval "$(_AIIDA_QUANTUMESPRESSO_COMPLETE=source aiida-quantumespresso)"
```

The tool comes with various sub commands, for example to quickly launch some calculations and workchains. For example, to launch a test `PwCalculation` you can run the following command:

```
aiida-quantumespresso calculation launch pw -X pw-v6.1 -F SSSP/1.1/PBE/efficiency
```

Note that this requires the code `pw-v6.1` and the pseudopotential family `SSSP/1.1/PBE/efficiency` to be configured. See the pseudopotentials section on how to install them easily. Each command has a fully documented command line interface, which can be printed to screen with the help flag:

```
aiida-quantumespresso calculation launch ph --help
```

which should print something like the following:

```
Usage: aiida-quantumespresso calculation launch ph [OPTIONS]

  Run a PhCalculation.

Options:
  -X, --code CODE                 A single code identified by its ID, UUID or
                                  label.  [required]
  -C, --calculation CALCULATION   A single calculation identified by its ID or
                                  UUID.  [required]
  -k, --kpoints-mesh INTEGER...   The number of points in the kpoint mesh
                                  along each basis vector.  [default: 1, 1, 1]
  -m, --max-num-machines INTEGER  The maximum number of machines (nodes) to
                                  use for the calculations.  [default: 1]
  -w, --max-wallclock-seconds INTEGER
                                  the maximum wallclock time in seconds to set
                                  for the calculations.  [default: 1800]
  -i, --with-mpi                  Run the calculations with MPI enabled.
                                  [default: False]
  -d, --daemon                    Submit the process to the daemon instead of
                                  running it locally.  [default: False]
  -h, --help                      Show this message and exit.
```

## Pseudopotentials

Pseudopotentials are installed and managed through the aiida-pseudo plugin. The easiest way to install pseudopotentials is to install a version of the SSSP through the CLI of aiida-pseudo. Simply run `aiida-pseudo install sssp` to install the default SSSP version. List the installed pseudopotential families with the command `aiida-pseudo list`. You can then use the name of any family on the command line using the `-F` flag.

## Development

### Running tests

To run the tests, simply clone and install the package locally with the `[tests]` optional dependencies:

```
git clone https://github.com/aiidateam/aiida-quantumespresso
cd aiida-quantumespresso
pip install -e .[tests]  # install extra dependencies for tests
pytest                   # run tests
```

You can also use `tox` to run the test set. Here you can also use the `-e` option to specify the Python version for the test run:

```
pip install tox
tox -e py39 -- tests/calculations/test_pw.py
```

### Pre-commit

To contribute to this repository, please enable pre-commit so the code in commits conforms to the standards. Simply install the repository with the `pre-commit` extra dependencies:

```
cd aiida-quantumespresso
pip install -e .[pre-commit]
pre-commit install
```

## License

The `aiida-quantumespresso` plugin package is released under the MIT license. See the `LICENSE.txt` file for more details.

## Acknowledgements

We acknowledge support from:

- the NCCR MARVEL funded by the Swiss National Science Foundation;
- the EU Centre of Excellence "MaX – Materials Design at the Exascale" (Horizon 2020 EINFRA-5, Grant No. 676598; H2020-INFRAEDI-2018-1, Grant No. 824143; HORIZON-EUROHPC-JU-2021-COE-1, Grant No. 101093374);
- the European Union's Horizon 2020 research and innovation programme (Grant No. 957189, project BIG-MAP, also part of the BATTERY 2030+ initiative, Grant No. 957213);
- the swissuniversities P-5 project "Materials Cloud".
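Under the hood, the CLI above builds a `PwCalculation` whose pw.x input is expressed as a nested dictionary with one sub-dictionary per Fortran namelist. A minimal sketch of such a dictionary (the cutoff and threshold values are illustrative; in a real run it would be wrapped in an AiiDA `Dict` node):

```python
# Sketch of a pw.x parameters dictionary as consumed by a PwCalculation:
# one sub-dictionary per namelist. Numerical values are illustrative only.
pw_parameters = {
    "CONTROL": {"calculation": "scf"},              # type of calculation
    "SYSTEM": {"ecutwfc": 30.0, "ecutrho": 240.0},  # cutoffs in Ry
    "ELECTRONS": {"conv_thr": 1.0e-8},              # SCF convergence threshold
}
print(sorted(pw_parameters))
```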
aiida-quantumespresso-test
# aiida-quantumespresso

This is the official AiiDA plugin for Quantum ESPRESSO.

## Compatibility matrix

The matrix below assumes the user always installs the latest patch release of the specified minor version, which is recommended. The plugin releases cover the following version ranges:

- v4.0 < v5.0
- v3.5 < v4.0
- v3.4 < v3.5
- v3.3 < v3.4
- v3.1 < v3.3
- v3.0 < v3.1
- v2.0 < v3.0

Starting from `aiida-quantumespresso==4.0`, the last three minor versions of Quantum ESPRESSO are supported. Older versions are supported up to a maximum of two years.

## Installation

To install from PyPI, simply execute:

```
pip install aiida-quantumespresso
```

or when installing from source:

```
git clone https://github.com/aiidateam/aiida-quantumespresso
pip install aiida-quantumespresso
```

## Command line interface tool

The plugin comes with a built-in CLI tool: `aiida-quantumespresso`. This tool is built using the `click` library and supports tab-completion. To enable it, add the following to your shell loading script, e.g. the `.bashrc` or virtual environment activate script:

```
eval "$(_AIIDA_QUANTUMESPRESSO_COMPLETE=source aiida-quantumespresso)"
```

The tool comes with various sub commands, for example to quickly launch some calculations and workchains. For example, to launch a test `PwCalculation` you can run the following command:

```
aiida-quantumespresso calculation launch pw -X pw-v6.1 -F SSSP/1.1/PBE/efficiency
```

Note that this requires the code `pw-v6.1` and the pseudopotential family `SSSP/1.1/PBE/efficiency` to be configured. See the pseudopotentials section on how to install them easily. Each command has a fully documented command line interface, which can be printed to screen with the help flag:

```
aiida-quantumespresso calculation launch ph --help
```

which should print something like the following:

```
Usage: aiida-quantumespresso calculation launch ph [OPTIONS]

  Run a PhCalculation.

Options:
  -X, --code CODE                 A single code identified by its ID, UUID or
                                  label.  [required]
  -C, --calculation CALCULATION   A single calculation identified by its ID or
                                  UUID.  [required]
  -k, --kpoints-mesh INTEGER...   The number of points in the kpoint mesh
                                  along each basis vector.  [default: 1, 1, 1]
  -m, --max-num-machines INTEGER  The maximum number of machines (nodes) to
                                  use for the calculations.  [default: 1]
  -w, --max-wallclock-seconds INTEGER
                                  the maximum wallclock time in seconds to set
                                  for the calculations.  [default: 1800]
  -i, --with-mpi                  Run the calculations with MPI enabled.
                                  [default: False]
  -d, --daemon                    Submit the process to the daemon instead of
                                  running it locally.  [default: False]
  -h, --help                      Show this message and exit.
```

## Pseudopotentials

Pseudopotentials are installed and managed through the aiida-pseudo plugin. The easiest way to install pseudopotentials is to install a version of the SSSP through the CLI of aiida-pseudo. Simply run `aiida-pseudo install sssp` to install the default SSSP version. List the installed pseudopotential families with the command `aiida-pseudo list`. You can then use the name of any family on the command line using the `-F` flag.

## License

The `aiida-quantumespresso` plugin package is released under the MIT license. See the `LICENSE.txt` file for more details.

## Acknowledgements

We acknowledge support from:

- the NCCR MARVEL funded by the Swiss National Science Foundation;
- the EU Centre of Excellence "MaX – Materials Design at the Exascale" (Horizon 2020 EINFRA-5, Grant No. 676598);
- the swissuniversities P-5 project "Materials Cloud".
aiida-raspa
# AiiDA RASPA

AiiDA plugin for RASPA2 (plugin versions 2.0.0 and 1.2.0). Designed to work with RASPA 2.0.37 or later. Latest tests run for RASPA 2.0.47.

## Documentation

The documentation for this package can be found on Read the Docs.

## Installation

If you use pip, you can install it as:

```
pip install aiida-raspa
```

If you want to install the plugin in editable mode, run:

```
git clone https://github.com/lsmo-epfl/aiida-raspa
cd aiida-raspa
pip install -e .  # Also installs aiida, if missing (but not postgres/rabbitmq).
```

## Examples

See the `examples` folder for complete examples of setting up a calculation or a work chain.

Simple calculation:

```
cd examples/simple_calculations
verdi run example_base.py <code_label> --submit  # Submit example calculation.
verdi process list -a -p1                        # Check status of calculation.
```

Work chain:

```
cd examples/workchains
verdi run example_base_restart_timeout.py <code_label>  # Submit test calculation.
verdi process list -a -p1                               # Check status of the work chain.
```

## License

MIT

## Acknowledgements

This work is supported by: the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation; the MaX European Centre of Excellence funded by the Horizon 2020 EINFRA-5 program, Grant No. 676598; the swissuniversities P-5 project "Materials Cloud".
aiida-s3
# aiida-s3

AiiDA plugin that provides various storage backends for using cloud data storage services, such as AWS S3 and Azure Blob Storage.

Currently, the following storage backends are available:

- `s3.psql_s3`: Database provided by PostgreSQL and file repository provided by any service implementing the S3 protocol, for example minIO.
- `s3.psql_aws_s3`: Database provided by PostgreSQL and file repository provided by AWS S3.
- `s3.psql_azure_blob`: Database provided by PostgreSQL and file repository provided by Azure Blob Storage.

## Installation

The recommended method of installation is through the `pip` package installer for Python:

```
pip install aiida-s3
```

## Setup

To use one of the storage backends provided by `aiida-s3` with AiiDA, you need to create a profile for it:

1. List the available storage backends:

   ```
   aiida-s3 profile setup --help
   ```

2. Create a profile using one of the available storage backends by passing it as an argument to `aiida-s3 profile setup`, for example:

   ```
   aiida-s3 profile setup s3.psql_s3
   ```

   The command will prompt for the information required to set up the storage backend. After all information is entered, the storage backend is initialized, e.g. by creating the database schema and the file containers.

3. Create a default user for the profile:

   ```
   verdi -p profile-name user configure --set-default
   ```

The profile is now ready to be used with AiiDA. Optionally, you can set it as the new default profile:

```
verdi profile setdefault profile-name
```

Optionally, to test that everything is working as intended, launch a test calculation:

```
verdi -p profile-name devel launch-add
```

## Testing

The unit tests are implemented and run with `pytest`. To run them, install the package with the `tests` extra dependencies:

```
pip install aiida-s3[tests]
```

The plugin provides interfaces to various services that require credentials, such as AWS S3 and Azure Blob Storage. To run the test suite, one either has to provide these credentials or the services have to be mocked. Instructions for each supported service are given below.

### S3

The base S3 implementation is interfaced with through the `boto3` Python SDK. The `moto` library makes it possible to mock this interface, so the test suite can run without any credentials. To run the tests, simply execute `pytest`:

```
pytest
```

By default, the interactions with S3 are mocked through `moto` and no actual credentials are required. To run the tests against an actual S3 server, the endpoint URL and credentials need to be specified through environment variables:

```
export AIIDA_S3_MOCK_S3=False
export AIIDA_S3_ENDPOINT_URL='http://localhost:9000'
export AIIDA_S3_BUCKET_NAME='some-bucket'
export AIIDA_S3_ACCESS_KEY_ID='access-key'
export AIIDA_S3_SECRET_ACCESS_KEY='secret-access-key'
pytest
```

One example of an open-source implementation of an S3-compatible object store is minIO. An instance can easily be created locally using Docker and `docker-compose`. Simply write the following to `docker-compose.yml`:

```yaml
version: '2'

services:
  minio:
    container_name: Minio
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=supersecret
    image: quay.io/minio/minio:latest
    ports:
      - '9000:9000'
      - '9001:9001'
    volumes:
      - /tmp/minio:/data
    restart: unless-stopped
```

and then launch the container with:

```
docker-compose up -d
```

The tests can then be run against the server using environment variables as described above.

### AWS S3

The AWS S3 service is interfaced with through the `boto3` Python SDK. The `moto` library makes it possible to mock this interface, so the test suite can run without any credentials. To run the tests, simply execute `pytest`:

```
pytest
```

By default, the interactions with AWS S3 are mocked through `moto` and no actual credentials are required. To run the tests against an actual AWS S3 bucket, the credentials need to be specified through environment variables:

```
export AIIDA_S3_MOCK_AWS_S3=False
export AIIDA_S3_AWS_BUCKET_NAME='some-bucket'
export AIIDA_S3_AWS_ACCESS_KEY_ID='access-key'
export AIIDA_S3_AWS_SECRET_ACCESS_KEY='secret-access-key'
pytest
```

### Azure Blob Storage

Azure Blob Storage is communicated with through the `azure-storage-blob` Python SDK. Currently, there is no good way to mock the clients of this library. Therefore, when the tests are run without credentials, and the Azure Blob Storage client would need to be mocked, the tests are skipped. To run the tests against an actual Azure Blob Storage container, the credentials need to be specified through environment variables:

```
export AIIDA_S3_MOCK_AZURE_BLOB=False
export AIIDA_S3_AZURE_BLOB_CONTAINER_NAME='some-container'
export AIIDA_S3_AZURE_BLOB_CONNECTION_STRING='DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net'
pytest
```

The specified container does not have to exist yet; it will be created automatically. The connection string can be obtained through the Azure portal.
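The mock-versus-real switching described above follows a simple convention: mocking is on unless the corresponding `AIIDA_S3_MOCK_*` variable is explicitly set to `False`. A small sketch of that convention (the helper function is illustrative, not the plugin's actual code):

```python
import os

def mocking_enabled(variable, env=None):
    """Return True unless the given AIIDA_S3_MOCK_* variable is set to 'False'."""
    env = os.environ if env is None else env
    return env.get(variable, "True").lower() != "false"

# default: variable unset, so S3 interactions are mocked (e.g. via moto)
assert mocking_enabled("AIIDA_S3_MOCK_S3", env={})
# explicitly disabled: tests run against a real endpoint
assert not mocking_enabled("AIIDA_S3_MOCK_S3", env={"AIIDA_S3_MOCK_S3": "False"})
```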
aiida-seigrowth
# aiida-seigrowth

A Python 3.8 AiiDA plugin to model SEI growth.

## Repository contents

- `.github/`: GitHub Actions configuration
  - `ci.yml`: runs tests, checks test coverage and builds documentation at every new commit
  - `publish-on-pypi.yml`: automatically deploys git tags to PyPI (just generate a PyPI API token for your PyPI account and add it to the `pypi_token` secret of your GitHub repository)
- `aiida_seigrowth/`: The main source code of the plugin package
  - `calculations.py`: A new `PbeSeiCalculation` `CalcJob` class
- `docs/`: A documentation template ready for publication on Read the Docs
- `examples/`: An example of how to submit a calculation using this plugin
- `.gitignore`: Telling git which files to ignore
- `.pre-commit-config.yaml`: Configuration of pre-commit hooks that sanitize coding style and check for syntax errors. Enable via `pip install -e .[pre-commit] && pre-commit install`.
- `.readthedocs.yml`: Configuration of the documentation build for Read the Docs
- `LICENSE`: License for your plugin
- `README.md`: This file
- `conftest.py`: Configuration of fixtures for pytest
- `pyproject.toml`: Python package metadata for registration on PyPI and the AiiDA plugin registry (including entry points)

## Installation

Before proceeding with the installation, make sure that you have the right (developer) version of PyBaMM installed correctly.

1. Download the External Code folder and include the Python script `pb.py` by modifying the script `code.yml` accordingly.
2. Include the code by using:

   ```
   verdi code setup --config code.yml
   ```

3. Proceed to install the plugin:

   ```
   pip install aiida-seigrowth==0.0.4
   verdi quicksetup                      # better to set up a new profile
   verdi plugin list aiida.calculations  # should now show your calculation plugins
   ```

## Usage

A quick demo of how to submit a test calculation using this plugin:

1. Activate the PyBaMM environment:

   ```
   source /absolute/path/to/PyBaMM/.tox/dev/bin/activate
   ```

2. Launch the example:

   ```
   verdi daemon start     # make sure the daemon is running
   cd examples
   ./example_01.py        # run test calculation
   verdi process list -a  # check record of calculation
   ```

## Contact

bmmr@kth.se
aiida-shell
AiiDA plugin that makes running shell commands easy. Run any shell executable without writing a dedicated plugin or parser.

## Documentation

Please refer to the online documentation for a complete guide.
aiida-siesta
A plugin to interface the Siesta DFT code to the AiiDA system.

Documentation can be found at: http://aiida-siesta-plugin.readthedocs.io

If you use this package in your research, please consider citing J. Chem. Phys. 152, 204108 (2020) (https://doi.org/10.1063/5.0005077) and the relevant aiida-core papers.

## Acknowledgements

This work is supported by the MaX European Centre of Excellence funded by the Horizon 2020 INFRAEDI-2018-1 program, Grant No. 824143, by the INTERSECT (Interoperable material-to-device simulation box for disruptive electronics) project, funded by Horizon 2020 under grant agreement No. 814487, and by the Spanish MINECO (projects FIS2012-37549-C05-05 and FIS2015-64886-C5-4-P). We thank the AiiDA team, who are also supported by the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation.
aiida-spirit
# aiida-spirit

AiiDA plugin for the spirit code.

## Installation

```
pip install aiida-spirit             # install aiida-spirit from PyPI
verdi quicksetup                      # better to set up a new profile
verdi plugin list aiida.calculations  # should now show your calculation plugins
```

## Usage

A quick demo of how to submit a test calculation (the spirit Python API needs to be installed for this to work: `pip install spirit`):

```
verdi daemon start     # make sure the daemon is running
cd examples
./example_LLG.py       # run test calculation
verdi process list -a  # check record of calculation
```

## Development

```
git clone https://github.com/JuDFTteam/aiida-spirit .
cd aiida-spirit
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install                    # install pre-commit hooks
pytest -v                             # discover and run all tests
```

Note that `pytest -v` will create a test database and profile, which requires the `pg_ctl` command to be found. If `pg_ctl` is not found, make sure that postgres is installed and then add the location of `pg_ctl` to the `PATH`:

```
# add postgres path for pg_ctl to PATH
# this is an example for Postgres 9.6 installed on a mac
PATH="/Applications/Postgres.app/Contents/Versions/9.6/bin/:$PATH"
export PATH
```

## Citation

If you use AiiDA-Spirit please cite the method paper

P. Rüßmann, J. Ribas Sobreviela, M. Sallermann, M. Hoffmann, F. Rhiem, and S. Blügel, The AiiDA-Spirit Plugin for Automated Spin-Dynamics Simulations and Multi-Scale Modeling Based on First-Principles Calculations, Front. Mater. 9, 825043 (2022). doi: 10.3389/fmats.2022.825043,

and the latest code release

P. Rüßmann, J. Ribas Sobreviela, M. Sallermann, M. Hoffmann, F. Rhiem, and S. Blügel. JuDFTteam/aiida-spirit. Zenodo. doi: 10.5281/zenodo.8070770.

## License

The AiiDA-Spirit code is under the MIT license.

## Contact

p.ruessmann@fz-juelich.de
aiida-ssh2win
# aiida-ssh2win

A transport plugin allowing AiiDA to run calculations on Windows machines. This plugin is the default output of the AiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.

## Repository contents

- `.github/`: GitHub Actions configuration
  - `ci.yml`: runs tests, checks test coverage and builds documentation at every new commit
  - `publish-on-pypi.yml`: automatically deploys git tags to PyPI (just generate a PyPI API token for your PyPI account and add it to the `pypi_token` secret of your GitHub repository)
- `aiida_ssh2win/`: The main source code of the plugin package
  - `transport.py`: Contains `SshToWindowsTransport`, a concrete implementation of AiiDA's `Transport` interface
- `docs/`: A documentation template ready for publication on Read the Docs
- `tests/`: Unit testing using the pytest framework. Install with `pip install -e .[testing]` and run `pytest`.
- `.gitignore`: Telling git which files to ignore
- `.pre-commit-config.yaml`: Configuration of pre-commit hooks that sanitize coding style and check for syntax errors. Enable via `pip install -e .[pre-commit] && pre-commit install`.
- `.readthedocs.yml`: Configuration of the documentation build for Read the Docs
- `LICENSE`: License for your plugin
- `README.md`: This file
- `conftest.py`: Configuration of fixtures for pytest
- `pyproject.toml`: Python package metadata for registration on PyPI and the AiiDA plugin registry (including entry points)

## Installation

```
pip install aiida-ssh2win
verdi quicksetup                    # better to set up a new profile
verdi plugin list aiida.transports  # should now show the `ssh2win` plugin
```

## Contact

developer@seanmarshallwolfe.com
aiida-sshonly
# aiida-sshonly

AiiDA plugin adding an `sshonly` transport option, which uses only SSH to transfer files (avoiding SFTP) in case SFTP is blocked or non-functional on a remote system.

## Features

Provides a new `sshonly` transport option when configuring a computer in AiiDA. Uses SSH and shell commands to emulate the SFTP commands used in AiiDA.

Known limitation: only works with text files as of 0.1.0.

## Installation

```
pip install aiida-sshonly
reentry scan
verdi plugin list aiida.transports  # should now show your transport plugins
```

## Usage

A quick demo of how to submit a test calculation using this plugin:

```
verdi daemon start     # make sure the daemon is running
cd examples
./example_01.py        # run test calculation
verdi process list -a  # check record of calculation
```

The plugin also includes verdi commands to inspect its data types:

```
verdi data sshonly list
verdi data sshonly export <PK>
```

## Development

```
git clone https://github.com/adegomme/aiida-sshonly .
cd aiida-sshonly
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install                    # install pre-commit hooks
pytest -v                             # discover and run all tests
```

See the developer guide for more information.

## License

MIT
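The core idea described above, emulating SFTP file transfer with plain shell commands over the SSH channel, can be sketched as follows. This is a toy illustration, not the plugin's actual implementation: the "remote" side is simulated with a local `bash -c` so the sketch is runnable; with a real host, `["bash", "-c", cmd]` would be replaced by something like `["ssh", "user@host", cmd]`.

```python
import subprocess
import tempfile
from pathlib import Path

def remote(cmd, stdin=None):
    """Run a command on the 'remote' side; here simulated locally with bash."""
    result = subprocess.run(["bash", "-c", cmd], input=stdin,
                            capture_output=True, check=True)
    return result.stdout

workdir = Path(tempfile.mkdtemp())
(workdir / "local.txt").write_text("hello\n")

# emulated SFTP "put": stream the file into a remote `cat > file`
remote(f"cat > {workdir}/remote.txt", stdin=(workdir / "local.txt").read_bytes())
# emulated SFTP "get": read the remote file back through `cat`
roundtrip = remote(f"cat {workdir}/remote.txt")
assert roundtrip == b"hello\n"
```

Streaming through `cat` like this is why the approach naturally handles text files, matching the limitation noted above.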
aiida-sssp
# aiida-sssp

This project is archived and has been fully superseded by aiida-pseudo, where the source code and PyPI package can be found.
aiida-sssp-workflow
# SSSP workflow

## For maintainers

To create a new release, clone the repository, install development dependencies with `pip install '.[dev]'`, and then execute `bumpver update`. This will:

1. Create a tagged release with a bumped version and push it to the repository.
2. Trigger a GitHub Actions workflow that creates a GitHub release.

Additional notes:

- Use the `--dry` option to preview the release change.
- The release tag (e.g. a/b/rc) is determined from the last release. Use the `--tag` option to switch the release tag.

## Logger level

The aiida-core logger level is recommended to be set to `REPORT` by default. In the workflow, process-related messages are sent to the daemon logger at the REPORT level. If a process finishes with a non-zero exit code, a warning message is logged. For debugging purposes, the INFO level shows the parameter information when processes are launched.

## Issues of version v3.0.1

- `nowf` was not turned on for the purpose of saving inodes; fixed in the next version.
- The dual value of NC/SL is 8 for the precision measure and bands of norm-conserving pseudopotentials; it would be better as 4. Fixed in the next version.
- The dual value for high-dual elements is 8 for non-NC pseudopotentials; it would be better as 16. Fixed in the next version.

## License

MIT

## Contact

📧 email: jusong.yu@psi.ch
aiida-statefile-schedulers
# aiida-statefile-schedulers

Simple statefile-driven task schedulers. Currently AiiDA relies mostly on full-fledged task schedulers to run jobs in complex workflows. Running such workflows with the direct scheduler often means that many processes run together, even when running the workflow directly (i.e. not submitting it to the daemon), overloading a single node.

This scheduler does not run any jobs. Instead, it creates state files of the form `$jobid.QUEUED` in the directory `${AIIDA_STATE_DIR}` (an environment variable you have to set in your `.profile`/`.bash_profile` on the target machine), waiting for some script to pick the jobs up and run them.

This runner script should create a file `$jobid.DONE` when done. As an intermediate step it can also create a file `$jobid.RUNNING` to signal to AiiDA that it picked up a job. The initial state file contains lines of the form `key=value` with the following keys:

- `cwd`: the working directory for this job
- `cmd`: the command to run there (usually via `bash -e ...`)

The state files can also be renamed instead of created. A sample runner can be found in `scripts/`.

## Installation

```
pip install aiida-statefile-scheduler
verdi quicksetup                    # better to set up a new profile
verdi plugin list aiida.schedulers  # should now show your scheduler plugins
```

## Development

```
git clone https://github.com/dev-zero/aiida-statefile-schedulers .
cd aiida-statefile-schedulers
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install                    # install pre-commit hooks
pytest -v                             # discover and run all tests
```

See the developer guide for more information.

## Contact

tiziano.mueller@hpe.com
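A minimal runner following the state-file protocol described above can be sketched in a few lines: pick up each `$jobid.QUEUED` file, read its `key=value` lines (`cwd`, `cmd`), run the command, and advance the job through `RUNNING` to `DONE`. This is an illustrative sketch, not the sample runner shipped in `scripts/` (here a temporary directory stands in for `${AIIDA_STATE_DIR}`, and one fake job is created for demonstration):

```python
import subprocess
import tempfile
from pathlib import Path

state_dir = Path(tempfile.mkdtemp())  # stand-in for ${AIIDA_STATE_DIR}

# create one fake queued job for demonstration
(state_dir / "42.QUEUED").write_text("cwd=/tmp\ncmd=echo hello\n")

for state in sorted(state_dir.glob("*.QUEUED")):
    jobid = state.stem
    # parse the key=value lines of the state file
    entries = dict(line.split("=", 1) for line in state.read_text().splitlines())
    # signal to AiiDA that the job was picked up
    running = state.rename(state_dir / f"{jobid}.RUNNING")
    # run the command in its working directory
    subprocess.run(["bash", "-e", "-c", entries["cmd"]],
                   cwd=entries["cwd"], check=True)
    # signal completion
    running.rename(state_dir / f"{jobid}.DONE")

print(sorted(p.name for p in state_dir.iterdir()))
```

A production runner would additionally loop forever, handle failing commands, and cope with several runners sharing one state directory.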
aiida-strain
# aiida-strain

A plugin for AiiDA to create strained structures, using the strain code.

Documentation: https://aiida-strain.readthedocs.io
aiida-submission-controller
# AiiDA submission controller

Some classes to help manage a large number of submissions, while controlling the maximum number of submissions running at any given time. This project is still in an early phase and the API might change.

It includes an abstract base class that implements the main logic, a very simple example implementation that computes a 12x12 addition table (in `examples/add_in_batches.py`), and a main script to run it (and get the results and show them).

To use it, you are supposed to launch the script (e.g. in a `screen` terminal) with something like this:

```
cd examples
while true; do verdi run add_in_batches.py; sleep 5; done
```

where you can adapt the sleep time. Typically, for real simulations, you might want something in the range of 5-10 minutes, or in any case chosen so that at every new run you have at least some new processes to submit, but still fewer than the maximum number of available slots, to try to keep the 'queue' well filled at any given time.

There is also a second subclass that, rather than creating new submissions from some extras, will use (input) nodes in another group as a reference for which calculations to run (e.g. a group of crystal structures representing the inputs to a set of workflows).
aiida-supercell
# aiida-supercell

AiiDA plugin for the Supercell program.

## Installation

```
git clone https://github.com/pzarabadip/aiida-supercell
cd aiida-supercell
pip install -e .
```

## Usage

Examples of using the workchains are provided in the `examples` folder.
aiida-symmetry-representation
# aiida-symmetry-representation

A plugin for AiiDA to run calculations with the symmetry-representation code. It defines a calculation to filter symmetries by whether they exist in a given structure.

Documentation: https://aiida-symmetry-representation.readthedocs.io
aiida-tbextraction
No description available on PyPI.
aiida-tbmodels
# aiida-tbmodels

A plugin for AiiDA to run calculations with the TBmodels code.

Documentation: https://aiida-tbmodels.readthedocs.io
aiida-testing-dev
# aiida-testing

A pytest plugin to simplify testing of AiiDA plugins. This package implements two ways of running an AiiDA calculation in tests:

- `mock_code`: Implements a caching layer at the level of the executable called by an AiiDA calculation. This tests the input generation and output parsing, which is useful when testing calculation and parser plugins.
- `export_cache`: Implements an automatic export/import of the AiiDA database, to enable AiiDA-level caching in tests. This circumvents the input generation/output parsing, making it suitable for testing higher-level workflows.

For more information, see the documentation.
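The `mock_code` idea described above, replaying a stored result instead of running the real executable when the same inputs are seen again, can be illustrated with a toy cache. The real plugin does this at the level of the called executable and its input files; the helper below is only a sketch:

```python
import hashlib
import json

cache = {}

def run_mocked(inputs, real_run):
    """Replay a cached result for previously seen inputs; otherwise run for real."""
    key = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    if key not in cache:
        cache[key] = real_run(inputs)  # first call: run and record
    return cache[key]                  # later calls: replay

calls = []
def expensive(inputs):
    calls.append(inputs)
    return {"energy": -1.0}

run_mocked({"ecutwfc": 30}, expensive)
run_mocked({"ecutwfc": 30}, expensive)
assert len(calls) == 1  # second call was served from the cache
```

Because the cache key is derived only from the inputs, the input-generation and output-parsing code paths of the plugin under test are still exercised on every run.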
aiida-tools
No description available on PyPI.
aiida_upgrade
# aiida-upgrade

In development! A tool to aid upgrades of plugins to new aiida-core versions.

## Usage

To use the command line tool, it is recommended to install it via pipx:

```
$ pipx install aiida-upgrade
```

Once installed, you can simply run `aiida-upgrade` on any `PATH`, which can be a single file or a directory:

```
$ aiida-upgrade --help
Usage: aiida-upgrade [OPTIONS] PATH

  The command line interface of aiida-upgrade.

Options:
  --help  Show this message and exit.
```

In case `PATH` is a directory, `aiida-upgrade` will recursively update all `.py` files inside that directory.

## Supported migrations

Currently, `aiida-upgrade` performs the following code refactoring:

- Look for deprecated aiida-core entry points loaded by plugin factories and add the `core.` prefix; see the corresponding section in the plugin migration guide.
- Similarly, find and correct full deprecated entry point strings, e.g. `'aiida.data:structure'`.
- Remove the `dict` and `list` keywords from the `Dict` and `List` node constructors, respectively. See PR #5165 on aiida-core, which removed the requirement of using these keywords.

Migration steps that are not (yet) supported are:

- Adding the `core.` prefix in shell scripts.
- Updating `'name'` to `'label'` when querying for a `Computer` with the `QueryBuilder`.
- Small changes in the API of `Transport` and `Scheduler` plugins.
- Removal of the `PluginTestCase` class.

If you find any problems with the current refactoring, or any migration steps that are missing, please let us know by opening an issue.
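The entry-point migration listed above can be illustrated with a toy transformation. The real tool performs a proper, entry-point-aware rewrite of the source code; the regex-free helper below, with a small illustrative subset of deprecated names, is only a sketch of the rename itself:

```python
# Toy illustration of the `core.` prefix migration: deprecated bare entry
# point strings like 'aiida.data:structure' gain the 'core.' prefix.
DEPRECATED = {"structure", "dict", "list"}  # illustrative subset only

def add_core_prefix(entry_point):
    group, _, name = entry_point.partition(":")
    if name in DEPRECATED:
        return f"{group}:core.{name}"
    return entry_point

assert add_core_prefix("aiida.data:structure") == "aiida.data:core.structure"
# already-migrated strings are left untouched
assert add_core_prefix("aiida.data:core.structure") == "aiida.data:core.structure"
```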
aiida-vasp
This is a plugin to AiiDA to run calculations with the ab-initio program VASP.

Please have a look at the AiiDA-VASP documentation for instructions on how to install and use the plugin.

## Installing the plugin

If you are already using AiiDA, simply activate the virtual environment associated with it, here assumed to be located in `~/env/aiida-vasp`:

```
$ source ~/env/aiida-vasp/bin/activate
```

Otherwise, set up a new virtual environment:

```
$ python -m venv ~/env/aiida-vasp
```

and then enable the newly installed virtual environment:

```
$ source ~/env/aiida-vasp/bin/activate
```

Install the AiiDA-VASP plugin (and AiiDA, if it is not already installed):

```
$ (aiida-vasp) pip install aiida-vasp
```

If you need to install the compatibility release of AiiDA-VASP, which works with AiiDA 1.6.4, you should instead install the plugin using `pip install aiida-vasp==2.2`, but this is not recommended and is only mentioned for legacy support. For the legacy version you also most likely have to run `reentry scan -r aiida` after installing the plugin.

This will automatically install the AiiDA Python package(s) as well as any other dependencies of the plugin, and register all the plugin classes with AiiDA.

Please consider that AiiDA has prerequisites that need to be installed and verified to be working. The steps above will not take care of this for you. Please consult the AiiDA prerequisites documentation and follow the instructions therein.

## Support

The development, maintenance and use of this plugin is considered a community effort. In order to make it easy for the community to contribute, we have established a space on Matrix that users can use to communicate. We encourage users to help each other. In addition, the development team is present in the space and users are free to ask questions. First consult the documentation of both AiiDA-VASP and AiiDA, and also consider that the developers are not paid for this work. Please respect potential lead times in getting answers and be polite.
aiida-vibroscopy
# aiida-vibroscopy

AiiDA plugin that uses finite displacements and fields to compute phonon properties, dielectric and Born effective charge tensors, and infrared and Raman spectra.

## Installation

To install from PyPI, simply execute:

```
pip install aiida-vibroscopy
```

or when installing from source:

```
git clone https://github.com/bastonero/aiida-vibroscopy
pip install .
```

## License

The `aiida-vibroscopy` plugin package is released under a special academic license. See the `LICENSE.txt` file for more details.

## Acknowledgements

We acknowledge support from:

- the U Bremen Excellence Chairs program funded within the scope of the Excellence Strategy of Germany's federal and state governments;
- the MAPEX Center for Materials and Processes.
aiida-wannier90
# aiida-wannier90

AiiDA plugin for the Wannier90 code. This plugin allows running Wannier90 calculations. Examples are provided to show the integration with Quantum ESPRESSO via the aiida-quantumespresso plugin.

## Documentation

The documentation on how to use this plugin package is available on Read the Docs.

## Acknowledgements

This work is supported by the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation, and the swissuniversities P-5 project "Materials Cloud".
aiida-wannier90-workflows
# aiida-wannier90-workflows

Advanced AiiDA workflows for automated Wannierisation. The protocol for automating the construction of Wannier functions is discussed in the articles listed under Support and citations.

## Installation

Install the latest release with:

```
pip install aiida-wannier90-workflows
```

or install the development version with:

```
git clone https://github.com/aiidateam/aiida-wannier90-workflows.git
cd aiida-wannier90-workflows/
pip install -e .
```

## Examples

See the `examples` folder for how to use the workflows.

## Support and citations

If you find this package useful, please cite the following articles:

- Junfeng Qiao, Giovanni Pizzi, Nicola Marzari, Projectability disentanglement for accurate and automated electronic-structure Hamiltonians, npj Computational Materials 9, 208 (2023). https://arxiv.org/abs/2303.07877, https://www.nature.com/articles/s41524-023-01146-w, https://archive.materialscloud.org/record/2023.117
- Junfeng Qiao, Giovanni Pizzi, Nicola Marzari, Automated mixing of maximally localized Wannier functions into target manifolds, npj Computational Materials 9, 206 (2023). https://arxiv.org/abs/2306.00678, https://www.nature.com/articles/s41524-023-01147-9, https://archive.materialscloud.org/record/2023.86
- Valerio Vitale, Giovanni Pizzi, Antimo Marrazzo, Jonathan Yates, Nicola Marzari, Arash Mostofi, Automated high-throughput Wannierisation, npj Computational Materials 6, 66 (2020). https://arxiv.org/abs/1909.00433, https://www.nature.com/articles/s41524-020-0312-y, https://doi.org/10.24435/materialscloud:2019.0044/v2
aiida-wien2k
AiiDA WIEN2k plugin: The aiida-wien2k package is a WIEN2k plug-in for AiiDA workflow management, developed in conjunction with the Common workflow project. It is designed to calculate an equation of state (Etot vs volume) for any structure supplied in AiiDA format by running a very basic, yet extremely accurate, self-consistent field cycle. Limitations are a uniform scaling of all lattice parameters (applicable to cubic structures), no relaxation of atomic positions, no magnetism, and no spin-orbit coupling. It is meant for DFT users who have no idea about WIEN2k, but still want to run an EoS for benchmarking purposes using various DFT codes, including WIEN2k. WIEN2k version 22.2 (or higher) should be used (prior versions are incompatible). The Materials Cloud "AiiDA common workflows verification" database (https://acwf-verification.materialscloud.org) contains WIEN2k results obtained using this workflow. The data are published and discussed in the article: E. Bosoni et al., Comprehensive verification of all-electron and pseudopotential density functional theory (DFT) codes via universal common workflows, in preparation (2023). Developers: Oleg Rubel and Peter Blaha. Special thanks for the guidance through development: Emanuele Bosoni and Giovanni Pizzi.
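The Etot-vs-volume equation of state at the heart of this workflow can be illustrated independently of AiiDA and WIEN2k. The sketch below is not part of aiida-wien2k (real EOS studies typically fit Birch-Murnaghan forms rather than a plain parabola); it fits synthetic E(V) points near the minimum and extracts an equilibrium-volume estimate:

```python
# Illustrative only: estimate the equilibrium volume from (V, Etot) points
# by a least-squares fit of E(V) = a*V^2 + b*V + c near the minimum.
# NOT part of aiida-wien2k, which runs WIEN2k SCF cycles to obtain E(V).

def fit_parabola(points):
    """Least-squares quadratic fit through (V, E) points via normal equations."""
    sx = [sum(v ** k for v, _ in points) for k in range(5)]   # sums of V^0..V^4
    sy = [sum(e * v ** k for v, e in points) for k in range(3)]  # sums of E*V^0..V^2
    m = [[sx[4], sx[3], sx[2]],
         [sx[3], sx[2], sx[1]],
         [sx[2], sx[1], sx[0]]]
    rhs = [sy[2], sy[1], sy[0]]

    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))

    d = det3(m)
    coeffs = []
    for col in range(3):  # Cramer's rule for (a, b, c)
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = rhs[r]
        coeffs.append(det3(mc) / d)
    return coeffs  # a, b, c

# Synthetic E(V) data with a minimum at V0 = 80 (arbitrary units)
data = [(v, 0.01 * (v - 80.0) ** 2 - 5.0) for v in (72, 76, 80, 84, 88)]
a, b, c = fit_parabola(data)
v0 = -b / (2 * a)  # equilibrium volume where dE/dV = 0
print(round(v0, 3))  # 80.0
```

The same idea underlies the workflow: scale the cell, compute Etot at each volume, and fit the resulting curve.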
aiida-worktree
AiiDA-WorkTree provides the third workflow component: WorkTree, to design flexible node-based workflows using AiiDA. In AiiDA, there are two workflow components: workfunction and WorkChain. A workfunction is easy to implement but does not support automatic checkpointing, which is important for long-running calculations. A WorkChain supports automatic checkpointing but is difficult to implement and not as flexible as a workfunction. AiiDA-WorkTree provides the third component: WorkTree. It is easy to implement and supports automatic checkpointing. It is also flexible and can be used to design complex workflows. Here is a detailed comparison of the WorkTree with the two AiiDA built-in workflow components.

| Aspect | WorkFunction | WorkChain | WorkTree |
|---|---|---|---|
| Use Case | Short-running jobs | Long-running jobs | Long-running jobs |
| Checkpointing | No | Yes | Yes |
| Execution order | Sequential | Hybrid Sequential-Parallel | Directed Acyclic Graph |
| Non-blocking | No | Yes | Yes |
| Implementation | Easy | Difficult | Easy |
| Dynamic | No | No | Yes |
| Ready to Use | Yes | Need PYTHONPATH | Yes |
| Subprocesses Handling | No | Launches & waits | Launches & waits |
| Flow Control | All | if, while | if, while, match |
| Termination | Hard exit | ExitCode | ExitCode |
| Data Passing | Direct passing | Context | Link & Context |
| Output Recording | Limited support | Out & validates | Out |
| Port Exposing | Limited support | Manual & automatic | Manual |

Installation: pip install aiida-worktree. Documentation: check the docs and learn about the features. Examples. Create calcfunction nodes:

from aiida_worktree import node

# define add calcfunction node
@node.calcfunction()
def add(x, y):
    return x + y

# define multiply calcfunction node
@node.calcfunction()
def multiply(x, y):
    return x * y

Create a worktree to link the nodes:

from aiida_worktree import WorkTree
from aiida import load_profile
from aiida.orm import Int

load_profile()

wt = WorkTree("test_add_multiply")
wt.nodes.new(add, name="add1", x=Int(2.0), y=Int(3.0))
wt.nodes.new(multiply, name="multiply1", y=Int(4.0))
wt.links.new(wt.nodes["add1"].outputs[0], wt.nodes["multiply1"].inputs["x"])
wt.submit(wait=True)

Start the web app; open a terminal and run:

worktree web start

Then
visit the page http://127.0.0.1:8000/worktree, where you should find a first_workflow WorkTree; click its pk to view the WorkTree. One can also generate the node graph from a process: verdi node generate pk. Development. Pre-commit and Tests: to contribute to this repository, please enable pre-commit so that the code in commits conforms to the standards. pip install -e .[tests,pre-commit]; pre-commit install. Web app: see the README.md. Build and publish. Build the package: pip install build; python -m build. Upload to PyPI: pip install twine; twine upload dist/*. License: MIT
aiida-yambo
No description available on PyPI.
aiida-yambo-wannier90
aiida-yambo-wannier90Plugin to combine Wannier90 interpolations with GW corrections computed by YamboFeaturesWannier interpolation of GW band structureRepository contentsexamples/: An example of how to submit workflows using this pluginInstallationpipinstallaiida-yambo-wannier90 verdiquicksetup# better to set up a new profileverdipluginlistaiida.workflows# should now show the workflows in the pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples ./example_01.py# run test calculationverdiprocesslist-a# check record of calculationDevelopmentgitclonehttps://github.com/aiidaplugins/aiida-yambo-wannier90.cdaiida-yambo-wannier90 pipinstall--upgradepip pipinstall-e.[pre-commit,testing]# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsLicenseMIT
aiida-zeopp
Plugin | AiiDA | Python
2.0.0 | 1.1.0 |

aiida-zeopp: AiiDA plugin for Zeo++

Installation:

pip install aiida-zeopp
reentry scan
verdi quicksetup  # better to set up a new profile
verdi calculation plugins  # should now show your calculation plugins

Features

Add input structure in CIF format:

CifData = DataFactory('cif')
inputs['structure'] = CifData(file='/path/to/file')

Specify command line options using a python dictionary and NetworkParameters:

d = {'sa': [1.82, 1.82, 1000], 'volpo': [1.82, 1.82, 1000], 'chan': 1.2}
NetworkParameters = DataFactory('zeopp.parameters')
inputs['parameters'] = NetworkParameters(dict=d)

NetworkParameters validates the command line options using voluptuous. Find out about supported options:

NetworkParameters = DataFactory('zeopp.parameters')
print(NetworkParameters.schema)

Add an alternative atomic radii file:

SinglefileData = DataFactory('singlefile')
inputs['atomic_radii'] = SinglefileData(file='/path/to/file')

Examples

See the examples folder for complete examples of setting up a calculation.

verdi daemon start  # make sure the daemon is running
cd examples
verdi run examples/example_01.py  # runs test calculation

Tests

aiida_zeopp comes with a number of tests that are run at every commit. The following will discover and run all unit tests:

pip install -e .[testing]
pytest

Analyzing output

$ verdi process show 88
type         NetworkCalculation
pk           88
uuid         deb63433-4dcd-4ca1-9165-cb97877496b3
label        aiida_zeopp example calculation
description  Converts .cif to .cssr format, computes surface area, pore volume and channels
ctime        2018-11-19 09:12:55.259776+00:00
mtime        2018-11-19 09:15:15.708275+00:00
computer     [1] localhost
code         network

##### INPUTS:
Link label  PK  Type
parameters  87  NetworkParameters
structure   86  CifData

##### OUTPUTS:
Link label         PK  Type
remote_folder      89  RemoteData
retrieved          90  FolderData
structure_cssr     91  SinglefileData
output_parameters  92  ParameterData

$ verdi calcjob res 88
{"ASA_A^2": 3532.09, "ASA_m^2/cm^3": 1932.13, "ASA_m^2/g": 2197.86, "Channel_surface_area_A^2": 3532.09, "Channels": {"Dimensionalities": [3], "Largest_free_spheres": [6.74621], "Largest_included_free_spheres": [13.1994], "Largest_included_spheres": [13.1994]}, "Density": 0.879097, "Input_chan": 1.2, "Input_cssr": true, "Input_sa": [1.82, 1.82, 1000], "Input_structure_filename": "HKUST-1.cif", "Input_volpo": [1.82, 1.82, 1000], "NASA_A^2": 0.0, "NASA_m^2/cm^3": 0.0, "NASA_m^2/g": 0.0, "Number_of_channels": 1, "Number_of_pockets": 0, "POAV_A^3": 9049.01, "POAV_Volume_fraction": 0.495, "POAV_cm^3/g": 0.563078, "PONAV_A^3": 0.0, "PONAV_Volume_fraction": 0.0, "PONAV_cm^3/g": 0.0, "Pocket_surface_area_A^2": 0.0, "Unitcell_volume": 18280.8}

$ verdi calcjob outputls 88
_scheduler-stderr.txt
_scheduler-stdout.txt
out.chan
out.cssr
out.sa
out.volpo

$ verdi calcjob outputcat 88 -p out.sa
@ out.sa Unitcell_volume: 18280.8 Density: 0.879097 ASA_A^2: 3532.09 ASA_m^2/cm^3: 1932.13 ASA_m^2/g: 2197.86 NASA_A^2: 0 NASA_m^2/cm^3: 0 NASA_m^2/g: 0 Number_of_channels: 1 Channel_surface_area_A^2: 3532.09 Number_of_pockets: 0 Pocket_surface_area_A^2: 0

[email protected]

Acknowledgements: This work is supported by the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation, and the swissuniversities P-5 project "Materials Cloud".
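The out.sa output shown above is a flat sequence of key: value entries. As a sketch of how such text could be turned into a Python dict (this helper is hypothetical and not part of aiida-zeopp, which ships its own parsers):

```python
# Hypothetical helper (not from aiida-zeopp): parse zeo++-style
# "Key: value Key: value ..." text, such as the out.sa listing, into a dict.

def parse_keyvalue(text):
    """Split 'Key: value' pairs; values are converted to float when possible."""
    tokens = text.replace(":", " : ").split()
    result = {}
    i = 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i + 1] == ":":
            key, value = tokens[i], tokens[i + 2]
            try:
                result[key] = float(value)
            except ValueError:
                result[key] = value  # keep non-numeric values as strings
            i += 3
        else:
            i += 1
    return result

sample = "Unitcell_volume: 18280.8 Density: 0.879097 ASA_A^2: 3532.09 Number_of_channels: 1"
parsed = parse_keyvalue(sample)
print(parsed["Density"])  # 0.879097
```

In practice one would rely on the plugin's own output parser, which produces the ParameterData node shown above.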
aiiiSharedPython
No description available on PyPI.
ai-images
No description available on PyPI.
ai-img-gen
Wrapper for OpenAI DALL-E Image Generator: wrapper for image generation using DALL-E from OpenAI. Prerequisites: Python 3.7+, pip, and an account at OpenAI (make sure you have a Secret Key). Install the code. Install poetry: python3 -m pip install poetry. Download the codebase and open the folder: git clone; cd ai_img_gen. Install the necessary packages and environment via poetry: poetry install. Create a .env file by copying the sample.env and filling in the details: cp sample.env .env; nano .env. Run the code: poetry run python run.py
ai-incantations
No description available on PyPI.
aiinpy
aiinpy: aiinpy is an open-source artificial intelligence package for the Python programming language. aiinpy can be used to build neural networks (NN), convolutional neural networks (CNN), recurrent neural networks (RNN), long short-term memory networks (LSTM), and gated recurrent units (GRU). These networks can be trained with backpropagation as well as neuroevolution. Install aiinpy through PyPI: pip install aiinpy
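The description above mentions training by backpropagation; since the aiinpy API itself is not documented here, the following is a generic, library-independent sketch of that idea (a single sigmoid neuron trained by gradient descent; all names are illustrative and are not aiinpy's):

```python
import math

# Generic backpropagation sketch (NOT the aiinpy API): train a single
# sigmoid neuron on the logical AND function with plain gradient descent.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
b = 0.0
lr = 0.5

def loss():
    return sum((sigmoid(w[0] * x[0] + w[1] * x[1] + b) - y) ** 2
               for x, y in data) / len(data)

initial = loss()
for _ in range(2000):
    for x, y in data:
        out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # dL/dz for squared error propagated back through the sigmoid
        grad = 2 * (out - y) * out * (1 - out)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad
final = loss()
print(final < initial)  # True: training reduces the loss
```

A real package like aiinpy wraps this mechanism in layer objects and optimizers; neuroevolution replaces the gradient step with mutation and selection over weight vectors.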
ai-integration
ai_integrationAI Model Integration for Python 2.7/3PurposeExpose your AI model under a standard interface so that you can run the model under a variety of usage modes and hosting platforms - all working seamlessly, automatically, with no code changes.Designed to be as simple as possible to integrate.Create a standard "ai_integration Docker Container Format" for interoperability.Table of ContentsPurposeBuilt-In Usage ModesExample ModelsHow to call the integration library from your codeSimplest Usage ExampleDocker Container Format RequirementsInputs DictsResult DictsError HandlingInputs SchemaSchema Data TypesSchema ExamplesSingle ImageMulti-ImageTextCreating Usage ModesBuilt-In Usage ModesThere are several built-in modes for testing:Command Line using argparse (command_line)HTTP Web UI / multipart POST API using Flask (http)Pipe inputs dict as JSON (test_inputs_dict_json)Pipe inputs dict as pickle (test_inputs_pickled_dict)Pipe single image for models that take a single input named image (test_single_image)Test single image models with a built-in solid gray image (test_model_integration)Example ModelsTensorflow AdaIN Style TransferSentiment AnalysisDeep DreamOpen NSFWSuper ResolutionGPT-2 Text GeneratorStyleGAN Face GeneratorDeOldify Black-and-white Image ColorizerContributionai_integrationis a community project developed under the free Apache 2.0 license. We welcome any new modes, integrations, bugfixes, and your ideas.How to call the integration library from your code(An older version of this library required the user to expose their model as an inference function, but this caused pain in users and is no longer needed.)Run a "while True:" loop in your code and call "get_next_input" to get inputs.Pass an inputs_schema (see full docs below) to "get_next_input".See the specification below for "Inputs Dicts""get_next_input" needs to be called using a "with" block as demonstrated below.Then process the data. 
Format the result or error as described under "Results Dicts"Then send the result (or error back) with "send_result".Simplest Usage ExampleThis example takes an image and returns a constant string without even looking at the input. It is a very bad AI algorithm for sure!importai_integrationwhileTrue:withai_integration.get_next_input(inputs_schema={"image":{"type":"image"}})asinputs_dict:# If an exception happens in this 'with' block, it will be sent back to the ai_integration libraryresult_data={"content-type":'text/plain',"data":"Fake output","success":True}ai_integration.send_result(result_data)Docker Container Format Requirements:This library is intended to allow the creation of standardized docker containers. This is the standard:Use the ai_integration libraryYou install this library with pip (or pip3)ENTRYPOINT is used to set your python code as the entry point into the container.No command line arguments will be passed to your python entrypoint. (Unless using the command line interface mode)Do not use argparse in your program as this will conflict with command line mode.To test your finished container's integration, run: * nvidia-docker run --rm -it -e MODE=test_model_integration YOUR_DOCKER_IMAGE_NAME * use docker instead of nvidia-docker if you aren't using NVIDIA... * You should see a bunch of happy messages. Any sad messages or exceptions indicate an error. * It will try inference a few times. If you don't see this happening, something is not integrated right.Inputs Dictsinputs_dict is a regular python dictionary.Keys are input names (typically image, or style, content)Values are the data itself. Either byte array of JPEG data (for images) or text string.Any model options are also passed here and may be strings or numbers. 
Best to accept either strings/numbers in your model.Result DictsContent-type, a MIME type, inspired by HTTP, helps to inform the type of the "data" fieldsuccess is a boolean."error" should be the error message if success is False.{'content-type':'application/json',# or image/jpeg'data':"{JSON data or image data as byte buffer}",'success':True,'error':'the error message (only if failed)'}Error HandlingIf there's an error that you can catch:set content-type to text/plainset success to Falseset data to Noneset error to the best description of the error (perhaps the output of traceback.format_exc())Inputs SchemaAn inputs schema is a simple python dict {} that documents the inputs required by your inference function.Not every integration mode looks at the inputs schema - think of it as a hint for telling the mode what data it needs to provide your function.All mentioned inputs are assumed required by default.The keys are names, the values specify properties of the input.Schema Data TypesimagetextSuggest other types to add to the specification!Schema ExamplesSingle ImageBy convention, name your input "image" if you accept a single image input{"image":{"type":"image"}}Multi-ImageFor example, imagine a style transfer model that needs two input images.{"style":{"type":"image"},"content":{"type":"image"},}Text{"sentence":{"type":"text"}}Creating Usage ModesA mode is a function that lives in a file in the modes folder of this library.To create a new mode:Add a python file in this folderAdd a python function to your file that takes two args:def http(inference_function=None, inputs_schema=None):Attach a hint to your functionAt the end of the file, declare the modes from your file (each python file could export multiple modes), for example:MODULE_MODES={'http':http}Your mode will be called with the inference function and inference schema, the rest is up to you!The sky is the limit, you can integrate with pretty much anything.See existing modes for examples.
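The error-handling convention from the Result Dicts section (content-type text/plain, success False, data None, error from traceback.format_exc()) can be sketched as a small helper; this function is illustrative and not part of the ai_integration API itself:

```python
import traceback

# Illustrative helper (not part of ai_integration): build a result dict
# following the library's error-handling convention.

def make_error_result():
    """Call inside an 'except' block: package the current exception."""
    return {
        "content-type": "text/plain",
        "data": None,
        "success": False,
        "error": traceback.format_exc(),
    }

try:
    1 / 0  # stand-in for a failing model inference
except ZeroDivisionError:
    result = make_error_result()

print(result["success"], "ZeroDivisionError" in result["error"])  # False True
```

The resulting dict can be passed straight to send_result, mirroring the success-case dicts shown earlier.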
aiinterface
#yapayzekainterface: This is a machine learning experiment.
ai-intro
ai_intro: A repo for the course AI Intro. This file will become your README and also the index of your documentation. Website: https://arg-nctu.github.io/ai_intro/ How to use: cd ~/ai_intro; source docker_run.sh; source colab_jupyter.sh; Ctrl-click the URL to open it in a browser, and you should see a local Jupyter notebook. Inside the Jupyter notebook, click a .ipynb file and run the cells. Structure: 01-intro-to-ai, 02-ai-agent, 03-ai-gym, 04-mlp-learning, 05-pytorch-cnn, 06-ros-docker, 07-transfer-learning, 08-detection-segmentation, 09-dqn, 10-ddpg-rdpg
aiio
A general-purpose conversational chat bot.
aiire
AIIRE: AIIRE stands for AI Information Retrieval Engine, which performs tasks that concern information understanding, including natural language understanding. The software provided here provides the Python part of the AI and natural language understanding core functionality.
aiit-sdk
AIIT-SDK

Installation:

pip install aiit-sdk

If you need to specify an installation source, use the -i parameter:

pip install aiit-sdk --index-url https://pypi.org/simple/

To upgrade to a newer version, use the --upgrade parameter:

pip install --upgrade aiit-sdk --index-url https://pypi.org/simple/

Usage

Login module: add aiit_sdk.auth.AiitToken under the SIMPLE_JWT.AUTH_TOKEN_CLASSES setting in settings.py. Once configured, tokens issued by the Big Data OS can be used to fetch data normally, and the backend can read the user information from request.user.

SIMPLE_JWT = {
    'ACCESS_TOKEN_LIFETIME': timedelta(days=1),
    'REFRESH_TOKEN_LIFETIME': timedelta(days=7),
    'ROTATE_REFRESH_TOKENS': True,
    'BLACKLIST_AFTER_ROTATION': True,
    'ALGORITHM': 'HS256',
    'SIGNING_KEY': JWT_SIGNING_KEY,
    'VERIFYING_KEY': None,
    'AUTH_HEADER_TYPES': ('Bearer', 'JWT'),
    'USER_ID_FIELD': 'id',
    'USER_ID_CLAIM': 'user_id',
    'AUTH_TOKEN_CLASSES': (
        'aiit_sdk.auth.AiitToken',  # allow access with tokens issued by the Big Data OS
        'rest_framework_simplejwt.tokens.AccessToken',
    ),
    'TOKEN_TYPE_CLAIM': 'token_type',
    'JTI_CLAIM': 'jti',
    'SLIDING_TOKEN_REFRESH_EXP_CLAIM': 'refresh_exp',
    'SLIDING_TOKEN_LIFETIME': timedelta(days=7),
    'SLIDING_TOKEN_REFRESH_LIFETIME': timedelta(days=30),
}

API responses: to standardize the response format, it is recommended to return data through APIResponse. Example:

from aiit_sdk.response import APIResponse

class FileUploadView(APIView):
    def post(self, request):
        # business logic
        data = {}  # the data to return
        return APIResponse(data=data)

Returned data format:

{"data": {}, "message": "ok", "code": 200}

Pagination module: for the default pagination configuration, set the REST_FRAMEWORK.DEFAULT_PAGINATION_CLASS parameter in settings.py to aiit_sdk.page.NormalResultsSetPagination.

REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework_simplejwt.authentication.JWTAuthentication',
        'rest_framework_simplejwt.authentication.JWTTokenUserAuthentication',
    ),
    'DEFAULT_PERMISSION_CLASSES': (
        # 'rest_framework.permissions.IsAuthenticated',
    ),
    'DATETIME_FORMAT': '%Y-%m-%d %H:%M:%S',
    'DEFAULT_PAGINATION_CLASS': 'aiit_sdk.page.NormalResultsSetPagination',  # default pagination configuration
    'DEFAULT_FILTER_BACKENDS': (
        'django_filters.rest_framework.DjangoFilterBackend',
        'rest_framework.filters.OrderingFilter',
        'rest_framework.filters.SearchFilter'
    ),
    'PAGE_SIZE': 20,
    'DEFAULT_SCHEMA_CLASS': 'rest_framework.schemas.coreapi.AutoSchema',
}

Template views: located in the view module, with the following classes: AiitListAPIView (list view), AiitCreateAPIView (create view), AiitListCreateAPIView (list and create view), AiitRetrieveAPIView (detail view), AiitRetrieveUpdateAPIView (detail and update view), AiitRetrieveUpdateDestroyAPIView (detail, update and delete view).

Algorithm invocation module: the exec_algo() function in the algo module calls an algorithm by its name; if an algorithm has multiple versions, the most recently uploaded version is called by default.

from aiit_sdk.algo import exec_algo

res = exec_algo(algo_name='cv_name_extra', **params)

Parameters: algo_name, the algorithm name; params, the parameters for calling the algorithm, which differ from algorithm to algorithm.

File storage: set the DEFAULT_FILE_STORAGE parameter in settings.py to aiit_sdk.storage.AiitStorage.

DEFAULT_FILE_STORAGE = 'aiit_sdk.storage.AiitStorage'

Logging module: located in the log module; it provides three functions, create_addition_log(), create_change_log() and create_delete_log(), for recording data creation, update and deletion operations respectively. Logs are recorded in the LogEntry model of django.contrib.admin.models.

Update process
aij
Proposal: Developing an AI JournalistIntroductionThe goal of this project is to develop an AI journalist that can observe and evaluate various factors, including facial expressions, tone of speech, and objects in the background of a reporter, to generate relevant questions and categorize speeches. The AI journalist will use machine learning algorithms to analyze and optimize its observations and generate the best possible questions to further the conversation.ObjectivesDevelop a machine learning algorithm that can observe and evaluate various factors, including facial expressions, tone of speech, and objects in the background of a reporter.Create a system that can categorize speeches and generate relevant questions to further the conversation.Optimize the system for accuracy and efficiency.MethodologyThe project will be divided into several stages:Data Collection: We will collect data on various news reports, interviews, and conversations to create a comprehensive database of speech and facial expressions. This database will be used as the basis for the machine learning algorithm.Machine Learning: We will develop a machine learning algorithm that can observe and evaluate various factors, including facial expressions, tone of speech, and objects in the background of a reporter. The algorithm will be trained on a large dataset of speech and facial expressions and will be optimized for accuracy and speed.Categorization and Question Generation: We will create a system that can categorize speeches and generate relevant questions to further the conversation. This system will use the observations made by the machine learning algorithm to categorize speeches and generate questions based on the content and context of the conversation. 
What, where, what time, who, why, how?Testing and Validation: We will test and validate the system on a variety of news reports, interviews, and conversations, ensuring that the system can accurately categorize speeches and generate relevant questions. We will also measure the overall efficiency and usability of the system.DeliverablesA machine learning algorithm that can observe and evaluate various factors, including facial expressions, tone of speech, and objects in the background of a reporter.A system that can categorize speeches and generate relevant questions to further the conversation.A report detailing the performance of the system, including accuracy and efficiency.ConclusionThis project will provide a powerful solution for generating relevant questions and categorizing speeches in various news reports, interviews, and conversations. The incorporation of machine learning algorithms into the observation and question generation process will allow for more efficient and effective journalism, while the system can also be further optimized and extended to various applications, providing new opportunities for research and development in the field of machine learning and journalism.FeaturesDeveloping an AI Journalist is a complex project that requires expertise in natural language processing, machine learning, and computer vision. However, there are some features that could be coded within a week:Face emotion recognition: Implementing a basic face emotion recognition system that can detect and categorize facial expressions of a person in a video can be done within a week.Speech categorization: Implementing a system that can categorize speeches based on their content and context can be done within a week. 
For example, categorizing speeches as political, social, or economic.Object detection: Implementing a basic object detection system that can detect and categorize objects in the background of a video can be done within a week.Simple question generation: Implementing a basic question generation system that can generate questions based on the content and context of the speech can be done within a week.However, it's important to note that these features are just the building blocks of an AI Journalist and that the development of such a complex system would require a longer period of time, extensive research, and testing.RequirementsNatural Language Processing:The AI journalist model should be able to process natural language effectively, and understand the nuances of grammar, syntax, and vocabulary.To demonstrate how Python can be used for natural language processing, here is some example code using the Natural Language Toolkit (NLTK) library:importnltk# Tokenization - Breaking text into words or sentencestext="Natural language processing is a challenging field, but it can also be very rewarding."sentences=nltk.sent_tokenize(text)words=nltk.word_tokenize(text)print(sentences)# Output: ['Natural language processing is a challenging field, but it can also be very rewarding.']print(words)# Output: ['Natural', 'language', 'processing', 'is', 'a', 'challenging', 'field', ',', 'but', 'it', 'can', 'also', 'be', 'very', 'rewarding', '.']# Parts of Speech Tagging - Identifying the grammatical parts of each word in a sentencepos_tags=nltk.pos_tag(words)print(pos_tags)# Output: [('Natural', 'JJ'), ('language', 'NN'), ('processing', 'NN'), ('is', 'VBZ'), ('a', 'DT'), ('challenging', 'JJ'), ('field', 'NN'), (',', ','), ('but', 'CC'), ('it', 'PRP'), ('can', 'MD'), ('also', 'RB'), ('be', 'VB'), ('very', 'RB'), ('rewarding', 'JJ'), ('.', '.')]# Named Entity Recognition - Identifying named entities (such as names, places, and organizations) in 
textner_tags=nltk.ne_chunk(pos_tags)print(ner_tags)# Output: (S# (ORGANIZATION Natural/NNP)# (ORGANIZATION language/NN)# processing/NN# is/VBZ# a/DT# challenging/JJ# field/NN# ,/,# but/CC# it/PRP# can/MD# also/RB# be/VB# very/RB# rewarding/JJ# ./.)This code demonstrates how to tokenize a piece of text into sentences and words, and then perform parts of speech tagging and named entity recognition on those words using NLTK. These are just a few of the many natural language processing techniques that can be performed using Python and NLTK.Knowledge Base:The AI journalist model should have access to a wide knowledge base, which it can use to inform its writing and research.The implementation of a knowledge base for an AI journalist model is a complex task that requires a lot of planning and development. However, here is an example of how you could load a pre-existing knowledge base into your Python code using a simple dictionary:Python Code# Define a knowledge base as a dictionaryknowledge_base={"artificial intelligence":["AI","machine learning","neural networks"],"climate change":["global warming","greenhouse gases","carbon emissions"],"COVID-19":["coronavirus","pandemic","vaccine"],# and so on...}# Define a function that takes a topic and returns related terms from the knowledge basedefget_related_terms(topic):iftopicinknowledge_base:returnknowledge_base[topic]else:return[]# Test the function with a few different topicsprint(get_related_terms("artificial intelligence"))# Output: ['AI', 'machine learning', 'neural networks']print(get_related_terms("COVID-19"))# Output: ['coronavirus', 'pandemic', 'vaccine']print(get_related_terms("space exploration"))# Output: []This example demonstrates how you could use a simple dictionary to represent a knowledge base, and then define a function that returns related terms for a given topic. 
In a real AI journalist model, the knowledge base would likely be much more sophisticated and would incorporate data from a wide range of sources. Additionally, the function would likely be more complex and could use advanced natural language processing techniques to extract relevant information from text.Gathering data from a wide range of resources can be a challenging task, but there are many Python libraries and tools that can help simplify the process. Here's an example of how you could gather data from several different sources using Python:Python Codeimportrequestsfrombs4importBeautifulSoup# Define a list of sources to gather data fromsources=["https://www.nytimes.com","https://www.bbc.com","https://www.theguardian.com",# and so on...]# Loop over each source and extract relevant dataforsourceinsources:# Make a request to the website and get the HTML contentresponse=requests.get(source)soup=BeautifulSoup(response.content,"html.parser")# Find all the relevant elements on the page and extract their dataheadlines=soup.find_all("h2",class_="headline")article_links=soup.find_all("a",class_="article-link")# Print out the extracted dataprint(f"Headlines from{source}:")forheadlineinheadlines:print("- "+headline.get_text().strip())print(f"Article links from{source}:")forarticle_linkinarticle_links:print("- "+article_link.get("href"))This example uses the Requests library to make HTTP requests to several different news websites, and the BeautifulSoup library to extract relevant data from the HTML content of each page. 
Specifically, it extracts all the headlines and article links from each page and prints them out to the console.In a real AI journalist model, you would likely want to extract much more detailed and structured data, and you would likely use a combination of web scraping, APIs, and other data sources to gather the necessary information.Data Analysis Skills:The AI journalist model should be able to analyze large amounts of data quickly and accurately, and identify patterns and trends that are relevant to the story.Analyzing large amounts of data quickly and accurately is a key requirement for an AI journalist model. Here's an example of how you could use Python and the Pandas library to load, analyze, and visualize a dataset:Python Codeimportpandasaspdimportmatplotlib.pyplotasplt# Load a dataset into a Pandas DataFramedf=pd.read_csv("example_dataset.csv")# Print the first few rows of the datasetprint(df.head())# Calculate some basic statistics on the dataprint("Average value:",df["value"].mean())print("Minimum value:",df["value"].min())print("Maximum value:",df["value"].max())# Group the data by category and calculate the mean value for each categorygrouped_df=df.groupby("category").mean()# Print the grouped dataprint(grouped_df)# Create a bar chart of the grouped datagrouped_df.plot(kind="bar")plt.title("Average Value by Category")plt.xlabel("Category")plt.ylabel("Average Value")plt.show()In this example, we load a dataset into a Pandas DataFrame and use various methods to analyze and visualize the data. Specifically, we print the first few rows of the dataset, calculate some basic statistics, group the data by category and calculate the mean value for each category, and create a bar chart of the grouped data.In a real AI journalist model, you would likely want to use more sophisticated data analysis techniques, such as machine learning algorithms or statistical modeling, to extract insights from large and complex datasets. 
However, the basic techniques demonstrated here can provide a solid foundation for more advanced analysis.Here's an example dataset in CSV format that you can use with the code example I provided earlier:CSV Datacategory,value A,10 B,15 A,12 C,8 B,17 C,6 A,9 B,20 C,12This dataset contains three categories (A, B, and C) and a corresponding value for each category. You can save this dataset as a CSV file namedexample_dataset.csvand use the code example I provided earlier to load, analyze, and visualize the data.Fact Checking:The AI journalist model should have the ability to verify facts and sources, and ensure the accuracy of the information it presents.Fact checking is an important skill for any journalist, and it's especially critical for an AI journalist model that relies on automated data gathering and processing. Here's an example of how you could use Python and the FactCheck API to verify the accuracy of a piece of information:Python Codeimportrequests# Define the claim to checkclaim="The earth is flat."# Make a request to the FactCheck APIresponse=requests.get("https://factchecktools.googleapis.com/v1alpha1/claims:search",params={"query":claim})# Check if the response contains any resultsifresponse.json()["claims"]:# If there are results, print the verdict and explanationverdict=response.json()["claims"][0]["claimReview"][0]["textualRating"]explanation=response.json()["claims"][0]["claimReview"][0]["textualRatingExplanation"]print(f"The claim '{claim}' is{verdict}.{explanation}")else:# If there are no results, print a message indicating that the claim could not be verifiedprint(f"Could not verify the claim '{claim}'.")In this example, we use the FactCheck API to check the accuracy of a claim ("The earth is flat."). We make a request to the API and check if the response contains any results. If there are results, we print the verdict and explanation provided by the API. 
If there are no results, we print a message indicating that the claim could not be verified.In a real AI journalist model, you would likely want to use a combination of fact-checking techniques, such as manual research and verification, automated fact-checking tools, and crowd-sourced verification platforms, to ensure the accuracy of the information you present.Writing Skills:The AI journalist model should be able to write well, using proper grammar, syntax, and vocabulary, and should be able to adapt its writing style to the audience and context.Writing skills are essential for an AI journalist model, as the model needs to be able to write articles that are engaging, informative, and accurate. Here's an example of how you could use Python and the GPT-3 API to generate a news article on a given topic:Python Codeimportopenaiopenai.api_key="your_api_key"# Define the prompt for the articleprompt="Write a news article about the new Apple iPhone release."# Set the parameters for the text generationmodel="text-davinci-002"temperature=0.5max_tokens=1024# Generate the article using the GPT-3 APIresponse=openai.Completion.create(engine=model,prompt=prompt,temperature=temperature,max_tokens=max_tokens,n=1,stop=None,timeout=30,)# Print the generated articleprint(response.choices[0].text)In this example, we use the GPT-3 API to generate a news article about the new Apple iPhone release. We define a prompt for the article and set the parameters for the text generation, such as the model to use, the temperature of the sampling process, and the maximum number of tokens in the generated text. 
We then use the OpenAI API client to generate the article and print the result.

Of course, this is just a simple example, and in a real AI journalist model, you would need to incorporate many other features, such as content planning, topic research, style adaptation, and fact checking, to ensure that the generated articles are of high quality and relevance.

Voice and Tone: The AI journalist model should be able to convey different tones and voices depending on the context, whether it is a news story, opinion piece, or feature article.

An AI journalist model that can convey different tones and voices depending on the context requires complex natural language processing (NLP) techniques and algorithms. First, we will provide some general guidance and resources that may help you to develop such a model using Python.

NLTK for tokenization and stemming: Start by exploring existing NLP libraries and frameworks in Python, such as NLTK, spaCy, and transformers. These libraries provide various NLP tools and techniques, such as tokenization, named entity recognition, sentiment analysis, and language modeling, that can be used to analyze and generate text.

The code below uses the NLTK library to tokenize the input text into individual words and then applies stemming to reduce each word to its root form.

Python Code

import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer

# sample text
text = "Natural Language Processing is a complex field, but it has many useful applications."

# tokenize the text
tokens = word_tokenize(text)

# create a stemmer object
stemmer = PorterStemmer()

# apply stemming to the tokens
stemmed_tokens = [stemmer.stem(token) for token in tokens]

# print the results
print("Original text:", text)
print("Tokenized text:", tokens)
print("Stemmed text:", stemmed_tokens)

Terminal Output:

Original text: Natural Language Processing is a complex field, but it has many useful applications.
Tokenized text: ['Natural', 'Language', 'Processing', 'is', 'a', 'complex', 'field', ',', 'but', 'it',
'has', 'many', 'useful', 'applications', '.']Stemmed text: ['natur', 'languag', 'process', 'is', 'a', 'complex', 'field', ',', 'but', 'it', 'ha', 'mani', 'use', 'applic', '.']spaCy for named entity recognition:For tone and voice modeling, you can use language modeling techniques, such as GPT (Generative Pre-trained Transformer) or BERT (Bidirectional Encoder Representations from Transformers), which have been shown to perform well in various natural language generation tasks. These models can be fine-tuned on a specific task or domain, such as news article writing, to generate text that follows a specific tone or voice.This code uses spaCy library to apply named entity recognition (NER) to the input text and extract the entities and their labels.Python Codeimportspacy# load the pre-trained NER modelnlp=spacy.load("en_core_web_sm")# sample texttext="Bill Gates is the founder of Microsoft Corporation, which is based in Redmond, Washington."# apply named entity recognitiondoc=nlp(text)# extract the entities and their labelsentities=[(ent.text,ent.label_)forentindoc.ents]# print the resultsprint("Original text:",text)print("Named entities:",entities)Terminal Output:Original text: Bill Gates is the founder of Microsoft Corporation, which is based in Redmond, Washington.Named entities: [('Bill Gates', 'PERSON'), ('Microsoft Corporation', 'ORG'), ('Redmond', 'GPE'), ('Washington', 'GPE')]Transformers for sentiment analysis:You can also use style transfer techniques, such as neural style transfer or disentangled representation learning, to transfer the style or voice of one text to another. 
For example, you can transfer the style of a news article to an opinion piece or feature article.Python Code:fromtransformersimportpipeline# load the sentiment analysis modelclassifier=pipeline("sentiment-analysis")# sample texttext="I love this new phone, it's amazing!"# apply sentiment analysisresult=classifier(text)# print the resultsprint("Original text:",text)print("Sentiment analysis result:",result)NLTK Sentiment AnalysisThis code uses theSentimentIntensityAnalyzerclass from NLTK library to analyze the sentiment of the given text. Thepolarity_scoresmethod returns a dictionary of sentiment scores, including the negative, neutral, positive, and compound scores.Python Code:importnltkfromnltk.sentimentimportSentimentIntensityAnalyzer# load the sentiment analyzersia=SentimentIntensityAnalyzer()# test texttext="I love this new phone, it's amazing!"# get the sentiment scoresscores=sia.polarity_scores(text)# print the scoresprint(scores)Terminal Output:{'neg':0.0,'neu':0.403,'pos':0.597,'compound':0.5859}Finally, it's important to have a large and diverse training dataset that includes various examples of different tones and voices in different contexts. You can collect and preprocess data from various sources, such as news websites, social media platforms, and blogs.This code uses requests library to download the HTML content of webpages, and then uses BeautifulSoup library to extract the text content from the HTML. It then applies some basic preprocessing steps to remove URLs, special characters, and digits, and convert the text to lowercase. 
Finally, it combines the preprocessed data from different sources into a single training dataset.Python Code:importrequestsfrombs4importBeautifulSoupimportre# define a function to extract text from a webpagedefextract_text(url):response=requests.get(url)soup=BeautifulSoup(response.content,"html.parser")text=""forpinsoup.find_all("p"):text+=p.get_text()returntext# collect data from news websitesnews_urls=["https://www.nytimes.com/","https://www.washingtonpost.com/","https://www.bbc.com/news",]news_text=[]forurlinnews_urls:news_text.append(extract_text(url))# collect data from social media platformssocial_media_urls=["https://www.twitter.com/","https://www.facebook.com/","https://www.instagram.com/",]social_media_text=[]forurlinsocial_media_urls:social_media_text.append(extract_text(url))# collect data from blogsblog_urls=["https://www.medium.com/","https://www.wordpress.com/","https://www.blogger.com/",]blog_text=[]forurlinblog_urls:blog_text.append(extract_text(url))# preprocess the datadefpreprocess(text):# remove URLstext=re.sub(r"http\S+","",text)# remove special characters and digitstext=re.sub(r"[^a-zA-Z\s]","",text)# convert to lowercasetext=text.lower()returntextnews_text=[preprocess(text)fortextinnews_text]social_media_text=[preprocess(text)fortextinsocial_media_text]blog_text=[preprocess(text)fortextinblog_text]# combine the datasetstraining_data=news_text+social_media_text+blog_textTerminal Output:[TODO]: Add an example output here ..Code for each featureHere's an example of a full-stack working Python program for face emotion recognition using OpenCV and Keras.First, you'll need to install the necessary libraries:pip install opencv-pythonpip install kerasThen, you can use the following code to create a basic face emotion recognition system:importcv2importnumpyasnpfromkeras.modelsimportload_model# Load the trained modelmodel=load_model('model.h5')# Define the emotion labelsemotions=['Angry','Disgust','Fear','Happy','Neutral','Sad','Surprise']# Define a 
function to detect the face in a framedefdetect_face(frame):# Convert the frame to grayscalegray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)# Load the Haar cascade for face detectionface_cascade=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')# Detect faces in the grayscale imagefaces=face_cascade.detectMultiScale(gray,scaleFactor=1.3,minNeighbors=5)# If no faces are detected, return Noneiflen(faces)==0:returnNone,None# If multiple faces are detected, use the largest onelargest_face=faces[0]forfaceinfaces:ifface[2]*face[3]>largest_face[2]*largest_face[3]:largest_face=face# Extract the face region from the framex,y,w,h=largest_faceface_roi=gray[y:y+h,x:x+w]# Resize the face region to 48x48 pixelsface_roi=cv2.resize(face_roi,(48,48))# Return the face region and the coordinates of the facereturnface_roi,largest_face# Define a function to predict the emotion in a facedefpredict_emotion(face):# Reshape the face to match the input shape of the modelface=face.reshape(1,48,48,1)# Normalize the pixel values to be between 0 and 1face=face/255.0# Predict the emotion label using the trained modelpredictions=model.predict(face)# Return the predicted emotion labelreturnemotions[np.argmax(predictions)]# Open a video streamcap=cv2.VideoCapture(0)whileTrue:# Read a frame from the video streamret,frame=cap.read()# Detect the face in the frameface,coords=detect_face(frame)# If a face is detected, predict the emotion label and draw a rectangle around the faceiffaceisnotNone:emotion=predict_emotion(face)x,y,w,h=coordscv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)cv2.putText(frame,emotion,(x,y-10),cv2.FONT_HERSHEY_SIMPLEX,0.9,(0,255,0),2)# Display the framecv2.imshow('Face Emotion Recognition',frame)# Wait for a key pressifcv2.waitKey(1)&0xFF==ord('q'):break# Release the video stream and close all windowscap.release()cv2.destroyAllWindows()This program uses OpenCV to capture video frames from the camera and detect faces using the Haar Cascade classifier for face detection. 
Once a face is detected, the program extracts the face region, resizes it to 48x48 pixels, and uses a pre-trained Keras model to predict the emotion label. The predicted emotion label is then drawn on the frame along with a rectangle around the detected face.Note that in this example, the pre-trained Keras model is assumed to be saved in a file named 'model.h5'. You will need to train your own model or find a pre-trained model that you can use for this task.Also, keep in mind that this is a very basic example of a face emotion recognition system, and it may not be accurate or robust enough for real-world applications. There are many factors that can affect the performance of such a system, including lighting conditions, camera angles, and the diversity of facial expressions. Nonetheless, this should give you a starting point to build your own face emotion recognition system.Ethical Aspects:The AI journalist model should be programmed to adhere to ethical standards in journalism, such as impartiality, accuracy, and transparency.Ability to Learn:The AI journalist model should have the ability to learn and improve over time, based on feedback from editors and readers.User Interface:The AI journalist model should have a user-friendly interface that allows journalists to input topics and parameters, and receive output in a format that is easy to use and understand.Collaboration:The AI journalist model should be designed to work collaboratively with human journalists, leveraging the strengths of both to produce high-quality journalism.
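The "Ability to Learn" requirement above can be sketched concretely. The class below is a hypothetical illustration (none of these names come from an existing library): it records editor/reader ratings for generated articles and nudges a single generation parameter, the sampling temperature, based on average feedback.

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class FeedbackLoop:
    """Toy feedback loop: collect ratings, nudge a generation parameter."""

    temperature: float = 0.7
    ratings: list = field(default_factory=list)

    def record(self, rating: int) -> None:
        # Editor/reader ratings on a 1-5 scale; clamp defensively.
        self.ratings.append(max(1, min(5, rating)))

    def adjust(self) -> float:
        # Toy policy: poor feedback -> more conservative sampling,
        # strong feedback -> slightly more adventurous sampling.
        if self.ratings:
            avg = mean(self.ratings)
            if avg < 3:
                self.temperature = max(0.1, self.temperature - 0.1)
            elif avg > 4:
                self.temperature = min(1.0, self.temperature + 0.1)
        return self.temperature
```

A real system would track much richer feedback (per topic, per style) and could feed it back into prompt construction or fine-tuning rather than a single scalar, but the shape of the loop is the same: collect, aggregate, adjust.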
aijack
AIJack: Security and Privacy Risk Simulator for Machine Learning❤️If you like AIJack, please considerbecoming a GitHub Sponsor❤️What is AIJack?AIJack is an easy-to-use open-source simulation tool for testing the security of your AI system against hijackers. It provides advanced security techniques likeDifferential Privacy,Homomorphic Encryption,K-anonymityandFederated Learningto guarantee protection for your AI. With AIJack, you can test and simulate defenses against various attacks such asPoisoning,Model Inversion,Backdoor, andFree-Rider. We support more than 30 state-of-the-art methods. For more information, check ourdocumentationand start securing your AI today with AIJack.InstallationYou can install AIJack withpip. AIJack requires Boost and pybind11.apt install -y libboost-all-dev pip install -U pip pip install "pybind11[global]" pip install aijackIf you want to use the latest-version, you can directly install from GitHub.pip install git+https://github.com/Koukyosyumei/AIJackWe also provideDockerfile.Quick StartWe briefly introduce the overview of AIJack.FeaturesAll-around abilities for both attack & defensePyTorch-friendly designCompatible with scikit-learnFast Implementation with C++ backendMPI-Backend for Federated LearningExtensible modular APIsBasic InterfacePython APIFor standard machine learning algorithms, AIJack allows you to simulate attacks against machine learning models withAttackerAPIs. 
AIJack mainly supports PyTorch or sklearn models.

# abstract code
attacker = Attacker(target_model)
result = attacker.attack()

For instance, we can implement Poisoning Attack against SVM implemented with sklearn as follows.

from aijack.attack import Poison_attack_sklearn

attacker = Poison_attack_sklearn(clf, X_train, y_train)
malicious_data, log = attacker.attack(initial_data, 1, X_valid, y_valid)

For distributed learning such as Federated Learning and Split Learning, AIJack offers four basic APIs: Client, Server, API, and Manager. Client and Server represent each client and server within each distributed learning scheme. You can execute training by registering the clients and servers to API and running it. Manager gives additional abilities such as attack, defense, or parallel computing to Client, Server or API via the attach method.

# abstract code
client = [Client(), Client()]
server = Server()
api = API(client, server)
api.run()  # execute training

c_manager = ClientManagerForAdditionalAbility(...)
s_manager = ServerManagerForAdditionalAbility(...)
ExtendedClient = c_manager.attach(Client)
ExtendedServer = s_manager.attach(Server)
extended_client = [ExtendedClient(...), ExtendedClient(...)]
extended_server = ExtendedServer(...)
api = API(extended_client, extended_server)
api.run()  # execute training

For example, the below code implements the scenario where the server in Federated Learning tries to steal the training data with a gradient-based model inversion attack.

from aijack.collaborative.fedavg import FedAVGAPI, FedAVGClient, FedAVGServer
from aijack.attack.inversion import GradientInversionAttackServerManager

manager = GradientInversionAttackServerManager(input_shape)
FedAVGServerAttacker = manager.attach(FedAVGServer)

clients = [FedAVGClient(model_1), FedAVGClient(model_2)]
server = FedAVGServerAttacker(clients, model_3)

api = FedAVGAPI(server, clients, criterion, optimizers, dataloaders)
api.run()

AIValut: A simple DBMS for debugging ML Models

We also provide a simple DBMS named AIValut designed specifically for SQL-based algorithms.
AIValut currently supports Rain, a SQL-based debugging system for ML models. In the future, we have plans to integrate additional advanced features from AIJack, including K-Anonymity, Homomorphic Encryption, and Differential Privacy.

AIValut has its own storage engine and query parser, and you can train and debug ML models with SQL-like queries. For example, the Complaint query automatically removes problematic records given the specified constraint.

# We train an ML model to classify whether each customer will go bankrupt or not based on their age and debt.
# We want the trained model to classify the customer as positive when he/she has more debt than or equal to 100.
# The 10th record seems problematic for the above constraint.
>> Select * From bankrupt
id age debt y
1 40 0 0
2 21 10 0
3 22 10 0
4 32 30 0
5 44 50 1
6 30 100 1
7 63 310 1
8 53 420 1
9 39 530 1
10 49 1000 0

# Train LogisticRegression with the number of iterations of 100 and the learning rate of 1.
# The name of the target feature is `y`, and we use all other features as training data.
>> Logreg lrmodel id y 100 1 From Select * From bankrupt
Trained Parameters:
 (0) : 2.771564
 (1) : -0.236504
 (2) : 0.967139
AUC: 0.520000
Prediction on the training data is stored at `prediction_on_training_data_lrmodel`

# Remove one record so that the model will predict `positive (class 1)` for the samples with `debt` greater or equal to 100.
>> Complaint comp Shouldbe 1 Remove 1 Against Logreg lrmodel id y 100 1 From Select * From bankrupt Where debt Geq 100
Fixed Parameters:
 (0) : -4.765492
 (1) : 8.747224
 (2) : 0.744146
AUC: 1.000000
Prediction on the fixed training data is stored at `prediction_on_training_data_comp_lrmodel`

For more detailed information and usage instructions, please refer to aivalut/README.md.

Please use AIValut only for research purposes.

Resources

You can also find more examples in our tutorials and documentation.

- Examples
- Documentation
- API Reference

Supported Algorithms

- Collaborative / Horizontal FL: FedAVG, FedProx, FedKD, FedGEMS, FedMD, DSFL, MOON, FedExP
- Collaborative / Vertical FL: SplitNN, SecureBoost
- Attack / Model Inversion: MI-FACE, DLG, iDLG, GS, CPL, GradInversion, GAN Attack
- Attack / Label Leakage: Norm Attack
- Attack / Poisoning: History Attack, Label Flip, MAPF, SVM Poisoning
- Attack / Backdoor: DBA, Model Replacement
- Attack / Free-Rider: Delta-Weight
- Attack / Evasion: Gradient-Descent Attack, FGSM, DIVA
- Attack / Membership Inference: Shadow Attack
- Defense / Homomorphic Encryption: Paillier
- Defense / Differential Privacy: DPSGD, AdaDPS, DPlis
- Defense / Anonymization: Mondrian
- Defense / Robust Training: PixelDP, Cost-Aware Robust Tree Ensemble
- Defense / Debugging: Model Assertions, Rain, Neuron Coverage
- Defense / Others: Soteria, FoolsGold, MID, Sparse Gradient

Citation

If you use AIJack for your research, please cite the repo and our arXiv paper.

@misc{repotakahashi2023aijack,
  author = {Hideaki, Takahashi},
  title = {AIJack},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub Repository},
  howpublished = {\url{https://github.com/Koukyosyumei/AIJack}},
}

@misc{takahashi2023aijack,
  title={AIJack: Security and Privacy Risk Simulator for Machine Learning},
  author={Hideaki Takahashi},
  year={2023},
  eprint={2312.17667},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Related Publications

Below you can find a list of papers and books that either use or extend AIJack.

- Huang, Shiyuan, et al. "Video in 10 Bits: Few-Bit VideoQA for Efficiency and Privacy." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
- Song, Junzhe, and Dmitry Namiot. "A Survey of the Implementations of Model Inversion Attacks." International Conference on Distributed Computer and Communication Networks. Cham: Springer Nature Switzerland, 2022.
- Kapoor, Amita, and Sharmistha Chatterjee. Platform and Model Design for Responsible AI: Design and build resilient, private, fair, and transparent machine learning models. Packt Publishing Ltd, 2023.
- Mi, Yuxi, et al. "Flexible Differentially Private Vertical Federated Learning with Adaptive Feature Embeddings." arXiv preprint arXiv:2308.02362 (2023).
- Mohammadi, Mohammadreza, et al. "Privacy-preserving Federated Learning System for Fatigue Detection." 2023 IEEE International Conference on Cyber Security and Resilience (CSR). IEEE, 2023.

Contact

welcome2aijack[@]gmail.com
ai-jobdeploy
AI Job DeployGetting started.pipinstallai-jobdeployIn a projectmkdir .jsCreate some deployment templates (see below)Get experimenting!Base TemplateTemplates must implement 5 fields:params,meta,config,valuesandbuilds.paramsis a list of parameters specified on creationupof the resource.metais a list and subset of "subdir", "id", "project".configis a dictionary of configured values (e.g. security group ids, etc..)valuesare a dictionary of formatted values (on the basis of parameters) which are and may be referred to in thebuilds.Thebuildssection must implementupanddown. There are 3 types of builds:sequencescriptfilesequence: sequence ofscripts orfiles.script: an executable script (usually bash).file: a file saved to the path given by the build name.local.yamlparams:-name-runmeta:-id-subdir-projectconfig:python:/usr/bin/python3builds:install:type:scriptcontent:|#!/bin/bash{{ params['python'] }} -m pip install -r requirements.txtdeploy_script:type:filecontent:|#!/bin/bashmkdir -p checkpoints/{{ params.name }}{{ params['run'] }} | tee checkpoints/{{ params.name }}/logstart:type:scriptcontent:|#!/bin/bashchmod +x .jd/{{ params['subdir'] }}/tasks/deploy_scripttmux new-session -d -s {{ params['project'] }}-{{ params['id'] }} ".jd/{{ params['subdir'] }}/tasks/deploy_script"watch:type:scriptcontent:|#!/bin/bashtmux a -t {{ params['project'] }}-{{ params['id'] }}down:type:scriptwhitelist:[256]content:|#!/bin/bashtmux kill-session -t {{ params['project'] }}-{{ params['id'] }}purge:type:scriptcontent:|#!/bin/bashrm -rf checkpoints/{{ params['name'] }}up:type:sequencecontent:-install-deploy_script-startUsing templates to create and manage resourcesList resources:jdlsHere is how to create themy_model.yamlresource:jdbuildlocalup--paramsrun='python3 -u test.py',name=testDo something with the resource (anything apart from up). Getidfromjd ls.jdbuildwatch--id<id>Stop resource:jdrm<id>[--purge/--no-purge]
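The `{{ ... }}` placeholders in the builds above are template expressions filled in from params at `up` time. As a rough sketch of what that substitution does (the real tool presumably uses a full template engine; this hypothetical helper only handles the `params['...']` form shown above):

```python
import re


def render(template: str, params: dict) -> str:
    """Substitute {{ params['key'] }} placeholders with their values."""
    pattern = re.compile(r"\{\{\s*params\[['\"](\w+)['\"]\]\s*\}\}")
    return pattern.sub(lambda m: str(params[m.group(1)]), template)


script = "mkdir -p checkpoints/{{ params['name'] }}\n{{ params['run'] }}"
rendered = render(script, {"name": "test", "run": "python3 -u test.py"})
# rendered == "mkdir -p checkpoints/test\npython3 -u test.py"
```

This is why the same template can produce a distinct deploy script per resource: each `up` call supplies its own params (plus the meta fields like id and subdir) to the rendering step.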
ai-jsonable
AI-JSONABLEParameter and settings tracking in Python3 for jsonable output.Installationpip3 install aijsonPhilosophySaving and serializing in Python3 is supported by, for instance,pickleanddill. However, we believe that logging parameters in a Pythonic and flexible way is undersupported. Once a model or experiment has been executed, it should be easy to inspect which parameters were used. If the experiment is to be rerun or modified, it should be possible to do this with some simple overrides.Minimum working exampleA minimal example is inexample/config.py, which requires PyTorch and imports fromexample/themod.py.To install do:pip3 install torchexample/themod.py:fromaijson.decorateimportaijsonimporttorch@aijsonclassMyPyTorchModel(torch.nn.Module):def__init__(self,n_layers,n_hidden,n_input):super().__init__()self.rnn=torch.nn.GRU(n_hidden,n_hidden,n_layers)self.embed=torch.nn.Embedding(n_input,n_hidden)@aijsonclassMyCompose:def__init__(self,functions):self.functions=functionsdef__call__(self,x):forfinself.functions:x=f(x)returnxexample/config.py:importjsonfromexample.themodimportMyPyTorchModel,MyComposefromaijsonimportaijson,logging_contextfromtorch.nnimportGRUwithlogging_context()aslc:m=MyPyTorchModel(n_layers=1,n_hidden=512,n_input=64)rnn=aijson(GRU)(input_size=2,hidden_size=5,)n=MyCompose(functions=[m,m,2,rnn])withopen('mymodel.ai.json','w')asf:json.dump(lc,f)Inexample/themod.pyyou can see that classes (and functions) whose parameter settings should be tracked are decorated with@aijson. Predefined functions (as intorch.nn.XXX) are similarly wrapped withaijson(...). To create a single JSON-able logging instance in a Python dictionary, one uses thelogging_contextcontext manager. 
Having wired the model together in Python, all parameters chosen are recursively saved in the dictionarylc.To run do:python3 -m example.configThis should give output inmymodel.ai.json, which should look like this:{"var0":{"module":"example.themod","caller":"MyPyTorchModel","kwargs":{"n_layers":1,"n_hidden":512,"n_input":64}},"var1":{"module":"torch.nn.modules.rnn","caller":"GRU","kwargs":{"input_size":2,"hidden_size":5}},"var2":{"module":"example.themod","caller":"MyCompose","kwargs":{"functions":["$var0","$var0",2,"$var1"]}}}The JSON output is a dictionary representation of the build tree/ graph. If a parameter is JSON-able, then it will be directly saved in thekwargssubdictionary. Otherwise, it will be defined recursively. Hence the underlying assumption is that all parameters are either JSON-able or are Python objects whose parameters are JSON-able or are Python objects..., and so on. The base/ trunk node is the variable with highest index.Once this output has been produced, it's possible to rebuild the object using the same parameters in the following way:importjsonfromaijsonimportbuildwithopen('mymodel.ai.json')asf:cf=json.load(f)rebuilt=build(cf)This means that one doesn't need the code inexample/config.pybut only the items imported there (i.e. whatever is inexample/themod.pyandtorchetc.).
aika
AikaAboutAika provides date- and time-range parsing utilities for multiple languages. It is based onarbitrary-dateparserandDateRangeParser, and aims forDWIM-like convenience and usefulness.Currently, it supports English and German, and welcomes contributions for other languages.UsagefromaikaimportDaterangeExpressiondr=DaterangeExpression()print("Range: ",dr.parse("Sat - Tue"))print("Single:",dr.parse_single("1. Juli"))Range:(datetime(2023,8,26,0,0),datetime(2023,8,29,23,59,59,999999))Single:datetime(2023,7,1,0,0)Example ExpressionsAika understands all types of date-/time-range expressions like provided by the packages it is based upon, and works with single dates too. This section enumerates a few examples.arbitrary-dateparser » Englishnowtodaylast week to next fridaytomorrow - next weeknext monthdecemberJuly to Decemberjul 1 to jul 7Sat - Tuein March2024-08-20arbitrary-dateparser » Germanjetztheuteletzte woche bis nächsten freitagmorgen - nächste wochenächster monatdezemberJuli-Dezemberjul 1 to jul 7von Samstag bis Dienstagim März20. August 202420.8.202420.08.2024DateRangeParser » English1st julyMarch 2024July to December27th-29th June 201030 May to 9th Aug3rd Jan 1980 -- 2nd Jan 2013Wed 23 Jan -> Sat 16 February 2013Tuesday 29 May - Sat 2 June 2012From 1 to 9 Juljul 1 to jul 914th July 1988Jan 2011 - Mar 201407:00 Tue 7th June - 17th July 3:30pmCaveat: Times will currently be ignored.DateRangeParser » German1. Juli1. bis 7. JuliMärz 2024Juli bis DezemberVom 3. März bis zum 9. 
März 2024Advanced UsageBy specifyingdefault_start_timeanddefault_end_timearguments, the daterange boundaries will snap to the given times when they otherwise would be "beginning of day" (00:00) or "end of day" (23:59).importdatetimeasdtfromaikaimportDaterangeExpressiondr=DaterangeExpression(default_start_time=dt.time(hour=9),default_end_time=dt.time(hour=17),)dr.parse("Sat - Tue")(datetime(2023,8,26,9,0),datetime(2023,8,29,17,0))TroubleshootingIf you see an error message likelocale.Error: unsupported locale settingfor code like this,locale.setlocale(locale.LC_ALL,"de_DE.UTF-8")you will need to generate the German locales.apt-getupdate apt-getinstall--yestzdatalocales locale-gende_DE.UTF-8SetupAcquire source code and install development sandbox.gitclonehttps://github.com/panodata/aikacdaika python3-mvenv.venvsource.venv/bin/activate pipinstall--editable='.[develop,docs,test]'Run linters and software tests:source.venv/bin/activate poecheckEtymologyAika means "time" in the Finnish language.AcknowledgementsMichael Phelpsfor conceivingarbitrary-dateparser.Robin Wilsonand contributors for conceiving and maintainingDateRangeParser.
aika-datagraph
Introduction

The aika project is a collection of libraries for working with time series data, and in particular with time series data that is expected to be continuously updated. The datagraph subproject is a way to store time series data along with any parameters used to generate that data, including other data. It comes with several different persistence engines which can be used to browse stored data and filter by parameters. Since it embeds the dependency graph, it enforces consistency. Importantly, metadata such as the time range of the data is stored alongside it, allowing consumers to check the existence of data without examining it. Within the aika project this is used as the backing layer to the task library (putki), which allows the tasks to efficiently know which tasks need to be run at any given point in time.

You can read more about the aika project on the project webpage.

Installation

python -m pip install aika-datagraph
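As a mental model of what "storing data along with its generating parameters and time-range metadata" buys you, here is a toy, hypothetical sketch (not the aika-datagraph API): results are keyed by a hash of their name and parameters, and completeness can be checked from metadata alone, without loading the data.

```python
import hashlib
import json


class TinyDataGraph:
    """Toy store: results keyed by (name, params), with time-range
    metadata kept separately so existence checks never load the data."""

    def __init__(self):
        self._data = {}
        self._meta = {}

    @staticmethod
    def _key(name, params):
        # Deterministic key from the name plus the generating parameters.
        blob = json.dumps({"name": name, "params": params}, sort_keys=True)
        return hashlib.sha256(blob.encode("utf-8")).hexdigest()

    def put(self, name, params, rows):
        # rows: iterable of (timestamp, value) pairs.
        key = self._key(name, params)
        self._data[key] = list(rows)
        times = [t for t, _ in self._data[key]]
        self._meta[key] = {"start": min(times), "end": max(times)}
        return key

    def covers(self, name, params, until):
        # Completeness check using the metadata alone.
        meta = self._meta.get(self._key(name, params))
        return meta is not None and meta["end"] >= until
```

A consumer (such as a task scheduler) can call `covers(...)` cheaply to decide whether a dataset already extends far enough in time, which is exactly the kind of question putki asks of the datagraph.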
aika-ml
Introduction

This project mostly expands the idea of a scikit-learn pipeline to accept bivariate pipelines. This makes it much easier to build a single pipeline with all aspects of feature engineering and supervised training, which in turn makes it much easier to support the creation of walk-forward trained models.

For more information on the aika project see the aika webpage.

Installation

python -m pip install aika-ml
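To illustrate the idea of a bivariate pipeline (a generic sketch with made-up class names, not the aika-ml API): each step receives and returns both X and y, so target-aware feature engineering and supervised steps can live in one pipeline.

```python
class DropMissingTarget:
    """Bivariate step: filters rows of X based on the *target*."""

    def fit_transform(self, X, y):
        keep = [i for i, target in enumerate(y) if target is not None]
        return [X[i] for i in keep], [y[i] for i in keep]


class MinMaxScale:
    """X-only step written against the same bivariate interface."""

    def fit_transform(self, X, y):
        lo, hi = min(X), max(X)
        return [(x - lo) / (hi - lo) for x in X], y


class BivariatePipeline:
    """Chains steps that each transform (X, y) jointly."""

    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, X, y):
        for step in self.steps:
            X, y = step.fit_transform(X, y)
        return X, y
```

A plain sklearn Pipeline cannot express the first step, because its transformers only see and return X; letting every step pass (X, y) through is what makes a single end-to-end pipeline, and hence walk-forward retraining of that pipeline, straightforward.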
aika-putki
Introduction

aika-putki is a task framework designed to make it easy to build production and research systems on top of time series data. It provides tasks which have a notion of completeness founded on an awareness of what data is expected from a successful computation; completeness is thus defined via inspection of parent tasks' existing output and not via knowledge of when a task was last run. This directly solves many issues around e.g. mis-computing moving averages due to unavailable data.

For more information see the aika project webpage
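The completeness rule can be sketched in a few lines (hypothetical names, not the putki API): a task re-runs only when its parents' stored output extends past the task's own stored output, so "when was this last run?" never enters the decision.

```python
def run_if_needed(task_output_end, parent_output_end, compute):
    """Re-run a task only when the parent's stored output extends past
    the task's own stored output; wall-clock 'last run' is never used.

    The *_end arguments are the last covered timestamp of each stored
    output (None if nothing has been stored yet)."""
    never_ran = task_output_end is None
    parent_is_ahead = (
        parent_output_end is not None
        and task_output_end is not None
        and parent_output_end > task_output_end
    )
    if never_ran or parent_is_ahead:
        return compute()  # returns the new end of the task's output
    return task_output_end
```

Because the decision is made from data actually present, a moving average is never computed over a window whose inputs have not arrived yet; the task simply stays incomplete until its parents cover the required range.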
aikasilta
aikasilta

Time bridge (Finnish: aikasilta) - experimental reporting bridge from the Nineties into right now.

License: MIT

Documentation

User and developer documentation of aikasilta.

Bug Tracker

Feature requests and bug reports are best entered in the todos of aikasilta.

Primary Source repository

The primary source repository of aikasilta is at sourcehut, a collection of tools useful for software development.

Status

Experimental.

Note: The default branch is default.
ai-katie
ai-katie

A utility for Data Science and AI using common libraries. Extended by myself when I run into a problem that might happen more than once.

Github: https://github.com/danpeczek/ai-katie

Setup

To set up the library you need at least Python 3.8 and pip installed. Also, as the package uses PyTorch, I strongly suggest installing the PyTorch distribution of your choice from https://pytorch.org/get-started/locally/ before installing the library. When all dependencies are installed you can install the library by:

$ pip install ai-katie

License

MIT LICENSE
aika-time
Introduction

aika-time is part of the aika project for working with time series data; you can find more details on the project webpage.

aika-time provides utility methods for working with timestamps and time series in the context of pandas. Principal components are:

- Timestamp class that enforces Olson timestamps, i.e. all timestamps must have a time zone.
- TimeOfDay class that represents a time of day in a given timezone and, given a set of dates, produces the correct timestamps.
- TimeRange class that represents a time range.
- causal.py - utilities for aligning one series on another in a causally correct way.

Install

python -m pip install aika-time
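The causal alignment mentioned for causal.py can be illustrated with a small stand-alone sketch (an illustration of the concept, not the aika-time API): for each target timestamp we take the most recent observation at or before it, so no future data leaks into the aligned series.

```python
from bisect import bisect_right


def causal_align(obs_times, obs_values, target_times):
    """For each target time, take the most recent observation at or
    before it, never a future one. obs_times must be sorted."""
    aligned = []
    for t in target_times:
        i = bisect_right(obs_times, t)
        aligned.append(obs_values[i - 1] if i else None)
    return aligned
```

This "as-of" lookup is the causally correct counterpart of a naive join on equal timestamps, and it is the property you want when building features that must be known at decision time.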
aika-utilities
This is part of the aika project for time series analyses. This package contains useful utilities for the project, such as session-consistent hash functions. It can be installed as pip install aika-utilities, but it is not really intended for standalone use.
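To illustrate why session-consistent hashing needs care in Python (a generic sketch, not the aika-utilities API): the builtin hash() salts string hashing per interpreter process (see PYTHONHASHSEED), so a key that must be stable across sessions has to be derived from a cryptographic digest instead.

```python
import hashlib


def stable_hash(obj) -> int:
    """Hash that is identical across interpreter sessions, unlike the
    builtin hash(), whose str hashing is salted per process."""
    digest = hashlib.sha256(repr(obj).encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")
```

Anything persisted under a builtin hash() of a string would appear to vanish on the next run; deriving the key from a digest of a stable representation avoids that.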
aiken
This project is a Python parser for the Aiken question format used in Moodle. Aiken is a very simple format to represent multiple choice questions (https://docs.moodle.org/24/en/Aiken_format). It accepts two very similar syntaxes:

What is the correct answer to this question?
A. Is it this one?
B. Maybe this answer?
C. Possibly this one?
D. Must be this one!
ANSWER: D

And this:

Which LMS has the most quiz import formats?
A) Moodle
B) ATutor
C) Claroline
D) Blackboard
E) WebCT
F) Ilias
ANSWER: A

Usage

The aiken module simply exposes the load and dump functions that respectively parse Aiken code and convert a parsed question object back to code. Let us parse a question string:

>>> import aiken
>>> question = aiken.load("""
... Is this a valid Aiken Question?
... A. Yes
... B. No
... ANSWER: A
... """)
>>> question.options
['Yes', 'No']

Now, we make some changes and convert it back to a string of code:

>>> question.options.append('Who knows?')
>>> print(aiken.dump(question))
Is this a valid Aiken Question?
A. Yes
B. No
C. Who knows?
ANSWER: A
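The format is simple enough that the core parsing idea fits in a few lines. The following stand-alone sketch is not the aiken module's actual implementation, just an illustration that handles both option syntaxes shown above:

```python
import re


def parse_aiken(text):
    """Parse one Aiken question; handles both 'A.' and 'A)' option styles."""
    question_lines, options, answer = [], [], None
    for line in (l.strip() for l in text.splitlines()):
        if not line:
            continue
        option = re.match(r"^([A-Z])[.)]\s+(.+)$", line)
        if line.upper().startswith("ANSWER:"):
            answer = line.split(":", 1)[1].strip()
        elif option:
            options.append(option.group(2))
        else:
            question_lines.append(line)
    return " ".join(question_lines), options, answer
```

The real module goes further by building a question object and round-tripping it back to Aiken code via dump, but the line classification above (question text, lettered option, ANSWER line) is the heart of the format.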
aikido
tba

Python library created by Baihan Lin, Columbia University
aikit
aikitAutomatic Tool Kit for Machine Learning and Data Science.The objective is to provide tools to ease the repetitive parts of the data scientist's job so that he/she can focus on modeling. This package is still in alpha and more features will be added. Its main features are:improved and new "scikit-learn like" transformers ;GraphPipeline : an extension of sklearn Pipeline that handles more generic chains of transformations ;an AutoML to automatically search through several transformers and models.Full documentation is available here:https://aikit.readthedocs.io/en/latest/You can run exampleshere, thanks toBinder.GraphPipelineThe GraphPipeline object is an extension ofsklearn.pipeline.Pipelinebut the transformers/models can be chained with any directed graph.The object takes two arguments as input:models: dictionary of models (each key is the name of a given node, and each corresponding value is the transformer corresponding to that node)edges: list of tuples that link the nodes to each otherExample:gpipeline=GraphPipeline(models={"vect":CountVectorizerWrapper(analyzer="char",ngram_range=(1,4),columns_to_use=["text1","text2"]),"cat":NumericalEncoder(columns_to_use=["cat1","cat2"]),"rf":RandomForestClassifier(n_estimators=100)},edges=[("vect","rf"),("cat","rf")])AutoMLAikit contains an AutoML part which will test several models and transformers for a given dataset.For example, you can create the following Python scriptrun_automl_titanic.py:fromaikit.datasetsimportload_dataset,DatasetEnumfromaikit.ml_machineimportMlMachineLauncherdefloader():dfX,y,*_=load_dataset(DatasetEnum.titanic)returndfX,ydefset_configs(launcher):""" modify that function to change launcher configuration """launcher.job_config.score_base_line=0.75launcher.job_config.allow_approx_cv=Truereturnlauncherif__name__=="__main__":launcher=MlMachineLauncher(base_folder="~/automl/titanic",name="titanic",loader=loader,set_configs=set_configs)launcher.execute_processed_command_argument()And then run the command python run_automl_titanic.py run -n 4 to run the AutoML using 4 workers; the results will be stored in the specified folder. You can aggregate those results using: python run_automl_titanic.py result
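The edges list in the GraphPipeline example above can be read as a small directed acyclic graph. As an illustration in plain Python (this is not aikit's internal implementation), here is how a list of (parent, child) tuples like the edges argument determines an order in which nodes could be fitted, using Kahn's topological sort:

```python
from collections import defaultdict, deque

# Same edge list as in the GraphPipeline example above.
edges = [("vect", "rf"), ("cat", "rf")]

children = defaultdict(list)
indegree = defaultdict(int)
nodes = set()
for parent, child in edges:
    children[parent].append(child)
    indegree[child] += 1
    nodes.update((parent, child))

# Kahn's algorithm: start from nodes with no incoming edges (the inputs),
# and release a node once all of its parents have been processed.
queue = deque(sorted(n for n in nodes if indegree[n] == 0))
order = []
while queue:
    node = queue.popleft()
    order.append(node)
    for c in children[node]:
        indegree[c] -= 1
        if indegree[c] == 0:
            queue.append(c)

print(order)  # → ['cat', 'vect', 'rf']
```

Both "vect" and "cat" can run first (in any relative order); "rf" runs last because it depends on both.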
aiknow
aiKnowA framework for utilizing LLM agents.InstallationClone this repository:gitclonehttps://github.com/midnight-learners/aiknow.gitNavigate to the root directory of the repository and install the dependencies:poetryinstallChat ModelsModelsCurrently, we support chat models from both OpenAI and Qianfan.fromaiknow.llmimportOpenAIChatModel,QianfanChatModelAuthenticationCreate a.envfile in the root directory of your project, and fill in the following fields:# OpenAIOPENAI_API_KEY=""# QianfanQIANFAN_ACCESS_KEY=""QIANFAN_SECRET_KEY=""Our chat models will automatically load the authentication information from this.envfile.# Create a chat modelchat_model=OpenAIChatModel(name="gpt-3.5-turbo",temperature=0.9,)Model AttributesCurrently, we support the following attributes for chat models:name: The name or the identifier of the chat model provided by LLM platforms.temperature: Takes values in $(0, 1)$. It controls the randomness of the model's responses. The higher the temperature, the more random the responses. Defaults to0.9.profile: The profile of the AI assistant.auth: The authentication information for the chat model. It varies depending on the LLM platform. For example, for OpenAI, you should importOpenAIAuth. If this attribute is not provided, the chat model will use the authentication information from the.envfile.ProfilesYou can set the profile for the chat model via theprofileattribute.# Create a chat modelchat_model=OpenAIChatModel(name="gpt-3.5-turbo",temperature=0.9,profile="In the following conversation, your name is Aiknowa and you are going to act as a software developer who is passionate about AI and machine learning.",)Single and Multiple InputsYou may simply provide a string as a single user message:chat_model.get_complete_response("Who are you?")ChatResponse(content='Hello! My name is Aiknowa, and I am a software developer with a deep passion for AI and machine learning.
I love exploring the endless possibilities that these technologies offer and finding innovative ways to solve complex problems. What can I help you with today?', token_usage=ChatTokenUsage(prompt_tokens=46, completion_tokens=53, total_tokens=99))You can also wrap the message content inUserChatMessage:fromaiknow.llmimportUserChatMessagechat_model.get_complete_response(UserChatMessage(content="Who are you?"))ChatResponse(content="Hello! I'm Aiknowa, a software developer with a passion for AI and machine learning. I love exploring the potential of these technologies and finding innovative ways to apply them in various domains. How can I assist you today?", token_usage=ChatTokenUsage(prompt_tokens=46, completion_tokens=47, total_tokens=93))For multiple messages including both user and assistant messages, you can supply a list ofChatMessageobjects:fromaiknow.llmimportUserChatMessage,AssistantChatMessagechat_model.get_complete_response([UserChatMessage(content="Hello?"),AssistantChatMessage(content="How may I help you?"),UserChatMessage(content="What is AI?"),])ChatResponse(content='AI stands for Artificial Intelligence. It is a branch of computer science that focuses on building machines and software systems that can perform tasks that typically require human intelligence. AI systems are designed to learn, reason, and make decisions based on the data they are provided. They can perform a wide range of tasks, from recognizing speech and images to playing games and driving cars. AI is a rapidly growing field with significant advancements in machine learning, natural language processing, and computer vision.', token_usage=ChatTokenUsage(prompt_tokens=62, completion_tokens=93, total_tokens=155))Complete and Streamed ResponsesCall the API and get a complete response:chat_model.get_complete_response("Who are you?")ChatResponse(content="Hello! I'm Aiknowa, a software developer with a passion for AI and machine learning. 
I love exploring the potential of these technologies and finding creative ways to implement them in software applications. How can I assist you today?", token_usage=ChatTokenUsage(prompt_tokens=46, completion_tokens=47, total_tokens=93))Get a streamed response:forresponseinchat_model.get_streamed_response("What is AI?"):print(response.content,end="",flush=True)AI stands for Artificial Intelligence. It is a branch of computer science that focuses on creating intelligent machines that can perform tasks that normally require human intelligence. AI involves the development of algorithms and models that enable machines to learn from data, reason, and make decisions. It encompasses various subfields such as machine learning, natural language processing, computer vision, and robotics. The ultimate goal of AI is to create machines that can mimic human cognitive abilities and perform tasks autonomously.We also support async calls.Get a complete response asynchronously:awaitchat_model.get_complete_response_async("Who are you?")ChatResponse(content='Hello! My name is Aiknowa, and I am a software developer passionate about AI and machine learning. I love exploring the possibilities and applications of artificial intelligence in various domains. How can I assist you today?', token_usage=ChatTokenUsage(prompt_tokens=46, completion_tokens=44, total_tokens=90))Get a streamed response asynchronously:asyncforresponseinawaitchat_model.get_streamed_response_async("What is AI?"):print(response.content,end="",flush=True)AI stands for Artificial Intelligence. It is a branch of computer science that focuses on creating intelligent machines capable of mimicking human behavior and cognitive abilities. AI enables machines to perceive, reason, learn, and problem-solve, leading to improved decision-making and problem-solving capabilities. It encompasses various techniques like machine learning, natural language processing, computer vision, and robotics. 
The aim of AI is to create intelligent systems that can perform tasks autonomously, adapt to changing environments, and continuously improve their performance.
aiko
aiko is a lightweight web application framework based on asyncio. It is designed to mimic thekoaAPI.InstallingInstall by code$ git clone https://github.com/zeromake/aiko $ cd aiko $ python setup.py installA Simple ExampleimportasynciofromaikoimportApploop=asyncio.get_event_loop()app=App(loop)defhello(ctx,next_call):return"Hello, World!"app.use(hello)if__name__=="__main__":app.run(host="0.0.0.0",port=5000)$ curl http://127.0.0.1:5000 Hello, World!LinksTodo[ ] request api like koa[ ] method[ ]accepts[ ]acceptsEncodings->accepts_encodings[ ]acceptsCharsets->accepts_charsets[ ]acceptsLanguages->accepts_languages[ ]is[x]get[ ] getter, setter[x]header[x] getter[ ] setter[x]headers[x] getter[ ] setter[x]url[x]origin[x]href[x]method[x]path[x]query[x]querystring[x]search[ ] getter[x]host[x]hostname[ ]URL[x]fresh[x]stale[x]idempotent[x]socket[x]charset[x]length[x]protocol[x]secure[x]ips[ ]subdomains[x]type[x]originalUrl->original_url[x]ip[ ] response api like koa[x] proxy class property attr and method[x] likefreshmethod
aikoai
No description available on PyPI.
aiko-kernel
Aiko PythonWhat is it?This is a port of Aiko (a very simple operating system) for Python and MicroPython. It is very light, simple and effective. You can use it like standard Aiko, but in MicroPython, to make building projects on ESP or RPi Pico very simple.
aikopanel-bot
AikoPanel Telegram Bot via PythonAikoPanel Telegram Bot via Python. Contents: Current features; Current commands; Basic usage; Obtaining a Telegram Bot token; Environment variable reference; Special notes. A simple project that adds more features to the AikoPanel Telegram Bot. Quick-response group: https://t.me/AikoPanel_python_bot Python version requirement >= 3.8 Current features: MySQL-based, with support for login over SSH; automatically deletes messages in group chats; automatically forwards orders and support tickets to administrators; automatically sends daily statistics; supports binding and unbinding accounts in the bot; supports retrieving user, subscription, and invite information; supports retrieving plan information and generating purchase buttons. Current commands (Command / Arguments / Description): ping / none / get the chat ID; bind / email password / bind this email to Telegram; unbind / email password / unbind this email from Telegram; mysub / none / get this account's subscription link; myinfo / none / get this account's subscription info; myusage / none / get this account's traffic details; myinvite / none / get this account's invite info; buyplan / none / get the plan purchase link; website / none / get the website link. Basic usage: # apt install git if you don't have git yet git clone https://github.com/AikoPanel/AikoPanel_Bot.git # To keep the process running, you can use screen or nohup # You need pip3 installed to manage packages cd AikoPanel_Telegram_Bot pip3 install -r requirements.txt cp config.yaml.example config.yaml nano config.yaml # Edit line 2 to your AikoPanel address, without a trailing / at the end # Edit line 3 to your Bot Token # Edit lines 4,5 to your ID and the group ID, obtained with /ping # Edit lines 8~12 to your MySQL connection details # Edit line 14 if you need to connect to the database over SSH # Edit lines 15~24 to your SSH connection details python3 bot.py Obtaining a Telegram bot token: Open a private chat with https://t.me/BotFather Type /newbot, and give your bot a name Next, set a username for your bot; it must end with "bot", e.g. AikoPanel_bot Finally, you will receive your bot's token, which looks like this: 123456789:gaefadklwdqojdoiqwjdiwqdo Environment variable reference: When the config.yaml file is not mounted into the container, entrypoint.sh generates a config.yml file from environment variables. Note: images built with distroless currently do not support generating the config file from environment variables. Option/Parameter — Meaning: BOT_WEBSITE — the AikoPanel address, e.g. https://awesomeAikoPanel.com BOT_TOKEN BOT_ADMIN_PATH — the AikoPanel admin path, e.g. admin BOT_ADMIN_ID — the administrators' Telegram IDs, separated by commas ",", e.g. 123456789,321654987,555555,111222 BOT_GROUP_ID — the Telegram group ID AikoPanel_DB_IP — an IP address from which the AikoPanel database is reachable. When the bot and the AikoPanel database are deployed on the same server, see also 2.6 Special notes AikoPanel_DB_PORT — the port on which the AikoPanel database is reachable AikoPanel_DB_USER — a username that can access the AikoPanel database AikoPanel_DB_PASS — the password of that database user AikoPanel_DB_NAME — the AikoPanel database name AikoPanel_DB_SSH_ENABLE — enable/disable connecting to the database over SSH. Possible values: true / false AikoPanel_DB_SSH_TYPE — the SSH authentication method. Possible values: passwd / pkey. With passwd, password authentication is used and the AikoPanel_DB_SSH_KEY and AikoPanel_DB_SSH_KEYPASS variables have no effect. With pkey, private-key authentication is used and the AikoPanel_DB_SSH_PASS variable has no effect. AikoPanel_DB_SSH_IP — the IP address of the database server. When the bot and the AikoPanel database are deployed on the same server, see also 2.6 Special notes AikoPanel_DB_SSH_PORT — the port for SSH connections to the database server AikoPanel_DB_SSH_USER — the username for establishing the SSH connection AikoPanel_DB_SSH_PASS — the password for establishing the SSH connection AikoPanel_DB_SSH_KEY — the content of the private key used for the SSH connection. Note that when configuring this, you must: 1. not remove the "|-" at the beginning of the line; 2. mind the indentation. AikoPanel_DB_SSH_KEYPASS — the passphrase of the private key for the SSH connection; leave empty if there is none ENHANCED_ENABLE — enable/disable enhanced mode ENHANCED_MODULE — the enhanced modules to enable; currently only the order enhancement module is supported. Push state is stored in two tables of the AikoPanel database and may or may not update automatically. Special notes: When the bot and the AikoPanel database are deployed on the same server, since a Docker container's default network driver is bridge, the bot cannot reach the database directly via loopback addresses (127.0.0.1/localhost/::1, etc.). Here are some options to choose from: Use an SSH connection to the database (see Basic usage for details), with the SSH IP address set to: host.docker.internal and the database IP address set to: 127.0.0.1 Isolation: option 1 > option 2; ease of use: option 1 < option 2; option 1 is recommended. Also, for security reasons, it is not recommended to have the database listen on 0.0.0.0.
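As a hypothetical illustration of how the environment variables documented above might be consumed (this is not the project's actual entrypoint.sh logic, and the default port 3306 is an assumption), a Python sketch using only os.environ could look like this:

```python
import os

def load_env_config() -> dict:
    """Read bot settings from the documented environment variable names.
    Hypothetical sketch; defaults (empty strings, port 3306) are assumptions."""
    admin_ids = os.environ.get("BOT_ADMIN_ID", "")
    return {
        "website": os.environ.get("BOT_WEBSITE", ""),
        "token": os.environ.get("BOT_TOKEN", ""),
        # BOT_ADMIN_ID is documented as comma-separated Telegram IDs
        "admin_ids": [int(x) for x in admin_ids.split(",") if x.strip()],
        "group_id": os.environ.get("BOT_GROUP_ID", ""),
        "db": {
            "ip": os.environ.get("AikoPanel_DB_IP", "127.0.0.1"),
            "port": int(os.environ.get("AikoPanel_DB_PORT", "3306")),
        },
    }

# Demo with the example value from the table above.
os.environ["BOT_ADMIN_ID"] = "123456789,321654987"
cfg = load_env_config()
print(cfg["admin_ids"])  # → [123456789, 321654987]
```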
ail
ail is a generalized framework for interacting with… well, stuff. It is sometimes better to ail than fail.
ailab
AI LabAI Lab is a GUI and a server backend. The GUI provides an overview of experiments and their status, based on logfiles on the disk. The server provides functionalities for the GUI such as parsing the logfiles, getting the usage of the system and providing the files to view in the GUI. Furthermore, the server also provides a rempy host, where jobs can be submitted for execution.InstallationSimply pip install this package from git:pipinstallailabRunning: AI Lab ServerAI Lab Visualization consists of a UI and a server. Since the UI is a static website that works in your local web browser, no installation is needed. The static website is hostedhere.Running it is as simple as running the module in Python, providing a path to a config file.ailabmy_config.jsonA config file must contain a host or * for any interface, a port, a list of users as a map and a path to your checkpoints. (Typically the checkpoint path is on a network share, where all computers add their checkpoints and this PC reads them.) The list ofgpusgives you the opportunity to limit the GPUs ailab will assign for scheduled tasks. The GPU IDs are equivalent to the numbers used forCUDA_VISIBLE_DEVICES.{"host":"*","port":12345,"users":{"admin":"CHANGE_THIS"},"workspace":"/home/$USER/Git","results":"/home/$USER/Results","queue":"/tmp/ailab_queue","auto_detect_experiments":false,"projects":{},"gpus":[0]}PrivacyAll connection data is stored locally in your web browser and nothing is transmitted to the host of the ailab UI. There is only direct communication between your web browser and the server you add via the "Add Server" Dialog.The servers you add are not controlled by us and therefore can do whatever they want with your data. However, when the servers are owned/run by you and use the official ailab-server software, they will not track activities or report back information to a third party.Even though this sounds pretty safe, there is no SSL implementation yet for the connection to your servers, keep that in mind.
(If you know how to implement an easy-to-use SSL on the client and the server, I will be happy to receive your pull request.)
ailab-api
No description available on PyPI.
ailab-best-ocr-python
No description available on PyPI.
ai-labbook
No description available on PyPI.
ailabdc-client
No description available on PyPI.
ai-lab-dump
Basic algorithms and generic problem solutions
ailab-lite
ailab-liteAI Labis a cloud-based enterprise software application built byFathom Solutionsthat enables building data workflows, managing experiments and deploying models. Theailab-litelibrary enables us to harness the power of the AI Lab product in a simple Jupyter Notebook setting. By adding this extension, the user can exploit the GUI node editor to create and test complicated data pipelines. Below is a link to a visualisation of how to generate an example workflow in Jupyter Notebook using theailab-liteextension:https://github.com/fathom-io/ailab-lite/blob/master/graphics/workflow.gifThe main idea of this product revolves around using the UI to construct a graph, which is later used to generate a data processing pipeline. The following image shows an example visualisation of a particular graph in the AI Lab product.Next, there is the same graph generated in the Jupyter Notebook extension.Each node represents a certain data transformation, model or validation process. Thinking of all these elements as one large graph lets us encapsulate our whole data processing and prediction pipeline into one object. Communication between each node is assured thanks to using a similar API to the ones used in libraries:scikit-learnhttps://scikit-learn.org/stable/developers/develop.htmlMLlibhttps://spark.apache.org/docs/latest/ml-pipeline.htmlButtonsButtons in the editor are revealed after a dataset and at least one component are added to the graph. Before running an experiment, validation must be performed. After clicking the validation button and receiving positive feedback, the run option appears.
When running an experiment, the results are printed below the GUI and additional files are saved in the same folder as the opened notebook.InstallationTo install use pip:$ pip install ailab_liteInstructionsJupyter Notebook/LabImport the node editor widget:fromailab_liteimportNodeEditorWidgetImport pandas:importpandasaspdDeclare the dataset:example=pd.read_csv("example.csv")Run the widget:NodeEditorWidget(env=globals())We initialize it withglobals()so all previously defined datasets are available in the node editor.We can pass the workflow definition directly when initializing the widget:NodeEditorWidget(env=globals(),workflow_definition="definition")Running exampleTheexamplesdirectory contains an example of widget usage. It contains a predefined workflow. To run it, simplycdto the directory and run thejupyter notebookorjupyter labcommand.(To run the widget in the example for the first time, you must run all cells; otherwise the widget won't render.)Resolving issuesSometimes the extension is not enabled by default after installing frompip install ailab-lite. It manifests itself by returning a 404 status for theailab-lite.jsfile. The solution for this is simple and requires running one command:jupyter nbextension enable ailab-lite/extensionAfter that, reload the page and restart the kernel and the widget will work.DevelopmentFor a development installation (requiresNode.jsandYarn version 1),$ git clone https://github.com/fathoms-io/ailab-lite.git $ cd ailab-lite $ pip install -e . $ jupyter nbextension install --py --symlink --overwrite --sys-prefix ailab_lite $ jupyter nbextension enable --py --sys-prefix ailab_liteWhen actively developing your extension for JupyterLab, run the command:$ jupyter labextension develop --overwrite ailab_liteThen you need to rebuild the JS when you make a code change:$ cd js $ yarn run buildYou then need to refresh the JupyterLab page when your JavaScript changes.
ailabs-asr
AILabs ASR Python software development kitDevelopment EnvironmentPython 3.9# install portaudio first if you develop on Mac OS Xbrewinstallportaudio pipinstall--global-option='build_ext'--global-option='-I/usr/local/include'--global-option='-L/usr/local/lib'-rrequirements_dev.txt# please check the PyAudio site: https://people.csail.mit.edu/hubert/pyaudio/# if you encounter some issues while installing PyAudioInstallationpipinstallailabs-asrSamples# init the streaming clientasr_client=StreamingClient('api-key-applied-from-devconsole')# start streaming with a wav fileasr_client.start_streaming_wav(pipeline='asr-zh-en-std',file='voice.wav',verbose=False,# enable verbose to show detailed recognition resultson_processing_sentence=on_processing_sentence,on_final_sentence=on_final_sentence)# without a file, start streaming with the computer's microphoneasr_client.start_streaming_wav(pipeline='asr-zh-en-std',on_processing_sentence=on_processing_sentence,on_final_sentence=on_final_sentence):bulb: thestart_streaming_wav()method allows users to provide callback functions to handle the recognition result; see the result format below:bulb: look up the available pipelines in the next section:bulb: see more samples in thesample repositorySupport Language(pipeline)pipelineInfolanguageasr-zh-en-stdUse it when speakers speak Chinese more than EnglishMandarin and Englishasr-zh-tw-stdUse it when speakers speak Chinese and Taiwanese.Mandarin and Taiwaneseasr-en-stdEnglishEnglishasr-jp-stdJapaneseJapaneseMessage FormatThere are 2 kinds of recognized results:The Processing Sentence(Segment){"asr_sentence":"範例句子"}The Final Sentence(Complete Sentence){"asr_final":true,"asr_begin_time":9.314,"asr_end_time":11.314,"asr_sentence":"完整的範例句子","asr_confidence":0.5263263653207881,"asr_word_time_stamp":[{"word":"完整的","begin_time":9.74021875,"end_time":10.100875},{"word":"範例句子","begin_time":10.100875,"end_time":10.1664375}],"text_segmented":"完整的 範例句子"}LimitationAudio Data:warning: Send audio data in binary frames with the following spec:Audio data format: 16kHz, mono, 16 bits per sample, PCMSample rate per sec: 16K (16000)Sample size per sec: 16000 (samples) x 1 (sec) x 16/8 (2 bytes) = 32000 bytes ~= 32 KB/secEach chunk size: 2000 bytes, i.e. 1/16 sec
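Given the spec above, here is a small standard-library sketch (independent of the SDK itself) to verify that a WAV file matches the required format before streaming it, along with the byte math for the 2000-byte chunks:

```python
import io
import wave

def check_wav_spec(wav_file) -> bool:
    """True if the audio matches the ASR spec: 16 kHz, mono, 16-bit PCM."""
    with wave.open(wav_file, "rb") as w:
        return (w.getframerate() == 16000
                and w.getnchannels() == 1
                and w.getsampwidth() == 2)

# Byte math from the spec: 16000 samples/sec * 2 bytes = 32000 bytes (~32 KB/sec),
# so each 2000-byte chunk holds exactly 1/16 second of audio.
BYTES_PER_SEC = 16000 * 2
CHUNK_BYTES = 2000

# Demo: write one second of silence in the required format, then validate it.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 16000)
buf.seek(0)
print(check_wav_spec(buf), BYTES_PER_SEC // CHUNK_BYTES)  # → True 16
```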
ailabtools
Zalo AI Lab toolsPip package tools for deep learning tasksInstallationpip install ailabtoolsDevelopment instructionsInstallation for developmentClone the project to theailabtoolsfolderGo to theailabtoolsfolder, install usingpip:pip install .for an editable package (for the development process), run:pip install --editable .Check if the package installed successfully by running this Python code:from ailabtools import common common.check()expected output:>>> from ailabtools import common >>> common.check() AILab Server Check OKmake sure that the output isAILab Server Check OKwithout any problems.Optionally, check the package information:pip show ailabtoolsContribution processDeployment branch:master.Development branch:develop.Contribution process steps:Check out a new branch.Add your own module.Create a pull request to branchdevelop.Wait for the pull request review.Pull request merged, ready for beta deployment.Stable, ready formastermerge for official deployment.Package modifiedChecksetup.pyfor package information.Add moduleAdd your module in theailabtoolsfolder.DocumentsCheck the wiki pages
ailab-utils
ailab-utilsThis repository contains a collection of tools for theAILablaboratory at the University of Brasília.
ailca
Artificial Intelligence Library for Chemical Applications (AILCA)This is a Python package for machine learning with chemical data. It provides various pre-processing modules for chemical data, such as engineering conditions, chemical formulas, and molecular structures. Also, several wrapper classes and functions are included for chemical machine learning. This package was implemented based onScikit-learnandPyTorch.InstallationBefore installing AILCA, several required packages should be installed in your environment. We highly recommend usingAnacondato build your Python environment for AILCA.Install the cheminformatics packageRDKit. RDKit is available in the Anaconda archive. You can install RDKit using the following command in the Anaconda prompt.condainstall-crdkitrdkitInstall the deep learning frameworkPyTorch.If you want to build your machine learning models using a GPU,CUDA >= 11.1must be installed on your machine.With CUDA version 11.1, you can install PyTorch using the following command.condainstallpytorchtorchvisiontorchaudiocudatoolkit=11.1-cpytorch-cconda-forgeInstall the graph-based deep learning frameworkPyTorch Geometric. It must be installed to build machine learning models that predict target values from molecular and crystal structures. You can install PyTorch Geometric using the following command.condainstallpytorch-geometric-crusty1s-cconda-forgeInstall the required packages fromrequirements.txtin GitHub. After downloading the requirements file, you can install all required packages using the following command.condainstall--filerequirements.txt(Optional) If your operating system is Windows, installGraphvizto visualize interpretable information of machine learning algorithms.
You can install Graphviz using the following command.condainstall-cconda-forgepython-graphvizFinally, install AILCA in your Python environment with the following command.pipinstallailcaExamplesFollow the instructions inPyTorch Installationto install the PyTorch package on your environment.Installation of PyTorch GeometricInstallation of RDKitPackage, module nameMany use a same package and module name, you could definitely do that. But this example package and its module's names are different:example_pypi_packageandexamplepy.Openexample_pypi_packagefolder with Visual Studio Code,Ctrl+Shift+F(Windows / Linux) orCmd+Shift+F(MacOS) to find all occurrences of both names and replace them with your package and module's names. Also remember to change the name of the foldersrc/examplepy.Simply and very roughly speaking, package name is used inpip install <PACKAGENAME>and module name is used inimport <MODULENAME>. Both names should consist of lowercase basic letters (a-z). They may have underscores (_) if you really need them. Hyphen-minus (-) should not be used.You'll also need to make sure the URL "https://pypi.org/project/example-pypi-package/" (replaceexample-pypi-packageby your package name, with all_becoming-) is not occupied.Details on naming convention (click to show/hide)Underscores (_) can be used but such use is discouraged. Numbers can be used if the name does not start with a number, but such use is also discouraged.Name starting with a number and/or containing hyphen-minus (-) should not be used: although technically legal, such name causes a lot of trouble − users have to useimportlibto import it.Don't be fooled by the URL "pypi.org/project/example-pypi-package/" and the name "example-pypi-package" on pypi.org. 
pypi.org and pip system convert all_to-and use the latter on the website / inpipcommand, but the real name is still with_, which users should use when importing the package.There's alsonamespaceto use if you need sub-packages.Other changesMake necessary changes insetup.py.The package's version number__version__is insrc/examplepy/__init__.py. You may want to change that.The example package is designed to be compatible with Python 3.6, 3.7, 3.8, 3.9, and will be tested against these versions. If you need to change the version range, you should change:classifiers,python_requiresinsetup.pyenvlistintox.inimatrix: python:in.github/workflows/test.ymlIf you plan to upload toTestPyPIwhich is a playground ofPyPIfor testing purpose, changetwine upload --repository pypi dist/*totwine upload --repository testpypi dist/*in the file.github/workflows/release.yml.Developmentpippip is a Python package manager. You already have pip if you use Python 3.4 and later version which include it by default. Readthisto know how to check whether pip is installed. Readthisif you need to install it.Use VS CodeVisual Studio Code is the most popular code editor today, our example package is configured to work with VS Code.Install VS Code extension "Python"."Python" VS Code extension will suggest you install pylint. Also, the example package is configured to use pytest with VS Code + Python extensions, so, install pylint and pytest:pipinstallpylintpytest(It's likely you will be prompted to install them, if that's the case, you don't need to type and execute the command)vscode.env's content is nowPYTHONPATH=/;src/;${PYTHONPATH}which is good for Windows. If you use Linux or MacOS, you need to change it toPYTHONPATH=/:src/:${PYTHONPATH}(replacing;with:). If the PATH is not properly set, you'll see linting errors in test files and pytest won't be able to runtests/test_*.pyfiles correctly.Close and reopen VS Code. 
You can now click the lab flask icon in the left menu and run all tests there, with pytest. pytest seems better than the standard unittest framework, it supportsunittestthus you can keep usingimport unittestin your test files.The example package also has a.editorconfigfile. You may install VS Code extension "EditorConfig for VS Code" that uses the file. With current configuration, the EditorConfig tool can automatically use spaces (4 spaces for .py, 2 for others) for indentation, setUTF-8encoding,LFend of lines, trim trailing whitespaces in non Markdown files, etc.In VS Code, you can go to File -> Preferences -> Settings, type "Python Formatting Provider" in the search box, and choose one of the three Python code formatting tools (autopep8, black and yapf), you'll be prompted to install it. The shortcuts for formatting of a code file areShift+Alt+F(Windows);Shift+Option (Alt)+F(MacOS);Ctrl+Shift+I(Linux).Write your packageInsrc/examplepy/(examplepyshould have been replaced by your module name) folder, renamemodule1.pyand write your code in it. Add more module .py files if you need to.Write your testsIntests/folder, renametest_module1.py(totest_*.py) and write your unit test code (withunittest) in it. Add moretest_*.pyfiles if you need to.The testing tool `tox` will be used in the automation with GitHub Actions CI/CD. If you want to use `tox` locally, click to read the "Use tox locally" sectionUse tox locallyInstall tox and run it:pipinstalltox toxIn our configuration, tox runs a check of source distribution usingcheck-manifest(which requires your repo to be git-initialized (git init) and added (git add .) at least), setuptools's check, and unit tests using pytest. 
You don't need to install check-manifest and pytest, though; tox will install them in a separate environment. The automated tests are run against several Python versions, but on your machine you might be using only one version of Python. If that is Python 3.9, then run:

tox -e py39

If you add more files to the root directory (example_pypi_package/), you'll need to add your file to the check-manifest --ignore list in tox.ini.

Thanks to GitHub Actions' automated process, you don't need to generate distribution files locally. But if you insist, see the "Generate distribution files" section below.

Generate distribution files

Install tools

Install or upgrade setuptools and wheel:

python -m pip install --user --upgrade setuptools wheel

(If python3 is the command on your machine, change python to python3 in the above command, or add a line "alias python=python3" to the ~/.bashrc or ~/.bash_aliases file if you use bash on Linux.)

Generate dist

From the example_pypi_package directory, run the following command to generate a production source distribution (sdist) and wheel in the dist folder:

python setup.py sdist bdist_wheel

Install locally

Optionally, you can install the dist version of your package locally before uploading to PyPI or TestPyPI:

pip install dist/example_pypi_package-0.1.0.tar.gz

(You may need to uninstall the existing package first:

pip uninstall example_pypi_package

There may be several installed packages with the same name, so run pip uninstall multiple times until it says there is no more package to remove.)

Upload to PyPI

Register on PyPI and get a token

Register an account on PyPI, go to Account settings § API tokens, and click "Add API token". The PyPI token only appears once; copy it somewhere. If you missed it, delete the old token and add a new one. (Register a TestPyPI account if you are uploading to TestPyPI.)

Set the secret in the GitHub repo

On the page of your newly created or existing GitHub repo, click Settings -> Secrets -> New repository secret. The Name should be PYPI_API_TOKEN and the Value should be your PyPI token (which starts with pypi-).

Push or release

The example package has automated tests and upload (publishing) already set up with GitHub Actions:

- Every time you git push your master or main branch, the package is automatically tested against the desired Python versions with GitHub Actions.
- Every time a new release (either the initial version or an updated version) is created, the package is automatically uploaded to PyPI with GitHub Actions.

View it on pypi.org

After your package is published on PyPI, go to https://pypi.org/project/example-pypi-package/ (_ becomes -). Copy the command on the page and execute it to download and install your package from PyPI (or test.pypi.org if you use that).

If you publish the package to PyPI manually:

Install Twine

Install or upgrade Twine:

python -m pip install --user --upgrade twine

Create a .pypirc file in your $HOME (~) directory; its content should be:

[pypi]
username = __token__
password = <PyPI token>

(Use [testpypi] instead of [pypi] if you are uploading to TestPyPI.) Replace <PyPI token> with your real PyPI token (which starts with pypi-). (If you don't manually create $HOME/.pypirc, you will be prompted for a username (which should be __token__) and a password (which should be your PyPI token) when you run Twine.)

Upload

Run Twine to upload all of the archives under the dist folder:

python -m twine upload --repository pypi dist/*

(Use testpypi instead of pypi if you are uploading to TestPyPI.)

Update

When you have finished developing a newer version of your package, do the following:

1. Modify the version number __version__ in src/examplepy/__init__.py.
2. Delete all old versions in dist.
3. Run python setup.py sdist bdist_wheel again to regenerate dist.
4. Run python -m twine upload --repository pypi dist/* again to upload dist (use testpypi instead of pypi if needed).

References

- Python Packaging Authority (PyPA)'s sample project
- PyPA's Python Packaging User Guide
- Stackoverflow questions and answers
- GitHub Actions Guides: Building and testing Python
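The update workflow above hinges on bumping __version__ before regenerating dist — PyPI rejects re-uploads of an existing version. As an illustrative sketch (this helper is not part of the template itself), a plain X.Y.Z patch bump can be automated like this:

```python
import re

def bump_patch(version: str) -> str:
    """Return the version string with its patch component incremented,
    e.g. '0.1.0' -> '0.1.1'. Pre-release/build suffixes are not handled."""
    match = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", version)
    if match is None:
        raise ValueError(f"not a plain X.Y.Z version: {version!r}")
    major, minor, patch = (int(part) for part in match.groups())
    return f"{major}.{minor}.{patch + 1}"

print(bump_patch("0.1.0"))  # -> 0.1.1
```

A script like this could rewrite the __version__ line in src/examplepy/__init__.py before steps 2–4 are run.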
ailearn
This is the ailearn artificial intelligence package. It contains four modules: Swarm, RL, nn, and utils.

The Swarm module implements particle swarm optimization, the artificial fish swarm algorithm, the firefly algorithm, and evolution strategies, as well as some common benchmark functions for evaluating these algorithms.

The RL module has two parts: TabularRL and Environment. The TabularRL part collects classic reinforcement learning algorithms, including Q-learning, Q(lambda), Sarsa, Sarsa(lambda), and Dyna-Q. The Environment part collects classic reinforcement learning test environments such as FrozenLake, CliffWalking, and GridWorld.

The nn module includes some common activation functions and loss functions.

The utils module includes some common utilities, such as distance metrics, evaluation functions, PCA, conversion between label values and one-hot encodings, the Friedman test, and more.

Install (type in a terminal):

pip install ailearn

Upgrade (type in a terminal):

pip install ailearn --upgrade

Update history:

- 2018.4.10 0.1.3 First release; implemented particle swarm optimization and the artificial fish swarm algorithm; first published on pip.
- 2018.4.16 0.1.4 Added an implementation of evolution strategies; added the Evaluation module.
- 2018.4.18 0.1.5 Added the TabularRL and Environment modules.
- 2018.4.19 0.1.8 Merged the TabularRL and Environment modules into the RL module; added the project description; updated the license.
- 2018.4.25 0.1.9 Changed output messages from Chinese to English and fixed some known errors.
- 2019.1.15 0.2.0 Added the utils module with common utilities (distance metrics, evaluation functions, PCA, label/one-hot conversion, the Friedman test, etc.); added the nn module with common activation and loss functions; made the Swarm module algorithms converge faster.
- 2021.4.6 0.2.1 Added a web crawler tool; added examples for the RL and Swarm modules; added the classic Windy GridWorld reinforcement learning environment.

Other updates:

- Updated the method for plotting decision boundaries
- Updated the method for plotting datasets
- Improved the Friedman test method

Project pages:

- https://pypi.org/project/ailearn/
- https://github.com/axi345/ailearn/
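The label/one-hot conversion mentioned for the utils module works roughly like the following sketch (hypothetical helper names for illustration, not ailearn's actual API):

```python
def to_one_hot(labels, num_classes=None):
    """Convert integer class labels to one-hot rows,
    e.g. [0, 2] with 3 classes -> [[1, 0, 0], [0, 0, 1]]."""
    if num_classes is None:
        num_classes = max(labels) + 1  # infer class count from the data
    return [[1 if j == label else 0 for j in range(num_classes)] for label in labels]

def from_one_hot(one_hot_rows):
    """Invert to_one_hot: the index of each row's maximum becomes the label."""
    return [row.index(max(row)) for row in one_hot_rows]

print(to_one_hot([0, 2, 1]))  # -> [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
print(from_one_hot([[0, 1], [1, 0]]))  # -> [1, 0]
```

The round trip from_one_hot(to_one_hot(labels)) returns the original labels, which is the property such utilities are built around.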
aiLearning
No description available on PyPI.
aileen
Humanitarian aid agencies want to count beneficiaries for capacity planning. The Aileen package automates most of the otherwise manual counting of attendance by looking at Wi-Fi traffic. This data can be very useful in combination with manual impressions.
aileen-test
No description available on PyPI.
ailess
Ailess - Easily deploy your machine learning models as endpoints on AWS

Ailess is a Python package that allows you to easily deploy your machine learning models and turn them into an endpoint on AWS.

Features

- No DevOps degree required: Ailess is designed to be used by data scientists and machine learning engineers with no prior DevOps experience, while following DevOps best practices.
- Solid pipeline: Ailess packages your model and its dependencies into a Docker image, pushes it to AWS ECR, and deploys it as an endpoint on AWS ECS.
- Zero-downtime deployment: Ailess uses AWS ECS to deploy your model as an endpoint behind an Application Load Balancer (ALB). This allows zero-downtime deployment and auto-scaling of the cluster.
- Auto-recovery: Ailess runs health checks on the endpoint and restarts the container if it fails, or rolls the deployment back if it fails to start.

Getting Started

Pre-requisites

- Python 3.6+
- Docker
- Terraform
- AWS account
- AWS credentials configured via ~/.aws/credentials or environment variables
- A health check endpoint in your app. The load balancer will send a GET request to / and expect a 200 response code.

Installation

pip install ailess

Usage

Initialize project with Ailess. To initialize Ailess in your project, run the following command in your project's root directory:

ailess init

You will be prompted to select the AWS region you want to deploy your model to, the instance type, the port number, etc. In turn, Ailess will generate the following files:

- .ailess/config.json: configuration file for Ailess.
- Dockerfile: Dockerfile for building the Docker image.
- docker-compose.yml: Docker Compose file for running the Docker image locally.
- .ailess/iam_policy.json: IAM policy for the ECS task.
- .ailess/cluster.tf: Terraform configuration file for creating the ECS cluster.
- .ailess/cluster.tfvars: Terraform variables file for creating the ECS cluster.
- requirements.txt: Python dependencies for your model.

All of these files can be modified to suit your needs, and Ailess will continue to work and update the infrastructure/Docker image accordingly.

Running locally. To run your model locally, run the following command in your project's root directory:

ailess serve

This builds the Docker image and runs it locally.

Deploy your model. To deploy your model, run the following command in your project's root directory:

ailess deploy

This builds the Docker image, pushes it to AWS ECR, creates the infrastructure, and deploys it as an endpoint on AWS ECS. When the deployment is complete, you will see the endpoint URL in the output. When you want to update your model, run the same command again: it updates the Docker image, pushes it to AWS ECR, and updates the endpoint on AWS ECS. On each run of the deploy command, Ailess verifies that the infrastructure is up to date and only updates it if necessary.

Remove your model. To delete the infrastructure, run the following command in your project's root directory:

ailess destroy

This deletes the infrastructure and the endpoint on AWS ECS.

How it works

Docker image: Ailess packages your model and its dependencies into a Docker image. It will try to detect the correct version and install CUDA and cuDNN if needed.

Cluster: Ailess creates an ECS cluster that sits behind an Application Load Balancer (ALB). This allows zero-downtime deployment and auto-scaling of the cluster. ECS also runs health checks on the endpoint and restarts the container if it fails, or rolls the deployment back if it fails to start.

Configuration

Accessing AWS resources: By default, your app will have no access to AWS resources. To allow your app to access AWS resources, edit the .ailess/iam_policy.json file and add the necessary permissions. The easiest way to do this is to use the IAM Policy Generator with the IAM Policy type.

To allow your app to access AWS resources while running locally with ailess serve, you will need to modify the docker-compose.yml file.

If your credentials are stored in ~/.aws/credentials, you can mount the credentials file into the container:

  services:
    ailess-test-project:
      environment:
        - PYTHONUNBUFFERED=1
      image: ailess-test-project:latest
      build: .
      platform: linux/amd64
      ports:
        - "5000:5000"
+     volumes:
+       - $HOME/.aws/credentials:/root/.aws/credentials:ro

If your credentials are stored in environment variables, you can pass them to the container:

  services:
    ailess-test-project:
      environment:
        - PYTHONUNBUFFERED=1
+       - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
+       - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
+       - AWS_REGION=${AWS_REGION}
      image: ailess-test-project:latest
      build: .
      platform: linux/amd64
      ports:
        - "5000:5000"

Examples

The examples repository contains several projects deployable with Ailess, showcasing different use cases.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
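The health-check prerequisite above matters in practice: if GET / does not return 200, the load balancer marks the target unhealthy and ECS restarts it. A minimal stdlib sketch of such an endpoint (any web framework works; port 5000 here only echoes the docker-compose example):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    """Answer the load balancer's GET / probe with HTTP 200."""

    def do_GET(self):
        if self.path == "/":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:  # anything else is not a health probe
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this demo

# Self-test: serve on an ephemeral port and probe it once, as the ALB would.
server = HTTPServer(("127.0.0.1", 0), HealthCheckHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
status = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_address[1]}/"
).status
server.shutdown()
print(status)  # -> 200
```

In a real deployment you would serve on the port configured during ailess init (e.g. HTTPServer(("0.0.0.0", 5000), ...)), alongside your model's prediction route.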
ailetic
ailetic-python

This is the official pip package for ailetic.
ailever
No description available on PyPI.
ailgorithmic
No description available on PyPI.
ailia
Please install the trial version from https://ailia.jp/. For more details, see https://github.com/axinc-ai/ailia-models/blob/master/TUTORIAL.md.
ailib
Python Artificial Intelligence Framework

Copyright (c) 2014-2019 Jeremie DECOCK (http://www.jdhp.org)

- Web site: http://www.ailib.io
- Online documentation: http://www.ailib.io/docs/
- Examples: http://www.ailib.io/docs/gallery/
- Notebooks: https://github.com/jeremiedecock/ailib-notebooks
- Source code: https://github.com/jeremiedecock/ailib
- Issue tracker: https://github.com/jeremiedecock/ailib/issues
- AILib on PyPI: https://pypi.org/project/ailib/
- AILib on Anaconda Cloud: https://anaconda.org/jdhp/ailib

Description

AILib is a set of open source frameworks for Artificial Intelligence (mostly machine learning and optimization). It contains (among others):

- a blackbox non-linear noisy optimization framework;
- a machine learning framework;
- a multistage optimization and Markov Decision Process framework.

Warning: this project is in beta stage.

Dependencies

- Python >= 3.0
- Numpy
- Matplotlib

Installation

GNU/Linux and MacOSX: you can install, upgrade, or uninstall AILib with these commands (in a terminal):

pip install --pre ailib
pip install --upgrade ailib
pip uninstall ailib

Or, if you have downloaded the AILib source code:

python3 setup.py install

Windows: you can install, upgrade, or uninstall AILib with these commands (in a command prompt):

py -m pip install --pre ailib
py -m pip install --upgrade ailib
py -m pip uninstall ailib

Or, if you have downloaded the AILib source code:

py setup.py install

Documentation

- Online documentation: http://ailib.readthedocs.org
- API documentation: http://ailib.readthedocs.org/en/latest/api.html

Example usage: TODO

Bug reports

To search for bugs or report them, please use the AILib bug tracker at https://github.com/jeremiedecock/ailib/issues

License

This project is provided under the terms and conditions of the MIT License.
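The blackbox optimization setting AILib targets — an objective you can only evaluate, not differentiate — can be illustrated with the simplest possible baseline, pure random search (an illustrative sketch, not AILib's actual API):

```python
import random

def random_search(objective, dim, bounds=(-1.0, 1.0), iterations=2000, seed=0):
    """Minimal blackbox optimizer: sample uniformly inside `bounds`
    and keep the best point seen. No gradient information is used,
    which is the point when the objective is blackbox (and possibly noisy)."""
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(iterations):
        x = [rng.uniform(*bounds) for _ in range(dim)]
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

sphere = lambda x: sum(xi * xi for xi in x)  # classic benchmark, minimum 0 at the origin
best_x, best_y = random_search(sphere, dim=2)
print(round(best_y, 3))
```

A serious blackbox framework replaces the uniform sampling with a smarter proposal scheme (e.g. evolution strategies) and averages repeated evaluations when the objective is noisy, but the evaluate-and-keep-the-best loop stays the same.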
ailib-full
AILib

AILib is a library for multiple languages to use various AI models that I find interesting.

Table of Contents

- Installation
- License

Installation

pip install ailib-full

License

ailib-full is distributed under the terms of the MIT license.
ai-lib-ls
Welcome, everyone, to my library.
ai-libs
No description available on PyPI.