package (string, lengths 1-122)
package-description (string, lengths 0-1.3M)
aigenerate
Chat GPT generator
aigenmol
No description available on PyPI.
aigens
aigensA TryCrime Python package for AI text generation.Requires a RapidAPI account.aigenslooks for your RapidAPI key in the$RAPID_API_KEYenvironment variable, so make sure to set it.Installationpip install aigensUsage Exampleimportaigens# Returns a list of ideasideas=aigens.generators.blog.generate_idea("A blog about my adventures in an internet gang.")forlineinideas:print(line.strip())# Returns a list of strings in the form of outlinesoutline=aigens.generators.blog.generate_outline("A blog about how hard it is to be a gay KKK member.")forlineinoutline:print(line.strip())Output ExampleThe example code above produces output similar to below:The Bully: A gang blog on bullying. 10 Ways to Increase Your Google SEO: A blog post around 10 ways to improve your SEO and your Google search engine rankings. My First Gang Meeting: A blog post about my first gang meeting. How to Become a Gang Leader: A blog post about how to become a gang leader and how to manage your gang. The Six Most Important Things I've Learned From Being in an Internet Gang: A blog about the six most important things I've learned from being in an internet gang. My Gang: From Gang to Family: A blog post about how to transition from street gang to family. My Life in a Gang: A blog about life in an internet gang. The Gang Life: A blog about an internet gang. 1. Introduction: The struggle of being a gay KKK member 2. Challenges of being a gay KKK member 3. The good things about being a gay KKK member 4. Out of all the groups in the KKK, the gay KKK members are the smallest 5. Conclusion: Gay members of the KKK are still persecuted, and it is hard to be one 1. How hard is it to be a gay KKK member? 2. What are some of the challenges that the KKK faces? 3. What are some of the challenges that the gay KKK members face? 4. What are some of the challenges that the black KKK members face? 5. What are some of the challenges that the black gay KKK members face? 6. What are some of the challenges that the gay black KKK members face? 7. What are some of the challenges that the black gay black KKK members face? Conclusion: Being in a KKK group is not easy, no matter what the race. 1. How hard is it to be a gay KKK member? 2. How does the gay KKK member feel about the gay KKK? 3. What about the gay KKK member's family? 4. What about the KKK members who are gay? Conclusion: Sometimes people don't know what they're missing. This blog post is about how hard it is to be a gay KKK member.
aigent
Failed to fetch description. HTTP Status Code: 404
aiges
No description available on PyPI.
aiges-python
Failed to fetch description. HTTP Status Code: 404
ai-getter
A CLI for Requesting AI-Generated Things!
Installation: pip install ai_getter
Usage:
aig help
aig image <prompt> [--clip] --num-images <num_images> [--save-path <path>] [--s3]
aig text <prompt> [--clip] [--save-path <path>] [--s3]
--clip will get the prompt from your clipboard's contents, in addition to <prompt> if you supply one. --s3 will upload the result to your AI_GETTER_S3_BUCKET; set the env var AI_GETTER_ALWAYS_SAVE_TO_S3=1 to save to S3 by default.
Environment Variables: OPENAI_ORG, OPENAI_TOKEN, AI_GETTER_SAVE_PATH, AI_GETTER_S3_BUCKET, AI_GETTER_ALWAYS_SAVE_TO_S3 (0 or 1).
Credentials: Make sure you have credentials in an ~/.aws directory if you want to upload any outputs to S3.
ai-ghostfunctions
AI Ghostfunctions
Write the docstring, call the function, get the results. Documentation: https://ai-ghostfunctions.readthedocs.io/
A ghostfunction leverages OpenAI GPT-4 to return sensible outputs from functions without defined logic. It packages the function name, docstring, and signature into a prompt which is sent to OpenAI's GPT-4 API, and transparently handles coercing the API result into the expected return type.
Installation: pip install ai-ghostfunctions
Quickstart: To see it in action, save your OpenAI API Key to the env var OPENAI_API_KEY and run:
>>> import os
>>> from ai_ghostfunctions import ghostfunction
>>> assert os.getenv("OPENAI_API_KEY")
>>> @ghostfunction
>>> def sanitize_messy_string(messy_string: str) -> list[dict]:
>>>     """Return a list of dicts that contain the data from `messy_string`."""
>>>     pass
>>> sanitize_messy_string(messy_string="""name|age|nickname
John Brighton Bradford, 34, J.B
Grace B., "24", Grace""")
[{'name': 'John Brighton Bradford', 'age': 34, 'nickname': 'J.B'}, {'name': 'Grace B.', 'age': 24, 'nickname': 'Grace'}]
###
>>> @ghostfunction
>>> def generate_random_words(n: int, startswith: str) -> list:
>>>     """Return a list of `n` random words that start with `startswith`."""
>>>     pass
>>> generate_random_words(n=4, startswith="goo")
['goofy', 'google', 'goose', 'goodness']
>>> generate_random_words(n=3, startswith="foot")
['football', 'footnote', 'footprint']
By default, a ghostfunction will dispatch a sensible prompt to OpenAI GPT-4 that includes the function name, the docstring, and function arguments, parse the result from OpenAI, and return it as the result of the function. Ghostfunctions will retry and send the data to gpt-3.5-turbo if it looks like the OPENAI_API_KEY does not have access to gpt-4.
Customizations: You can control the prompt:
>>> import os
>>> from ai_ghostfunctions import ghostfunction
>>> from ai_ghostfunctions.keywords import USER
>>> from ai_ghostfunctions.types import Message
>>> assert os.getenv("OPENAI_API_KEY")
>>> @ghostfunction(prompt_function=lambda f, **kwargs: [
>>>     Message(role=USER, content=f"tell me a slightly insulting joke about this function name: {f.__name__}.")
>>> ])
>>> def recursive_function_that_will_recurse():
>>>     """Recurse until you go crazy."""
>>>     pass
>>> recursive_function_that_will_recurse()
# 'Why did the programmer name his function "recursive_function_that_will_recurse"? Because he wanted to make absolutely sure that no one would confuse it for a function that actually does something useful.'
Heh. Not bad.
Prompts to gpt-4 and gpt-3.5-turbo are of type List[ai_ghostfunctions.types.Message]. See ghostfunctions.py for the default prompt.
License: AI Ghostfunctions is free and open source software.
Requirements: See pyproject.toml for a list of dependencies.
Contributing: Contributions are very welcome. To learn more about setting up a dev environment and contributing back to the project, see the Contributor Guide.
Issues: If you encounter any problems, please file an issue along with a detailed description.
Credits: This project was generated from a fork of @cjolowicz's Hypermodern Python Cookiecutter template.
aigit
AI-powered Git Helper (AIGIT)
AIGIT is a command-line interface (CLI) tool that uses OpenAI to generate commit messages for your Git repositories. It checks the status of your Git repository, generates a commit message based on the changes, and commits those changes.
Installation: To install AIGIT, you need Python 3.10 and pip installed on your system. Clone the repository and navigate to the project directory. Then, run the following command: pip install aigit. This will install the AIGIT package and its dependencies.
Configuration: Before using AIGIT, you need to configure your OpenAI API key. Run the following command and enter your API key when prompted: aig --config
Usage:
$ aig -h
usage: aig [-h] [--config] [-y]
AI-powered Git Helper
options:
  -h, --help  show this help message and exit
  --config    Configure API Key
  -y, --yes   Auto commit without asking for confirmation
To use AIGIT, navigate to your Git repository and run the following command: aig. AIGIT will check the status of your Git repository, generate a commit message, and ask for your confirmation before committing the changes. If you want to auto commit without confirmation, use the -y or --yes option: aig -y
Contributing: Contributions are welcome! Please feel free to submit a pull request.
License: AIGIT is open-source software licensed under the MIT license.
ai-git-commit
ai-git-commit (click, prompt-toolkit)
AI Git Commit: Generate Git commit messages automatically with AI. Built with coderj001/python-cli-tool.
📦 Installation: Install ai-git-commit with pip or pipx: pip install ai-git-commit or pipx install ai-git-commit
🚀 Usage: When you're ready to commit, run the CLI in your terminal: ai-git-commit. We suggest using an alias, alias commit=ai-git-commit, and adding it to .bashrc or .zshrc.
😮 Demo
Todo: Add support for OpenAI to generate more accurate and relevant commit messages. Add support for commit message customization. Implement Git hooks to automate the commit process even further. Add the ability to select from multiple AI models to generate commit messages. Improve the user interface with more features and options.
aigofirst
Used to initialize baseUtil; depends on baseUtil.
aigoo
No description available on PyPI.
aigov-libs
aigov_libs
A library for tracking and summarizing data transformations as data lineage.
Documentation: TBD
License: This library is delivered under the International License Agreement for Non-Warranted Programs.
aigpy
No description available on PyPI.
aigpytest
No description available on PyPI.
aigpy-test
No description available on PyPI.
ai-grams-pm
AI Grams PM Project (Explore the docs »)
About The Project: This package belongs to AI Grams, an Italian startup working in the field of European power markets.
Getting Started: To install the ai_grams_pm package, type: pip3 install ai-grams-pm
Contact: AI Grams [email protected] - https://aigrams.herokuapp.com/
aigremlins
AIGremlins
AI Gremlin - Automatic Error Correction with OpenAI. AI Gremlin is a Python module that leverages OpenAI's GPT-3 to automatically fix errors in your Python code. The module contains two main classes, GremlinTest and AIgremlin, which work together to handle exceptions and get suggestions for fixes from OpenAI's API.
Note: Using this code in real applications is a terrible idea for many reasons, including: starting the machine revolution by giving an AI the ability to execute functions without anyone checking; there is no guarantee that the AI-generated function won't break anything else or delete something important; and many other reasons. Don't even think about using this in production.
Installation: pip install aigremlins
Example usage:
from aigremlins import AIGremlin
# Initialize AI Gremlin instance with your OpenAI API key
gremlin = AIGremlin(api_key="your_openai_api_key", verbose=True)
# Define the function with an error
@gremlin.ai_backstop
def buggy_function(a, b):
    """This function should always return a value"""
    return a / b
# Call the function with parameters that cause an exception
result = buggy_function(4, 0)
Features: Automatically detects and corrects errors in your Python functions using OpenAI's GPT-3. Tries to stay as close as possible to the intent of the original function. Dynamically adds and executes the new fixed function in the original namespace. Customizable parameters to control the number of iterations, token limit, temperature settings, and verbosity of the output. Ability to add custom instructions for OpenAI's API.
How It Works: The AIgremlin class wraps your target function with a decorator called ai_backstop. When the target function encounters an exception, the ai_backstop decorator captures the error, function code, and parameters. The ai_backstop decorator formats a prompt for OpenAI's API to get a suggestion for fixing the function. The suggestion is received from OpenAI's API, and a new fixed function is generated. The fixed function is added to the original namespace and executed. The process continues until the fixed function executes without errors or the maximum number of iterations is reached.
Usage: Import the AIgremlin class from the module. Instantiate an AIgremlin object with your OpenAI API key and other optional parameters (e.g., max_iterations, max_tokens, temperature, temperature_escalation, verbose, and instructions). Define your function and apply the ai_backstop decorator to it. Call the decorated function as you normally would. If an exception is encountered, the AI Gremlin will automatically attempt to fix it using OpenAI's API.
Options: temperature -> default temperature of the model used. temperature escalation -> the model can become increasingly creative; should be somewhere in the range of 0.1-0.4, as the max temperature is 1. instructions -> you can give additional instructions to the AI to take into consideration.
aigyminsper
AIGYM
The goal of this library is to provide a set of tools to help you learn Artificial Intelligence.
How to set up the environment: To avoid any configuration problems, we recommend creating a virtual environment with Python:
python3 -m virtualenv venv
source venv/bin/activate
python -m pip install --upgrade pip
pip install -r requirements.txt
To quit the virtual environment, type deactivate. If you already have the virtual environment configured, then type source venv/bin/activate.
How to test the project: To run the tests, please type in the root directory:
export PYTHONPATH=.
pytest tests
How to upgrade the package: If you need to upgrade the package, please follow these steps: change what you need in the code; test it :smile:; describe what you did in the Changelog.md file; change the setup.py file, in particular the version attribute; commit and push the changes on the main branch; then do the same on the stable branch. We have two workflows in GitHub Actions: one to publish the package to PyPI when you push something to the stable branch (this workflow does the same as: python setup.py sdist; twine upload dist/*), and one to publish the website (documentation) to GitHub Pages.
How to install the package: pip install aigyminsper
Change log: The change log of this library is in the Changelog.md file.
Documentation: The documentation of this library is in the docs folder. We are using MkDocs to generate the documentation. To see the documentation on your local machine, please type: mkdocs serve. To deploy a new version of the documentation, please merge the content into the stable branch:
git checkout stable
git merge master
git push
There is a GitHub Action that will deploy the documentation to GitHub Pages.
aih
No description available on PyPI.
aiha
🦉 AI Hardware Advisor
AIHA, Guiding AI with Wisdom and Knowledge. AIHA is a tool to help you choose the best hardware for your AI project. With AIHA you will be able to wisely choose the resources you need for inference or training of any model on the Hugging Face Hub. AIHA is currently under construction. Feel free to contribute to the project by opening an issue or a pull request. If you find the project useful, please consider giving it a star ⭐️.
aihandler
AI HandlerThis is a simple framework for running AI models. It makes use of the huggingface API which gives you a queue, threading, a simple API, and the ability to run Stable Diffusion and LLMs seamlessly from your local hardware.This is not intended to be used as a standalone application.It can easily be extended and used to power interfaces or it can be run from the command line.AI Handler is a work in progress. It powers two projects at the moment, but may not be ready for general use.InstallationThis is a work in progress.Pre-requisitesSystem requirementsWindows 10+Python 3.10.8pip 23.0.1CUDA toolkit 11.7CUDNN 8.6.0.163Cuda capable GPU16gb+ ramFor Windows, follow windows branch instructionsInstallpip install https://github.com/w4ffl35/diffusers/archive/refs/tags/v0.15.0.ckpt_fix_0.0.1.tar.gz pip install aihandlerOptionalThese are optional instructions for installing TensorRT and Deepspeed for WindowsInstall Tensor RT:Download TensorRT-8.4.3.1.Windows10.x86_64.cuda-11.6.cudnn8.4Git clone TensorRT 8.4.3.1Follow their instructions to build TensorRT-8.4.3.1 python wheelInstall TensorRTpip install tensorrt-*.whlInstall Deepspeed:Git clone Deepspeed 0.8.1Follow their instructions to build Deepspeed python wheelInstall Deepspeed `pip install deepspeed-*.whlEnvironment variablesAIRUNNER_ENVIRONMENT-devorprod. Defaults todev. This controls the LOG_LEVELLOG_LEVEL-FATALfor production,DEBUGfor development. Override this to force a log levelHuggingface variablesOffline modeThese environment variables keep you offline until you need to download a model. This prevents unwanted online access and speeds up usage of huggingface libraries.DISABLE_TELEMETRYKeep this set to 1 at all times. Huggingface collects minimal telemetry when downloading a model from their repository but this will keep it disabled.See more info in this github threadHF_HUB_OFFLINEWhen loading a diffusers model, huggingface libraries will attempt to download an updated cache before running the model. This prevents that check from happening (long with a boolean passed toload_pretrainedsee the runner.py file for examples)TRANSFORMERS_OFFLINESimilar toHF_HUB_OFFLINEbut for transformers models
aihandlerwindows
AI HandlerThis is a simple framework for running AI models. It makes use of the huggingface API which gives you a queue, threading, a simple API, and the ability to run Stable Diffusion and LLMs seamlessly from your local hardware.This is not intended to be used as a standalone application.It can easily be extended and used to power interfaces or it can be run from the command line.AI Handler is a work in progress. It powers two projects at the moment, but may not be ready for general use.InstallationThis is a work in progress.Pre-requisitesSystem requirementsWindows 10+Python 3.10.8pip 23.0.1CUDA toolkit 11.7CUDNN 8.6.0.163Cuda capable GPU16gb+ ramInstallpip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/cu117 pip install https://github.com/w4ffl35/diffusers/archive/refs/tags/v0.14.0.ckpt_fix.tar.gz pip install https://github.com/w4ffl35/transformers/archive/refs/tags/tensor_fix-v1.0.2.tar.gz pip install https://github.com/acpopescu/bitsandbytes/releases/download/v0.37.2-win.0/bitsandbytes-0.37.2-py3-none-any.whl pip install aihandlerwindowsOptionalThese are optional instructions for installing TensorRT and Deepspeed for WindowsInstall Tensor RT:Download TensorRT-8.4.3.1.Windows10.x86_64.cuda-11.6.cudnn8.4Git clone TensorRT 8.4.3.1Follow their instructions to build TensorRT-8.4.3.1 python wheelInstall TensorRTpip install tensorrt-*.whlInstall Deepspeed:Git clone Deepspeed 0.8.1Follow their instructions to build Deepspeed python wheelInstall Deepspeed `pip install deepspeed-*.whlEnvironment variablesAIRUNNER_ENVIRONMENT-devorprod. Defaults todev. This controls the LOG_LEVELLOG_LEVEL-FATALfor production,DEBUGfor development. Override this to force a log levelHuggingface variablesOffline modeThese environment variables keep you offline until you need to download a model. This prevents unwanted online access and speeds up usage of huggingface libraries.DISABLE_TELEMETRYKeep this set to 1 at all times. Huggingface collects minimal telemetry when downloading a model from their repository but this will keep it disabled.See more info in this github threadHF_HUB_OFFLINEWhen loading a diffusers model, huggingface libraries will attempt to download an updated cache before running the model. This prevents that check from happening (long with a boolean passed toload_pretrainedsee the runner.py file for examples)TRANSFORMERS_OFFLINESimilar toHF_HUB_OFFLINEbut for transformers models
ai-harness
AI HarnessIntroductionThis project would like to supply some convenient tools for the machine learning and deep learning. Current features:XMLConfiguration: for loading a configuration defined in xml files into a Python ObjectArguments: Mapping a Python Object to the arguments of argparseinspector: Some convenient method for class/objectexecutors: Some convenient ProcessExecutorfileutils: DirectoryNavigator, FileReadPipeLineothers:Log2019.4.18, version: 0.3.0: Added distributed training tools for python2019.4.23, version: 0.3.5: Added a Json file Reader2019.4.24, version: 0.3.6: Added a data utils for processing data2019.4.26, version: 0.3.7 Added a data utils for processing data for zip file2019.4.28, version: 0.3.8 Added QueueExecutorExamples1. XMLConfiguration(1) Define the configuration in xml file like:<?xml version="1.0" encoding="UTF-8" ?><configuration><argname="name"default="TestName"help="Name Test Help"/><argname="age"default="20"help="Age Test Help"/><groupname="address"><argname="home"default="shanghai"help="Home test Help"/><argname="phone"default="136"help="Phone test Help"/></group><groupname="education"><argname="school"default="beijing"help="school test Help"/><argname="grade"default="master"help="grade test Help"/></group></configuration>you can define multiple xml configuration files, and if the name is same, the value of the later will cover the previous.(3) Define the configuration class like:from aiharness.configuration import configclass,field @configclass class Address: phone: int = field(139, "phone help") home: str = field("beijing", "phone help") @configclass class Education: school: str = field('ustb', "phone help") grade: str = field("master", "phone help") @configclass class Config: name: str = field("test", "name help") age: str = field(10, "age help") address: Address = Address() education: Education = Education()(3) Load the xml configuration into python object as folling:from aiharness.configuration import XmlConfiguration config:Config=XmlConfiguration(Config).load(['configuration1.xml','configuration2.xml'])Arguments ExampleGenerally, we use argparse as following:import argparse parser = argparse.ArgumentParser() parser.add_argument("--name",default='TEST',help='name help') parser.add_argument("--age",default=18,help='age help') arguments=parser.parse_args()And you can got a arguments object.Here give an example showing how to load a xml configuration and set to argparse arguments and to parse the arguments into a object you defined. And here the Config Class and 'configuration.xml' are same with those of the Configuration example.Firstly, in fact, the Config Class instead of the codes of 'add_argument' of the argparse.ArgumentParser. Secondly, you can put the configuration into a xml file so that you can change it conveniently.from aiharness.configuration import Arguments, XmlConfiguration config: Config = XmlConfiguration(Config).load(['configuration.xml']) arguments = Arguments(config) config: Config = arguments.parse()
ai-harness-sdk
AI-HarnessAI-Harness is a Python package designed to showcase and utilize the capabilities of the AI Harness platform. This SDK provides a set of tools and functionalities to interact with the AI Harness ecosystem and integrate AI models seamlessly.InstallationYou can install AI-Harness using pip:pipinstallai-harnessUsage# Import necessary modulesfromdotenvimportload_dotenv fromai_harness.documents.documentimportDocuments# Load environment variables from a .env fileload_dotenv()# Initialize a Documents instancedoc_instance=Documents()# To Create a collection in the AI Harness platformdoc_instance.create_collection(collection_name="sample_collection")# Upload documents to the created collectiondoc_instance.upload_documents(doc_path=r"document_path",# Specify the path to the documentscollection_id="",# Provide the ID of the target collectioningest_with_google="false"# Indicate whether to ingest documents using Google API or open source pdf loader)# import applets from agentsfromai_harness.agents.applet_nameimportapplet_name# run a Conversation appletresult=Conversation(prompt="Your Prompt").run()# run a Place Making appletresult=PlaceMaking(site_name="write site name here").run()# run a Query Generator appletresult=QueryGenerator(prompt="Your Prompt",dialect="dialect name",schema="schema",db_type="db type(relational or non-relational)",additional_filters="any additional filters").run()# run a Site Analysis appletresult=SiteAnalysis(lat="latitude of site",# either use coordinates or site IDlong="longitude of site",siteId="URA site id of site",# using URA site IDproperties=["stations","market_analysis","competitive_analysis","airport","seaport","zones","history","demographics"],# list of features that can be usedmarket_analysis="residential",# market analysis (residential , commercial or industrial)competitive_analysis="industrial"# competitive analysis (residential , commercial or industrial)).run()# run JSON TO JSON appletresult=JsonToJson(input_json="input json",output_json="output json",).run()# run Query Retrieval appletresult=QaRetrieval(prompt="prompt",document_id="id of document uploaded on ai harness",).run()
aih-codegen
No description available on PyPI.
aih-dev
No description available on PyPI.
aih-diendn
No description available on PyPI.
aih-dynamodb
No description available on PyPI.
ai-header-generator
AI Header Generator
AI Header Generator is a Python package that generates headers for your code files using OpenAI's GPT-4. It supports various file types and can be easily customized via a configuration file.
Software Requirements: Python 3.8 or higher; pip (package installer for Python); OpenAI API Key; Git.
Installation Instructions: Clone the repository to your local machine: git clone https://github.com/your-username/your-repo.git. Navigate to the project directory: cd ai-generated-headers. Create a virtual environment: python -m venv venv. Activate the virtual environment: source venv/bin/activate (for Unix-based systems) or venv\Scripts\activate (for Windows). Install the project dependencies: pip install -r requirements.txt. Copy the config.ini.sample file to config.ini: cp config.ini.sample config.ini. Open config.ini and add your OpenAI API Key. You're ready to go!
Usage: To generate headers for your code files, run: ai-header-generator. To generate a README file based on the generated headers, use the --readme option: ai-header-generator --readme. To use a custom configuration file, use the --config option: ai-header-generator --config path/to/your/config.ini
Configuration: The config.ini file contains the following configuration options: api_key: your OpenAI API key. folder_path: the path to the folder containing the code files to be analyzed. analysis_file_extension: the file extension of the code files to be analyzed (e.g., .sql). template_file: the path to the JSON file containing the template for generating headers. Modify these options to customize the behavior of the AI Header Generator.
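As a hedged illustration of the config.ini options listed above, the sketch below reads them with Python's configparser. The "[settings]" section name is an assumption (the sample file's layout is not shown here), so check config.ini.sample for the real section name.
# Hedged sketch: load the config.ini keys described above with configparser.
# The "[settings]" section name is hypothetical; the key names come from the description.
import configparser

config = configparser.ConfigParser()
config.read("config.ini")

section = config["settings"]                      # hypothetical section name
api_key = section["api_key"]                      # OpenAI API key
folder_path = section["folder_path"]              # folder with code files to analyze
extension = section["analysis_file_extension"]    # e.g. ".sql"
template_file = section["template_file"]          # JSON template used for headers
print(folder_path, extension, template_file)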
aihelper
No description available on PyPI.
aihems
No description available on PyPI.
aihems-pkg
# AIHEMS Example Package
This is an AIHEMS example package.
aihero
AI Hero Python SDKThe AI Hero Python SDK offers a powerful set of tools for managing and developing AI models. With our latest release, you can easily manage prompt templates and versions, allowing for easier and more effective model development, testing, and deployment.AI Hero's PromptCraft and PromptStash helps you with:Prompt Creation: Define and format AI's interaction templates.Project Initialization: Quickly set up projects with a dedicated ID and API key.Version Control for Prompts: Easily stash, manage, and recall prompt variants.Track and Monitor AI Interactions: Document and visualize every AI interaction in real-time.Analysis and Oversight: Observe AI's steps and decision-making.Evaluation: Implement automated testing for AI's responses and compare your previous runs.Feedback System: Seamlessly log user feedback on AI outputs.InstallationInstall AI Hero using pip:pipinstallaihero==0.3.1PromptCraftAre you a product manager navigating the dynamic AI space? Imagine having the power to track edits in a shared document, but for AI prompts. With prompt versioning, we can keep an eye on alterations, dip back into past iterations, and streamline updates. Ever wished you could pull up an older version of an AI prompt for clarity or comparison without diving deep into the technical weeds? "PromptCraft" is your ally, tailor-made to simplify the journey for those on the product side of things.How to use itIn a folder for your project, create a file.envcontaining the following API Keys:OPENAI_API_KEY=<Your OpenAI API Key> AI_HERO_PROJECT_ID=<Project ID from your project the AI Hero Website> AI_HERO_PROJECT_API_KEY=<API Key for the project>Running PromptCraftIf you're using the Completions API:aihero promptcraft completionsOR if you are using the Chat Completions API:aihero promptcraft chat_completionsThere's more coming soon!We're building a notebook for Retrieval Augmented Generation.Hit us up if you'd like a custom demo [email protected].
aihttp
aihttpA harmless package to prevent typosquatting attacks
aihub
No description available on PyPI.
ai-hub
AI_HUB
AI utils for developers, such as sending a notice/message via WeChat when model training is over. Bind the WeChat Official Account (AI_HUB). A small utility inserted into your code that can promptly send a WeChat message to yourself through the official account when model training finishes, improving research efficiency.
inferServer: serve your AI model as an API, compatible with the Tianchi evaluation. With a few simple steps you can turn your trained model into a service API, with support for Tianchi competition stream evaluation.
INSTALL: pip install ai-hub
SAMPLE
NOTICE:
from ai_hub import notice
# Get your personal openid (e.g. oM8pVuBWl8Rw_vFz7rZNgeO4T8H8) from the AGIHub WeChat official account; replace it with your own openid
nc = notice("oM8pVuBWl8Rw_vFz7rZNgeO4T8H8")
# Send a message to yourself through the AGIHub official account
nc.sendmsg("hi,AI_HUB.I am su")
inferServer:
'''
Dependency: pip install ai-hub  # (version >= 0.1.7)
Test case: the model is y = 2 * x, the request data is JSON: {"img": 3}
-----------
POST request: curl localhost:8080/tccapi -X POST -d '{"img":3}'
Returned result: 6
'''
from ai_hub import inferServer
import json

class myInfer(inferServer):
    def __init__(self, model):
        super().__init__(model)
        print("init_myInfer")

    # data pre-processing
    def pre_process(self, data):
        print("my_pre_process")
        # json process
        json_data = json.loads(data.decode('utf-8'))
        img = json_data.get("img")
        print("processed data: ", img)
        return img

    # data post-processing
    def post_process(self, data):
        print("post_process")
        processed_data = data
        return processed_data

    # model prediction: by default self.model(preprocess_data) is executed, so this usually does not need overriding
    # override it if you need custom behavior
    # def pridect(self, data):
    #     ret = self.model(data)
    #     return ret

if __name__ == "__main__":
    mymodel = lambda x: x * 2
    my_infer = myInfer(mymodel)
    my_infer.run(debuge=True)  # defaults to ("127.0.0.1", 80); the port can be customized, but keep the default for Tianchi competitions; debuge=True gives more error information
TccProgressBar:
import time
from ai_hub import TccProgressBar
# Define a progress bar named "training"; it is displayed on the TCC competition platform (tccBar_show=false does not affect the locally printed progress bar)
progress = TccProgressBar(title="training", tccBar_show=True)
for j in progress(range(100)):
    time.sleep(0.1)
TccTensorboard:
from ai_hub import Logger
# Logger usage is the same as the tensorboard logger package
info = {'loss': loss.data[0], 'accuracy': accuracy.data[0]}
for tag, value in info.items():
    logger.scalar_summary(tag, value, step)
Get your OPENID: 1. Scan and follow the official account AGIHub. 2. Send "openid" to the official account to obtain your openid.
aihubcli
Project Structure for MLflow-integrated ML Projects
This CLI tool generates the following directory structure for quickstart ML projects.
Installation: pip install aihubcli
Example use: aihubcli create myProject
myProject/
│
├── input/
│   ├── raw/          <-- Raw data here
│   ├── interim/      <-- Any intermediate data, to pause and continue experiments
│   └── processed/    <-- Processed data ready for ML pipeline
│
├── output/
│   ├── models/       <-- Model pickle or model weights stored here
│   ├── artifacts/    <-- Serialized artifacts like LabelEncoder, Vectorizer etc
│   ├── figures/      <-- All plots and visualizations go here
│   └── results/      <-- If the results need to be stored for review, save here
│
├── notebooks/        <-- All notebooks and experiments reside here
│   ├── eda_plots.ipynb    <-- ┌───────────────────────────────────────────┐
│   ├── ml_rnn.ipynb       <-- │ free to name notebooks any way you prefer │
│   └── ml_seq2seq.ipynb   <-- └───────────────────────────────────────────┘
│
├── src/              <-- Final program, with training and prediction pipeline
│   ├── __init__.py   <-- Makes src a Python module
│   ├── preprocess.py <-- code related to preprocessing the data and storing it in input/processed/
│   ├── model.py      <-- model definition here, can be used in train or prediction
│   ├── train.py      <-- all code related to training the model goes here
│   ├── hyperopt.py   <-- hyperparameter optimization related code
│   ├── package.py    <-- packaging the trained model with preprocessing logic for MLflow
│   ├── predict.py    <-- prediction logic, usually loads the model from the MLflow registry and predicts
│   └── server.py     <-- any API interface like Flask etc. Create as needed
│
README.md             <-- Description and instructions about the project
MLProject             <-- MLflow project file, if you want to use this directory as an MLflow project
requirements.txt      <-- python dependencies
config.yml            <-- configuration key-value pairs in yaml format
aihub-deep
Aihub
A Python package for all deep learning developers.
Usage: Running the following query in a Python terminal will provide you with details: import aihub
aihub-things
No description available on PyPI.
aihwkit
IBM Analog Hardware Acceleration KitDescriptionIBM Analog Hardware Acceleration Kitis an open source Python toolkit for exploring and using the capabilities of in-memory computing devices in the context of artificial intelligence.:warning: This library is currently in beta and under active development. Please be mindful of potential issues and keep an eye for improvements, new features and bug fixes in upcoming versions.The toolkit consists of two main components:Pytorch integrationA series of primitives and features that allow using the toolkit withinPyTorch:Analog neural network modules (fully connected layer, 1d/2d/3d convolution layers, LSTM layer, sequential container).Analog training using torch training workflow:Analog torch optimizers (SGD).Analog in-situ training using customizable device models and algorithms (Tiki-Taka).Analog inference using torch inference workflow:State-of-the-art statistical model of a phase-change memory (PCM) array calibrated on hardware measurements from a 1 million PCM devices chip.Hardware-aware training with hardware non-idealities and noise included in the forward pass to make the trained models more robust during inference on Analog hardware.Analog devices simulatorA high-performant (CUDA-capable) C++ simulator that allows for simulating a wide range of analog devices and crossbar configurations by using abstract functional models of material characteristics with adjustable parameters. Features include:Forward pass output-referred noise and device fluctuations, as well as adjustable ADC and DAC discretization and boundsStochastic update pulse trains for rows and columns with finite weight update size per pulse coincidenceDevice-to-device systematic variations, cycle-to-cycle noise and adjustable asymmetry during analog updateAdjustable device behavior for exploration of material specifications for training and inferenceState-of-the-art dynamic input scaling, bound management, and update management schemesOther featuresAlong with the two main components, the toolkit includes other functionalities such as:A library of device presets that are calibrated to real hardware data and based on models in the literature, along with a configuration that specifies a particular device and optimizer choice.A module for executing high-level use cases ("experiments"), such as neural network training with minimal code overhead.A utility to automatically convert a downloaded model (e.g., pre-trained) to its equivalent Analog model by replacing all linear/conv layers to Analog layers (e.g., for convenient hardware-aware training).Integration with theAIHW Composerplatform, a no-code web experience that allows executing experiments in the cloud.How to cite?In case you are using theIBM Analog Hardware Acceleration Kitfor your research, please cite the AICAS21 paper that describes the toolkit:Malte J. 
Rasch, Diego Moreda, Tayfun Gokmen, Manuel Le Gallo, Fabio Carta, Cindy Goldberg, Kaoutar El Maghraoui, Abu Sebastian, Vijay Narayanan."A flexible and fast PyTorch toolkit for simulating training and inference on analog crossbar arrays"(2021 IEEE 3rd International Conference on Artificial Intelligence Circuits and Systems)UsageTraining examplefromtorchimportTensorfromtorch.nn.functionalimportmse_loss# Import the aihwkit constructs.fromaihwkit.nnimportAnalogLinearfromaihwkit.optimimportAnalogSGDx=Tensor([[0.1,0.2,0.4,0.3],[0.2,0.1,0.1,0.3]])y=Tensor([[1.0,0.5],[0.7,0.3]])# Define a network using a single Analog layer.model=AnalogLinear(4,2)# Use the analog-aware stochastic gradient descent optimizer.opt=AnalogSGD(model.parameters(),lr=0.1)opt.regroup_param_groups(model)# Train the network.forepochinrange(10):pred=model(x)loss=mse_loss(pred,y)loss.backward()opt.step()print('Loss error:{:.16f}'.format(loss))You can find more examples in theexamples/folder of the project, and more information about the library in thedocumentation. Please note that the examples have some additional dependencies - you can install them viapip install -r requirements-examples.txt. You can find interactive notebooks and tutorials in thenotebooks/directory.Further readingWe also recommend to take a look at the tutorial article that describes the usage of the toolkit that can be found here:Manuel Le Gallo, Corey Lammie, Julian Buechel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch."Using the IBM Analog In-Memory Hardware Acceleration Kit for Neural Network Training and Inference"(APL Machine Learning Journal:1(4) 2023)What is Analog AI?In traditional hardware architecture, computation and memory are siloed in different locations. Information is moved back and forth between computation and memory units every time an operation is performed, creating a limitation called thevon Neumann bottleneck.Analog AI delivers radical performance improvements by combining compute and memory in a single device, eliminating the von Neumann bottleneck. By leveraging the physical properties of memory devices, computation happens at the same place where the data is stored. Such in-memory computing hardware increases the speed and energy efficiency needed for next-generation AI workloads.What is an in-memory computing chip?An in-memory computing chip typically consists of multiple arrays of memory devices that communicate with each other. Many types of memory devices such asphase-change memory(PCM),resistive random-access memory(RRAM), andFlash memorycan be used for in-memory computing.Memory devices have the ability to store synaptic weights in their analog charge (Flash) or conductance (PCM, RRAM) state. When these devices are arranged in a crossbar configuration, it allows to perform an analog matrix-vector multiplication in a single time step, exploiting the advantages of analog storage capability andKirchhoff’s circuits laws. You can learn more about it in ouronline demo.In deep learning, data propagation through multiple layers of a neural network involves a sequence of matrix multiplications, as each layer can be represented as a matrix of synaptic weights. The devices are arranged in multiple crossbar arrays, creating an artificial neural network where all matrix multiplications are performed in-place in an analog manner. 
This structure allows deep learning models to run at reduced energy consumption.
Awards and Media Mentions: IBM Research blog: Open-sourcing analog AI simulation: https://research.ibm.com/blog/analog-ai-for-efficient-computing. We are proud to share that the AIHWKIT and the companion cloud composer received the IEEE OPEN SOURCE SCIENCE award in 2023.
Installation
Installing from PyPI: The preferred way to install this package is by using the Python package index: pip install aihwkit
Conda-based Installation: There is a conda package for aihwkit available in conda-forge. It can be installed in a conda environment running on Linux or WSL on a Windows system.
CPU: conda install -c conda-forge aihwkit
GPU: conda install -c conda-forge aihwkit-gpu
If you encounter any issues during download or want to compile the package for your environment, please take a look at the advanced installation guide. That section describes the additional libraries and tools required for compiling the sources using a build system based on cmake.
Docker Installation: For GPU support, you can also build a docker container following the CUDA Dockerfile instructions. You can then run a GPU-enabled docker container using the following command from your project directory: docker run --rm -it --gpus all -v $(pwd):$HOME --name aihwkit aihwkit:cuda bash
Authors: IBM Research has developed IBM Analog Hardware Acceleration Kit, with Malte Rasch, Diego Moreda, Fabio Carta, Julian Büchel, Corey Lammie, Charles Mackin, Kim Tran, Tayfun Gokmen, Manuel Le Gallo-Bourdeau, and Kaoutar El Maghraoui as the initial core authors, along with many contributors. You can contact us by opening a new issue in the repository or alternatively at the [email protected] address.
License: This project is licensed under Apache License 2.0.
aiia-django-rest-swagger
Django REST Swagger
An API documentation generator for Swagger UI and Django REST Framework.
Installation from pip: pip install aiia_django_rest_swagger
Project @ https://github.com/aiia-admin/aiia_django_rest_swagger
Docs @ https://aiia_django_rest_swagger.readthedocs.io/
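Since this package is a Swagger UI generator for Django REST Framework, a minimal wiring sketch follows. It assumes the fork keeps the upstream django-rest-swagger entry points (the rest_framework_swagger app and its get_swagger_view helper); check the project docs above for the fork's actual module names.
# Hedged sketch of exposing the Swagger UI in a Django project's urls.py.
# Assumption: the fork reuses the upstream 'rest_framework_swagger' app name and
# get_swagger_view helper; add the app to INSTALLED_APPS in settings.py first.
from django.urls import path
from rest_framework_swagger.views import get_swagger_view  # assumed upstream-compatible import

schema_view = get_swagger_view(title="API Documentation")

urlpatterns = [
    path("docs/", schema_view),  # serve the interactive Swagger UI at /docs/
]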
aiida
AiiDA (www.aiida.net) is a workflow manager for computational science with a strong focus on provenance, performance and extensibility.
Deprecated: This metapackage for AiiDA has been deprecated as of v1.0 and is no longer being maintained. If you want to install AiiDA, please install the aiida-core package instead.
Acknowledgements: AiiDA is a NumFOCUS Affiliated Project and supported by the MARVEL National Centre of Competence in Research, the MaX European Centre of Excellence and by a number of other supporting projects, partners and institutions, whose complete list is available on the AiiDA website acknowledgements page.
aiida-abinit
aiida-abinit
This is an AiiDA plugin for ABINIT.
Installation: To install from PyPI, simply execute: pip install aiida-abinit. To install from source, execute:
git clone https://github.com/sponce24/aiida-abinit
pip install aiida-abinit
Pseudopotentials: Pseudopotentials are installed and managed through the aiida-pseudo plugin. The easiest way to install pseudopotentials is to install a version of the PseudoDojo through the CLI of aiida-pseudo. To install the default PseudoDojo version, run: aiida-pseudo install pseudo-dojo. List the installed pseudopotential families with the command aiida-pseudo list.
Acknowledgements: This work was supported by the European Union's Horizon 2020 Research and Innovation Programme, under the Marie Skłodowska-Curie Grant Agreement SELPH2D No. 839217.
License: MIT
Contact: aiida-abinit is developed and maintained by Samuel Poncé [email protected], Petretto, Austin [email protected]
aiida-aimall
!This README and all documentation is a work in progress!Copyright noticeThis repository contains modified versions of the calculations and parsers presented inAiida-Gaussian. Copyright (c) 2020 Kristjan Eimre. The modifications basically amount to adding the wfx file to the retrieved nodes and adding some groups/extras to calculation output.Also, the (incomplete) testing framework is heavily influenced by the infrastructure presented inaiida-quantumespresso. Copyright (c), 2015-2020, ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (Theory and Simulation of Materials (THEOS) and National Centre for Computational Design and Discovery of Novel Materials (NCCR MARVEL))aiida-aimallA plugin to interface AIMAll with AiiDARepository contents.github/:Github Actionsconfigurationworkflows/ci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryconfig/config files for testing/docs environmentcode-aim.yamlconfig file for building precommit and test envscode-gwfx.yamlconfig file for building precommit and test envsprofile.yamlconfig file for aiida profileprofile.yamlconfig file for localhost computerprofile.yamlsetup file for localhost computeraiida_aimall/: The main source code of the plugin packagedata/: A newAimqbParametersdata class, used as input to theAimqbCalculationCalcJobclasscalculations.py: A newAimqbCalculationCalcJobclass, andGaussianWFXCalculation, a modified version ofGaussianCalculationfromAiiDA Gaussianparsers.py: A newParserfor theAimqbCalculation, andGaussianWFXParser, a modified version ofGaussianBaseParserfromAiiDA Gaussianworkchains.py: NewWorkChains.MultiFragmentWorkChainto fragment molecules using cml files from the Retrievium database and submit Gaussian calculations for the fragments using functions infrag_functionsfromsubproptools GithubG16OptWorkchainto take output fromMultiFragmentWorkChainand submit Gaussian optimization calculationsAimAllReorWorkChainto runAimqbCalculationon output fromGaussianWFXCalculations, then reorient to coordinate systems defined insubreorfromsubproptools Githubcontrollers.py: Workflow controllers to limit number of running jobs on localhost computers.AimReorSubmissionControllerto controlAimReorWorkChains. These useparent_group_labelfor the wavefunction file nodes fromGaussianWFXCalculationsAimAllSubmissionControllerto controlAimqbCalculations``. These useparent_group_labelfor the wavefunction file nodes fromGaussianWFXCalculation`sGaussianSubmissionControllerto controlGaussianWFXCalculations. This is mostly intended to have a arbitrarily large number of max concurrents and scan for output structures ofAimReorWorkchains to submit to a remote clusterdocs/: Source code of documentation forRead the Docsexamples/: An example of how to link the four controllers in an overall workflowtests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest.conftest.py: Configuration of fixtures forpytest.gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. 
Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the Docs.isort.cfg: Configuration to make isort and black precommit actions compatibleLICENSE: License for your pluginREADME.md: This filepyproject.toml: Python package metadata for registration onPyPIand theAiiDA plugin registry(including entry points)FeaturesFeature specificityMany of the workflows provided are specific to my field of study, but the calculations and parsers should be widely useful. Again, as many things designed here were specific to my usage, there are some quirks that must be used at this time, but as I progress with this, I'll endeavour to replace them as optional parts.Many calculations and parsers store results in groups. I have used this, due to my usage of the FromGroupSubmissionController from aiida-submission-controller. I wrote for wfx files to be stored in a group in a Gaussian calculation because then a submission controller looks for wfx files in that group.What this means for the general user is that currently, some nodes are going to be stored in groups, and some group labels are to be provided to certain CalcJobsFor similar reasons as above, many Parsers/CalcJobs add extras to the node, typically SMILES in my caseSome calculations then, require an extra label (frag_label or fragment_label) to be provided as input to tag the outputFeature ListAimqbParameters Data class to validate command line parameters used to run AIMAll calculationsAimqbParameters=DataFactory('aimall.aimqb')aim_params=AimqbParameters(parameter_dict={"naat":2,"nproc":2,"atlaprhocps":True})will check for instance, that the value supplied for naat (number of atoms at a time) is an integer.Most of the options provided atAIMQB Command Lineare defined and validatedRun an AIMQB calculation using a valid starting file format (WFN/WFX/FCHK as SinglefileData)AimqbCalculation=CalculationFactory('aimall.aimqb')builder=AimqbCalculation.get_builder()builder.parameters=aim_paramsbuilder.file=SinglefileData('/absolute/path/to/file')# Alternatively, if you have file stored as a string:# builder.file = SinglefileData(io.BytesIO(wfx_file_string.encode()))submit(builder)InstallationThe aiida-dataframe dependency tables requires h5 headers on your system. You may already have this, or not. One easy way that allows installation of the headers using the h5py package(conda-env)condainstallh5py(conda-env)pipinstallaiida-aimall verdiquicksetup# better to set up a new profileverdipluginlistaiida.calculations# should now show your calclulation pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples ./example_01.py# run test calculationverdiprocesslist-a# check record of calculationDevelopmentgitclonehttps://github.com/kmlefran/aiida-aimall.cdaiida-aimall pipinstall--upgradepip pipinstall-e.[pre-commit,testing]# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsSee thedeveloper guidefor more [email protected]
aiida-ase
aiida-aseAiiDA plugin for ASEInstallationFrom PyPIpip install aiida-aseFrom this repository (useful for development):git clone https://github.com/aiidateam/aiida-ase pip install -e aiida-aseUsageThe main goal of this plugin is to be a wrap around for ASE.To make it easy to setup the calculation generate abuilderas followsAseCalculation = CalculationFactory('ase.ase') builder = AseCalculation.get_builder()The main parameters for the builder that need to be specified are:Codefrom aiida.orm import load_code code = load_code('your-code-here@your-computer-here') builder.code = codeNOTE: If using GPAW, there are two possibilities to set up the calculator a. Specify the Python executable with specific module loaded for GPAW b. Specify directly the GPAW executable. In this case a CMDLINE parameter will be needed (see below).Structurebuilder.structure = structurek-points datakpoints = KpointsData() kpoints.set_kpoints_mesh([2,2,2]) # choose the right mesh here builder.kpoints = kpointsParametersAn example parameter set for GPAW is shown here in parts. See theexamplesfolder for specific examples for other functionality (will be constantly updated).Define a calculator for aPWcalculation with GPAW. Here thenameof the calculator is set to GPAW,argsis the equivalent of arguments passed into the calculator used in ASE. Note that the@functionfunctionality enables passing arguments to a function inside the calculators. In this example the equivalent ASE command isPW(300). Other arguments such asconvergenceandoccupationscan be added.calculator = { 'name': 'gpaw', 'args': { 'mode': { '@function': 'PW', 'args': {'ecut': 300} }, 'convergence': { 'energy': 1e-9 }, 'occupations': { 'name': 'fermi-dirac', 'width':0.05 } }Add here tags that will be written asatoms.get_xyz(), so for example the first item will beatoms.get_temperature().atoms_getters = [ 'temperature', ['forces', {'apply_constraint': True}], ['masses', {}], ]Some addition utility functions are:pre_lines: list of lines to added to the start of the python filepost_lines: list of lines to added to the end of the python fileextra_imports: list of extra imports as separated strings, for example["numpy", "array"]will lead tofrom numpy import arrayNote about choosing a codeIf using GPAW it is possible to run parallel calculations using/path/to/execut/gpaw python run_gpaw.py. Set up the code through AiiDA by adding in thegpawexecutable. The add thepythontag using the command line optionsettings = {'CMDLINE': ['python']} builder.settings = orm.Dict(settings)If the code you are interested in is present in this plugin registry it might make more sense to use thathttps://aiidateam.github.io/aiida-registry/DocumentationThe documentation for this package can be found on Read the Docs athttp://aiida-ase.readthedocs.io/en/latest/
aiida-atoms
aiida-atoms
AiiDA plugin for keeping track of structure manipulations of an ase.Atoms object. Every operation performed through the AtomsTracker object will be recorded on the provenance graph.
Repository contents:
.github/: GitHub Actions configuration (ci.yml: runs tests, checks test coverage and builds documentation at every new commit; publish-on-pypi.yml: automatically deploys git tags to PyPI - just generate a PyPI API token for your PyPI account and add it to the pypi_token secret of your GitHub repository)
aiida_atoms/: The main source code of the plugin package
docs/: A documentation template ready for publication on Read the Docs
examples/: An example of how to submit a calculation using this plugin
tests/: Basic regression tests using the pytest framework (submitting a calculation, ...). Install pip install -e .[testing] and run pytest.
.gitignore: Telling git which files to ignore
.pre-commit-config.yaml: Configuration of pre-commit hooks that sanitize coding style and check for syntax errors. Enable via pip install -e .[pre-commit] && pre-commit install
LICENSE: License for your plugin
README.md: This file
conftest.py: Configuration of fixtures for pytest
pyproject.toml: Python package metadata for registration on PyPI and the AiiDA plugin registry (including entry points)
Features: Automatic tracking of changes made to the ase.Atoms object through its methods, and saving them to the provenance graph. Wraps other common routines that record provenance for tracking. Provenance graph visualization.
Installation: pip install aiida-atoms
Usage
Development:
git clone https://github.com/zhubonan/aiida-atoms
cd aiida-atoms
pip install --upgrade pip
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install                    # install pre-commit hooks
pytest -v                             # discover and run all tests
Contact: [email protected]
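To make the feature list above concrete, here is a purely hypothetical usage sketch; the description names AtomsTracker but does not show its API, so the import path and the proxied methods below are assumptions rather than the plugin's documented interface.
# Hypothetical usage sketch (not taken from the plugin's docs): assumes an AiiDA
# profile is already loaded and that AtomsTracker proxies ase.Atoms methods.
from ase.build import bulk
from aiida_atoms import AtomsTracker  # assumed import location

atoms = bulk("Si", "diamond", a=5.43)   # plain ase.Atoms structure
tracker = AtomsTracker(atoms)           # wrap it so manipulations are tracked
tracker.rattle(stdev=0.01)              # assumed pass-through to ase.Atoms.rattle;
tracker.pop(0)                          # each call would be recorded on the provenance graph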
aiida-aurora
aiida-auroraAiiDA plugin for the Aurora project (autonomous robotic battery innovation platform). A collaboration between EPFL & Empa, within the BIG-MAP Stakeholder Initiative Call 2021-2023.Repository contents.github/:Github Actionsconfigurationci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryaiida_aurora/: The main source code of the plugin packagedata/: A newDiffParametersdata class, used as input to theDiffCalculationCalcJobclasscalculations.py: A newDiffCalculationCalcJobclasscli.py: Extensions of theverdi datacommand line interface for theDiffParametersclasshelpers.py: Helpers for setting up an AiiDA code fordiffautomaticallyparsers.py: A newParserfor theDiffCalculationdocs/: A documentation template ready for publication onRead the Docsexamples/: An example of how to submit a calculation using this plugintests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest..coveragerc: Configuration ofcoverage.pytool reporting which lines of your plugin are covered by tests.gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the DocsLICENSE: License for your pluginMANIFEST.in: Configure non-Python files to be included for publication onPyPIREADME.md: This fileInstallationpipinstallaiida-aurora verdiquicksetup# better to set up a new profileverdipluginlistaiida.calculations# should now show your calclulation pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples ./example_01.py# run test calculationverdiprocesslist-a# check record of calculationThe plugin also includes verdi commands to inspect its data types:verdidataauroralist verdidataauroraexport<PK>Developmentgitclonehttps://github.com/epfl-theos/aiida-aurora.cdaiida-aurora pipinstall-e.[pre-commit,testing]# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsSee thedeveloper guidefor more information.LicenseMITAcknowledgementsThis project was supported by the Open Research Data Program of the ETH Board.ContactEdan Bainglass ([email protected])Francisco F. Ramirez ([email protected])Loris Ercole ([email protected])Giovanni Pizzi ([email protected])
aiida-bader
AiiDA-Bader
AiiDA plugin for Bader charge analysis.
Installation: To install from PyPI, simply execute: pip install aiida-bader. Or, when installing from source:
git clone https://github.com/superstar54/aiida-bader
pip install aiida-bader
Development
Running tests: To run the tests, simply clone and install the package locally with the [tests] optional dependencies:
git clone https://github.com/superstar54/aiida-bader
cd aiida-bader
pip install -e .[tests]  # install extra dependencies for tests
pytest                   # run tests
Pre-commit: To contribute to this repository, please enable pre-commit so the code in commits conforms to the standards. Simply install the repository with the pre-commit extra dependencies:
cd aiida-bader
pip install -e .[pre-commit]
pre-commit install
License: The aiida-bader plugin package is released under the MIT license. See the LICENSE file for more details.
Acknowledgements: We acknowledge support from the NCCR MARVEL funded by the Swiss National Science Foundation.
aiida-bands-inspect
aiida-bands-inspect
This is an AiiDA plugin to run calculations with the bands-inspect code. It contains calculations to plot and calculate the difference between band structures.
aiida-bigdft
aiida-bigdft
Translation layer for AiiDA-PyBigDFT.

Installation
pip install aiida-bigdft
verdi quicksetup  # better to set up a new profile
verdi plugin list aiida.calculations  # should now show your calculation plugins

Requirements
A functioning BigDFT installation
A copy of bigdft.py (available in later versions of BigDFT, but also in this repository at bigdft/bigdft.py)
When setting up the BigDFT code, ensure that the executable is set to the bigdft.py script. It is also important to source the install/bin/bigdftvars.sh script in your prepend:
# aiida code prepend
source ${BIGDFT_BUILD_DIR}/install/bin/bigdftvars.sh
where BIGDFT_BUILD_DIR is the directory in which BigDFT was built.

Usage
To see how calculations can be submitted, see the examples directory:
verdi daemon start  # make sure the daemon is running
cd examples
verdi run examples/example_01.py  # run test calculation
verdi process list -a  # check record of calculation
The plugin also includes verdi commands to inspect its data types:
verdi data bigdft list
verdi data bigdft export <PK>

WorkChains
The included workchains use the namespace format, so inputs should be placed under the bigdft namespace when launching a workchain.

License
MIT

Contact
[email protected]
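Since the README states that workchain inputs live under a bigdft namespace, a hedged sketch of launching such a workchain is shown below. The workchain entry point label, code label and port names are assumptions for illustration; only the namespace structure follows the text above.

```python
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()

# Hypothetical workchain entry point; check `verdi plugin list aiida.workflows`
BigDFTWorkChain = WorkflowFactory('bigdft')

builder = BigDFTWorkChain.get_builder()
# Inputs for the wrapped calculation go under the `bigdft` namespace, as described above
builder.bigdft.code = orm.load_code('bigdft-py@my_cluster')   # hypothetical code label
builder.bigdft.structure = orm.load_node(1234)                # a StructureData node (pk is made up)
builder.bigdft.metadata.options.resources = {'num_machines': 1}

node = submit(builder)
print(f'Submitted BigDFT workchain <{node.pk}>')
```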
aiida-calcmonitor
aiida-calcmonitor
AiiDA plugin that contains tools for monitoring ongoing calculations.

Development setup
git clone [email protected]:ramirezfranciscof/aiida-calcmonitor.git .
cd aiida-calcmonitor
pip install --upgrade pip
pip install -e .
# These are not available yet!
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install  # install pre-commit hooks
pytest -v  # discover and run all tests

Running the monitor test
To run the monitoring test you need an existing AiiDA profile. You must set up a computer on which to run the toymodel code (the aiida_calcmonitor/utils/toymodel_code.sh script needs to be copied onto the computer, and you also need to set up a code in AiiDA). You can set up the localhost like this:
$ verdi computer setup -L localhost -H localhost -T local -S direct -w /scratch/{username}/aiida/ --mpiprocs-per-machine 1 -n
$ verdi computer configure local localhost --safe-interval 5 -n
For running the calcjob monitor calcjob, you will need to set up a special kind of localhost with the following mpirun command: verdi -p <PROFILE_NAME> run (replacing <PROFILE_NAME> with the corresponding one). It also needs to have a prepend that activates the virtual environment that AiiDA is using. For a typical Python virtual environment you can do something like this:
source /home/username/.virtualenvs/aiida/bin/activate
Once all of this is set up, you can use the example in /examples/example01/submit_everything.py as a template for how to prepare and submit a toymodel calculation and its monitor.

Creating your own monitors
To create your own monitor, you need to subclass the MonitorBase class. This is a data type derived from Dict, so the idea is that you create a sub-data type with a method that describes the checking procedure, then create a data node of that subtype containing a dict with the options for that procedure; the monitor code will receive that data node as input and call the checking method.
This is just an example to show the critical variables that are accessible from the parent class (self[...]) and the returns that need to be used, but the structure can be modified however necessary (there is no need to set and use the error_detected boolean, for example; you can just return error messages in the middle of the checks).

```python
class NewMonitorSubclass(MonitorBase):  # pylint: disable=too-many-ancestors
    """Example of monitor for the toy model."""

    def monitor_analysis(self):
        sources = self['sources']    # the mapping to the actual filepaths (see below)
        options = self['options']    # the specific options for this method (the structure is determined here in this method and the user must know what is expected when constructing it)
        retrieve = self['retrieve']  # a list of files to retrieve if the original calculation is killed, but this should probably not be needed here

        internal_naming = sources['internal_naming']['filepath']  # This is how you access the mapping for the files; now internal_naming contains the actual path to the file it requires

        with open(internal_naming) as fileobj:
            # Here one performs the parsing of the files and checking.
            # Set up the variables error_detected and error_msg (or equivalent).
            ...

        if error_detected:
            return f'An error was detected: {error_msg}'  # The calcjob_monitor interprets string returns as errors and will kill the process
        else:
            return None  # The calcjob_monitor interprets None returns to mean everything is going fine
```

In order to be usable, you also need to add this as an entry point / plugin inside the pyproject.toml, for example:

```toml
[project.entry-points."aiida.data"]
"calcmonitor.monitor.toymodel" = "aiida_calcmonitor.data.monitors.monitor_toymodel:MonitorToymodel"
"calcmonitor.monitor.newmonitor" = "aiida_calcmonitor.data.monitors.newmonitor_file:NewMonitorSubclass"
```

License
MIT
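To complement the subclass template above, here is a minimal sketch of how the corresponding data node might be constructed. The top-level keys follow the description above ('sources', 'options', 'retrieve'); the concrete file labels and option entries are illustrative only.

```python
from aiida import load_profile
from aiida.plugins import DataFactory

load_profile()

# Entry point registered in the pyproject.toml snippet above
MonitorToymodel = DataFactory('calcmonitor.monitor.toymodel')

monitor = MonitorToymodel(dict={
    'sources': {'internal_naming': {'filepath': 'toymodel_output.log'}},  # illustrative file label
    'options': {'threshold': 10.0},                                       # illustrative option
    'retrieve': ['toymodel_output.log'],
})
monitor.store()

# This node is then passed as an input to the monitoring calcjob, which periodically calls
# `monitor_analysis()`: a string return kills the monitored job, None lets it continue.
```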
aiida-castep
AiiDA plugin for working with CASTEPA plugin forAiiDAto work with plane-wave pseudopotential DFT codeCASTEP. CASTEP has a single binary executable and calculation is primarily controlled by thetaskkeyword. The genericCastepCalculationshould work with all tasks, at least in terms of generating input files. Likewise a genericCastepParserclass is implemented and can handle parsing most information we are interested insinglepoint,geometryoptimisation,bandstructure/spectraltasks. Most output files are retrieved if present, and it is possible to explicitly request retrieval from the remote computer. The goal of this plugin is not to provide a comprehensive parser of the CASTEP results, but to build a graph of calculations performed for provenance preservation and workflow automation. Input and output of a simple calculation:or a series of operations and automated calculations:The raw files can always be extracted from the database and analysed by the post-processing tools of choice. Even better, such tools may be integrated with the AiiDA framework and have the analysis appended to the provenance graph.Highlights of available features:Storing usp/recpot asUspData(sub-class ofSingleFileData) in AiiDA database and create/use of pseudo family groups.Store OTFG generating strings asOTFGDatain AiiDA. Create of family/group are also supported. OTFG library (such as "C19") are represented as a OTFG string works for all elements.Preparation of CASTEP input files. Writing cell and parameters files are both supported. Tags inpositions_absblock file should also work, e.gLABEL,SPIN,MIXTURE.Parsing energy, force, stress from output .castep file and .geom fileParsing trajectory from .geom, .ts, .md files.Checking errors in .param and .cell files before submitting, using dictionaries shipped from built from CASTEP executable.Extra KpointData input node for BS, SEPCTRAL and PHONON tasks.Preparing transition state search calculationsAcreate_restartfunction for easy creation of continuation/restart calculations. Input can be altered usingparam_updateandparam_deletekeyword arguments. Automatic copying/linking of remote check files by AiiDA.Aget_castep_inputs_summaryfunction to print a summary of inputs of a calculations.Acompare_withmethod to compare the inputs of two calculations.DocumentationQuick glimpse into how to use the plugin for running calculations:Running CastepCalculationRunning CastepBaseWorkChainDocumentation is hosted at Read the Docs:dev versionmaster versionDependenciesThe primary dependency is theaiida_corepackage. The dependencies are:The plugin version 2.0 and above support onlyaiida_core>=2.0.The plugin version 1.0 and above support onlyaiida_core>=1.0.0b6, <2.The plugin version 0.3 support onlyaiida_core0.12.x versions.There is only minor API changes in theaiida_corebetween v1 and v2, scripts written should be compatible between the two.Todos and nice-to-havesMethods for importing existing calculationsSupport for submitting file based CASTEP calculations.At the moment there is no enforcement on the type inDictinput node. For example, settingsmearing_widthto 0.1 and "0.1" is equivalent, but they will store differently in the database.How to testThe tests uses thepytestframework. First, install with the dependenciespip install aiida_core[testing] pip install aiida-castep[testing]Then you can run the commandpytestfrom the project directory.
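The feature list above names a few convenience helpers (get_castep_inputs_summary, compare_with, create_restart with param_update/param_delete keywords). The sketch below only illustrates how they might be combined; whether they are free functions or methods on the calculation node, and their exact signatures, may differ from what is shown here, and the node pks are made up.

```python
from aiida import load_profile, orm

load_profile()

calc = orm.load_node(1234)    # a finished CASTEP calculation (pk is made up)
other = orm.load_node(5678)   # another calculation to compare against

# Print a summary of the calculation inputs (helper named in the feature list above)
calc.get_castep_inputs_summary()

# Compare the inputs of two calculations
calc.compare_with(other)

# Create a continuation run, tweaking a few parameters on the way; the keyword names
# come from the feature list above, the parameter values are illustrative only
restart = calc.create_restart(param_update={'cut_off_energy': 600},
                              param_delete=['spin_polarized'])
```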
aiida-catmat
aiida-catmat
Collection of AiiDA WorkChains developed in the CATMAT project at the University of Oxford and the University of Bath.

Installation, usage, and available workchains
Please check the documentation.

Contact
Please contact Pezhman Zarabadi-Poor if you have any questions regarding this package.

Acknowledgment
I thank the Faraday Institution (FI) CATMAT project (EP/S003053/1, FIRG016) for financial support. I also thank Dr. Benjamin Morgan, Dr. Alex Squires, and Hollie Richards for their input during the development.
aiida-champ
aiida-champAiiDA plugin that wraps thevmcexecutable of the CHAMP code for computing the total energy of a molecular system.Repository contents.github/:Github Actionsconfigurationci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryaiida_champ/: The main source code of the plugin packagedata/: A newCHAMPParametersdata class, used as input to theCHAMPCalculationCalcJobclasscalculations.py: A newCHAMPCalculationCalcJobclasscli.py: Extensions of theverdi datacommand line interface for thechampParametersclasshelpers.py: Helpers for setting up an AiiDA code forchampautomaticallyparsers.py: A newParserfor theCHAMPCalculationdocs/: A documentation template ready for publication onRead the Docsexamples/: An example of how to submit a calculation using this plugintests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest..coveragerc: Configuration ofcoverage.pytool reporting which lines of your plugin are covered by tests.gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the DocsLICENSE: License for your pluginMANIFEST.in: Configure non-Python files to be included for publication onPyPIREADME.md: This fileconftest.py: Configuration of fixtures forpytestpytest.ini: Configuration ofpytesttest discoverysetup.json: Plugin metadata for registration onPyPIand theAiiDA plugin registry(including entry points)setup.py: Installation script for pip /PyPISee also the following video sequences from the 2019-05 AiiDA tutorial:aiida-champ setup.jsonrun aiida-champ example calculationaiida-champ CalcJob pluginaiida-champ Parser pluginaiida-champ computer/code helpersaiida-champ input data (with validation)aiida-champ cliaiida-champ testsAdding your plugin to the registrypre-commit hooksFor more information, see thedeveloper guideof your plugin.FeaturesAdd input files usingSinglefileData:SinglefileData=DataFactory('singlefile')filemain=SinglefileData(file='vmc.inp')molecule=SinglefileData(file='butadiene.xyz')orbitals=SinglefileData(file='cas44.lcao')determinants=SinglefileData(file='cas44.det')Installationpipinstallaiida-champ verdiquicksetup# better to set up a new profileverdipluginlistaiida.calculations# should now show your calclulation pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples pythonexample_01.py# run test calculationverdiprocesslist-a# check record of calculationThe plugin also includes verdi commands to inspect its data types:verdidatachamplist verdidatachampexport<PK>Developmentgitclonehttps://github.com/neelravi/aiida-champ.cdaiida-champ pipinstall-e.[pre-commit,testing]# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsSee thedeveloper guidefor more information.LicenseMITAuthorName :: Ravindra Shinde (TREX-CoE) Email ::[email protected]
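The Features section above shows the four SinglefileData inputs used by the wrapped vmc executable. A hedged sketch of wiring them into a submission is shown below; the calculation entry point label and code label are assumptions, and the port names simply mirror the variable names from the snippet above.

```python
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory, DataFactory

load_profile()

SinglefileData = DataFactory('singlefile')

# Hypothetical entry point label; check `verdi plugin list aiida.calculations`
ChampCalculation = CalculationFactory('champ')

builder = ChampCalculation.get_builder()
builder.code = orm.load_code('vmc@my_cluster')  # a configured CHAMP/vmc code (label is made up)

# The four input files from the Features section above; port names are assumptions
builder.filemain = SinglefileData(file='/abs/path/vmc.inp')
builder.molecule = SinglefileData(file='/abs/path/butadiene.xyz')
builder.orbitals = SinglefileData(file='/abs/path/cas44.lcao')
builder.determinants = SinglefileData(file='/abs/path/cas44.det')

builder.metadata.options.resources = {'num_machines': 1}
node = submit(builder)
```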
aiida-codtools
aiida-codtoolsThis is the official AiiDA plugin forcod-tools.Compatibility matrixThe following table shows which versions ofaiida-codtoolsare compatible with which versions of AiiDA and Python.PluginAiiDAPythonv3.0.0 < v4.0.0v2.2.0 < v3.0.0v2.1.0 < v2.2.0v2.0.0 < v2.1.0v1.0.0 < v2.0.0InstallationTo install from PyPi, simply execute:pip install aiida-codtoolsor when installing from source:git clone https://github.com/aiidateam/aiida-codtools pip install aiida-codtoolsGet startedIn order to useaiida-codtools, after installing the package,aiida-coreneeds to be setup and configured. For instructions please follow the documentation ofaiida-core.The package provides a command line scriptaiida-codtoolsthat comes with some useful commands, such as launching calculation or imports CIF files. Call the command with the--helpflag to display its usage:Usage: aiida-codtools [OPTIONS] COMMAND [ARGS]... CLI for the `aiida-codtools` plugin. Options: -p, --profile PROFILE Execute the command for this profile instead of the default profile. -h, --help Show this message and exit. Commands: calculation Commands to launch and interact with calculations. data Commands to import, create and inspect data nodes. workflow Commands to launch and interact with workflows.Each sub command can have multiple other sub commands. To enable tab completion, add the following line to your shell activation script:eval "$(_AIIDA_CODTOOLS_COMPLETE=source aiida-codtools)"To import 10 random CIF files from the COD database, for example, you can do the following:verdi group create cod_cif_raw aiida-codtools data cif import -d cod -G cod_cif_raw -M 10After you have configured a computer and a code, you can also easily launch acod-toolscalculation through AiiDA:aiida-codtools calculation launch cod-tools -X cif-filter -N 10Herecif-filteris the label of the code that you have configured and10is the pk of aCifDatanode. These will most likely be different for your database, so change them accordingly.DocumentationThe documentation for this package can be found onreadthedocs.AcknowledgementsThis work is supported in part by theswissuniversities P-5 project "Materials Cloud".
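The CLI example above launches cif-filter on a CifData node; the same launch can also be scripted directly in Python. The entry point label 'codtools.cif_filter' and the input port name 'cif' are assumptions inferred from the CLI example, so verify them with `verdi plugin list aiida.calculations` first.

```python
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

load_profile()

# Assumed entry point label for the cif-filter wrapper
CifFilterCalculation = CalculationFactory('codtools.cif_filter')

builder = CifFilterCalculation.get_builder()
builder.code = orm.load_code('cif-filter@localhost')  # the configured cod-tools code (label as in the CLI example)
builder.cif = orm.load_node(10)                       # the CifData node, pk as in the CLI example
builder.metadata.options.resources = {'num_machines': 1}

node = submit(builder)
```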
aiida-common-workflows
AiiDA common workflows (ACWF) package:aiida-common-workflows(Image © Giovanni Pizzi, 2021)The AiiDA common workflows (ACWF) project provides computational workflows, implemented inAiiDA, to compute various material properties using any of the quantum engines that implement it. The distinguishing feature is that the interfaces of the AiiDA common workflows are uniform, independent of the quantum engine that is used underneath to perform the material property simulations. These common interfaces make it trivial to switch from quantum engine. In addition to the common interface, the workflows provide input generators that automatically define the required inputs for a given task and desired computational precision. For more information, please refer to theonline documentation.How to citeIf you use the workflow of this package, please cite the paper in which the work is presented:S. P. Huber et al., npj Comput. Mater. 7, 136 (2021); doi:10.1038/s41524-021-00594-6In addition, if you run the common workflows, please also cite:The AiiDA engine that manages the simulations and stores the provenance:Main AiiDA paper:S.P. Huber et al., Scientific Data 7, 300 (2020)AiiDA engine:M. Uhrin et al., Comp. Mat. Sci. 187 (2021)the quantum engine(s) that you will use. We provide below a table of references for your convenience.EngineDOIs or URLs to be citedABINIT10.1016/j.cpc.2016.04.00310.1016/j.cpc.2019.10704210.1063/1.5144261BigDFT10.1063/5.0004792CASTEP10.1524/zkri.220.5.567.65075CP2K10.1002/wcms.115910.1063/5.0007045FLEURhttps://www.flapw.deGaussiansee instructions hereGPAW10.1103/PhysRevB.71.03510910.1088/0953-8984/22/25/253202NWChem10.1063/5.0004997ORCA10.1002/wcms.8110.1002/wcms.1327Quantum ESPRESSO10.1088/0953-8984/21/39/39550210.1088/1361-648x/aa8f79SIESTA10.1063/5.000507710.1088/0953-8984/14/11/302VASP10.1103/physrevb.54.1116910.1103/physrevb.59.1758WIEN2k10.1063/1.5143061
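The common interface described above is exposed through input generators that fill in sensible defaults for a chosen precision protocol. A minimal sketch, assuming the Quantum ESPRESSO implementation is installed, could look like the following; the exact keyword names accepted by the generator may differ between versions of the package, and the code label and node pk are made up.

```python
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()

# One engine-specific implementation of the common relax workflow
RelaxWorkChain = WorkflowFactory('common_workflows.relax.quantum_espresso')

structure = orm.load_node(1234)  # a StructureData node (pk is made up)

generator = RelaxWorkChain.get_input_generator()
builder = generator.get_builder(
    structure=structure,
    engines={'relax': {'code': 'pw@my_cluster',
                       'options': {'resources': {'num_machines': 1}}}},
    protocol='moderate',
)

node = submit(builder)
```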
aiida_conda_scheduler
aiida_conda_scheduler
AiiDA scheduler plugins that allow for conda run.
Currently, as of aiida-core v2.0.1, it is impossible to set up a Computer + Code which can run, for example:
conda run --name myenv mpirun -np 4 pw.x -i input.in
since (a) a code can only specify a remote_abs_path and (b) only a computer can specify the mpirun_command. This is really the only way to run a code which is not in the Conda base environment, since conda activate myenv (which could perhaps be added to prepend_text) fails when run in a script.
These scheduler plugins subclass the built-in scheduler classes and override the _get_run_line method to:
Raise a NotImplementedError if len(codes_info) != 1 or codes_run_mode != CodeRunMode.SERIAL (i.e. only one code is supported).
Load the code from codes_info[0].code_uuid and retrieve its description.
Use this description to determine the environment name, by finding env=myenv.
Append conda run --name myenv to the run line.

Installation
pip install aiida_conda_scheduler
reentry scan  # aiida v1.x only
Then the plugins should show in:
verdi plugin list aiida.schedulers

Development
Use pipx to install the tox and pre-commit command tools.
git clone https://github.com/chrisjsewell/aiida-conda-scheduler .
cd aiida-conda-scheduler
pre-commit run --all
tox
For aiida-core v1, use tox.

License
MIT

Contact
[email protected]
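Since the scheduler looks for an env=... token in the code description (point 3 above), the only extra setup needed on the AiiDA side is to make sure the code's description carries that token. A minimal sketch, with a made-up code label:

```python
from aiida import load_profile, orm

load_profile()

# Load an already-configured code and make sure its description carries the
# `env=<name>` token that the scheduler parses, as described above.
code = orm.load_code('pw-conda@my_cluster')  # label is made up
code.description = 'Quantum ESPRESSO pw.x built inside conda, env=myenv'
```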
aiida-core
AiiDA (www.aiida.net) is a workflow manager for computational science with a strong focus on provenance, performance and extensibility.Latest releaseGetting helpBuild statusBenchmarksActivityCommunityFeaturesWorkflows:Write complex, auto-documenting workflows in python, linked to arbitrary executables on local and remote computers. The event-based workflow engine supports tens of thousands of processes per hour with full checkpointing.Data provenance:Automatically track inputs, outputs & metadata of all calculations in a provenance graph for full reproducibility. Perform fast queries on graphs containing millions of nodes.HPC interface:Move your calculations to a different computer by changing one line of code. AiiDA is compatible with schedulers likeSLURM,PBS Pro,torque,SGEorLSFout of the box.Plugin interface:Extend AiiDA withpluginsfor new simulation codes (input generation & parsing), data types, schedulers, transport modes and more.Open Science:Export subsets of your provenance graph and share them with peers or make them available online for everyone on theMaterials Cloud.Open source:AiiDA is released under theMIT open source licenseInstallationPlease see AiiDA'sdocumentation.How to contributeThe AiiDA team appreciates help from a wide range of different backgrounds. Small improvements of the documentation or minor bug fixes are always welcome.Please see theContributor wikion how to get started.Frequently Asked QuestionsIf you are experiencing problems with your AiiDA installation, please refer to theFAQ page of the documentation. For any other questions, discussion and requests for support, please visit theDiscourse server.How to citeIf you use AiiDA in your research, please consider citing the following publications:S. P. Huberet al.,AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and data provenance, Scientific Data7, 300 (2020); DOI:10.1038/s41597-020-00638-4M. Uhrinet al.,Workflows in AiiDA: Engineering a high-throughput, event-based engine for robust and modular computational workflows, Computational Materials Science187, 110086 (2021); DOI:10.1016/j.commatsci.2020.110086If the ADES concepts are referenced, please also cite:Giovanni Pizzi, Andrea Cepellotti, Riccardo Sabatini, Nicola Marzari,and Boris Kozinsky,AiiDA: automated interactive infrastructure and database for computational science, Computational Materials Science111, 218-230 (2016); DOI:10.1016/j.commatsci.2015.09.013LicenseAiiDA is distributed under the MIT open source license (seeLICENSE.txt). For a list of other open source components included in AiiDA, seeopen_source_licenses.txt.AcknowledgementsAiiDA is aNumFOCUS Affiliated Projectand supported by theMARVEL National Centre of Competence in Research, theMaX European Centre of Excellenceand by a number of other supporting projects, partners and institutions, whose complete list is available on theAiiDA website acknowledgements page.
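As a minimal, generic illustration of the workflow and provenance features listed above (not tied to any particular plugin), a provenance-tracked Python function can be declared with the calcfunction decorator from aiida-core; its inputs, output and the call itself all end up as nodes in the provenance graph.

```python
from aiida import load_profile, orm
from aiida.engine import calcfunction

load_profile()

@calcfunction
def add_and_multiply(x: orm.Int, y: orm.Int, factor: orm.Int) -> orm.Int:
    """Provenance-tracked arithmetic: inputs, output and the call itself are stored."""
    return orm.Int((x.value + y.value) * factor.value)

result = add_and_multiply(orm.Int(2), orm.Int(3), orm.Int(4))
print(result.value)        # 20
print(result.creator.pk)   # the calculation node recorded in the provenance graph
```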
aiida-core-bot
AiiDAAutomated Interactive Infrastructure and Database for Computational ScienceAiiDA is a sophisticated framework designed from scratch to be a flexible and scalable infrastructure for computational science. Being able to store the full data provenance of each simulation, and based on a tailored database solution built for efficient data mining implementations, AiiDA gives the user the ability to interact seamlessly with any number of HPC machines and codes thanks to its flexible plugin interface, together with a powerful workflow engine for the automation of simulations.The official homepage is athttp://www.aiida.netThe code is hosted on GitHub athttps://github.com/aiidateam/aiida_coreThe documentation is hosted on Read The Docs at:stable versiondevelop versionWhich branch should you use?Users: the stable version of the code is in themaster branchDevelopers: the procedure to contribute through pull-requests can be found in thewikiHow to cite AiiDAIf you use AiiDA in your research, please consider citing the following work:Giovanni Pizzi, Andrea Cepellotti, Riccardo Sabatini, Nicola Marzari, and Boris Kozinsky,AiiDA: automated interactive infrastructure and database for computational science, Comp. Mat. Sci 111, 218-230 (2016);http://dx.doi.org/10.1016/j.commatsci.2015.09.013;http://www.aiida.net.LicenseThe terms of the AiiDA license can be found in the LICENSE.txt file.AcknowledgementsThis work is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation, as well as by theMaX European Centre of Excellencefunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598.
aiida-cp2k
AiiDA CP2KAiiDAplugin forCP2K.InstallationIf you usepip, you can install it as:pip install aiida-cp2kTo install the plugin in an editable mode, run:git clone https://github.com/aiidateam/aiida-cp2k cd aiida-cp2k pip install -e . # Also installs aiida, if missing (but not postgres/rabbitmq).LinksDocumentationfor the calculation examples and features of the plugin.Make an issuefor bug reports, questions and suggestions.AiiDAto learn about AiiDA.CP2Kto learn about CP2K.For maintainersTo create a new release, clone the repository, install development dependencies withpip install '.[dev]', and then executebumpver update --major/--minor/--patch. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release.Additional notes:Use the--dryoption to preview the release change.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to override the release [email protected] work is supported by:theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation;theMaX European Centre of Excellencefunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598;theswissuniversities P-5 project "Materials Cloud".
aiida-cp2k-ng
AiiDA CP2KThe CP2K plugin for the AiiDA workflow and provenance engine.InstallationIf you usepip, you can install it as:pip install aiida-cp2kFeaturesFollowing the philosophy to''enable without getting in the way'', this plugin provides access to all of CP2K's capabilities through a small set of well-tested features:A fullCP2K inputhas to be provided as a nested Python dictionary (example):params = {'FORCE_EVAL': {'METHOD': 'Quickstep', 'DFT': { ... }}} calc.use_parameters(ParameterData(dict=params))Section parameters are stored as key_in the dictionary:xc_section = {'XC_FUNCTIONAL': {'_': 'LDA'}}Repeated sections are stored as a list:kind_section = [{'_': 'H', 'BASIS_SET': 'DZVP-MOLOPT-GTH', 'POTENTIAL': 'GTH-LDA'}, {'_': 'O', 'BASIS_SET': 'DZVP-MOLOPT-GTH', 'POTENTIAL': 'GTH-LDA'}]Most data files (basis sets, pseudo potentials, VdW, etc.) are auto-discovered from CP2K'sdata directory.dft_section = {'BASIS_SET_FILE_NAME': 'BASIS_MOLOPT', ...}Additional data files can be added as AiiDA SinglefileData (example):water_pot = SinglefileData(file="/tmp/water.pot") calc.use_file(water_pot, linkname="water_pot")The start geometry can be provided as AiiDA StructureData (example):atoms = ase.build.molecule('H2O', vacuum=2.0) calc.use_structure(StructureData(ase=atoms))Alternatively the start geometry can be contained in the CP2K input (example):coord_section = {' ': ['H 2.0 2.0 2.737166', 'H 2.0 2.0 2.000000']},For restarting a calculation a parent folder can be attached (example):calc2.use_parent_folder(calc1.out.remote_folder)By default only the output and restart file (if present) are retrieved. Additional files are retrieved upon request (example):settings = {'additional_retrieve_list': ["*.cube"]} calc.use_settings(ParameterData(dict=settings))The final geometry is extracted from the restart file (if present) and stored in AiiDA (example):print(calc.out.output_structure)From the CP2K output only the #warnings and final energy are parsed (example):print(calc.res.nwarnings, calc.res.energy, calc.res.energy_units)The calculation is considered failed if #warnings can not be found (example).The conversion of geometries between AiiDA and CP2K has a precision of at least 1e-10 Ångström (example).The Python code complies with theFlake8coding conventions.TestingEvery commit and pull request is automatically tested byTravisCI.To run the tests locally installDockerand execute the following commands:git clone https://github.com/cp2k/aiida-cp2k docker build -t aiida_cp2k_test aiida-cp2k docker run -it --init aiida_cp2k_test pytest -v
aiida-crystal
aiida-crystal
This package has been moved to aiida-crystal17.
aiida-crystal17
aiida-crystal17AiiDA plugin for running theCRYSTAL17code. The code is principally tested against CRYSTAL17, but the output parsing has also been tested against CRYSTAL14.Documentation:https://readthedocs.org/projects/aiida-crystal17InstallationTo install from Conda (recommended)::>>condainstall-cconda-forgeaiida-crystal17aiida-core.servicesTo install from pypi::>>pipinstallaiida-crystal17To install the development version:>>gitclonehttps://github.com/aiidaplugins/aiida-crystal17. >>cdaiida-crystal17 >>pipinstall-e.# also installs aiida, if missing (but not postgres)>>#pip install -e .[pre-commit,testing] # install extras for more features>>verdiquicksetup# set up a new profile>>verdicalculationplugins# should now show the calclulation plugins (with prefix crystal17.)DevelopmentTesting against mock CRYSTAL17 executablesBecause CRYSTAL17 is a licensed software, it is not possible to source a copy of the executable on Travis CI. Therefore, a mock executable (mock_runcry17) has been created for testing purposes (which also speeds up test runs).This executable computes the md5 hash of the supplied input file and tries to match it against a dictionary of precomputed hashes. If found, the executable will write the matching output (fromtest/output_files) to stdout.The following will discover and run all unit test:>>pipinstall-e.[testing]>>reentryscan-raiida >>pytest-vTo omit tests which call external executables (likecrystal17):>>pytest--cry17-skip-execTo call the actual executables (e.g.crystal17instead ofmock_crystal17):>>pytest--cry17-no-mockTo output the results of calcjob executions to a specific directory:>>pytest--cry17-workdir"test_workdir"Coding Style RequirementsThe code style is tested usingflake8, with the configuration set in.flake8, and code should be formatted withyapf(configuration set in.style.yapf).Installing withaiida-crystal17[code_style]makes thepre-commitpackage available, which will ensure these tests are passed by reformatting the code and testing for lint errors before submitting a commit. It can be setup by:>>cdaiida-crystal17 >>pre-commitinstallOptionally you can runyapfandflake8separately:>>yapf-r-i.# recrusively find and format files in-place>>flake8Editors like VS Code also have automatic code reformat utilities, which can adhere to this standard.Setting up CRYSTAL17 locallyTo set up local version of CRYSTAL17 on a mac (after downloading a copy from the distributor), I had to:Remove the quarantine from the executable permissions:xattr-ccrystal xattr-cpropertiesCreate versions of the lapack/blas libraries in the expected folders:sudoportinstalllapack sudocp/opt/local/lib/lapack/liblapack.3.dylib/usr/local/opt/lapack/lib/liblapack.3.dylib sudocp/opt/local/lib/lapack/libblas.3.dylib/usr/local/opt/lapack/lib/libblas.3.dylibDefine environmental variables in~/.bashrc, as detailed incry17_scripts/cry17.bashrcCopy or symlink thecry17_scripts/runcry17script into/usr/local/bin/[email protected]
aiida-crystal-dft
aiida-crystal-dft
This is an AiiDA plugin for running the CRYSTAL code, spin-off from the aiida-crystal17 plugin by Chris Sewell.

Installation
As of now, the development version can be installed:
>> git clone aiida-crystal-dft
>> pip install -e aiida-crystal-dft

Usage
The plugin can be used as any other AiiDA plugin.
aiida-cusp
aiida-cusp - a Custodian based VASP Plugin for AiiDA
Custodian plugin for VASP enabling automated error correction for AiiDA-managed VASP calculations.

Highlights
Automated error corrections for VASP calculations at the calculation's runtime level (rather than at the workflow level)
Full compatibility with pymatgen and easy access to the set of tools implemented therein for pre- and postprocessing of VASP calculations, directly from the implemented datatypes
It's still VASP, but better!

Installation
Please refer to aiida-cusp's documentation for further information.

Contributing
Suggestions for useful improvements, feature requests, bug reports and the like are highly welcome and appreciated. Especially the contribution of new workflows that were developed based on this plugin is highly encouraged, to transform this plugin into a powerful tool for the scientific community.

Bug Reports
If you think you found a bug, feel free to open an issue on the plugin's GitHub repository.

Adding new Features and Changes
For changes you would like to add to the plugin, please refer to the CONTRIBUTING.md file located in the repository root.

License
The aiida-cusp plugin for AiiDA is distributed as free and open-source software (FOSS) licensed under the MIT open-source license (see LICENSE).
aiida-dataframe
aiida-dataframe
AiiDA data plugin for pandas DataFrame objects.

Features
Store pandas.DataFrame objects in the database:

```python
import numpy as np
import pandas as pd
from aiida.plugins import DataFactory

PandasFrameData = DataFactory('dataframe.frame')

df = pd.DataFrame({
    "A": 1.0,
    "B": pd.Timestamp("20130102"),
    "C": pd.Series(1, index=list(range(4)), dtype="float32"),
    "D": np.array([3] * 4, dtype="int32"),
    "E": pd.Categorical(["test", "train", "test", "train"]),
    "F": "foo",
})
df_node = PandasFrameData(df)
df_node.store()
```

Retrieving the pandas.DataFrame from the database:

```python
from aiida.orm import QueryBuilder

df_node = QueryBuilder().append(PandasFrameData).first()[0]
df = df_node.df  # The df property reconstructs the pandas DataFrame
print(df.head())
```

Installation
pip install aiida-dataframe
verdi quicksetup  # better to set up a new profile
verdi plugin list aiida.data  # should now show your data plugins

Usage
The plugin also includes verdi commands to inspect its data types:
verdi data dataframe list
verdi data dataframe export <PK>
verdi data dataframe show <PK>

Development
git clone https://github.com/janssenhenning/aiida-dataframe .
cd aiida-dataframe
pip install --upgrade pip
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install  # install pre-commit hooks
pytest -v  # discover and run all tests
See the developer guide for more information.

License
MIT
[email protected]
aiida-ddec
A DDEC plugin for AiiDA.

Installation
pip install aiida-ddec

Usage
Examples in the examples folder:
test_cp2k_ddec_h2o.py: Run an ENERGY calculation, printing the electron density cube file, and compute the DDEC charges from that.

Run tests
git clone https://github.com/lsmo-epfl/aiida-ddec
cd aiida-ddec
pip install -e .['pre-commit','testing']
pytest
If you are changing the inputs of existing tests or need to regenerate test data, place an .aiida-testing-config.yml file in your repository that points to the required simulation codes:

```yaml
---
mock_code:
  # code-label: absolute path
  cp2k-7.1: /path/to/cp2k.sopt
  chargemol-09_26_2017: /path/to/Chargemol_09_02_2017_linux_serial
```

License
MIT

Contact
[email protected]
aiida-defects
Welcome to AiiDA-Defects
AiiDA-Defects is a plugin for the AiiDA computational materials science framework, and provides tools and automated workflows for the study of defects in materials.
The package is available for download from GitHub.
If you use AiiDA-Defects in your work, please cite:
AiiDA-defects: An automated and fully reproducible workflow for the complete characterization of defect chemistry in functional materials, doi.org/10.48550/arXiv.2303.12465 (preprint)
Please also remember to cite the AiiDA paper.

Quick Setup
Install the latest release of this package by running the following in your shell:
$ pip install aiida-defects
This will install all of the prerequisites automatically (including those for the optional docs) in your environment, including AiiDA core, if it is not already installed.

Getting Started
Example usage of the workchains is documented in the collection of Jupyter notebooks in the examples directory.

Acknowledgements
This work is supported by the MARVEL National Centre of Competence in Research (NCCR) funded by the Swiss National Science Foundation (grant agreement ID 51NF40-182892) and by the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 824143 (European MaX Centre of Excellence "Materials design at the Exascale") and Grant Agreement No. 814487 (INTERSECT project). We thank Chiara Ricca and Ulrich Aschauer for discussions and prototype implementation ideas. The authors would also like to thank the Swiss National Supercomputing Centre CSCS (project s1073) for providing the computational resources and Solvay for funding this project. We thank Arsalan Akhtar, Lorenzo Bastonero, Luca Bursi, Francesco Libbi, Riccardo De Gennaro and Daniele Tomerini for useful discussions and feedback.
aiida-diff
aiida-diffAiiDA demo plugin that wraps thediffexecutable for computing the difference between two files.This plugin is the default output of theAiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.Repository contents.github/:Github Actionsconfigurationci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryaiida_diff/: The main source code of the plugin packagedata/: A newDiffParametersdata class, used as input to theDiffCalculationCalcJobclasscalculations.py: A newDiffCalculationCalcJobclasscli.py: Extensions of theverdi datacommand line interface for theDiffParametersclasshelpers.py: Helpers for setting up an AiiDA code fordiffautomaticallyparsers.py: A newParserfor theDiffCalculationdocs/: A documentation template ready for publication onRead the Docsexamples/: An example of how to submit a calculation using this plugintests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest..gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the DocsLICENSE: License for your pluginREADME.md: This fileconftest.py: Configuration of fixtures forpytestpyproject.toml: Python package metadata for registration onPyPIand theAiiDA plugin registry(including entry points)See also the following video sequences from the 2019-05 AiiDA tutorial:run aiida-diff example calculationaiida-diff CalcJob pluginaiida-diff Parser pluginaiida-diff computer/code helpersaiida-diff input data (with validation)aiida-diff cliaiida-diff testsAdding your plugin to the registrypre-commit hooksFor more information, see thedeveloper guideof your plugin.FeaturesAdd input files usingSinglefileData:SinglefileData=DataFactory('core.singlefile')inputs['file1']=SinglefileData(file='/path/to/file1')inputs['file2']=SinglefileData(file='/path/to/file2')Specify command line options via a python dictionary andDiffParameters:d={'ignore-case':True}DiffParameters=DataFactory('diff')inputs['parameters']=DiffParameters(dict=d)DiffParametersdictionaries are validated usingvoluptuous. Find out about supported options:DiffParameters=DataFactory('diff')print(DiffParameters.schema.schema)Installationpipinstallaiida-diff verdiquicksetup# better to set up a new profileverdipluginlistaiida.calculations# should now show your calclulation pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples ./example_01.py# run test calculationverdiprocesslist-a# check record of calculationThe plugin also includes verdi commands to inspect its data types:verdidatadifflist verdidatadiffexport<PK>Developmentgitclonehttps://github.com/aiidateam/aiida-diff.cdaiida-diff pipinstall--upgradepip pipinstall-e.[pre-commit,testing]# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsSee thedeveloper guidefor more information.LicenseMIT
aiida-environ
aiida-environ
This plugin builds on top of aiida-quantumespresso.

Compatibility
Currently tested with aiida-core 1.6 and aiida-quantumespresso 3.5.

Installation
To install from PyPI:
pip install aiida-environ
To install from source:
git clone https://github.com/environ-developers/aiida-environ
pip install aiida-environ

License
See the LICENSE.txt file for more details.
aiida_eos
aiida_eosA demonstration of creating a Python package for AiiDA plugins. The goal is to create a plugin package for the Equation of State workflow demonstrated in:https://aiida-qe-demo.readthedocs.io/en/latest/6_write_your_own_workflow.html.Each commit in this repository corresponds to a step in the tutorial.Note,https://github.com/aiidateam/aiida-plugin-cuttercan be used to automate most of these steps, but here we shall do it manually explain each aspect of the package.Initial creationThe first step is to create a new repository on GitHub. We will call itaiida_eos. The repository should be created with aREADME.mdand a.gitignorefile.Interacting with the repositoryWe shall useVisual Studio Codeto interact with the repository. This is a free, open-source, cross-platform IDE, with nice integration with GitHub, and many useful extensions.Creating the package metadataThe first step is to create the package metadata. This is done by creating apyproject.tomlfile in the root of the repository. This can be used by pip to install the package, and by other tools to build the package:https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/We shall useflitto build the package. This is a simple tool that is designed to build Python packages from apyproject.tomlfile.We can initialise thepyproject.tomlfile by running:flitinitThis also generates a license file, which is crucial for allowing others to use your package.Create the package and install itWe create the initial package with a single file:src/aiida_eos/__init__.py. This file should have a docstring that describes the package, and a__version__variable.We now want to install the package in editable mode. This means that we can make changes to the package, and they will be immediately available to Python.First we create avirtual environment, and activate it:python-mvenv.venvsource.venv/bin/activateVirtual environments are a way of isolating Python environments.We can now install the package in editable mode:python-mpipinstall--upgradepip pipinstall-e.We can now import the package in Python:>>>importaiida_eos>>>aiida_eos.__version__'0.0.1'Adding formatting and linting with pre-commitWe shall usepre-committo automatically format and lint the code. This will ensure that the code is formatted consistently, and that it conforms to the style guide.We can initialise a pre-commit configuration file with:pre-commitsample-config>.pre-commit-config.yaml pre-commitautoupdate pre-commitinstallWe shall add a few additional hooks to the configuration file:black: a Python code formatterflake8: a Python linterisort: a tool to sort the Python importsAdding testingWe shall usepytestto run tests on our package. To install pytest, we shall add it to anoptional-dependenciessection in thepyproject.tomlfile. This is because we only need pytest to run the tests, and not to use the package.We can now install the package, with the optional dependencies:pipinstall-e".[test]"We can now add a test to the package. We shall add a test that checks that the package can be imported. This is done by adding atestsdirectory, and atest_import.pyfile in it.Now we can run the tests:pytestTo check the coverage of the tests, we can run:pytest--cov=aiida_eosUsing toxThetoxCLI tool is an optional way to automate both setting up the virtual environment, then running the tests within it. See thepyproject.tomlsection for the configuration. You can then simply runtoxto run the tests, ortox -e py39to run with a certain python version. 
Seetox-condafor an example of how to use tox with conda.Adding GitHub ActionsWe can useGitHub Actionsto automatically run the tests on each commit. This is done by adding a.github/workflows/test.ymlfile.Adding the rescale calcfunctionWe shall add arescalecalcfunction to the package. This is a simple function that takes a structure, and rescales it by a given factor.Note that up until now, we have not added any AiiDA specific code. Now that we want to add an AiiDA specific calcfunction, we need to add theaiida-coredependency to thepyproject.tomlfile, and alsoasefor the structure manipulation.If usingtox, we can regenerate our virtual environment with:tox-rType annotationsWe will also notice that we added type annotations to the function. This is a good practice, to provide static type inference, and also allows us to use tools likemypyto check the type annotations.Testing the calcfunctionWe can now add a test for therescalecalcfunction. In fact, if we were to use test-driven development, we would first write the test, and then write the code to make the test pass!Since the calcfunction needs to store data to an AiiDa profile, we need to create a profile for the tests. We can do this by utilising thepgtest packageto create a temporary, local PostgreSQL database cluster, which allows us to create a temporary AiiDA profile, which we add to thepyproject.tomltest dependencies.For this we will first need anPostgreSQLserver running, with which to connect. There are numerous ways to do this, such as using theDocker image, or installing viaHomebrewon macOS, or via thePostgreSQL apt repositoryon Ubuntu.To create the profile for the tests, we can add aconftest.pyfile to thetestsdirectory. Then we register the AiiDApytest fixturesin it. You can see all the available fixtures by runningpytest --fixtures(ortox -- --fixtures). The initial one we need is theaiida_profile_cleanfixture, which creates a temporary profile, and tears it down at the end of the test.We can now add a passing test for therescalecalcfunction.Setting up PostgreSQL on GitHub ActionsWe also need to start a PostgreSQL server for our GitHub Actions, by specifying a service in thetest.ymlfile. This actually uses thePostgreSQL Docker image, which is started before the tests are run.Adding the EquationOfState workchainWe shall now add the workflow itself.We add anproject.entry-points."aiida.workflows"section to thepyproject.tomlfile, to register theEquationOfStateworkchain as an entry point, for AiiDA to access. After re-installing the package (pip install --no-deps -e ".[test]"), we can now use theverdi plugin list aiida.workflows eos.basecommand and see the registered workflow.For the testing we need to set up some more resources, before running the workflow. We can do this by adding pytest fixtures to theconftest.pyfile.We also need to run against the actualpw.xexecutable. One way to do this is to use conda, to install thequantum-espressopackage, which will install thepw.xexecutable in thebindirectory of the conda environment. Another way we are developing isaiida-testing.Publishing the packageOnce you have a working package, it is good to create a version tag, and aCHANGELOG.mdto keep track of the changes.We can now publish the package toPyPI. First create an account on PyPI, and then create an API token.You can either use locallyflit publishto publish the package, or use the GitHub Actions: Add the token to thePYPI_KEYsecret in the GitHub repository settings. 
Then we add a publishing workflow that runs on a new release.Once released to PyPI (or before) you can make a PR to theAiiDA plugin registryfor others to find your plugin.
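The tutorial above introduces a rescale calcfunction that takes a structure and scales it by a given factor, using ase for the structure manipulation. A minimal sketch under those assumptions is shown below; the exact body in the tutorial repository may differ.

```python
import numpy as np

from aiida import orm
from aiida.engine import calcfunction

@calcfunction
def rescale(structure: orm.StructureData, factor: orm.Float) -> orm.StructureData:
    """Return a new structure with the cell uniformly scaled by `factor` (atoms scaled with it)."""
    atoms = structure.get_ase()  # convert to an ase.Atoms object, as in the tutorial
    new_cell = np.array(atoms.get_cell()) * factor.value
    atoms.set_cell(new_cell, scale_atoms=True)
    return orm.StructureData(ase=atoms)
```

Because it is a calcfunction, calling it with stored nodes records the rescaled structure and its link to the inputs in the provenance graph, which is the point of wrapping the plain ase manipulation this way.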
aiida-export-migration-tests
Testing of AiiDA export file migrationsTest modules for migration ofAiiDAexport files.Can be installed by addingtestingpackage, when installing AiiDA:pipinstallaiida-core[testing]Note: This only works for AiiDAv1.0.0and newer.Historical table of version comparisons between releases of this module and AiiDA, including when tests for the different export versions are first included.This moduleAiiDAExport versions (when first included)0.9.0to be released0.90.8.01.0.10.80.7.01.0.0b40.6, 0.70.6.01.0.0b40.5.21.0.0b30.5.11.0.0b30.5.01.0.0b30.4 -> 0.50.1.11.0.0b20.1.01.0.0b20.1 -> 0.2 ; 0.2 -> 0.3 ; 0.3 -> 0.4Q&AQ: Why not include these test archives in the core of AiiDA?A:In order to not take up unneccesary disk space, when installing AiiDA, these test archives have been separated out ofaiida-core. Furthermore, the legacy export versions will never change, i.e., the incremental migration functions need only be thoroughly tested once, and will therefore not be affected by changes to the core of the AiiDA code in any way.Q: What happens when the export version is upped?A:A new export archive file will be added to this repo as well as a new test-filled file (toaiida-coreundertests.tools.importexport.migration.).Q: What if the import system changes in AiiDA core?A:This repo is only for storing the export archives and their creation workflows for different export versions. All tests can be found inaiida-core.Release notes0.9.0 (April 2020)AiiDA version:To be releasedUpdate repository with export archive for export version 0.9. The file follows the naming of previous export version files having the suffix_manual, since it was not produced properly, i.e., through a new workflow run in AiiDA. Instead, the export fileexport_v0.8_manual.aiidahas been the unpacked, updated manually, and repacked. This means the latest "proper" export file isexport_v0.4.aiida.Changes are expected to be released with AiiDA version 1.2.0.New file:export_v0.9_manual.aiida0.8.0 (November 2019)AiiDA version:1.0.0Update repository with export archive for export version 0.8. The file follows the naming of previous export version files having the suffix_manual, since it was not produced properly, i.e., through a new workflow run in AiiDA. Instead, the export fileexport_v0.7_manual.aiidahas been the unpacked, updated manually, and repacked. This means the latest "proper" export file isexport_v0.4.aiida.Changes are expected to be released with AiiDA version 1.0.1.New file:export_v0.8_manual.aiida0.7.0 (July 2019)AiiDA version:1.0.0b4Update repository with export archive for export versions 0.6 and 0.7.These files are named differently, having the suffix_manual, since they were not produced properly, i.e., through a new workflow run in AiiDA. Instead, the export fileexport_v0.4.aiidahas been the unpacked, updated manually, and repacked.The fileexport_v0.5.aiidahas also been made this way, and will therefore have its name changed accordingly.New files:export_v0.5_manual.aiida(only the filename has been altered)export_v0.6_manual.aiidaexport_v0.7_manual.aiida0.6.0 (July 2019)AiiDA version:1.0.0b4This version was never released.0.5.2 (June 2019)AiiDA version:1.0.0b3Remove test files according to issue#6. The files have been moved toaiida-coreunderaiida.backends.tests.tools.importexport.migration.. 
As explained in the issue, the problem of having tests that concern files inaiida-core, is that they cannot be skipped with a helpful message usingunittest.skip(<message>).This repository has now become the keeper of the export archives and the workflows that created them. Hence, it should only ever be touched, when a new export version is introduced in AiiDA, and a new archive must be added.0.5.1 (June 2019)AiiDA version:1.0.0b3Update imports from aiida-core in accordance with the restructuring of the import/export-module. For a reference, this was done in aiida-core PR#3052.0.5.0 (June 2019)AiiDA version:1.0.0b3Update version number of this repository to reflect the currentEXPORT_VERSIONinaiida-core.The version number is now as follows:<EXPORT_VERSION>.<patch>Implement initial change related to the update ofEXPORT_VERSIONfrom 0.4 to 0.5:Migration0034: Remove fields/columns"nodeversion"and"public"from theDbNodemodel.Migration0036: Remove field/column"transport_params"from theDbComputermodel.Note: The newly createdexport_v0.5.aiidaunder/aiida-export-migration-tests/archivesisnotactually created from an AiiDA workflow, but instead a migration ofexport_v0.4.aiida. This is because the changes between the versions are insignificant for the outcome of the workflow, and that only very minor changes have occurred.0.1.1 (May 2019)AiiDA version:1.0.0b2Minor fixes due to changes inaiida-core.Folderfixtureschanges name toarchivesto match the same change happening for aiida-core.The simple AiiDA export archives in aiida-core representing the different export versions have changed names:export_v0.1_no_UPF.aiida->export_v0.1_simple.aiidaexport_v0.2_no_UPF.aiida->export_v0.2_simple.aiidaexport_v0.3_no_UPF.aiida->export_v0.3_simple.aiidaexport_v0.4_no_UPF.aiida->export_v0.4_simple.aiidaA change in the conversion message included inmetadata.jsonfiles, when a migration has been successfully performed, introduces the version of AiiDA into the message. In order to have the tests be version-agnostic, theconversion_infokey has to be handled specially in the relevant tests.When migrating an archive, it is asserted that the correct conversion message appears in the migrated archive, and theconversion_infoitem is then popped from both the migrated archive and the archive imported for comparison.0.1.0 (April 2019)AiiDA version:1.0.0b2First release.Tests for step-wise export migrations from versions 0.1, 0.2, and 0.3.Representative export files created in specializedQuantum Mobilevirtual machines by @yakutovicha to test currently known migrations from the basis of what could be exported at the time of the different export versions.Table of version comparisons (similar to the one inaiida-corePR#2478).Export versionAiiDA versionAiiDA version release dateFound changed in commitv0.1*0.6.0.101.03.2016as exported by 0.6.0v0.20.9.101.09.2017189e29fea4c7f4213d0be0914d55cccaa581c364(v0.7.0)v0.30.12.304.03.2019788d3206e0eaaf062d1a13710aaa64a18a0bbbcd(v0.10.0rc1)v0.41.0.0b209.04.20191673ec28e8b594693a0ee4cdec82669e72abcc4c(v1.0.0b1)*Due to the following reasons, we decidednotto invest an effort in making the representative archive migration for 0.1:The earliest version released on PyPi is 0.8.0rc1 (22.03.2017).The previous stable version (AiiDA 0.5.0) was not working in a virtual environment.The migration from v0.1 to v0.2 is small and quite simple. 
If an export file should be found that cannot be properly migrated, due to this step, it can be migrated manually with little effort.Representative export files creationTo create the representative export files, a simple workflow was developed by @yakutovicha that runs two consequtiveQuantumESPRESSOPW calculations. First an SCF calculation, followed by an MD calculation.The workflows can be found in the repository's folder.qm, and correspond to the following export versions andQuantum Mobileeditions:Export versionWorkflowRun workflow scriptQuantum Mobilev0.1*---v0.2wf.pyrun_wf.pyhistorical_aiida_0.9.1v0.3wf.pyrun_wf.pyv19.03.0v0.4wf_aiida_1_0.pyrun_wf_aiida_1_0.pyin development*See sectionRelease notes#0.1.0 (April 2019).They contain the following AiiDA node types (according to AiiDA1.0.0):Node typeParent-type tree, up toNodeFloatNumericType -> BaseType/Data -> NodeIntNumericType -> BaseType/Data -> NodeDictData -> NodeFolderDataData -> NodeRemoteDataData -> NodeStructureDataData -> NodeUpfDataSinglefileData -> Data -> NodeKpointsDataArrayData -> Data -> NodeBandsDataKpointsData -> ArrayData -> Data -> NodeTrajectoryDataArrayData -> Data -> NodeCodeData -> NodeWorkChainNodeWorkflowNode -> ProcessNode -> NodeCalcJobNodeCalculationNode -> ProcessNode -> Node
aiida-fenics
Enabling usage of the FEniCS computing platform with AiiDA
This software contains plugins that enable the usage of the FEniCS computing platform with the AiiDA framework. It includes special plugins for software building on FEniCS, like the phase-field dislocation interaction program Pdfdisloc. This enables provenance tracking for such simulations and workflows, which is needed for research data management, reproducibility and FAIR data.

Documentation
Hosted at http://aiida-fenics.readthedocs.io/en/develop/index.html. For other information see the AiiDA-core docs, or the FEniCS project.

License: MIT license. See the license file.
How to cite: If you use this package please consider citing:
Comments/Disclaimer:

Contents
Introduction
Installation Instructions
Code Dependencies
Further Information

Introduction
This is a Python package (AiiDA plugin and utility) allowing the use of the Pdfdisloc code in the AiiDA framework. The Pdfdisloc program contains workflows based on FEniCS, a finite element solver that is widely applied in the materials science and physics community.
The plugin consists of:
1. A data structure representing meshes.
2. A pdfdisloc calculation.

Installation Instructions
From the aiida-fenics folder (after downloading the code, recommended) use:
$ pip install .
# or, which is very useful to keep track of the changes (developers):
$ pip install -e .
To uninstall use:
$ pip uninstall aiida-fenics
Or install the latest release version from PyPI:
$ pip install aiida-fenics

Test Installation
To test whether the installation was successful use:
$ verdi plugin list aiida.calculations
# example output:
## Pass as a further parameter one (or more) plugin names
## to get more details on a given plugin.
...
* fenics.dfdisloc
You should see 'fenics.*' in the list.
The other entry points can be checked with the AiiDA factories (Data, Workflow, Calculation, Parser); this is done in test_entry_points.py.
We suggest running all the (unit) tests in the aiida-fleur/aiida_fleur/tests/ folder:
$ bash run_all_cov.sh

Code Dependencies
Requirements are listed in setup.json. Most important is:
aiida_core >= 1.3.0
Mainly AiiDA: download from www.aiida.net -> Download; install and setup -> aiida's documentation.

Further Information
Usage examples are shown in 'examples'.

Acknowledgements
Besides the Forschungszentrum Juelich GmbH (FZJ), this project was supported within the hub Information at the FZJ by the Helmholtz Metadata Collaboration (HMC), an incubator-platform of the Helmholtz Association within the framework of the Information and Data Science strategic initiative.
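The entry point fenics.dfdisloc shown in the example output above can also be loaded programmatically. Since the README does not list the calculation's input ports, the sketch below avoids guessing them and instead introspects the process specification, using only standard aiida-core machinery.

```python
from aiida.plugins import CalculationFactory

# Entry point taken from the `verdi plugin list` output shown above
PdfdislocCalculation = CalculationFactory('fenics.dfdisloc')

# The README does not document the input ports, so rather than inventing them we
# inspect the process specification to see what the calculation actually expects:
spec = PdfdislocCalculation.spec()
for name, port in spec.inputs.items():
    print(name, '-', getattr(port, 'help', None))
```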
aiida_firecrest
aiida-firecrest [IN-DEVELOPMENT]AiiDA Transport/Scheduler plugins for interfacing withFirecRESTInstallation(pip not yet available)pipinstallaiida-firecrestOr for development:gitclonecdaiida-firecrest pipinstall-e.CLI Usage$aiida-firecrest-cli--helpUsage: aiida-firecrest-cli [OPTIONS] COMMAND [ARGS]...FireCrest CLI.Options:--config PATH Path to the connection file (default: .firecrest-config.json).--help Show this message and exit.Commands:fs File system operations.slurm Slurm operations.stat Status operations.The configuration file should look like this:{"url":"https://firecrest.cscs.ch","token_uri":"https://auth.cscs.ch/auth/realms/cscs/protocol/openid-connect/token","client_id":"username-client","client_secret":"xyz","machine":"daint","scratch_path":"/scratch/snx3000/username"}scratch_pathis optional. If specified, all operations will be relative to this path.$aiida-firecrest-clistatUsage: aiida-firecrest-cli stat [OPTIONS] COMMAND [ARGS]...Status operations.Options:--help Show this message and exit.Commands:parameters Get parameters that can be configured in environment files.service Information about a service.services List available services.system Information about a system.systems List available systems.$aiida-firecrest-clifsUsage: aiida-firecrest-cli fs [OPTIONS] COMMAND [ARGS]...File system operations.Options:--help Show this message and exit.Commands:cat Get the contents of a file.chmod Change the mode of a file.cwd Get the current working directory.ls List files in a path.putfile Upload file to the remote.stat Get information about a file.$aiida-firecrest-clislurmUsage: aiida-firecrest-cli slurm [OPTIONS] COMMAND [ARGS]...Slurm operations.Options:--help Show this message and exit.Commands:sacct Retrieve information for all jobs.squeue Retrieves information for queued jobs.submit Submit a job script.Code StyleTo format the code and lint it, runpre-commit:pre-commitrun--all-filesTestingIt is recommended to run the tests viatox.toxBy default, the tests are run using a mock FirecREST server (in a temporary folder). You can also provide connections details to a real FirecREST server:tox----firecrest-config=".firecrest-config.json"The format of the.firecrest-config.jsonfile is:{"url":"https://firecrest.cscs.ch","token_uri":"https://auth.cscs.ch/auth/realms/cscs/protocol/openid-connect/token","client_id":"username-client","client_secret":"xyz","machine":"daint","scratch_path":"/scratch/snx3000/username"}
aiida-fireworks-scheduler
aiida-fireworks-schedulerAiiDA plugin for usingfireworksas the execution engine forCalcJobProcess.The main advantage of using theFwScheduler, as provided in this plugin, compared to the standard AiiDA scheduler plugins is that it allows more flexible job placement. For example, your may be forced to submit very large jobs to the cluster (or simply such jobs goes through the queue faster!), or that the cluster has a strict limit on the number of jobs that can be in the queue. UsingFwScheduler, a single allocation of the resources from the scheduler (SGE, PBSpro, SLURM etc.) can be used to run multiple AiiDACalcJobs in serial or in parallel, depending on the user configuration. In addition, AiiDA jobs can be run along side other workflows in fireworks.Repository contents.github/:Github Actionsconfigurationci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryaiida_fireworks_scheduler/: The main source code of the plugin packagefwscheduler.py: A newFWSchedulerclass.scripts/arlauncher.py: A specialrlaunchscript for launching jobs respecting the walltime limits.jobs.py: SpecialisedAiiDAJobFireworkfor running AiiDA prepared jobs.fworker.py: SpecialisedAiiDAFWorkerto generate query for selecting appropriate jobs from the FireServer.docs/: A documentation template ready for publication onRead the Docsexamples/: An example of how to submit a calculation using this plugintests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest..coveragerc: Configuration ofcoverage.pytool reporting which lines of your plugin are covered by tests.gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the DocsLICENSE: License for your pluginMANIFEST.in: Configure non-Python files to be included for publication onPyPIREADME.md: This fileconftest.py: Configuration of fixtures forpytestpytest.ini: Configuration ofpytesttest discoverysetup.json: Plugin metadata for registration onPyPIand theAiiDA plugin registry(including entry points)setup.py: Installation script for pip /PyPIFeaturesFWSchedulerscheduler plugin to submit jobs to LaunchPad managed byfireworks.arlaunchcommand for launching jobs on the cluster machine.verdi data fireworks-schedulercommand line tool for duplicating existingComputer/Coldfor switching toFwScheduler.InstallationOn the local machine where AiiDA is installed:pipinstallaiida-fireworks-scheduler[local]On the remote machine where jobs to be launched:pipinstallaiida-fireworks-schedulerUsageSimply create a new computer usingverdi computer setupand select thefwscheduler. Configure yourfireworksconfiguration following the guidehere.Note that you must configure theLAUNCHPAD_LOCsetting in the file as defined by theFW_CONFIG_FILEenvironment variable to point to yourmy_launchpad.yamlfile on BOTH the local and remote machines. On the local machine, it will be picked up by the daemon.In addition, on the remote machine, setup yourmy_fworker.yamlwith special directives for identifying the computer and username. 
These files can be generated using:

verdi data fireworks-scheduler generate-worker -Y COMPUTER -mpinp NUM_MPI_PROCESSORS

Note that each *worker* can only launch jobs of a particular size (number of MPI processors). But you can always combine multiple workers in one or more cluster jobs.

At runtime, jobs need to be launched with the arlaunch command on the remote machine.

Adding walltime selectors for standard fireworks jobs

Standard fireworks jobs can also be selected based on the requested walltime using arlaunch. If a job has a spec._walltime_seconds key, it will only be selected to run if there is sufficient time left. However, unlike AiiDA jobs, this walltime limit is not enforced, and the launch can proceed even if the requested seconds have elapsed.

Development

git clone https://github.com/zhubonan/aiida-fireworks-scheduler .
cd aiida-fireworks-scheduler
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install  # install pre-commit hooks
pytest -v  # discover and run all tests

See the developer guide for more information.

Contact
zhubonan@hotmail.com
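As a hedged sketch of the computer setup described above, the scheduler can also be selected through the Python API instead of `verdi computer setup`. The scheduler entry point name 'fwscheduler' comes from the text above; the label, hostname, workdir and the 'core.ssh' transport name (aiida-core >= 2.0 naming) are placeholders or assumptions:

# Hedged sketch: create a Computer that uses the fwscheduler scheduler plugin.
from aiida import load_profile, orm

load_profile()

computer = orm.Computer(
    label='hpc-fw',                        # placeholder label
    hostname='login.example-cluster.org',  # placeholder hostname
    transport_type='core.ssh',             # 'ssh' on aiida-core 1.x
    scheduler_type='fwscheduler',          # entry point provided by this plugin
    workdir='/scratch/{username}/aiida/',
).store()

# Transport details (keys, proxies, ...) are still configured separately, e.g.:
#   verdi computer configure core.ssh hpc-fw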
aiida-fleur
FLEUR with AiiDAThis software contains a plugin that enables the usage of the all-electron DFTFLEUR codewith theAiiDA framework.Developed atForschungszentrum Jülich GmbHCompatibility matrixFLEUR PluginAiiDA COREPythonFLEUR/XML file versionv2.0.0 < v3.0.0/<=0.37v1.2.0 < v2.0.0/<=0.35v1.0.0 < v1.2.0/<=0.33< v0.6.3/<=0.31Documentation and User SupportHosted athttp://aiida-fleur.readthedocs.io/en/develop/index.html. For other information see the AiiDA-core docs orhttp://www.flapw.de.Users can post any questions in the Fleur userforumFor bugs, feature requests and further issues please use the issue tracker on github of the aiida-fleur repository.License:MIT license. See the license file.How to cite:If you use this package please consider citing:J. Broeder, D. Wortmann, and S. Blügel, Using the AiiDA-FLEUR package for all-electron ab initio electronic structure data generation and processing in materials science, In Extreme Data Workshop 2018 Proceedings, 2019, vol 40, p 43-48Comments/Disclaimer:The plug-in and the workflows will only work with a Fleur version using xml files as I/O, i.e >v0.27.ContentsIntroductionInstallation InstructionsCode DependenciesFurther InformationIntroductionThis is a python package (AiiDA plugin, workflows and utility) allowing to use the FLEUR-code in the AiiDA Framework. The FLEUR-code is an all-electron DFT code using the FLAPW method, that is widely applied in the material science and physics community.The plugin :The Fleur plugin consists of:1. A data-structure representing input files and called FleurinpData. 2. inpgen calculation 3. FLEUR calculation 4. Workchains 5. utilityWorkchains in this package:workflow entry point nameDescriptionfleur.scfSCF-cycle of Fleur. Converge the charge density and the Total energy with multiple FLEUR runsfleur.eosCalculate and Equation of States with FLEUR (currently cubic systems only)fleur.dosCalculate a Density of States (DOS) with FLEURfleur.bandCalculate a Band structure with FLEURfleur.relaxRelaxation of the atomic positions of a crystal structure with FLEURfleur.init_clsCalculate initial corelevel shifts and formation energies with FLEURfleur.coreholeWorkflow for corehole calculations, calculation of Binding energies with FLEURfleur.dmiCalculates Dzyaloshinskii–Moriya Interaction energy dispersion of a spin spiralfleur.ssdispCalculates exchange interaction energy dispersion of a spin spiralfleur.maeCalculates Magnetic Anisotropy EnergySee the AiiDA documentation for general info about the AiiDA workflow system or how to write workflows.Utility/tools:filenameDescriptionStructure_util.pyConstains some methods to handle AiiDA structures (some of them might now be methods of the AiiDA structureData, if so use them from there!)merge_parameter.pyMethods to handle parameterData nodes, i.e merge them. Which is very useful for all-electron codes, because instead of pseudo potentialsfamilies you can create now families of parameter nodes for the periodic table.read_cif.pyThis can be used as stand-alone to create StructureData nodes from .cif files from an directory tree.Utility and tools, which are independend of AiiDA are moved to themasci-tools(material science tools) repository, which is a dependency of aiida-fleur.Command line interface (CLI)Besides the python API, aiida-fleur comes with a builtin CLI:aiida-fleur. This interface is built using the click library and supports tab-completion.To enable tab-completion, add the following to your shell loading script, e.g. 
the .bashrc or virtual environment activate script:eval "$(_AIIDA_FLEUR_COMPLETE=source aiida-fleur)"the main subcommands include:data: Commands to create and inspect data nodes fleurinp Commands to handle `FleurinpData` nodes. parameter Commands to create and inspect `Dict` nodes containing FLAPW parameters structure Commands to create and inspect `StructureData` nodes. launch: Commands to launch workflows and calcjobs of aiida-fleur banddos Launch a banddos workchain corehole Launch a corehole workchain create_magnetic Launch a create_magnetic workchain dmi Launch a dmi workchain eos Launch a eos workchain fleur Launch a base_fleur workchain. init_cls Launch an init_cls workchain inpgen Launch an inpgen calcjob on given input If no code is... mae Launch a mae workchain relax Launch a base relax workchain # TODO final scf input scf Launch a scf workchain ssdisp Launch a ssdisp workchain plot: Invoke the plot_fleur command on given nodes workflow: Commands to inspect aiida-fleur workchains and prepare inputsfor example to launch an scf workchain on a given structure execute:$ aiida-fleur launch scf -i <inpgenpk> -f <fleurpk> -S <structurepk>the command can also process structures in any formatasecan handle, this includesCif,xsfandposcarfiles. In such a case simply parse the path to the file:$ aiida-fleur launch scf -i <inpgenpk> -f <fleurpk> -S ./structure/Cu.cifInstallation InstructionsFrom the aiida-fleur folder (after downloading the code, recommended) use:$ pip install . # or which is very useful to keep track of the changes (developers) $ pip install -e .To uninstall use:$ pip uninstall aiida-fleurOr install latest release version from pypi:$ pip install aiida-fleurTest InstallationTo test rather the installation was successful use:$verdipluginslistaiida.calculations# example output:## Pass as a further parameter one (or more) plugin names## to get more details on a given plugin....*fleur.fleur*fleur.inpgenYou should see 'fleur.*' in the listThe other entry points can be checked with the AiiDA Factories (Data, Workflow, Calculation, Parser). (this is done in test_entry_points.py)We suggest to run all the (unit)tests in the aiida-fleur/aiida_fleur/tests/ folder.$ bash run_all_cov.shCode DependenciesRequirements are listed inpyproject.tomlmost important are:aiida_core >= 2.0lxmlasemasci-toolsMainly AiiDA:Download fromwww.aiida.net -> Downloadinstall and setup ->aiida's documentationEasy plotting and other useful routines that do not depend on aiida_core are part of themasci-tools(material science tools) repository.For easy plotting we recommend using 'plot_methods' from masci-tools, which are also deployed by the 'plot_fleur(<node(s)>)' function.Further InformationThe plug-in source code documentation ishere. also some documentation of the plug-in, further things can be found atwww.flapw.de. Usage examples are shown in 'examples'.AcknowledgementsBesides the Forschungszentrum Juelich, this work is supported by the European MaX Centre of Excellence 'Materials design at the Exascale'MaXfunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598 and under grant agreement No. 824143. This work is further supported by the Joint Lab Virtual Materials Design (JLVMD) of the Forschungszentrum Jülich.For this work essential is AiiDA, which itself is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation.
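The `aiida-fleur launch scf` CLI call shown above can also be expressed with the Python API. This is a hedged sketch only: the 'fleur.scf' entry point comes from the workchain table above, but the code labels and node pk are placeholders, and the exact input port names (fleur, inpgen, structure) should be checked against `FleurScf.spec()`:

# Hedged sketch mirroring `aiida-fleur launch scf -i <inpgen> -f <fleur> -S <structure>`.
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import WorkflowFactory

load_profile()

FleurScf = WorkflowFactory('fleur.scf')

builder = FleurScf.get_builder()
builder.fleur = orm.load_code('fleur@mycluster')    # placeholder code labels
builder.inpgen = orm.load_code('inpgen@mycluster')
builder.structure = orm.load_node(1234)             # placeholder StructureData pk

node = submit(builder)
print(f'Submitted FleurScfWorkChain<{node.pk}>')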
aiida-gaussian
aiida-gaussianAiiDA plugin for the Gaussian quantum chemistry softwareFeaturesGaussian input can be provided as a python dictionary following the convention defined bypymatgenparameters={'functional':'PBE1PBE','basis_set':'6-31g','charge':0,'multiplicity':1,'link0_parameters':{'%chk':'aiida.chk','%mem':"1024MB",'%nprocshared':4,},'route_parameters':{'scf':{'maxcycle':128,'cdiis':None,},'nosymm':None,'output':'wfx','opt':'tight',},'input_parameters':{# appended at the end of the input'output.wfx':None},}Inroute_parameters, specifyingkey: Noneadds onlykeywithout the equals sign to the input script.Parsing of the results is performed with theccliblibrary and by default all of its output is stored in theoutput_parametersnode.Additionally, simple plugins to submit the Gaussian utilitiesformchkandcubegenare provided.Installationpipinstallaiida-gaussianThis installs the plugins to the AiiDA instance (to double-check, one can list all installed plugins byverdi plugin list aiida.calculations). After this, the Gaussian codes should be set up using the plugins (https://aiida.readthedocs.io/projects/aiida-core/en/latest/).UsageA quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples# Submit test calculation (argument is the label of gaussian code)verdirunexample_01_opt.pygaussian09For maintainersTo create a new release, clone the repository, install development dependencies withpip install '.[dev]', and then executebumpver update --major/--minor/--patch. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release.Additional notes:Use the--dryoption to preview the release change.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to switch the release tag.
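To show how a parameters dictionary like the one above is wired into a calculation, here is a hedged sketch. The 'gaussian' entry point and the structure/parameters port names are assumptions to check against `GaussianCalculation.spec()`; the code label and node pk are placeholders:

# Hedged sketch: submit a Gaussian calculation with a pymatgen-style parameters dict.
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

load_profile()

parameters = {   # trimmed-down version of the dictionary shown above
    'functional': 'PBE1PBE',
    'basis_set': '6-31g',
    'charge': 0,
    'multiplicity': 1,
    'route_parameters': {'nosymm': None, 'opt': 'tight'},
}

GaussianCalculation = CalculationFactory('gaussian')
builder = GaussianCalculation.get_builder()
builder.code = orm.load_code('gaussian09@mycluster')   # placeholder code label
builder.structure = orm.load_node(1234)                # placeholder StructureData pk
builder.parameters = orm.Dict(dict=parameters)
builder.metadata.options.resources = {'num_machines': 1, 'tot_num_mpiprocs': 4}

submit(builder)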
aiida-gaussian-datatypes
AiiDA Gaussian Data PluginPlugin to handle GTO-based basis sets and pseudopotentials and manage them as first-class citizens in AiiDA.Commandline usageAfter the installation, you will get new commands inverdi data$verdidataUsage: verdi data [OPTIONS] COMMAND [ARGS]...Inspect, create and manage data nodes.Options:-h, --help Show this message and exit.Commands:array Manipulate ArrayData objects.bands Manipulate BandsData objects.cif Manipulation of CIF data objects.parameter View and manipulate Dict objects.plugins Print a list of registered data plugins or details ofa...remote Managing RemoteData objects.structure Manipulation of StructureData objects.trajectory View and manipulate TrajectoryData instances.upf Manipulation of the upf families.gaussian.basisset Manage basis sets for GTO-based codesgaussian.pseudo Manage Pseudopotentials for GTO-based codes$verdidatagaussian.basissetUsage: verdi data gaussian.basisset [OPTIONS] COMMAND [ARGS]...Manage basis sets for GTO-based codesOptions:-h, --help Show this message and exit.Commands:dump Print specified Basis Setsimport Add a basis sets from a file to the databaselist List Gaussian Basis Sets$verdidatagaussian.pseudoUsage: verdi data gaussian.pseudo [OPTIONS] COMMAND [ARGS]...Manage Pseudopotentials for GTO-based codesOptions:-h, --help Show this message and exit.Commands:dump Print specified Pseudopotentialimport Add a pseudopotential from a file to the databaselist List Gaussian PseudopotentialsExamplesImport and use Basis Set from CP2KTo import a specific basis set from a file with basis sets in CP2K's native format, simply use:$verdidatagaussian.basissetimport--symHedata/BASIS_MOLOPTInfo: 2 Gaussian Basis Sets found:Nr. Sym Names Tags # Val. e⁻ Version----- ----- ----------------------------------------- ------------------------- ----------- ---------1 He SZV-MOLOPT-SR-GTH-q2, SZV-MOLOPT-SR-GTH SZV, MOLOPT, SR, GTH, q2 2 12 He DZVP-MOLOPT-SR-GTH-q2, DZVP-MOLOPT-SR-GTH DZVP, MOLOPT, SR, GTH, q2 2 1Which Gaussian Basis Set do you want to add? ('n' for none, 'a' for all, comma-seperated list or range of numbers): 2Info: Adding Gaussian Basis Set for: He (DZVP-MOLOPT-SR-GTH-q2)... DONE$verdidatagaussian.basissetlistInfo: 1 Gaussian Basis Sets found:ID Sym Names Tags # Val. e⁻ Version------------------------------------ ----- ----------------------------------------- ------------------------- ----------- ---------4a173d43-b022-4e1e-aca9-c4db51da223b He DZVP-MOLOPT-SR-GTH-q2, DZVP-MOLOPT-SR-GTH DZVP, MOLOPT, SR, GTH, q2 2 1Notes:The command line argument--sym Heis optional (leaving it away will simply show all available entries)The plugin automatically filters already imported basis setsTo reference this in averdiscript, you can use the following snippet:fromaiida.pluginsimportDataFactoryBasisSet=DataFactory('gaussian.basisset')basis_He=BasisSet.get(element="He",name="DZVP-MOLOPT-SR-GTH")# the generic way using BasisSet.objects.find(...) works too, of courseNotes:You don't have to specify the full name (DZVP-MOLOPT-SR-GTH-q2), the shorter name (DZVP-MOLOPT-SR-GTH) also worksImport and use Pseudopotential from CP2KTo import a specific pseudopotential from a file with pseudopotentials in CP2K's native format, simply use:$verdidatagaussian.pseudoimport--symHedata/GTH_POTENTIALSInfo: 4 Gaussian Pseudopotentials found:Nr. Sym Names Tags Val. 
e⁻ (s, p, d) Version----- ----- ------------------------------------------ ------------- ------------------- ---------1 He GTH-BLYP-q2, GTH-BLYP GTH, BLYP, q2 2, 0, 0 12 He GTH-BP-q2, GTH-BP GTH, BP, q2 2, 0, 0 13 He GTH-PADE-q2, GTH-LDA-q2, GTH-PADE, GTH-LDA GTH, PADE, q2 2, 0, 0 14 He GTH-PBE-q2, GTH-PBE GTH, PBE, q2 2, 0, 0 1Which Gaussian Pseudopotentials do you want to add? ('n' for none, 'a' for all, comma-seperated list or range of numbers): 4Info: Adding Gaussian Pseudopotentials for: He (GTH-PBE-q2)... DONE$verdidatagaussian.pseudolistInfo: 1 Gaussian Pseudopotential found:ID Sym Names Tags Val. e⁻ (s, p, d) Version------------------------------------ ----- -------------------------------------------- -------------- ------------------- ---------5838b0b7-336a-4b97-b76a-e5c42a4e98ac He GTH-PBE-q2, GTH-PBE GTH, PBE, q2 2, 0, 0 1Notes:The command line argument--sym Heis optional (leaving it away will simply show all available entries)The plugin automatically filters already imported basis setsTo reference this in averdiscript, you can use the following snippet:fromaiida.pluginsimportDataFactoryPseudopotential=DataFactory('gaussian.pseudo')pseudo_He=Pseudopotential.get(element="He",name="GTH-PBE")# the generic way using Pseudopotential.objects.find(...) works too, of courseNotes:You don't have to specify the full name (GTH-PBE-q2), the shorter name (GTH-PBE) also works
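Alongside the `verdi data gaussian.basisset list` and `verdi data gaussian.pseudo list` commands above, the stored nodes can also be listed programmatically. This small sketch uses only standard aiida-core API plus the two entry point names already shown in the snippets above:

# List every stored basis set and pseudopotential with the QueryBuilder.
from aiida import load_profile
from aiida.orm import QueryBuilder
from aiida.plugins import DataFactory

load_profile()

BasisSet = DataFactory('gaussian.basisset')
Pseudopotential = DataFactory('gaussian.pseudo')

for label, cls in (('basis sets', BasisSet), ('pseudopotentials', Pseudopotential)):
    qb = QueryBuilder().append(cls)
    print(f'{qb.count()} {label} stored:')
    for (node,) in qb.iterall():
        print('  ', node.uuid)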
aiida-graphql
aiida-graphqlStrawberry-based GraphQL Server for AiiDAWhy GraphQL when there is already the REST API? Seehttps://www.howtographql.com/basics/1-graphql-is-the-better-rest/... a lot of possible optimizations and fits the graph-based structure of the AiiDA DB a lot better than a REST API.RequirementsPython 3.7+https://pypi.org/project/strawberry-graphql/0.16.7+https://pypi.org/project/aiida-core/1.0.0b6+For development:https://poetry.eustace.io/Why Strawberry for GraphQL? It uses graphql-core v3 (while graphene is still stuck with v2), uses typings and dataclasses for both validation and schema generation. And it uses modern Python to write the schema, in comparison to theschema-first approach.Why Python 3.7+? It's the future, and for Strawberry. In fact, were it not for a bug inuvloopthis would be Python 3.8+ (for the walrus operator). And given the timeline these projects are running for, we'll probably see Python 3.9 until people effectively start using it.Why Poetry? I wanted to get away fromsetuptoolsand used Poetry already in adifferent projectand liked the virtualenv integration.UsageDevelopmentInstalling the dependencies:gitclonehttps://github.com/dev-zero/aiida-graphql.gitcdaiida-graphql# for poetry installation use the official documentationpoetryinstallTo run the development server:$poetryrunstrawberryserveraiida_graphql.schemathen visithttp://localhost:8000/graphqlwith your browser.Example query:{computers{uuidnamedescriptionschedulerTypetransportType}}Available fieldsnodecalculationcomputerusersinglefilegaussian_basissets (only if theaiida-gaussian-datatypesis installed)Documentation and schema are embedded in the development server.
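The example query above can be sent to the development server from Python as well. This client-side sketch assumes only that the `requests` package is installed and that the server is running on the http://localhost:8000/graphql endpoint mentioned above:

# Send the example GraphQL query to the development server and print the result.
import requests

query = """
{
  computers {
    uuid
    name
    description
    schedulerType
    transportType
  }
}
"""

response = requests.post('http://localhost:8000/graphql', json={'query': query})
response.raise_for_status()
print(response.json()['data']['computers'])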
aiida-gromacs
aiida-gromacs

The GROMACS plugin for AiiDA aims to enable the capture and sharing of the full provenance of data when parameterising and running molecular dynamics simulations. This plugin is being developed as part of the Physical Sciences Data Infrastructure programme to improve data practices within the Physical Sciences remit area in the UK.

This package is currently a work in progress and we will be adding much more complete functionality to this plugin in the coming months.

The design pattern we are aiming for is to let researchers capture the full data provenance of their simulations simply by switching on an AiiDA conda environment and modifying their command lines very slightly. This means that you should gain access to powerful FAIR data practices without wholesale cultural or usage-pattern shifts in your daily work.

Documentation
See here for documentation for users and developers.

Contact
james.gebbie@stfc.ac.uk
jas.kalayan@stfc.ac.uk
aiida-grouppathx
aiida-grouppathx

AiiDA plugin that provides the GroupPathX class.

This plugin was kickstarted using the AiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.

Features and usage

This package provides an enhanced version of GroupPath - GroupPathX. The main feature is that it allows nodes stored under a group to be named by an alias. This way, one can address a specific Node as GroupPath('mygroup/structure1'). In addition, a show_tree method is provided for visualising the content of a specific GroupPathX, similar to the command line tool tree that works on the file system. The goal is to provide a way of managing data with an interface that is similar to a file-system-based approach.

tree aiida_grouppathx
aiida_grouppathx
├── __init__.py
├── pathx.py
└── __pycache__
    ├── __init__.cpython-38.pyc
    └── pathx.cpython-38.pyc

In analogy:

from aiida.orm import Int
from aiida_grouppathx import GroupPathX

path = GroupPathX('group1')
path.get_or_create_group()
path['group2'].get_or_create_group()
path.add_node(Int(1).store(), 'int1')
path['group2'].add_node(Int(1).store(), 'int2')
path.show_tree()

gives

group1
├── group2
│   └── int2 *
└── int1 *

where the * highlights that a leaf is a Node rather than a group. This kind of mark-up can be customised, for example, to show the status of workflow nodes.

def decorate_name(path):
    if path.is_node:
        return ' ' + str(path.get_node())

path.show_tree(decorate_name)

gives:

group1
├── group2
│   └── int2 uuid: de79d244-d3bb-4f61-9d3a-b3f09e1afb72 (pk: 7060) value: 1
└── int1 uuid: e2f70643-0c25-4ae5-929a-a3e055969d10 (pk: 7059) value: 1

Multiple decorators can be combined:

from aiida_grouppathx import decorate_with_group_names, decorate_with_label, decorate_with_uuid_first_n

path.show_tree(decorate_with_group_names, decorate_with_label, decorate_with_uuid_first_n())

output:

group1
├── group2
│   └── int2  group1/group2 | | de79d244-d3b
└── int1  group1 | | e2f70643-0c2

The stored nodes can be accessed through:

group1['group2/int2'].get_node()        # Gives node de79d2
group1.browse.group2.int2().get_node()  # Also gives node de79d2

and also

path.browse.<tab>
path.browse.int1()             # To access the `group1/int1` path
path.browse.int1().get_node()  # To access the `group1/int1` node

Please see pathx.py for the extended methods, and the official documentation for the concept of GroupPath. The package does not change how Group and Node operate in AiiDA. It is only built on top of the existing system as an alternative way to access the underlying data.

Installation

pip install aiida-grouppathx
verdi quicksetup  # better to set up a new profile

Development

git clone https://github.com/zhubonan/aiida-grouppathx .
cd aiida-grouppathx
pip install --upgrade pip
pip install -e .[pre-commit,testing]  # install extra dependencies
pre-commit install  # install pre-commit hooks
pytest -v  # discover and run all tests

See the developer guide for more information.

Contact
zhubonan@hotmail.com
aiida-gudhi
aiida-gudhi

AiiDA plugin for the GUDHI library for topological data analysis.

Installation

git clone https://github.com/ltalirz/aiida-gudhi .
cd aiida-gudhi
pip install -e .  # also installs aiida, if missing (but not postgres)
# pip install -e .[precommit,testing]  # install extras for more features
verdi quicksetup  # better to set up a new profile
verdi calculation plugins  # should now show your calculation plugins

Usage

Here goes a complete example of how to submit a test calculation using this plugin.

Contact
leopold.talirz@gmail.com
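Since the usage example promised above is not included in this README, here is a hedged sketch only: the entry point name 'gudhi.rdm' and the code label used below are hypothetical placeholders, not the plugin's documented API, and the remaining inputs should be taken from the calculation's spec:

# Hypothetical sketch of submitting a calculation with this plugin.
from aiida import load_profile, orm
from aiida.engine import submit
from aiida.plugins import CalculationFactory

load_profile()

RipsCalculation = CalculationFactory('gudhi.rdm')   # hypothetical entry point name

builder = RipsCalculation.get_builder()
builder.code = orm.load_code('rips@localhost')      # placeholder code label
# ... set the remaining inputs according to RipsCalculation.spec().inputs ...

submit(builder)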
aiida-gulp
aiida-gulp

AiiDA plugin for running the GULP code.

Documentation: https://readthedocs.org/projects/aiida-gulp

Installation

To install from Conda (recommended)::

    >> conda install -c conda-forge aiida-gulp aiida-core.services

To install from pypi::

    >> pip install aiida-gulp

To install the development version:

    >> git clone https://github.com/chrisjsewell/aiida-gulp .
    >> cd aiida-gulp
    >> pip install -e .  # also installs aiida, if missing (but not postgres)
    >> # pip install -e .[pre-commit,testing]  # install extras for more features
    >> verdi quicksetup  # set up a new profile
    >> verdi calculation plugins  # should now show the calculation plugins (with prefix gulp.)

Development

Testing against mock GULP executables

Because GULP is licensed software, it is not possible to source a copy of the executable on Travis CI. Therefore, a mock executable (gulp_mock) has been created for testing purposes (which also speeds up test runs). This executable computes the md5 hash of the supplied input file and tries to match it against a dictionary of precomputed hashes. If found, the executable will write the matching output (from test/output_files) to stdout.

The following will discover and run all unit tests::

    >> pip install -e .[testing]
    >> reentry scan -r aiida
    >> pytest -v

To omit tests which call external executables (like gulp)::

    >> pytest --gulp-skip-exec

To call the actual executable (e.g. gulp instead of gulp_mock)::

    >> pytest --gulp-no-mock

To output the results of calcjob executions to a specific directory::

    >> pytest --gulp-workdir "test_workdir"

Coding Style Requirements

The code style is tested using flake8, with the configuration set in .flake8, and code should be formatted with black. Installing with aiida-gulp[code_style] makes the pre-commit package available, which will ensure these tests are passed by reformatting the code and testing for lint errors before submitting a commit. It can be set up by::

    >> cd aiida-gulp
    >> pre-commit install

Optionally you can run black and flake8 separately::

    >> black .  # recursively find and format files in-place
    >> flake8

Editors like VS Code also have automatic code reformat utilities, which can adhere to this standard.

Contact
chrisj_sewell@hotmail.com
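The md5-lookup mechanism described above is a generally useful pattern for testing against licensed codes. The following is a minimal standalone sketch of that idea, not the actual gulp_mock implementation; the hash and file paths are hypothetical placeholders:

#!/usr/bin/env python
# Minimal sketch of a mock executable: hash the input file, look the hash up
# in a table of known test cases, and replay the pre-computed output.
import hashlib
import sys
from pathlib import Path

KNOWN_OUTPUTS = {
    # md5 of an input file -> pre-computed output file to replay (placeholders)
    "d41d8cd98f00b204e9800998ecf8427e": "test/output_files/example.gout",
}

def main(input_path: str) -> int:
    digest = hashlib.md5(Path(input_path).read_bytes()).hexdigest()
    try:
        output_file = KNOWN_OUTPUTS[digest]
    except KeyError:
        sys.stderr.write(f"no precomputed output for input hash {digest}\n")
        return 1
    sys.stdout.write(Path(output_file).read_text())
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))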
aiida-hyperqueue
AiiDA HyperQueue pluginAiiDA plugin for theHyperQueuemetascheduler.❗️ This package is still in the early stages of development and we will most likely break the API regularly in new 0.X versions. Be sure to pin the version when installing this package in scripts.FeaturesAllows task farming on Slurm machines through the submission of AiiDA calculations to theHyperQueuemetascheduler. See theDocumentationfor more information on how to install and use the plugin.
aiida-icl
aiida_iclInstallation:$condainstall-ccjs14aiida-icl#or$pipinstallaiida-iclAiiDA plugin for working with HPC at Imperial College London.Provides theaiida.schedulerentry point:pbspro_cx1.To create a new Computer:fromaiidaimportload_profile()fromaiida_icl.utilsimportget_cx1_computerload_profile()computer=get_cx1_computer('/path/to/workdir','/Users/user_name/.ssh/id_rsa')print(computer)icl_cx1 (login.cx1.hpc.ic.ac.uk), pk: 8To generate calculationmetadata.options:fromaiida_icl.utilsimportJOB_CLASSES,get_calulation_optionsoptions=get_calulation_options(JOB_CLASSES.general_24)print(options){'resources':{'num_machines':1,'num_mpiprocs_per_machine':32},'max_memory_kb':10000000,'max_wallclock_seconds':86400,'withmpi':True}Setting up an SSH Public and Private KeysRather than directly using a password to access the remote host, public key authentication is used, as a more secure authentication method. There are numerous explanations on the internet (includinghere) and below follows a short setup guide (taken fromhere):First open a shell on the computer you want to connect from. Enter cd ~/.ssh. If anlsshows to files called 'id_rsa' and 'id_rsa.pub' you already have a key pair. If not, enterssh-keygenHere is what the result should look like:heiko@clove:~/.ssh$ssh-keygenGenerating public/private rsa key pair.Enter file in which to save the key (/Users/heiko/.ssh/id_rsa):Enter passphrase (empty for no passphrase):Enter same passphrase again:Your identification has been saved in id_rsa.Your public key has been saved in id_rsa.pub.The key fingerprint is:f0:da:dc:77:cf:71:12:c8:50:dc:18:a9:8d:66:38:ae [email protected] key's randomart image is:+--[ RSA 2048]----+| .o= || .+ . || . ..+ || oo =o.. || .S+ o . || +.. . || ..o . . o..|| E . . +o|| o|+-----------------+You should keep the standard directory and choose a suitably difficult passphrase.The two file you just created are key and keyhole. The first file 'id_rsa' is the key. You should not ever ever ever give it to anybody else or allow anyone to copy it. The second file 'id_rsa.pub' the keyhole. It is public and you could give it to anyone. In this case, give it to the hpc.If you open 'id_rsa.pub' it should contain one line of, similar to:ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwRDgM+iQg7OaX/CFq1sZ9jl206nYIhW9SMBqsOIRvGM68/6o6uxZo/D4IlmQI9sAcU5FVNEt9dvDanRqUlC7ZtcOGOCqZsj1HTGD3LcOiPNHYPvi1auEwrXv1hDh4pmJwdgZCRnpewNl+I6RNBiZUyzLzp0/2eIyf4TqG1rpHRNjmtS9turANIv1GK1ONIO7RfVmmIk/jjTQJU9iJqje9ZSXTSm7rUG4W8q+mWcnACReVChc+9mVZDOb3gUZV1Vs8e7G36nj6XfHw51y1B1lrlnPQJ7U3JdqPz6AG3Je39cR1vnfALxBSpF5QbTHTJOX5ke+sNKo//kDyWWlfzz3rQ== [email protected] log in to the HPC and open (or create) the file '~/.ssh/authorized_keys'. In a new line at the end of this file, you should add a comment (starting with #) about where that keypair comes from and then in a second line you should copy and paste the complete contents of your 'id_rsa.pub' file.#MACintheofficessh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwRDgM+iQg7OaX/CFq1sZ9jl206nYIhW9SMBqsOIRvGM68/6o6uxZo/D4IlmQI9sAcU5FVNEt9dvDanRqUlC7ZtcOGOCqZsj1HTGD3LcOiPNHYPvi1auEwrXv1hDh4pmJwdgZCRnpewNl+I6RNBiZUyzLzp0/2eIyf4TqG1rpHRNjmtS9turANIv1GK1ONIO7RfVmmIk/jjTQJU9iJqje9ZSXTSm7rUG4W8q+mWcnACReVChc+9mVZDOb3gUZV1Vs8e7G36nj6XfHw51y1B1lrlnPQJ7U3JdqPz6AG3Je39cR1vnfALxBSpF5QbTHTJOX5ke+sNKo//kDyWWlfzz3rQ== [email protected] the 'authorized_keys' file and your connection to the HPC. Now connect again. You will be asked for the passphrase for your keyfile. Enter it. You should now be logged in to the HPC. 
If you are not asked for the passphrase but for the password of your account, the server does not accept your key pair.

So far, we have replaced entering the password for your account with entering the passphrase for your key pair. This is where a so-called SSH agent comes in handy. The agent will store your passphrases for you so you do not have to enter them anymore. Luckily macOS has one built in, which should have popped up and asked you whether you want the agent to take care of your passphrases. If you said 'YES', that was the very last time you ever heard or saw anything of it or your passphrase. Similar agents exist for more or less every OS. From now on you just have to enter hostname and username and you are logged in.
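To connect the options dictionary shown earlier (from get_calulation_options, spelled as in the package) to an actual calculation, here is a hedged sketch. The 'quantumespresso.pw' entry point and the code label are placeholders only, to illustrate that any CalcJob plugin can consume these options in the same way:

# Hedged sketch: feed the generated options into a CalcJob builder.
from aiida import load_profile, orm
from aiida.plugins import CalculationFactory
from aiida_icl.utils import JOB_CLASSES, get_calulation_options

load_profile()

options = get_calulation_options(JOB_CLASSES.general_24)

builder = CalculationFactory('quantumespresso.pw').get_builder()  # placeholder plugin
builder.code = orm.load_code('pw@icl_cx1')                        # placeholder code label
builder.metadata.options = options  # resources, walltime, memory and MPI flag from above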
aiida-kkr
aiida-kkrAiiDAplugin for theJülich KKR codesplus workflows and utility.FeaturesKKR calculations for bulk and interfacestreatment of alloys using VCA or CPAself-consistency, DOS and bandstructure calculationsextraction of magnetic exchange coupling parameters (J_ij,D_ij)impurity embedding solving the Dyson equationHow to citeIf you use this plugin please cite:Rüßmann, P., Bertoldo, F. & Blügel, S. The AiiDA-KKR plugin and its application to high-throughput impurity embedding into a topological insulator.npj Comput Mater7, 13 (2021). https://doi.org/10.1038/s41524-020-00482-5The ArXiv preprint can be found here:Philipp Rüßmann, Fabian Bertoldo and Stefan Blügel,The AiiDA-KKR plugin and its application to high-throughput impurity embedding into a topological insulator, arXiv:2003.08315 [cond-mat.mtrl-sci] (2020)Installation$pipinstallaiida-kkr# install latest version of aiida-kkr (published on pypi.org)$reentryscan-raiida# update entry points, needed in order to find kkr.* entrypoints in aiida# setupt aiida if this was not done already:$verdiquicksetup# better to set up a new profile$verdicalculationplugins# should now show kkr.* entrypointsTo install the developer version download the repository and install the downloaded version (seesetup.jsonfor a list of optional packages that are installed with the extras given in[])$gitclonehttps://github.com/JuDFTteam/aiida-kkr.git $pipinstall-eaiida-kkr[testing,devtools,docs]$reentryscan-raiidaRemarks about dependencies and extrasTheaiida-kkrplugin uses theaseandpymatgenpackages for structure conversions.Foraiida-core>=1.5,<1.6make sure to use the requirements specified inrequirements_aiida-core_1.5.txt(usepip install -r requirements_aiida-core_1.5.txt aiida-kkrfor the installation to overwrite the aiida-core dependency).Other extras that can be optionally installed withaiida-kkrarepre-commitwhich installes the pre-commit hooks that allow style (yapf) and static code checking (pylint)testingwhich installspytestand all extension used in the testsdocswhich installsSphinxto build the documentationdevtoolswhich installs tools that might be helpful during developmentUsage and Documentationseehttp://aiida-kkr.readthedocs.iofor user's guide and API reference.check outhttp://judft.deandhttps://jukkr.fz-juelich.defor information of the KKR codes used by the pluginContributingThank you for your interest in contributing to aiida-kkr. Check out ourcontributing guidefor some information.
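After the installation steps above, the registered kkr.* entry points can be checked from Python as well as with `verdi calculation plugins`. This small sketch uses only aiida-core's plugin API; no aiida-kkr-specific names are assumed:

# List all kkr.* entry points registered by the installation above.
from aiida.plugins.entry_point import get_entry_point_names

for group in ('aiida.calculations', 'aiida.parsers', 'aiida.data', 'aiida.workflows'):
    names = [n for n in get_entry_point_names(group) if n.startswith('kkr.')]
    print(group, '->', names)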
aiidalab
AiiDAlab packageTheaiidalabpackage sets up the python environment found on theAiiDAlab.InstallationInstall latest version from pypi:pip install aiidalabor from theconda-forgeconda channel:conda install -c conda-forge aiidalabDocumentationThe documentation can be found on thefollowing web page.For maintainersTo create a new release, clone the repository, install development dependencies withpip install -e '.[dev]', and then executebumpver update. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release.Additional notes:Use the--dryoption to preview the release change.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to switch the release tag.LicenseMITCitationUsers of AiiDAlab are kindly asked to cite the following publication in their own work:A. V. Yakutovich et al., Comp. Mat. Sci. 188, 110165 (2021).DOI:10.1016/j.commatsci.2020.110165Contactaiidalab@materialscloud.orgAcknowledgementsThis work is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation, as well as by theMaX European Centre of Excellencefunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598.
aiidalab-eln
AiiDAlab-ELN

Integrate AiiDAlab with Electronic Laboratory Notebooks (ELN). This repository implements a general API for interfacing AiiDAlab with an ELN and a concrete implementation for the integration with the cheminfo ELN.

AiiDAlab-Cheminfo ELN implementation

As a first prototype we implemented an integration with the open-source cheminfo ELN. The ELN and integration can be tested via the public deployment of the ELN. Documentation on how to use the frontend can be found here.

API

eln_instance refers to the URL of the ELN API.
eln_type refers to the type of ELN, e.g. "cheminfo", "openbis".
data_type refers to the "subfolder" in the cheminfo data schema of characterization techniques, e.g. "xray", "isotherm"; spectrum_type will be renamed to this.
sample_uuid refers to the sample's unique identifier in the ELN database.
file_name refers to the name of the file attached to the sample and containing information of the specified data_type.
file_content refers to the content of the file attached to the sample.
node refers to the AiiDA database node.
token refers to the token that gives access to the ELN database.
export_data() sends the AiiDA node (stored in the node attribute) to the ELN.
import_data() imports ELN data into an AiiDA node.
sample is an object that refers to an ELN sample, previously known as sample_manager.
sample.put_data() - put data into the ELN sample.
sample.get_data() - get data from the ELN sample.

For maintainers

To create a new release, clone the repository, install development dependencies with pip install '.[dev]', and then execute bumpver update --major/--minor/--patch. This will:

Create a tagged release with bumped version and push it to the repository.
Trigger a GitHub actions workflow that creates a GitHub release.

Additional notes:

Use the --dry option to preview the release change.
The release tag (e.g. a/b/rc) is determined from the last release. Use the --tag option to switch the release tag.

Acknowledgements

This work is supported by the MARVEL National Centre for Competency in Research funded by the Swiss National Science Foundation, as well as by the MaX European Centre of Excellence funded by the Horizon 2020 EINFRA-5 program, Grant No. 676598, and a European Research Council (ERC) Advanced Grant (Grant Agreement No. 666983, MaGic).
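To make the API surface listed above easier to read at a glance, here is an illustrative stand-in class that mirrors those attribute and method names. The real aiidalab-eln classes are not shown in this README, so this is a hypothetical sketch of the contract, not the package's implementation:

# Illustration only: a stand-in connector mirroring the documented names.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ElnConnectorSketch:
    eln_instance: str               # URL of the ELN API
    eln_type: str                   # e.g. "cheminfo", "openbis"
    data_type: str                  # e.g. "xray", "isotherm"
    sample_uuid: str
    file_name: str
    token: str
    node: Optional[object] = None   # the AiiDA node to exchange

    def export_data(self) -> None:
        """Send self.node to the ELN sample identified by sample_uuid."""
        raise NotImplementedError

    def import_data(self) -> object:
        """Fetch the file content from the ELN and return it as an AiiDA node."""
        raise NotImplementedError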
aiidalab-launch
AiiDAlab LaunchAiiDAlab Launch makes it easy to runAiiDAlabon your own workstation or laptop.Getting StartedTo use AiiDAlab launch you will have toInstall Docker on your workstation or laptop.(you can also use Podman,see below)Install AiiDAlab launch withpipx(recommended):pipx install aiidalab-launchOr directly with pip (pip install aiidalab-launch).Start AiiDAlab withaiidalab-launch startFollow the instructions on screen to open AiiDAlab in the browser.Seeaiidalab-launch --helpfor detailed help.Instance ManagementYou can inspect the status of all configured AiiDAlab profiles with:aiidalab-launch statusProfile ManagementThe tool allows to manage multiple profiles, e.g., with different home directories or ports. Seeaiidalab-launch profile --helpfor more information.Data ManagementBy default AiiDAlab will store all of its data in aDocker volumedefined in the profile configuration optionhome_mount. You can also provide an absolute path to the AiiDAlab home directory on the host system (so calledbind mount). If this directory does not exist, AiiDAlab launch will try to create it on startup.Additional volumes to be mounted to the the AiiDAlab container can be specified via theextra_mountsoption using the"docker-compose "short syntax"source:target:mode.sourceis either a volume name or an absolute path to an existing directory on the host system,targetis a path within the AiiDAlab container, and mode is eitherrwfor read-write volume (default) orrofor read-only volume.As an example, here's how you can mount a quantum chemistry program installed on the host system to make it accessible to AiiDA inside the AiiDAlab container:extra_mounts=["/path/to/qcprogram:/opt/qcprogram:ro",]Finally, AiiDAlab launch will create a dedicated volume for the local conda environment (~/.conda). That is because some conda packages are not compatible with non-linux file systems, meaning that they cannot be installed if the home directory was, for example, bound to a Mac OS-X file system on the host.Forward AiiDAlab from a remote server via SSHPlease seeherefor instructions on how to run AiiDAlab on a remote machine and then forward it to your local terminal.CompatibilityThis package follows the Python compatibility and deprecation schedule specified byNEP 29.Using Podman instead of DockerYou should be able to use Podman as as a drop-in replacement for Docker, with just a little extra setup. The following was tested on Fedora 39 which comes with Podman pre-installed.systemctl --user enable podman.socketsystemctl --user start podman.socketsystemctl --user status podman.socketexport DOCKER_HOST=unix:///run/user/$UID/podman/podman.sockDevelopmentSetting up a development environmentTo develop this package, first clone it and then install the development dependencies withpip install -e '.[dev]'. We recommend to install thepre-commithooks to avoid unnecessary iterations when pushing new changes. 
To install the pre-commit hooks, switch into the repository root directly and execute:pre-commit installRun automated testsTo run the automated tests suite, clone the repository, install test dependencies withpip install -e '.[tests]', and then execute tests withpytestfor all standard tests, andpytest --slowto run the full test suite, including tests that will start docker instances and may take multiple minutes.The continuous integration workflow will run all tests, including the "slow" ones.Note: On Mac OS-X you may have to override the standard temporary base directory to successfully run all tests (e.g.pytest --basetemp=$HOME/tmp) since the default base directory may not be accessible to the Docker runtime.Creating a new releaseTo create a new release, clone the repository, install development dependencies withpip install -e '.[dev]', and then executebumpver update. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release.Additional notes:Use the--dryoption to preview the release change.The release should be created directly on themainbranch, and as such you need special permissions for writing to it.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to switch the release tag.AuthorsCarl Simon Adorf (EPFL)-@csadorfAiiDAlab teamSee also the list ofcontributors.CitationUsers of AiiDAlab are kindly asked to cite the following publication in their own work:A. V. Yakutovich et al., Comp. Mat. Sci. 188, 110165 (2021).DOI:10.1016/j.commatsci.2020.110165Contactaiidalab@materialscloud.orgContributionsContributions in any form are very welcome. Please seeCONTRIBUTING.mdfor contribution guidelines.MIT LicenseCopyright (c) 2021 Carl Simon Adorf (EPFL)All rights reserved.Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.AcknowledgementsThis work is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation, the MARKETPLACE project funded byHorizon 2020under the H2020-NMBP-25-2017 call (Grant No. 760173), as well as by theMaX European Centre of Excellencefunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598.
aiidalab-vapory
VaporyVapory is a Python library to render photo-realistic 3D scenes with the free ray-tracing enginePOV-Ray.Here is how you would draw a purple sphere:fromvaporyimport*camera=Camera('location',[0,2,-3],'look_at',[0,1,2])light=LightSource([2,4,-3],'color',[1,1,1])sphere=Sphere([0,1,2],2,Texture(Pigment('color',[1,0,1])))scene=Scene(camera,objects=[light,sphere])scene.render("purple_sphere.png",width=400,height=300)Vapory enables to pipe the rendered images back into Python and integrates very well in the Python libraries ecosystem (seethis blog postfor examples)Vapory is an open-source software originally written byZulko, released under the MIT licence, and hosted onGithub, where everyone is welcome to contribute or ask for support.InstallationVapory should work on any platform with Python 2.7+ or Python 3.You first need to install POV-Ray. Seeherefor the Windows binaries. For Linux/MacOS you mustcompile the source(tested on Ubuntu, it’s easy).If you have PIP installed you can :(sudo) pip install vaporyIf you have neither setuptools nor ez_setup installed the command above will fail, is this case type this before installing:(sudo) pip install ez_setupVapory can also be installed manually by unzipping the source code in one directory and typing in a terminal:(sudo) python setup.py installGetting startedIn Vapory you create a scene, and then render it:fromvaporyimport*scene=Scene(camera=mycamera,# a Camera objectobjects=[light,sphere],# POV-Ray objects (items, lights)atmospheric=[fog],# Light-interacting objectsincluded=["colors.inc"])# headers that POV-Ray may needscene.render("my_scene.png",# output to a PNG image filewidth=300,height=200,# in pixels. Determines the camera ratio.antialiasing=0.01# The nearer from zero, the more precise the image.quality=1)# quality=1 => no shadow/reflection, quality=10 is 'normal'# passing 'ipython' as argument at the end of an IPython Notebook cell# will display the picture in the IPython notebook.scene.render('ipython',width=300,height=500)# passing no 'file' arguments returns the rendered image as a RGB numpy arrayimage=scene.render(width=300,height=500)Objects are defined by passing a list of arguments:camera = Camera( 'location', [0,2,-3], 'look_at', [0,1,2] )Keep in mind that this snippet will later be transformed into POV-Ray code by converting each argument to a string and placing them on different lines, to make a valid POV-Ray codecamera { location <0,1,0> look_at <0,0,0> }All the objects (Sphere, Box, Plane… with a few exceptions) work the same way. Therefore syntax of Vapory is the same as the syntax of POV-Ray. To learn how to use the different objects:Have a look at the scenes in theexamplesfolderSee the docstring of the different objects, which provides a basic example.See the onlinePOV-Ray documentationwhich will give you all the possible uses of each object (there can be many !). This documentation is easily accessible from Vapory, just type`Sphere.help(),Plane.help()etc., it will open it in your browser.Finally, it is easy to find POV-Ray examples online and transcribe them back into Vapory.Missing FeaturesFor the moment a many features (Sphere, Fog, etc.) are implemented but not all of them (POV-Ray has a LOT of possible shapes and capabilities).It is really easy to add new features, because they all basically do the same thing, are just empty classes. 
For instance here is how Camera is implemented:

class Camera(POVRayElement):
    """ Camera([type,] 'location', [x,y,z], 'look_at', [x,y,z]) """

Yep, that's all: just the name of the class is sufficient for Vapory to understand that this will translate into the POV-Ray code camera{...}. So in most cases it shouldn't be difficult to create your own new feature. If you need a non-implemented feature to be included in the package, just open an issue or push a commit.
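Building on the mechanism just described, here is a hedged sketch of adding your own element. 'Superellipsoid' is used only as an illustration of a not-yet-wrapped POV-Ray object; check the vapory source for the exact class-name-to-keyword translation rule before relying on it:

# Hedged sketch: a user-defined POVRayElement subclass used in a scene.
from vapory import POVRayElement, Scene, Camera, LightSource, Texture, Pigment

class Superellipsoid(POVRayElement):
    """Superellipsoid([e, n], Texture(...))"""

camera = Camera('location', [0, 2, -4], 'look_at', [0, 0, 0])
light = LightSource([2, 4, -3], 'color', [1, 1, 1])
blob = Superellipsoid([0.5, 0.5], Texture(Pigment('color', [0.2, 0.6, 1.0])))

scene = Scene(camera, objects=[light, blob])
scene.render("superellipsoid.png", width=400, height=300)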
aiidalab-widgets-base
AiiDAlab WidgetsAiiDAlabapplications typically involve some of following steps:Prepare the input for a calculation (e.g. an atomic structure).Select computational resources and submit a calculation to AiiDA.Monitor a running calculation.Find and analyze the results of a calculation.The AiiDAlab widgets help with these common tasks.DocumentationHosted onaiidalab-widgets-base.readthedocs.io.For maintainersTo create a new release, clone the repository, install development dependencies withpip install -e '.[dev]', and then executebumpver update [--major|--minor|--patch] [--tag-num --tag [alpha|beta|rc]]. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release and publishes it on PyPI.Additional notes:Use the--dryoption to preview the release change.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to switch the release tag.This package followssemantic versioning.LicenseMITCitationUsers of AiiDAlab are kindly asked to cite the following publication in their own work:A. V. Yakutovich et al., Comp. Mat. Sci. 188, 110165 (2021).DOI:10.1016/j.commatsci.2020.110165Contactaiidalab@materialscloud.orgAcknowledgementsThis work is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation, as well as by theMaX European Centre of Excellencefunded by the Horizon 2020 EINFRA-5 program, Grant No. 676598.
aiida-lammps
AiiDA LAMMPS pluginAnAiiDAplugin for the classical molecular dynamics codeLAMMPS.This plugin contains 2 types of calculations:lammps.base: Calculation making use of parameter based input generation for single stage LAMMPS calculations.lammps.raw: Calculation making use of a pre-made LAMMPS input file.Thelammps.baseis also used to handle three workflows:lammps.base: A workflow that can be used to submit any single stage LAMMPS calculation.lammps.relax: A workflow to submit a structural relaxation using LAMMPS.lammps.md: A workflow to submit a molecular dynamics calculation using LAMMPS.AiiDA LAMMPS pluginInstallationBuilt-in Potential SupportExamplesCode SetupStructure SetupPotential SetupForce CalculationOptimisation CalculationMD CalculationDevelopmentCoding Style RequirementsTestingInstallationTo install a stable version from pypi:pipinstallaiida-lammpsTo install from source:gitclonehttps://github.com/aiidaplugins/aiida-lammps.git pipinstall-eaiida-lammpsBuilt-in Potential SupportThelammps.basecalculation and associated workflows make use of theLammpsPotentialDatadata structure which is created by passing a potential file, plus some labelling parameters to it.This data structure can be used to handle the following potential types:Single file potentials: Any potential that can be stored in a single file, e.g.EAM,MEAM,TersoffandReaxFF.Directly parametrized potentials: Potentials whose parameters are directly given viapair_coeffin the input file, e.gBorn,Lennard-JonesandYukawa. These parameters should be written into a file that is then stored into aLammpsPotentialDatanode.ExamplesMore example calculations are found in the folder/examplesas well as in the documentation. The examples touch some common cases for the usage of LAMMPS for a single stage calculation.DevelopmentRunning testsThe test suite can be run in an isolated, virtual environment usingtox(seetox.iniin the repo):pipinstalltox tox-e3.9-aiida_lammps--tests/or directly:pipinstall.[testing]pytest-vThe tests require that both PostgreSQL and RabbitMQ are running. If you wish to run an isolated RabbitMQ instance, see thedocker-compose.ymlfile in the repo.Some tests require that alammpsexecutable be present.The easiest way to achieve this is to use Conda:condainstalllammps==2019.06.05# this will install lmp_serial and lmp_mpiYou can specify a different executable name for LAMMPS with:tox-e3.9-aiida_lammps----lammps-execlmp_execTo output the results of calcjob executions to a specific directory:pytest--lammps-workdir"test_workdir"Pre-commitThe code is formatted and linted usingpre-commit, so that the code conform to the standard:cdaiida-lammps pre-commitrun--allor to automate runs, triggered before each commit:pre-commitinstallLicenseTheaiida-lammpsplugin package is released under the MIT license. See theLICENSEfile for more details.
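As a hedged sketch of creating the LammpsPotentialData node described above from a potential file: the 'lammps.potential' entry point, the get_or_create classmethod and the keyword names used below are assumptions to verify against the aiida-lammps documentation, and the file name is a placeholder:

# Hedged sketch: store a single-file potential as a LammpsPotentialData node.
from aiida import load_profile
from aiida.plugins import DataFactory

load_profile()

LammpsPotentialData = DataFactory('lammps.potential')   # assumed entry point name

potential = LammpsPotentialData.get_or_create(           # assumed classmethod/keywords
    source='Fe_mm.eam.fs',    # placeholder potential file
    pair_style='eam/fs',      # labelling parameters
    species=['Fe'],
)
print(potential.uuid)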
aiida-lsmo
aiida-lsmoAiiDA workflows designed by the LSMO group at EPFL.The documentation can be found athttps://aiida-lsmo.readthedocs.io/en/latest/.
aiida-mlip
aiida-mlipmachine learning interatomic potentials aiida pluginThis plugin is the default output of theAiiDA plugin cutter, intended to help developers get started with their AiiDA plugins.Repository contents.github/:Github Actionsconfigurationci.yml: runs tests, checks test coverage and builds documentation at every new commitpublish-on-pypi.yml: automatically deploy git tags to PyPI - just generate aPyPI API tokenfor your PyPI account and add it to thepypi_tokensecret of your github repositoryaiida_mlip/: The main source code of the plugin packagedata/: A newDiffParametersdata class, used as input to theDiffCalculationCalcJobclasscalculations.py: A newDiffCalculationCalcJobclasscli.py: Extensions of theverdi datacommand line interface for theDiffParametersclasshelpers.py: Helpers for setting up an AiiDA code fordiffautomaticallyparsers.py: A newParserfor theDiffCalculationdocs/: A documentation template ready for publication onRead the Docsexamples/: An example of how to submit a calculation using this plugintests/: Basic regression tests using thepytestframework (submitting a calculation, ...). Installpip install -e .[testing]and runpytest..gitignore: Telling git which files to ignore.pre-commit-config.yaml: Configuration ofpre-commit hooksthat sanitize coding style and check for syntax errors. Enable viapip install -e .[pre-commit] && pre-commit install.readthedocs.yml: Configuration of documentation build forRead the DocsLICENSE: License for your pluginREADME.md: This fileconftest.py: Configuration of fixtures forpytestpyproject.toml: Python package metadata for registration onPyPIand theAiiDA plugin registry(including entry points)See also the following video sequences from the 2019-05 AiiDA tutorial:run aiida-diff example calculationaiida-diff CalcJob pluginaiida-diff Parser pluginaiida-diff computer/code helpersaiida-diff input data (with validation)aiida-diff cliaiida-diff testsAdding your plugin to the registrypre-commit hooksFor more information, see thedeveloper guideof your plugin.FeaturesAdd input files usingSinglefileData:SinglefileData=DataFactory('core.singlefile')inputs['file1']=SinglefileData(file='/path/to/file1')inputs['file2']=SinglefileData(file='/path/to/file2')Specify command line options via a python dictionary andDiffParameters:d={'ignore-case':True}DiffParameters=DataFactory('mlip')inputs['parameters']=DiffParameters(dict=d)DiffParametersdictionaries are validated usingvoluptuous. Find out about supported options:DiffParameters=DataFactory('mlip')print(DiffParameters.schema.schema)Installationpipinstallaiida-mlip verdiquicksetup# better to set up a new profileverdipluginlistaiida.calculations# should now show your calclulation pluginsUsageHere goes a complete example of how to submit a test calculation using this plugin.A quick demo of how to submit a calculation:verdidaemonstart# make sure the daemon is runningcdexamples ./example_01.py# run test calculationverdiprocesslist-a# check record of calculationThe plugin also includes verdi commands to inspect its data types:verdidatamliplist verdidatamlipexport<PK>DevelopmentInstallpoetry(Optional) Create a virtual environmentInstallaiida-mlipwith dependencies:gitclonehttps://github.com/stfc/aiida-mlipcdaiida-mlip pipinstall--upgradepip poetryinstall--withpre-commit,dev,docs# install extra dependenciespre-commitinstall# install pre-commit hookspytest-v# discover and run all testsSee thedeveloper guidefor more information.LicenseBSD 3-Clause LicenseFundingContributors to this project were funded by
aiida-nanotech-empa
aiida-nanotech-empaAiiDA library containing plugins/workflows developed at nanotech@surfaces group from Empa.Contents:nanotech_empa.nanoribbon: work chain to characterize 1D periodic systems based on Quantum Espresso.nanotech_empa.gaussian.spin: Work chain to characterize spin properties of molecular systems with Gaussian. Calls multiple child work chains. Steps:Wavefunction stability is tested for each spin multiplicityGeometry is relaxed for the different spin states and ground state is foundProperty calcuation on the ground state: ionization potential and electron affinity with Δ-SCF, natural orbital analysis in case of open-shell singletVertical excitation energies for non-ground state multiplicitiesOrbitals and densities are rendered with PyMOL (needs to be installed separately as a python library, e.g. frompymol-open-source)Installationpipinstallaiida-nanotech-empaFor maintainersTo create a new release, clone the repository, install development dependencies withpip install '.[dev]', and then executebumpver update --major/--minor/--patch. This will:Create a tagged release with bumped version and push it to the repository.Trigger a GitHub actions workflow that creates a GitHub release.Additional notes:Use the--dryoption to preview the release change.The release tag (e.g. a/b/rc) is determined from the last release. Use the--tagoption to switch the release tag.
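As a hedged sketch, the nanoribbon work chain listed above can be loaded by the entry point name given in the contents list; since this README does not document the input ports, the sketch prints the spec rather than guessing inputs:

# Load the nanoribbon work chain and inspect what it expects.
from aiida import load_profile
from aiida.plugins import WorkflowFactory

load_profile()

NanoribbonWorkChain = WorkflowFactory('nanotech_empa.nanoribbon')
print(list(NanoribbonWorkChain.spec().inputs.keys()))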
aiida-nwchem
aiida-nwchemAiiDA-NWChem is the AiiDA plugin for the NWChem code.The plugin provides support for the following calculation tasks:energyoptimizefreq...using these theory types:scfdftNWPW BandNWPW PSPWTCECompatibility matrixThe matrix below shows the versions ofaiida-coreand Python with which the plugin is compatible for various plugin version ranges:PluginAiiDAPythonv3.0 < v4.0v2.0 < v3.0DocumentationThe documentation for this package can be found on Read the Docs athttp://aiida-nwchem.readthedocs.io/en/latest/AcknowledgementsThis work is supported by theMARVEL National Centre for Competency in Researchfunded by theSwiss National Science Foundation.
aiida-optimade
OPTIMADE API implementation for AiiDA

The compatibility matrix below assumes the user always installs the latest patch release of the specified minor version, which is recommended.

Plugin           AiiDA    Python    Specification
v1.2 < v2.0
v1.0 < v1.2
v0.18 <= v0.20

This is a RESTful API server created with FastAPI that exposes an AiiDA database according to the OPTIMADE specification.

It is mainly used by Materials Cloud to expose access to archived AiiDA databases through the OPTIMADE API, but it may be freely implemented by anyone to fulfill a similar purpose.

The server is based on the test server "template" used in the optimade-python-tools package. Indeed, the filter grammar and parser and pydantic models from optimade-python-tools are used directly here.

Prerequisites

Environment where AiiDA is installed.
AiiDA database containing StructureData nodes, since these are the only AiiDA nodes that are currently exposed with this API (under the /structures endpoint).

Installation

The package is released on PyPI, hence you can install it by:

$ pip install aiida-optimade

Otherwise, you can also git clone the repository from GitHub:

$ git clone https://github.com/aiidateam/aiida-optimade /path/to/aiida-optimade/parent/dir
$ pip install -e /path/to/aiida-optimade

Development

For developers, there is a special setuptools extra dev, which can be installed by:

$ pip install aiida-optimade[dev]

or

$ pip install -e /path/to/aiida-optimade[dev]

This package uses Black for formatting. If you wish to contribute, please install the git pre-commit hook:

/path/to/aiida-optimade$ pre-commit install

This will automatically update the formatting when running git commit, as well as check the validity of various repository JSON and YAML files.

For testing, run pytest, which will run with an AiiDA backend as standard. The tests can also be run with the MongoDB backend by setting the environment variable PYTEST_OPTIMADE_CONFIG_FILE, the value being a path to the config file to be used:

$ PYTEST_OPTIMADE_CONFIG_FILE=/path/to/aiida-optimade/tests/static/test_mongo_config.json pytest

However, note that the mongo_uri value will have to be updated according to your local setup.

Initialization

You should first initialize your AiiDA profile. This can be done by using the aiida-optimade CLI:

$ aiida-optimade -p <PROFILE> init

where <PROFILE> is the AiiDA profile.

Note: Currently, the default is optimade if the -p / --profile option is not specified. This will be changed in the future to use the default AiiDA profile.

Initialization goes through your profile's StructureData nodes, adding an optimade extra, wherein all OPTIMADE-specific fields that do not have an equivalent AiiDA property are stored.

If, in the future, more StructureData nodes are added to your profile's database, these will be automatically updated upon the first query that filters on any of these OPTIMADE-specific fields. However, if you want to avoid a significant lag for that user, or the risk of several simultaneous GET requests all trying to update your profile's database, you should re-run aiida-optimade init for your profile (in between shutting the server down and restarting it again).

Running the server

Locally

Using the aiida-optimade CLI, you can do the following:

$ aiida-optimade -p <PROFILE> run

where <PROFILE> is the AiiDA profile you wish to serve.

Note: Currently, the default is optimade if the -p / --profile option is not specified. This will be changed in the future to use the default AiiDA profile.

You also have the opportunity to specify the AiiDA profile via the environment variable AIIDA_PROFILE. Note, however, that if a profile name is passed to the CLI, it will overrule and replace the current AIIDA_PROFILE environment variable.

# Specifying AiiDA profile as an environment variable
$ export AIIDA_PROFILE=optimade
$ aiida-optimade run

Navigate to http://localhost:5000/v1/info (see also the Python query sketch at the end of this entry).

Tip: To see the default AiiDA profile, type verdi profile list to find the colored profile name marked with an asterisk (*), or type verdi profile show, which will show you more detailed information about the default profile.

Note: The aiida-optimade run command has more options to configure your server; run

$ aiida-optimade run --help

for more information.

With Docker

Adapt profiles/test_psql_dos.json and profiles/docker-compose.yml appropriately.

$ docker-compose -f profiles/docker-compose.yml up --build

Navigate to http://localhost:3253/v1/info

Stop by using:

$ docker-compose -f profiles/docker-compose.yml down

Jinja templates

If you are familiar with Jinja, there are two templates to create the JSON and YAML files: profiles/config.j2 and profiles/docker-compose.j2, respectively.

Configure the server

You can configure the server with the aiida_optimade/config.json file or set certain environment variables. To learn more about this, see the optimade-python-tools repository.

Using an AiiDA group for curated data

An AiiDA Group can be used to curate data and serve only this curated data through the OPTIMADE server. Setting the query_group option in config.json will ensure that only the valid (StructureData, CifData) data nodes in the given AiiDA Group will be served. Set the query_group parameter to null (default) to serve all structure data from the database.

Design choices

Q: Why create an individual config.json file instead of just mounting an existing .aiida directory and using that directly?
A: This, currently, wouldn't work because the REPOSITORY_URI needs to point to the right path inside the container, not on the host. Furthermore, storing all configurations in the same file can be fragile.

For maintainers

To release a new version, go to the GitHub releases page of the repo, create a new release and fill in the release information. The release action will be triggered by the newly created release. Note that the tag should start with a v and be followed by a full semantic version (see SemVer). For example: v2.3.12.
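Once the server is running locally as shown above, it can be queried like any other OPTIMADE API. Here is a minimal sketch using the requests library; the base URL matches the local-run example above, while the filter string and page limit are only illustrations.

import requests

# Query the /structures endpoint of a locally running aiida-optimade server
# (port 5000, as in the local-run example above).
base_url = "http://localhost:5000/v1"
response = requests.get(
    f"{base_url}/structures",
    params={"filter": 'elements HAS "Si"', "page_limit": 5},
)
response.raise_for_status()
for entry in response.json()["data"]:
    print(entry["id"], entry["attributes"].get("chemical_formula_reduced"))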
aiida-optimize
aiida-optimize
Defines AiiDA workchains that simplify running optimization workflows.

Documentation: http://aiida-optimize.readthedocs.io
aiida_orca
aiida-orca
AiiDA plugin for the ORCA package.

DISCLAIMER: Under heavy development!

Compatible with:

Installation

The latest release can be installed from PyPI:

pip install aiida-orca

The current development version can be installed via:

git clone https://github.com/pzarabadip/aiida-orca.git
cd aiida-orca
pip install .

aiida-common-workflows

The aiida-orca package is available in the aiida-common-workflows package. You may try it to have a quick setup and exploration of aiida-orca and many more packages. For further details, please check our paper on aiida-common-workflows.

Contribution guide

We welcome contributions to the code, whether they are new feature implementations or bug fixes. Please check the Developer Guide in the documentation for instructions.

Issue reporting

Please feel free to open an issue to report bugs or request new features.

Acknowledgment

I would like to thank the funding received from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions and co-financing by the South Moravian Region under agreement 665860. This software reflects only the authors' view and the EU is not responsible for any use that may be made of the information it contains.