ai-lib-storage
Failed to fetch description. HTTP Status Code: 404
ai-lifecycle-cli
AI Lifecycle CLI

The CLI allows you to export and import assets from the Watson Machine Learning (WML) service on Cloud Pak for Data (CP4D) and move them between different spaces. It also lets you manage your WML resources such as spaces, assets and deployments. In general, the CLI submits import/export jobs to the respective APIs and polls the job status until it completes (regardless of the final status). Supported scenarios: export assets from WML installed on CPD version 2.5 or 3.0 and import them back into WML installed on CPD version 3.0. During export, you can choose to export all assets, all assets of a given type, or just selected assets.

How to install

```
pip install ai-lifecycle-cli -U
```

How to run CLI

Example:

```
ai-lifecycle-cli export --url=https://<CP4D_HOST> --user=<CP4D_USER> --pass=<CP4D_PASS> --output_dir=./ --export_json_file=./sources.json
ai-lifecycle-cli import --url=https://<CP4D_HOST> --user=<CP4D_USER> --pass=<CP4D_PASS> --input_file=./archive.zip --import_timeout=1200
```

Export

You can export assets only from the Watson Machine Learning service on CP4D in versions:

- 2.5.0
- 3.0.0

Usage of the `export` command:

```
usage: ai-lifecycle-cli export [-h] --url CPD_URL --user CPD_USER --pass CPD_PASS
                               --output_dir OUTPUT_DIR --export_json_file EXPORT_JSON_FILE
                               [--archive_all] [--export_version {2.5,3.0}]
                               [--export_timeout EXPORT_TIMEOUT] [--temp_dir TEMP_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --url CPD_URL         URL to CP4D
  --user CPD_USER       Username used for CP4D login
  --pass CPD_PASS       Password used for CP4D login
  --output_dir OUTPUT_DIR
                        Directory, where exported content should be saved
  --export_json_file EXPORT_JSON_FILE
                        JSON file, specifying which assets to export from which projects/spaces
  --archive_all         Archive all exported content into single ZIP archive
  --export_version {2.5,3.0}
                        Version of exported environment - if not provided, it'll be auto-detected
  --export_timeout EXPORT_TIMEOUT
                        Set timeout for export job in seconds
  --temp_dir TEMP_DIR   Directory for temporary files
```

List of supported assets

Assets supported for export, with their respective names in the API:

- Models: `wml_model`
- Python Functions: `wml_function`
- Pipelines: `wml_pipeline`
- Model Definitions: `wml_model_definitions`
- Experiments: `wml_experiment`
- Python Scripts: `script`
- Shiny Apps: `shiny_asset`

How to specify which assets you want to export

The flag `--export_json_file EXPORT_JSON_FILE` passes a file specifying which assets should be exported from which spaces in the source environment. The syntax of the file looks like this:

```
{
  "space": [
    { "guid": "<space_guid_1>" },
    ...
    {
      "guid": "<space_guid_2>",
      "assets": {
        "<asset_type_1>": "all",
        "<asset_type_2>": ["asset_guid_1", ...]
      }
    },
    ...
  ]
}
```

You need to provide a JSON file with a key named `space`. Under this key you specify an array of space items, which can be defined in a few ways:

- To export all assets from a given space (e.g. `<space_guid_1>`), the space item would look like: `{ "guid": "<space_guid_1>" }`
- To export all assets of a given type (e.g. `<asset_type_1>`) from a given space (e.g. `<space_guid_2>`): `{ "guid": "<space_guid_2>", "assets": { "<asset_type_1>": "all" } }`
- To export specific assets (e.g. `"asset_guid_1"` and `"asset_guid_2"`) of a given type (e.g. `<asset_type_2>`) from a given space (e.g. `<space_guid_3>`): `{ "guid": "<space_guid_3>", "assets": { "<asset_type_2>": ["asset_guid_1", "asset_guid_2"] } }`
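Putting the three forms together, a complete file for `--export_json_file` might look like this (the GUIDs are placeholders; the asset type names come from the list above):

```json
{
  "space": [
    { "guid": "<space_guid_1>" },
    {
      "guid": "<space_guid_2>",
      "assets": { "wml_model": "all" }
    },
    {
      "guid": "<space_guid_3>",
      "assets": { "wml_function": ["asset_guid_1", "asset_guid_2"] }
    }
  ]
}
```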
Sample files to use for `--export_json_file` are under the `samples` directory.

Import

You can import assets only to the Watson Machine Learning service on CP4D in versions:

- 3.0.0

Usage of the `import` command:

```
usage: ai-lifecycle-cli import [-h] --url CPD_URL --user CPD_USER --pass CPD_PASS
                               --input_file INPUT_FILE [--import_version {2.5,3.0}]
                               [--import_name IMPORT_NAME] [--import_desc IMPORT_DESC]
                               [--import_timeout IMPORT_TIMEOUT] [--temp_dir TEMP_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --url CPD_URL         URL to CP4D (default: None)
  --user CPD_USER       Username used for CP4D login (default: None)
  --pass CPD_PASS       Password used for CP4D login (default: None)
  --input_file INPUT_FILE
                        Path to exported archive file to be imported (default: None)
  --import_version {2.5,3.0}
                        Version of imported environment - if not provided, it'll be auto-detected (default: 3.0)
  --import_name IMPORT_NAME
                        Name of created container (e.g. space) during import (default: None)
  --import_desc IMPORT_DESC
                        Description of created container (e.g. space) during import (default: None)
  --import_timeout IMPORT_TIMEOUT
                        Set timeout for import job in seconds (default: 600)
  --temp_dir TEMP_DIR   Directory for temporary files (default: /var/tmp/ai-lifecycle-cli)
```

Managing WML resources

AI Lifecycle CLI also supports simple management of WML assets and deployments. Below is the list of supported commands for WML asset and deployment management. For more information on a command, use the `-h`/`--help` flag with it.

Assets. You can perform the following operations:

- `asset list` - list all assets persisted in the deployment space

Deployments. You can perform the following operations:

- `deployment list` - list all deployments persisted in the deployment space
- `deployment create` - create a new deployment in the deployment space
- `deployment delete` - delete a deployment from the deployment space

Spaces. You can perform the following operations:

- `space list` - list all deployment spaces
- `space delete` - delete a deployment space

Deployment jobs. You can perform the following operations:

- `deployment-job list` - list all deployment jobs in the deployment space
- `deployment-job create` - create a new deployment job in the deployment space
- `deployment-job cancel` - cancel a running deployment job in the deployment space

Notes & Limitations

Currently supported:

- export from CP4D versions 2.5 and 3.0
- import to CP4D version 3.0

The flag `--export-timeout` is not applicable to exports from CP4D version 2.5.
ailiga
AILiga

Goals

- Monthly releases of session/tournament results
- User folders
- Strict versioning for reproducibility (once a version is pushed, gitignore it)

Installation

```
git clone THIS_PROJECT_URL
poetry install
poetry shell
```

Testing and Training

Currently, training/testing fighters works through the fighter tests.

```
python tests/test_dqn_fighter.py
```

Tensorboard

```
tensorboard --logdir log/ --load_fast=false
```

Limitations

Currently, the implementation through `tianshou.BasePolicy` seems to only support `DQNPolicy`, and does not support `Discrete()` observation spaces.

References

Frameworks:

- https://github.com/Farama-Foundation/PettingZoo
- https://github.com/vwxyzjn/cleanrl
- https://github.com/Farama-Foundation/Gymnasium
- https://github.com/deepmind/open_spiel
- https://github.com/datamllab/rlcard
- https://tianshou.readthedocs.io/en/master/

Books:

- http://incompleteideas.net/book/the-book-2nd.html

Development

We use black. Package/Python structure references:

- https://mathspp.com/blog/how-to-create-a-python-package-in-2022
- https://www.brainsorting.com/posts/publish-a-package-on-pypi-using-poetry/
ailingbot
🇬🇧 English | 🇨🇳 简体中文

AilingBot - One-stop solution to empower your IM bot with AI.

Table of Contents

- What is AilingBot
- Features
- 🚀 Quick Start (start an AI chatbot in 5 minutes; start the API service; integrating with WeChat Work, Feishu, DingTalk and Slack)
- 📖 User Guide (main process, main concepts, configuration, command line tools)
- 🔌 API
- 💻 Development Guide
- 🤔 Frequently Asked Questions
- 🎯 Roadmap

What is AilingBot

AilingBot is an open-source engineering development framework and an all-in-one solution for integrating AI models into IM chatbots. With AilingBot, you can:

- ☕ Code-free usage: Quickly connect existing large AI models to mainstream IM chatbots (such as WeChat Work, Feishu, DingTalk, Slack, etc.), interact with the AI models through the IM chatbots, and complete business requirements. Currently, AilingBot has built-in capabilities for multi-turn dialogue and document knowledge Q&A, and more capabilities will be added in the future.
- 🛠️ Secondary development: AilingBot provides a clear engineering architecture, interface definitions, and the necessary basic components. You do not need to develop the engineering framework for large-model services from scratch: implement your own chat policy and, through simple configuration, get end-to-end AI empowerment of IM chatbots. You can also extend to your own endpoints (such as your own IM, web application, or mobile application) by developing your own channel.

Features

- 💯 Open source & free: Completely open source and free.
- 📦 Ready to use: No development needed; pre-installed capabilities to integrate with existing mainstream IMs and AI models.
- 🔗 LangChain friendly: Easy to integrate with LangChain.
- 🧩 Modular: The project is organized in a modular way, with modules depending on each other through abstract protocols. Modules of the same type can be swapped in by implementing the protocol, allowing plug-and-play.
- 💻 Extensible: AilingBot can be extended to new usage scenarios and capabilities, for example integrating with new IMs or new AI models, or customizing your own chat policy.
- 🔥 High performance: AilingBot uses a coroutine-based asynchronous mode to improve system concurrency.
At the same time, concurrent processing capacity can be further increased through multiple processes.

- 🔌 API integration: AilingBot provides a set of clear API interfaces for easy integration and collaboration with other systems and processes.

🚀 Quick Start

Start an AI chatbot in 5 minutes

Below is a guide on how to quickly start a command-line AI chatbot with AilingBot.

💡 First, you need an OpenAI API key. If you don't have one, refer to relevant materials on the Internet to obtain one.

Using Docker

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -it --rm \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  ailingbot poetry run ailingbot chat
```

Using PIP

Installation:

```
pip install ailingbot
```

Generate the configuration file:

```
ailingbot init --silence --overwrite
```

This creates a file called `settings.toml` in the current directory, which is the configuration file for AilingBot. Next, modify the necessary configuration. To start the bot, only one item is needed. Find the following section in `settings.toml`:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo"
openai_api_key = ""
temperature = 0
```

Change the value of `openai_api_key` to your actual OpenAI API key.

Start the chatbot with the following command:

```
ailingbot chat
```

Start API Service

Using Docker:

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -it --rm \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  ailingbot poetry run ailingbot api
```

Using PIP: install and generate the configuration file the same way as for the command-line bot, then start the service:

```
ailingbot api
```

Now enter http://localhost:8080/docs in your browser to see the API documentation. (If it is not a local start, enter http://{your public IP}:8080/docs.)

Here is an example request:

```
curl -X 'POST' \
  'http://localhost:8080/chat/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{"text": "你好"}'
```

And the response:

```json
{
  "type": "text",
  "conversation_id": "default_conversation",
  "uuid": "afb35218-2978-404a-ab39-72a9db6f303b",
  "ack_uuid": "3f09933c-e577-49a5-8f56-fa328daa136f",
  "receiver_id": "anonymous",
  "scope": "user",
  "meta": {},
  "echo": {},
  "text": "你好!很高兴和你聊天。有什么我可以帮助你的吗?",
  "reason": null,
  "suggestion": null
}
```

Integrating with WeChat Work

Here's a guide on how to quickly integrate the chatbot with WeChat Work.

Using Docker:

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
```
```
docker run -d \
  -e AILINGBOT_POLICY__NAME=conversation \
  -e AILINGBOT_POLICY__HISTORY_SIZE=5 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_CHANNEL__NAME=wechatwork \
  -e AILINGBOT_CHANNEL__CORPID={your WeChat Work robot's corpid} \
  -e AILINGBOT_CHANNEL__CORPSECRET={your WeChat Work robot's corpsecret} \
  -e AILINGBOT_CHANNEL__AGENTID={your WeChat Work robot's agentid} \
  -e AILINGBOT_CHANNEL__TOKEN={your WeChat Work robot's webhook token} \
  -e AILINGBOT_CHANNEL__AES_KEY={your WeChat Work robot's webhook aes_key} \
  -p 8080:8080 ailingbot poetry run ailingbot serve
```

Using PIP:

```
pip install ailingbot
ailingbot init --silence --overwrite
```

Open `settings.toml` and fill in the following section with your WeChat Work robot's real information:

```toml
[channel]
name = "wechatwork"
corpid = ""      # Fill in with real information
corpsecret = ""  # Fill in with real information
agentid = 0      # Fill in with real information
token = ""       # Fill in with real information
aes_key = ""     # Fill in with real information
```

In the `llm` section, fill in your OpenAI API key:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo"
openai_api_key = ""  # Fill in with your real OpenAI API key
temperature = 0
```

Start the service:

```
ailingbot serve
```

Finally, go to the WeChat Work admin console and configure the webhook address so that WeChat Work knows to forward received user messages to our webhook. The webhook URL is:

```
http(s)://your_public_IP:8080/webhook/wechatwork/event/
```

After completing the above configuration, you can find the chatbot in WeChat Work and start chatting.

Integrating with Feishu

Here's a guide on how to quickly integrate the chatbot with Feishu and enable a new conversation policy: uploading documents and performing knowledge-based question answering on them.

Using Docker:

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=document_qa \
  -e AILINGBOT_POLICY__CHUNK_SIZE=1000 \
  -e AILINGBOT_POLICY__CHUNK_OVERLAP=0 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_POLICY__LLM__MODEL_NAME=gpt-3.5-turbo-16k \
  -e AILINGBOT_CHANNEL__NAME=feishu \
  -e AILINGBOT_CHANNEL__APP_ID={your Feishu robot's app id} \
  -e AILINGBOT_CHANNEL__APP_SECRET={your Feishu robot's app secret} \
  -e AILINGBOT_CHANNEL__VERIFICATION_TOKEN={your Feishu robot's webhook verification token} \
  -p 8080:8080 ailingbot poetry run ailingbot serve
```

Using PIP:

```
pip install ailingbot
ailingbot init --silence --overwrite
```

Open `settings.toml` and change the `channel` section to the following, filling in your Feishu robot's real information:

```toml
[channel]
name = "feishu"
app_id = ""              # Fill in with real information
app_secret = ""          # Fill in with real information
verification_token = ""  # Fill in with real information
```

Replace the `policy` section with the following document QA policy:

```toml
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 5
```

It is recommended to use the 16k model with the document QA policy, so change `policy.llm.model_name` accordingly:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo-16k"  # Change to gpt-3.5-turbo-16k
openai_api_key = ""               # Fill in with real information
temperature = 0
```

Start the service:

```
ailingbot serve
```

Finally, go to the Feishu admin console to configure the webhook address.
The webhook URL for Feishu is:

```
http(s)://your_public_IP:8080/webhook/feishu/event/
```

After completing the above configuration, you can find the chatbot in Feishu and start chatting.

Integrating with DingTalk

Here's a guide on how to quickly integrate the chatbot with DingTalk.

Using Docker:

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=conversation \
  -e AILINGBOT_POLICY__HISTORY_SIZE=5 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_CHANNEL__NAME=dingtalk \
  -e AILINGBOT_CHANNEL__APP_KEY={your DingTalk robot's app key} \
  -e AILINGBOT_CHANNEL__APP_SECRET={your DingTalk robot's app secret} \
  -e AILINGBOT_CHANNEL__ROBOT_CODE={your DingTalk robot's robot code} \
  -p 8080:8080 ailingbot poetry run ailingbot serve
```

Using PIP:

```
pip install ailingbot
ailingbot init --silence --overwrite
```

Open `settings.toml` and change the `channel` section to the following, filling in your DingTalk robot's real information:

```toml
[channel]
name = "dingtalk"
app_key = ""     # Fill in with real information
app_secret = ""  # Fill in with real information
robot_code = ""  # Fill in with real information
```

Start the service:

```
ailingbot serve
```

Finally, go to the DingTalk admin console to configure the webhook address. The webhook URL for DingTalk is:

```
http(s)://your_public_IP:8080/webhook/dingtalk/event/
```

After completing the above configuration, you can find the chatbot in DingTalk and start chatting.

Integrating with Slack

Here's a guide on how to quickly integrate the chatbot with Slack and enable a new conversation policy: uploading documents and performing knowledge-based question answering on them.

Using Docker:

```
git clone https://github.com/ericzhang-cn/ailingbot.git ailingbot
cd ailingbot
docker build -t ailingbot .
docker run -d \
  -e AILINGBOT_POLICY__NAME=document_qa \
  -e AILINGBOT_POLICY__CHUNK_SIZE=1000 \
  -e AILINGBOT_POLICY__CHUNK_OVERLAP=0 \
  -e AILINGBOT_POLICY__LLM__OPENAI_API_KEY={your OpenAI API key} \
  -e AILINGBOT_POLICY__LLM__MODEL_NAME=gpt-3.5-turbo-16k \
  -e AILINGBOT_CHANNEL__NAME=slack \
  -e AILINGBOT_CHANNEL__VERIFICATION_TOKEN={your Slack App webhook verification token} \
  -e AILINGBOT_CHANNEL__OAUTH_TOKEN={your Slack App oauth token} \
  -p 8080:8080 ailingbot poetry run ailingbot serve
```

Using PIP:

```
pip install ailingbot
ailingbot init --silence --overwrite
```

Open `settings.toml` and change the `channel` section to the following, filling in your Slack robot's real information:

```toml
[channel]
name = "slack"
verification_token = ""  # Fill in with real information
oauth_token = ""         # Fill in with real information
```

Replace the `policy` section with the following document QA policy:

```toml
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 5
```

It is recommended to use the 16k model with the document QA policy, so change `policy.llm.model_name` accordingly:

```toml
[policy.llm]
_type = "openai"
model_name = "gpt-3.5-turbo-16k"  # Change to gpt-3.5-turbo-16k
openai_api_key = ""               # Fill in with real information
temperature = 0
```

Start the service:

```
ailingbot serve
```

Finally, go to the Slack admin console to configure the webhook address.
The webhook URL for Slack is:

```
http(s)://your_public_IP:8080/webhook/slack/event/
```

After completing the above configuration, you can find the chatbot in Slack and start chatting.

📖 User Guide

Main Process

The main processing flow of AilingBot is as follows:

1. The user sends a message to the IM bot.
2. If a webhook is configured, the instant messaging tool forwards the request sent to the bot to the webhook service address.
3. The webhook service processes the original IM message, converts it into AilingBot's internal message format, and sends it to ChatBot.
4. ChatBot processes the request and forms a response message based on the configured chat policy. During this process, ChatBot may request a large language model, access a vector database, or call an external API to complete the request.
5. ChatBot sends the response message to the IM Agent. The IM Agent converts the AilingBot internal response format into a specific IM format and calls the IM open-capability API to send the response message.
6. The IM bot displays the message to the user, completing the whole process.

Main Concepts

- IM bot: A capability built into most instant messaging tools that allows administrators to create a bot and process user messages programmatically.
- Channel: A channel represents a different terminal, which can be an IM or a custom terminal (such as the web).
- Webhook: An HTTP(S) service that receives user messages forwarded by IM bots. Different channels have their own webhook specifications, so each channel requires its own webhook implementation.
- IM Agent: Used to call IM open-capability APIs. These APIs differ between IMs, so each channel requires a corresponding agent implementation.
- ChatBot: The core component that receives and responds to user messages.
- Chat Policy: Defines how to respond to users and is called by ChatBot. A chat policy defines the robot's concrete abilities, such as chitchat or knowledge Q&A.
- LLM: Large language models, such as OpenAI's ChatGPT and the open ChatGLM, are key components for implementing AI capabilities.

Configuration

Configuration Methods

AilingBot can be configured in two ways:

- Configuration file: AilingBot reads `settings.toml` in the current directory as its configuration file, in TOML format. See the following section for specific configuration items.
- Environment variables: AilingBot also reads configuration items from environment variables. See the following section for the list of environment variables.

💡 Configuration files and environment variables can be used together.
If a configuration item exists in both, the environment variable takes precedence.

Configuration Mapping

TOML keys map to environment variables as follows:

- All environment variables start with `AILINGBOT_`.
- Double underscores `__` separate levels.
- Underscores in configuration keys are preserved in environment variables.
- Names are case-insensitive.

For example:

- `some_conf` corresponds to `AILINGBOT_SOME_CONF`.
- `some_conf.conf_1` corresponds to `AILINGBOT_SOME_CONF__CONF_1`.
- `some_conf.conf_1.subconf` corresponds to `AILINGBOT_SOME_CONF__CONF_1__SUBCONF`.

Configuration Items

General:

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| Language | Language code (reference: http://www.lingoes.net/en/translator/langcode.htm) | `lang` | `AILINGBOT_LANG` |
| Timezone | Timezone code (reference: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) | `tz` | `AILINGBOT_TZ` |
| Policy Name | Predefined policy name or complete policy class path | `policy.name` | `AILINGBOT_POLICY__NAME` |
| Channel Name | Predefined channel name | `channel.name` | `AILINGBOT_CHANNEL__NAME` |
| Webhook Path | Complete class path of a non-predefined channel webhook | `channel.webhook_name` | `AILINGBOT_CHANNEL__WEBHOOK_NAME` |
| Agent Path | Complete class path of a non-predefined channel agent | `channel.agent_name` | `AILINGBOT_CHANNEL__AGENT_NAME` |
| Uvicorn Config | All uvicorn configurations (reference: uvicorn settings); passed through to uvicorn | `uvicorn.*` | `AILINGBOT_UVICORN__*` |

Configuration example:

```toml
lang = "zh_CN"
tz = "Asia/Shanghai"

[policy]
name = "conversation"
# More policy configurations

[channel]
name = "wechatwork"
# More channel configurations

[uvicorn]
host = "0.0.0.0"
port = 8080
```

Built-in Policy Configuration

conversation

Conversation uses LangChain's Conversation as the policy, which enables direct interaction with the LLM and keeps a conversation history context, enabling multi-turn conversations.

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| History Size | How many rounds of conversation history to keep | `policy.history_size` | `AILINGBOT_POLICY__HISTORY_SIZE` |

Configuration example:

```toml
# Use the conversation policy and keep 5 rounds of conversation history
[policy]
name = "conversation"
history_size = 5
```

document_qa

document_qa uses LangChain's Stuff chain as the policy. Users can upload a document and then ask questions based on the document content.

| Configuration Item | Description | TOML | Environment Variable |
|---|---|---|---|
| Chunk Size | Corresponds to the LangChain splitter's chunk_size | `policy.chunk_size` | `AILINGBOT_POLICY__CHUNK_SIZE` |
| Chunk Overlap | Corresponds to the LangChain splitter's chunk_overlap | `policy.chunk_overlap` | `AILINGBOT_POLICY__CHUNK_OVERLAP` |

Configuration example:

```toml
# Use the document_qa policy, with chunk_size 1000 and chunk_overlap 0
[policy]
name = "document_qa"
chunk_size = 1000
chunk_overlap = 0
```

Model Configuration

The model configuration is consistent with LangChain. The following is an example.

OpenAI:

```toml
[policy.llm]
_type = "openai"              # Corresponding environment variable: AILINGBOT_POLICY__LLM___TYPE
model_name = "gpt-3.5-turbo"  # Corresponding environment variable: AILINGBOT_POLICY__LLM__MODEL_NAME
openai_api_key = "sk-..."     # Corresponding environment variable: AILINGBOT_POLICY__LLM__OPENAI_API_KEY
```
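For example, following the mapping rules above, the same OpenAI settings can be supplied entirely through environment variables instead of `settings.toml` (the key value is a placeholder):

```shell
export AILINGBOT_POLICY__LLM___TYPE=openai
export AILINGBOT_POLICY__LLM__MODEL_NAME=gpt-3.5-turbo
export AILINGBOT_POLICY__LLM__OPENAI_API_KEY=sk-...
```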
Command Line Tools

Initialize Configuration File (init)

The `init` command generates a configuration file `settings.toml` in the current directory. By default, the user is prompted interactively; use the `--silence` option to generate the configuration file directly with default settings.

```
Usage: ailingbot init [OPTIONS]

  Initialize the AilingBot environment.

Options:
  --silence    Without asking the user.
  --overwrite  Overwrite existing file if a file with the same name already exists.
  --help       Show this message and exit.
```

| Option | Description | Type | Remarks |
|---|---|---|---|
| `--silence` | Generate the default configuration directly without asking the user | Flag | |
| `--overwrite` | Allow overwriting the `settings.toml` file in the current directory | Flag | |

View Current Configuration (config)

The `config` command reads the current environment configuration (including the configuration file and environment variables) and merges them.

```
Usage: ailingbot config [OPTIONS]

  Show current configuration information.

Options:
  -k, --config-key TEXT  Configuration key.
  --help                 Show this message and exit.
```

| Option | Description | Type | Remarks |
|---|---|---|---|
| `-k, --config-key` | Configuration key | String | If not passed, the complete configuration is displayed. |

Start Command Line Bot (chat)

The `chat` command starts an interactive command-line bot for testing the current chat policy.

```
Usage: ailingbot chat [OPTIONS]

  Start an interactive bot conversation environment.

Options:
  --debug  Enable debug mode.
  --help   Show this message and exit.
```

| Option | Description | Type | Remarks |
|---|---|---|---|
| `--debug` | Enable debug mode | Flag | Debug mode outputs more information, such as the prompt. |

Start Webhook Service (serve)

The `serve` command starts a webhook HTTP server for interacting with a specific IM.

```
Usage: ailingbot serve [OPTIONS]

  Run webhook server to receive events.

Options:
  --log-level [TRACE|DEBUG|INFO|SUCCESS|WARNING|ERROR|CRITICAL]
                   The minimum severity level from which logged messages should
                   be sent to (read from environment variable AILINGBOT_LOG_LEVEL
                   if not passed in).  [default: TRACE]
  --log-file TEXT  STDOUT, STDERR, or file path (read from environment variable
                   AILINGBOT_LOG_FILE if not passed in).  [default: STDERR]
  --help           Show this message and exit.
```

| Option | Description | Type | Remarks |
|---|---|---|---|
| `--log-level` | The minimum severity level for logged messages | String | By default, all log levels are displayed (TRACE). |
| `--log-file` | The location where logs are output | String | By default, logs go to standard error (STDERR). |

Start API Service (api)

The `api` command starts the API HTTP server.

```
Usage: ailingbot api [OPTIONS]

  Run endpoint server.

Options:
  --log-level [TRACE|DEBUG|INFO|SUCCESS|WARNING|ERROR|CRITICAL]
                   The minimum severity level from which logged messages should
                   be sent to (read from environment variable AILINGBOT_LOG_LEVEL
                   if not passed in).  [default: TRACE]
  --log-file TEXT  STDOUT, STDERR, or file path (read from environment variable
                   AILINGBOT_LOG_FILE if not passed in).  [default: STDERR]
  --help           Show this message and exit.
```

| Option | Description | Type | Remarks |
|---|---|---|---|
| `--log-level` | Display log level; logs at this level and above are displayed | String | By default, all levels are displayed (TRACE). |
| `--log-file` | Log output location | String | By default, logs are printed to standard error (STDERR). |

🔌 API

TBD

💻 Development Guide

Development Guidelines: TBD

Developing Chat Policy: TBD

Developing Channel: TBD

🤔 Frequently Asked Questions

- Because WeChat Work does not support upload-file event callbacks, the built-in `document_qa` policy cannot be used with WeChat Work.
- The webhook of each IM requires a public IP. If you do not have one, you can test locally through an "intranet penetration" (NAT traversal) solution. Please refer to online resources for specific methods.
- We expect chat policies to be stateless, with state stored externally. In practice, a policy implementation may still hold local state (such as conversation history kept in local variables). When uvicorn runs multiple worker processes, such local state cannot be shared: each process has its own chat policy instance, and requests from the same user may be served by different workers, leading to unexpected behavior. To avoid this, ensure that at least one of the following conditions is met: the chat policy uses no local state, or only one uvicorn worker is started.

🎯 Roadmap

- Provide complete usage and developer documentation.
- Support more channels:
  - WeChat Work
  - Feishu
  - DingTalk
  - Slack
- Support more request message types: text, image, file.
- Support more response message types: text, image, file, Markdown, table.
- Develop more out-of-the-box chat policies:
  - Multi-round conversation policy
  - Document question-and-answer policy
  - Database question-and-answer policy
  - Online search question-and-answer policy
- Support calling standalone chat policy services through HTTP.
- Abstract basic components: large language model, knowledge base, tools.
- Support local model deployment: ChatGLM-6B.
- Support API.
- Web management background and visual configuration management.
- Provide deployment capability based on Docker containers.
- Enhance the observability and controllability of the system.
- Complete test cases.
ailist
Augmented Interval List

An augmented interval list (AIList) is a data structure for enumerating intersections between a query interval and an interval set. AILists have previously been shown to be faster than interval trees, NCList, and BEDTools.

This implementation is a Python wrapper of the one used in the original AIList library. Additional wrapper functions have been created to allow an easy user interface. All citations should reference the original paper. See the documentation for full usage and installation instructions.

Install

If you don't already have numpy and scipy installed, it is best to download Anaconda, a Python distribution that has them included: https://continuum.io/downloads

Dependencies can be installed by:

```
pip install -r requirements.txt
```

PyPI install, presuming you have all its requirements installed:

```
pip install ailist
```

Benchmark

Test numpy random integers:

```python
# ailist version: 0.1.7
from ailist import AIList
# ncls version: 0.0.53
from ncls import NCLS
# numpy version: 1.18.4
import numpy as np
# pandas version: 1.0.3
import pandas as pd
# quicksect version: 0.2.2
import quicksect

# Set seed
np.random.seed(100)

# First values
starts1 = np.random.randint(0, 100000, 100000)
ends1 = starts1 + np.random.randint(1, 10000, 100000)
ids1 = np.arange(len(starts1))
values1 = np.ones(len(starts1))

# Second values
starts2 = np.random.randint(0, 100000, 100000)
ends2 = starts2 + np.random.randint(1, 10000, 100000)
ids2 = np.arange(len(starts2))
values2 = np.ones(len(starts2))
```

| Library | Function | Time (µs) |
|---|---|---|
| ncls | single overlap | 1170 |
| pandas | single overlap | 924 |
| quicksect | single overlap | 550 |
| ailist | single overlap | 73 |

| Library | Function | Time (s) | Max Memory (GB) |
|---|---|---|---|
| ncls | bulk overlap | 151 | >50 |
| ailist | bulk overlap | 17.8 | ~9 |
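A sketch of how the single-overlap case above might be reproduced with the arrays just defined, using the `from_array`, `construct`, and `intersect` calls shown in the Usage section below (the query interval is arbitrary):

```python
from ailist import AIList

# Build an AIList from the first interval set defined above
i = AIList()
i.from_array(starts1, ends1, ids1, values1)
i.construct()

# A single overlap query, the operation timed in the table above
o = i.intersect(50, 100)
o.display()
```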
Usage

```python
from ailist import AIList
import numpy as np

i = AIList()
i.add(15, 20)
i.add(10, 30)
i.add(17, 19)
i.add(5, 20)
i.add(12, 15)
i.add(30, 40)

# Print intervals
i.display()
# (15-20) (10-30) (17-19) (5-20) (12-15) (30-40)

# Find overlapping intervals
o = i.intersect(6, 15)
o.display()
# (5-20) (10-30) (12-15)

# Find index of overlaps
i.intersect_index(6, 15)
# array([3, 1, 4])

# Now i has been constructed/sorted
i.display()
# (5-20) (10-30) (12-15) (15-20) (17-19) (30-40)

# Can be done manually as well at any time
i.construct()

# Iterate over intervals
for x in i:
    print(x)
# Interval(5-20, 3)
# Interval(10-30, 1)
# Interval(12-15, 4)
# Interval(15-20, 0)
# Interval(17-19, 2)
# Interval(30-40, 5)

# Interval comparisons
j = AIList()
j.add(5, 15)
j.add(50, 60)

# Subtract regions
s = i - j  # also: i.subtract(j)
s.display()
# (15-20) (15-30) (15-20) (17-19) (30-40)

# Common regions
i + j  # also: i.common(j)
# AIList
#  range: (5-15)
#    (5-15, 3)
#    (10-15, 1)
#    (12-15, 4)

# AIList can also be added to from arrays
starts = np.arange(10, 1000, 100)
ends = starts + 50
ids = starts
values = np.ones(10)
i.from_array(starts, ends, ids, values)
i.display()
# (5-20) (10-30) (12-15) (15-20) (17-19) (30-40)
# (10-60) (110-160) (210-260) (310-360) (410-460)
# (510-560) (610-660) (710-760) (810-860) (910-960)

# Merge overlapping intervals
m = i.merge(gap=10)
m.display()
# (5-60) (110-160) (210-260) (310-360) (410-460)
# (510-560) (610-660) (710-760) (810-860) (910-960)

# Find array of coverage
c = i.coverage()
c.head()
# 5    1.0
# 6    1.0
# 7    1.0
# 8    1.0
# 9    1.0
# dtype: float64

# Calculate window protection score
w = i.wps(5)
w.head()
# 5   -1.0
# 6   -1.0
# 7    1.0
# 8   -1.0
# 9   -1.0
# dtype: float64

# Filter to interval lengths between 3 and 20
fi = i.filter(3, 20)
fi.display()
# (5-20) (10-30) (15-20) (30-40)

# Query by array
i.intersect_from_array(starts, ends, ids)
# (array([ 10,  10,  10,  10,  10,  10,  10, 110, 210, 310, 410, 510, 610,
#         710, 810, 910]),
#  array([  5,   2,   0,   4,  10,   1,   3, 110, 210, 310, 410, 510, 610,
#         710, 810, 910]))
```

Original paper

Jianglin Feng, Aakrosh Ratan, Nathan C Sheffield; Augmented Interval List: a novel data structure for efficient genomic interval search, Bioinformatics, btz407, https://doi.org/10.1093/bioinformatics/btz407
ail-lang
No description available on PyPI.
ailment
AILment

AIL is the angr intermediate language.

Project Links:

- Project repository: https://github.com/angr/ailment
- Documentation: https://api.angr.io/projects/ailment/en/latest/
ai-logicplum
Changelog

All notable changes to this project will be documented in this file.

- [1.0.0] - 2023-12-08: Released.
- [1.0.1] - 2023-12-08: Code optimized.
- [1.0.2] - 2023-12-08: Passing individual params instead of passing as a dictionary, except for training.
- [1.0.3] - 2023-12-08: 1.0.2 fixes.
- [1.0.4] - 2023-12-08: Config file changes.
- [1.0.5] - 2023-12-08: 1.0.4 fixes.
- [1.0.6] - 2023-12-08: Code fixes.
- [1.0.7] - 2023-12-08: Code fixes.
- [1.0.8] - 2023-12-08: Code fixes.
- [1.0.9] - 2023-12-08: Training module new updates.
- [1.1.0] - 2023-12-08: Report module new updates.
- [1.1.1] - 2023-12-08: Graph module new updates.
- [1.1.2] - 2023-12-08: Graph module new updates.
- [1.1.3] - 2023-12-08: Graph module new updates.
- [1.1.4] - 2023-12-08: Training new updates.
- [1.1.5] - 2023-12-08: Training new updates.
- [1.1.6] - 2023-12-08: Graph displaying feature added in plot.
- [1.1.7] - 2023-12-08: Fixes.
- [1.1.8] - 2023-12-08: JWT authentication implemented.
- [1.1.9] - 2023-12-08: Fixes.
- [1.2.0] - 2023-12-08: Fixes.
- [1.2.1] - 2023-12-08: Newplot blueprint and auto model selection in deployment added.
- [1.2.2] - 2023-12-08: Retrain model feature added.
- [1.2.3] - 2023-12-08: Deployment auto model selection fixes.
- [1.2.4] - 2023-12-08: Removed auto model selection from deployment.
- [1.2.5] - 2023-12-08: Content type changed in deployment.
- [1.2.6] - 2023-12-08: Fixes in deployment.
- [1.2.7] - 2023-12-08: Fixes in deployment.
- [1.2.8] - 2023-12-08: Manual prediction changes.
ai-logics
Features

- Speaking: ai-win will help your computer say what you want. Using your system's engine driver it can speak in different voices. Currently it has only two voices, male and female, but soon you will get many more voices and also languages.
- Recognizing speech: This feature will make your computer listen to you. This function is made so your computer takes input not only from actions but also from your voice!
- Accessing the camera: This feature lets your computer access your camera anytime you want. It might take a few seconds or even a minute, depending on your computer's RAM. You can also use the speech recognition function with it, so the camera opens when you say so. Note: 1. This function is absolutely safe and your captures will remain with you only. 2. This function will only work when your computer has an accessible camera. In case of any issues you can ask on StackOverflow; we will be very happy to answer your questions.
- Open internet: Opening the internet is a daily activity, and you can automate your access to it with this function. Note: This function is still developing; only a part of it is in this library. It will be extended in the coming newer versions of ai-win.
- Greetings: With this wonderful feature of this library, your computer can greet you at any point of the day.
- Time: This function will help you get the local time, anytime.
- Accessing Wikipedia: This is an amazing part of this library, with no limits. With this function your computer will know every famous personality, place, fact, history and much more.
- Search: The eagerness to know of any curious mind has no limits; it makes us search more and more. The search function will search anything, anytime, anywhere, given an internet connection.
- Face detection: With this library your computer can also detect your face. It uses the OpenCV Haar cascade to detect your facial features. It accesses your camera, captures your live expressions, converts them to grayscale, detects your face and, after all the processing, gives you a complete colored frame as output.

Examples

Speak:

```python
# from ai-logics import AI

if __name__ == '__main__':
    AI.speak.maleVoice('Hello World!')
```

You can also use the female voice instead:

```python
# from ai-logics import AI

if __name__ == '__main__':
    AI.speak.femaleVoice('Hello World!')
```

Recognize speech:

```python
# from ai-logics import AI

if __name__ == '__main__':
    speech = AI.recognizeSpeech.listen()
    print(speech)  # Example: I said 'Hello World'
```

Result:

![1674717566946](image/README/1674717566946.png)

Access the camera:

```python
# from ai-logics import AI

if __name__ == '__main__':
    AI.access_cam.capturefromDefaultcam('a')
```

'a' is the key with which you can terminate the execution. You can use any other key from your keyboard.

Using functions: You will find this in the documentation, which will be released on my GitHub account by 30th March. The link will be added in later versions.

Note: This library uses some modules which require system dependencies. If you are a Windows user, make sure your Windows is activated; if you are a Linux user, you may install espeak.

To activate Windows: https://support.microsoft.com/en-us/windows/activate-windows-c39005d4-95ee-b91e-b399-2820fda32227#:~:text=Select%20the%20Start%20button%2C%20and%20then%20select%20Settings%20%3E%20Update%20%26,COA%20and%20follow%20the%20instructions.

To download espeak: https://espeak.sourceforge.net/

Terms and Conditions apply.

Copyright © 2023 | Developed by Aditya Pratap Singh
ailola
No description available on PyPI.
ailove-django-fias
An application for working with the FIAS database in Django

Main features
====================

* Import of the FIAS database from a downloaded XML archive or directly from the site http://fias.nalog.ru
* Ability to store the data in a separate database
* An AddressField model field providing ajax address search in the Django admin
* Full-text search support for the AddressField field (`demo <http://youtu.be/ZVVrxg9-o_4>`_)
* A related model field for selecting a district within the city chosen in AddressField (districts are not linked to streets in any way, so they have to be selected separately when needed)
* Several abstract models that make life a little easier

Installation
============

1. Install `django-fias`::

    pip install django-fias

2. Add `fias` and `django_select2` to your `INSTALLED_APPS` list.

3. Add `url(r'^fias/', include('fias.urls', namespace='fias')),` to your urlpatterns.

4. In any convenient way, attach a fresh version of jQuery to the admin of the application in which you will use the FiasAddress field::

    # for example like this:
    class ItemAdmin(admin.ModelAdmin):
        class Media:
            js = ['//ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.js']

    admin.site.register(Item, ItemAdmin)

5. If you wish to use a separate database for the FIAS data, do the following:

   * Create the database and connect it to Django in the usual way
   * Add the following setting to your `settings.py`::

       FIAS_DATABASE_ALIAS = 'fias'

     where `fias` is the database alias

   * Add to the `DATABASE_ROUTERS` list::

       fias.routers.FIASRouter

   * Run::

       # with South
       python manage.py migrate --database=fias
       # without South
       python manage.py syncdb --database=fias

     where `fias` is the FIAS database alias

6. Run::

    # with South
    python manage.py migrate
    # without South
    python manage.py syncdb

7. Run::

    python manage.py collectstatic

Upgrading to version 0.3
========================

South is required.

Run::

    # if the FIAS data is stored in the main database
    python manage.py migrate
    # if the FIAS data is stored in a separate database
    python manage.py migrate --database=fias

where `fias` is the FIAS database alias

Upgrading to version 0.4
========================

South is required.

If the FIAS data is stored in MySQL, run::

    # if the FIAS data is stored in the main database
    python manage.py migrate fias 0004 --fake
    python manage.py migrate fias
    # if the FIAS data is stored in a separate database
    python manage.py migrate fias 0004 --fake --database=fias
    python manage.py migrate fias --database=fias

Otherwise run::

    # if the FIAS data is stored in the main database
    python manage.py migrate
    # if the FIAS data is stored in a separate database
    python manage.py migrate --database=fias

Then generate a new Sphinx config as described below and reindex the database.

Setting up full-text search
================================

AddressField supports 2 address search methods: sequential (sequence) and full-text (sphinx).

**NOTE**: only 2 DBMSes are supported: PostgreSQL and MySQL.

**NOTE2**: indexing the database in MySQL may require up to 2-2.5 GB of free space in the MySQL temporary directory.

**NOTE3**: there is no need to rebuild the FIAS search index too often. It is only required after updating the database.

The sequential method is used by default, as it requires no additional setup.

To enable full-text search, a few extra steps are needed:

1. Add the following setting to your `settings.py`::

    FIAS_SEARCH_ENGINE = 'sphinx'

2. Install:

   * `sphinxit <https://github.com/semirook/sphinxit>`_
   * `Sphinx Search Engine <http://sphinxsearch.com>`_ (there are `packages <http://sphinxsearch.com/downloads/release/>`_ for Debian, Ubuntu, RHEL and Windows)

3. Generate the `sphinx` configuration:

   If you already use `sphinx` in your project, you only need the index config. Run::

       python manage.py fias_sphinx --path=PATH > sphinx.conf

   where `PATH` is the path to the directory with the sphinx indexes.

   Otherwise run::

       python manage.py fias_sphinx --path=PATH --full > sphinx.conf

   to get a full sphinx config.

   Replace your sphinx config with the generated settings (for **Gentoo** this is the file `/etc/sphinx/sphinx.conf`, for **Ubuntu**: `/etc/sphinxsearch/sphinx.conf`)

4. After the data has been **imported** and updated, run::

    indexer -c /etc/sphinx/sphinx.conf --all

   *NOTE*: to reindex again while Sphinx is running, run::

    indexer -c /etc/sphinx/sphinx.conf --all --rotate

5. Start sphinx::

    # for Gentoo
    /etc/init.d/searchd start
    # for Ubuntu
    /etc/init.d/sphinxsearch start

**NOTE** If Sphinx runs on another host or another port, add a dictionary with the corresponding parameters to `settings.py`::

    FIAS_SEARCHD_CONNECTION = {
        'host': '127.0.0.1',
        'port': 9306,
    }

Configuring weights
===============

Because of how the FIAS database is organized, search results are not sorted the way one would like. Therefore, starting with version 0.4, it is possible to configure the weights of address object types as you see fit.

To do this, add a `FIAS_SB_WEIGHTS` dictionary to `settings.py` of the form::

    FIAS_SB_WEIGHTS = {
        # ABBREVIATION: WEIGHT
        'г': 128,
        'с': 100,
    }

where

* ABBREVIATION is the abbreviated object-type name from the SocrBase table
* WEIGHT is a number from 0 to 128

*NOTE*: by default the weight of all types is 64

*NOTE*: a filled-in example can be found in weights.py, which lists the preset weights.

To apply your changes, run::

    python manage.py fias --fill-weights

The weights can also be changed in the Django administration panel. But remember that those changes will be **overwritten** the next time the above command is run!

After making changes, the database must be reindexed.

Choosing which tables to import
==========================

The NORMDOC, SOCRBASE and ADDROBJ tables are always imported. The LANDMARK, HOUSEINT and HOUSE tables are optional.

Add to your `settings.py` a list of the table names you would like to import::

    FIAS_TABLES = ('landmark', 'houseint', 'house')

Importing data
==============

Initial data load
------------------------------

There are several ways to import data into the FIAS database.

Fully automatic import from the FIAS website::

    python manage.py fias --remote-file

This approach is not always advisable, for various reasons, so it is better to download the full archive yourself and import that::

    python manage.py fias --file /path/to/fias_xml.rar

**But!**

If the database already contains data, the script will print a message to that effect and stop. This behavior is due to the fact that, when importing from a file, if the file version does not match the data version of some table in the FIAS database, the data in that table is deleted entirely and replaced with new data, and the Django ORM, in the presence of related tables, will delete the data from those as well.

If you are sure about what you are doing, add the *--really-replace* flag to the previous command::

    python manage.py fias --file /path/to/fias_xml.rar --really-replace
    # or
    python manage.py fias --remote-file --really-replace

If for some reason you need to re-import the entire FIAS database from scratch, add the *--force-replace* flag::

    python manage.py fias --file /path/to/fias_xml.rar --force-replace --really-replace
    # or
    python manage.py fias --remote-file --force-replace --really-replace

If the downloaded file is out of date, you can add the *--update* flag to the command above, and the script will update the database to the current version right after the import::

    python manage.py fias --file /path/to/fias_xml.rar --update
    # or
    python manage.py fias --remote-file --update

**NOTE**

Only current records are imported. If the data about an object has changed, the latest version of the record for that object is loaded. Records from the future are not imported.

Updating an existing database
--------------------------

To update the database, run::

    python manage.py fias --update

Updates are performed only from the FIAS website. The database cannot be updated from a file.

**NOTE**

Sad as it may be, we live in Russia, where anything can happen, and the FIAS service serves up broken delta archives from time to time. To skip them automatically and continue updating with the next ones in order, use the *--skip* flag together with *--update*.

Usage
==============

You can reference the FIAS database tables yourself.

You can also add the `fias.fields.address.AddressField` field to your models, which gives you convenient address search over the database and a many-to-one link from your model to the `AddrObj` table of the FIAS database (see the `Item` model in the test application).

Alternatively, you can inherit from any of the models in `fias.models.address`, which add a few extra fields to your models and take care of some routine for you:

**FIASAddress** (see the `CachedAddress` model in the test application)

In addition to the `address` field, adds two more: `full_address` and `short_address`. The first stores the full address string (without the postal code), the second a shortened one.

**FIASAddressWithArea** (see the `CachedAddressWithArea` model in the test application)

Inherits from the previous model and adds an `area` field, which lets you specify a district of the city selected in the `address` field (provided, of course, the FIAS database has districts for that city).

**FIASHouse** (see the `CachedAddressWithHouse` model in the test application)

A mixin adding 3 fields, `house`, `corps` and `apartment`: the house number, building and apartment, respectively.

**FIASFullAddress**

A combination of the `FIASAddress` and `FIASHouse` models.

**FIASFullAddressWithArea**

A combination of the `FIASAddressWithArea` and `FIASHouse` models.

*NOTE*: the `FIASFullAddress` and `FIASFullAddressWithArea` models implement the `_get_full_address` and `_get_short_address` methods, returning the full and the shortened address string respectively, including the house/building/apartment number.

TODO
==============

* Check the lists of deleted objects and migrate all models related to AddrObj to the correct records

Known issues
====================

* If a separate database is used for the FIAS data, fields of type `ForeignKey` cannot be added to `list_display` in the admin
* South cannot work with multiple databases

Acknowledgments
====================

`Commit by EagerBeager <https://github.com/EagerBeager/django-fias/commit/ed375c2e1cafdc04f0c9612091eb040ef8f9f4fe>`_

Thanks to this commit, I finally understood why the import was eating memory.
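A minimal sketch of a model using the AddressField described in the Usage section above (the model and field names here are illustrative, not from the package docs)::

    from django.db import models
    from fias.fields.address import AddressField

    class Office(models.Model):
        # A plain model field
        name = models.CharField(max_length=100)
        # Ajax-searchable address linked to the FIAS AddrObj table
        address = AddressField(verbose_name='address')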
ailp
No description available on PyPI.
ailt
UNKNOWN
ail-typo-squatting
ail-typo-squatting

ail-typo-squatting is a Python library to generate lists of potential typosquatting domains with a domain name permutation engine, to feed AIL and other systems. The tool can be used stand-alone or to feed other systems.

Requirements

- Python 3.6+
- inflect library
- pyyaml
- tldextract
- dnspython

Installation

Source install

ail-typo-squatting can be installed with poetry. If you don't have poetry installed, you can do the following: `curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python`.

```
$ poetry install
$ poetry shell
$ cd ail-typo-squatting
$ python typo.py -h
```

pip installation

```
$ pip3 install ail-typo-squatting
```

Usage

```
dacru@dacru:~/git/ail-typo-squatting/bin$ python3 typo.py --help
usage: typo.py [-h] [-v] [-dn DOMAINNAME [DOMAINNAME ...]] [-fdn FILEDOMAINNAME]
               [-o OUTPUT] [-fo FORMATOUTPUT] [-br] [-dnsr] [-dnsl] [-l LIMIT]
               [-var] [-ko] [-a] [-om] [-repe] [-repl] [-drepl] [-cho] [-add]
               [-md] [-sd] [-vs] [-ada] [-hg] [-ahg] [-cm] [-hp] [-wt] [-wsld]
               [-at] [-sub] [-sp] [-cdd] [-addns] [-uddns] [-ns] [-combo] [-ca]

optional arguments:
  -h, --help            show this help message and exit
  -v                    verbose, more display
  -dn DOMAINNAME [DOMAINNAME ...], --domainName DOMAINNAME [DOMAINNAME ...]
                        list of domain names
  -fdn FILEDOMAINNAME, --filedomainName FILEDOMAINNAME
                        file containing a list of domain names
  -o OUTPUT, --output OUTPUT
                        path to output location
  -fo FORMATOUTPUT, --formatoutput FORMATOUTPUT
                        format for the output file, yara - regex - yaml - text. Default: text
  -br, --betterregex    use retrie for faster regex
  -dnsr, --dnsresolving
                        resolve all variations of the domain name to see if they are up or not
  -dnsl, --dnslimited   resolve all variations of the domain name but keep only up domains in the final result json
  -l LIMIT, --limit LIMIT
                        limit of variations for a domain name
  -var, --givevariations
                        give the algorithm that generated each variation
  -ko, --keeporiginal   keep the original domain name in the result list
  -a, --all             use all algorithms
  -om, --omission       leave out a letter of the domain name
  -repe, --repetition   character repeat
  -repl, --replacement  character replacement
  -drepl, --doublereplacement
                        double character replacement
  -cho, --changeorder   change the order of letters in words
  -add, --addition      add a character to the domain name
  -md, --missingdot     delete a dot from the domain name
  -sd, --stripdash      delete a dash from the domain name
  -vs, --vowelswap      swap vowels within the domain name
  -ada, --adddash       add a dash between the first and last character in a string
  -hg, --homoglyph      one or more characters that look similar to another character but are different are called homoglyphs
  -ahg, --all_homoglyph
                        generate all possible homoglyph permutations. Ex: circl.lu, e1rc1.lu
  -cm, --commonmisspelling
                        change a word by its misspellings
  -hp, --homophones     change a word by another that sounds the same when spoken
  -wt, --wrongtld       change the original top level domain to another
  -wsld, --wrongsld     change the original second level domain to another
  -at, --addtld         add a tld before the original tld
  -sub, --subdomain     insert a dot at varying positions to create subdomains
  -sp, --singularpluralize
                        create by making a singular domain plural and vice versa
  -cdd, --changedotdash
                        change dot to dash
  -addns, --adddynamicdns
                        add dynamic dns at the end of the domain
  -uddns, --updatedynamicdns
                        update the dynamic dns warning list
  -ns, --numeralswap    change numbers to words and vice versa. Ex: circlone.lu, circl1.lu
  -combo                combine multiple algorithms on a domain name
  -ca, --catchall       combine with -dnsr. Generate a random string in front of the domain.
```

Usage example

Creation of variations for ail-project.org and circl.lu, using all algorithms:

```
dacru@dacru:~/git/ail-typo-squatting/bin$ python3 typo.py -dn ail-project.org circl.lu -a -o .
```

Creation of variations for a file containing domain names, using character omission - subdomain - hyphenation:

```
dacru@dacru:~/git/ail-typo-squatting/bin$ python3 typo.py -fdn domain.txt -co -sub -hyp -o . -fo yara
```
Creation of variations for ail-project.org and circl.lu, using all algorithms and DNS resolution:

```
dacru@dacru:~/git/ail-typo-squatting/bin$ python3 typo.py -dn ail-project.org circl.lu -a -dnsr -o .
```

Creation of variations for ail-project.org, giving the algorithm that generated each variation (only for text format):

```
dacru@dacru:~/git/ail-typo-squatting/bin$ python3 typo.py -dn ail-project.org -a -o . --var
```

Used as a library

To run all algorithms:

```python
from ail_typo_squatting import runAll
import math

resultList = list()
domainList = ["google.com"]
formatoutput = "yara"
pathOutput = "."

for domain in domainList:
    resultList = runAll(
        domain=domain,
        limit=math.inf,
        formatoutput=formatoutput,
        pathOutput=pathOutput,
        verbose=False,
        givevariations=False,
        keeporiginal=False,
    )
    print(resultList)
    resultList = list()
```

To run specific algorithms:

```python
from ail_typo_squatting import formatOutput, omission, subdomain, addDash
import math

resultList = list()
domainList = ["google.com"]
limit = math.inf
formatoutput = "yara"
pathOutput = "."

for domain in domainList:
    resultList = omission(domain=domain, resultList=resultList, verbose=False,
                          limit=limit, givevariations=False, keeporiginal=False)
    resultList = subdomain(domain=domain, resultList=resultList, verbose=False,
                           limit=limit, givevariations=False, keeporiginal=False)
    resultList = addDash(domain=domain, resultList=resultList, verbose=False,
                         limit=limit, givevariations=False, keeporiginal=False)
    print(resultList)
    formatOutput(format=formatoutput, resultList=resultList, domain=domain,
                 pathOutput=pathOutput, givevariations=False)
    resultList = list()
```

Sample output

There are 4 possible formats for the output file:

- text
- yara
- regex
- sigma

For a text file, each line is a variation:

```
ail-project.org
il-project.org
al-project.org
ai-project.org
ailproject.org
ail-roject.org
ail-poject.org
ail-prject.org
ail-proect.org
ail-projct.org
ail-projet.org
ail-projec.org
aail-project.org
aiil-project.org
...
```

For a Yara file, each rule is a variation:

```
rule ail-project_org {
    meta:
        domain = "ail-project.org"
    strings:
        $s0 = "ail-project.org"
        $s1 = "il-project.org"
        $s2 = "al-project.org"
        $s3 = "ai-project.org"
        $s4 = "ailproject.org"
        $s5 = "ail-roject.org"
        $s6 = "ail-poject.org"
        $s7 = "ail-prject.org"
        $s8 = "ail-proect.org"
        $s9 = "ail-projct.org"
        $s10 = "ail-projet.org"
        $s11 = "ail-projec.org"
    condition:
        any of ($s*)
}
```

For a Regex file, each variation is transformed into a regex and concatenated with the others into one big regex:

```
ail\-project\.org|il\-project\.org|al\-project\.org|ai\-project\.org|ailproject\.org|ail\-roject\.org|ail\-poject\.org|ail\-prject\.org|ail\-proect\.org|ail\-projct\.org|ail\-projet\.org|ail\-projec\.org
```

For a Sigma file, the variations are listed under the `variations` key:

```yaml
title: ail-project.org
variations:
  - ail-project.org
  - il-project.org
  - al-project.org
  - ai-project.org
  - ailproject.org
  - ail-roject.org
  - ail-poject.org
  - ail-prject.org
  - ail-proect.org
  - ail-projct.org
  - ail-projet.org
  - ail-projec.org
```

DNS output

If DNS resolution is selected, an additional file is created in JSON format. Each key is a variation and may have an "ip" field if the domain name resolved. The "NotExist" field is always present, with a boolean value indicating whether the domain exists:

```json
{
  "circl.lu": {
    "NotExist": false,
    "ip": ["185.194.93.14"]
  },
  "ircl.lu": {
    "NotExist": true
  },
  "crcl.lu": {
    "NotExist": true
  },
  "cicl.lu": {
    "NotExist": true
  },
  "cirl.lu": {
    "NotExist": true
  },
  "circ.lu": {
    "NotExist": true
  },
  "ccircl.lu": {
    "NotExist": true
  },
  "ciircl.lu": {
    "NotExist": true
  },
  ...
}
```
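A short sketch of consuming that JSON, for example to keep only the variations that actually resolve (the file name is an assumption; the structure matches the example above):

```python
import json

# Load the DNS-resolution output described above (path is illustrative)
with open("domain_dns.json") as f:
    results = json.load(f)

# Keep only variations that exist and print their IPs
for domain, info in results.items():
    if not info["NotExist"]:
        print(domain, info.get("ip", []))
```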
The filed "NotExist" will be there each time with a Boolean value to determine if the domain is existing or not.{"circl.lu":{"NotExist":false,"ip":["185.194.93.14"]},"ircl.lu":{"NotExist":true},"crcl.lu":{"NotExist":true},"cicl.lu":{"NotExist":true},"cirl.lu":{"NotExist":true},"circ.lu":{"NotExist":true},"ccircl.lu":{"NotExist":true},"ciircl.lu":{"NotExist":true},...}List of algorithms usedAlgoDescriptionAddDashThese typos are created by adding a dash between the first and last character in a string.AdditionThese typos are created by add a characters in the domain name.AddDynamicDnsThese typos are created by adding a dynamic dns at the end of the original domain.AddTldThese typos are created by adding a tld before the right tld. Example: google.com becomes google.com.itChangeDotDashThese typos are created by changing a dot to a dash.ChangeOrderThese typos are created by changing the order of letters in the each part of the domain.ComboThese typos are created by combining multiple algorithms. For example, circl.lu becomes cirl6.luCommonMisspellingThese typos are created by changing a word by is misspelling. Over 8000 common misspellings from Wikipedia. For example,www.youtube.combecomeswww.youtub.comandwww.abseil.combecomeswww.absail.com.Double ReplacementThese typos are created by replacing identical, consecutive letters of the domain name.HomoglyphThese typos are created by replacing characters to another character that look similar but are different. An example is that the lower case l looks similar to the numeral one, e.g. l vs 1. For example, google.com becomes goog1e.com.HomophonesThese typos are created by changing word by an other who sound the same when spoken. Over 450 sets of words that sound the same when spoken. For example,www.base.combecomeswww.bass.com.MissingDotThese typos are created by deleting a dot from the domain name.NumeralSwapThese typos are created by changing a number to words and vice versa. For example, circlone.lu becomes circl1.lu.OmissionThese typos are created by leaving out a letter of the domain name, one letter at a time.RepetitionThese typos are created by repeating a letter of the domain name.ReplacementThese typos are created by replacing each letter of the domain name.StripDashThese typos are created by deleting a dash from the domain name.SingularPluralizeThese typos are created by making a singular domain plural and vice versa.SubdomainThese typos are created by placing a dot in the domain name in order to create subdomain. Example: google.com becomes goo.gle.comVowelSwapThese typos are created by swapping vowels within the domain name except for the first letter. For example,www.google.combecomeswww.gaagle.com.WrongTldThese typos are created by changing the original top level domain to another. For example,www.trademe.co.nzbecomeswww.trademe.co.mzandwww.google.combecomeswww.google.orgUses the 19 most common top level domains.WrongSldThese typos are created by changing the original second level domain to another. For example,www.trademe.co.ukbecomeswww.trademe.ac.ukandwww.google.comwill still bewww.google.com.AcknowledgmentThe project has been co-funded by CEF-TC-2020-2 - 2020-EU-IA-0260 - JTAN - Joint Threat Analysis Network.
aily-sdk
aily-sdk
aim
Drop a star to support Aim ⭐Join Aim discord communityAn easy-to-use & supercharged open-source experiment trackerAim logs your training runs and any AI Metadata, enables a beautiful UI to compare, observe them and an API to query them programmatically.SEAMLESSLY INTEGRATES WITH:TRUSTED BY ML TEAMS FROM:AimStack offers enterprise support that's beyond core Aim. Contact [email protected]•Demos•Ecosystem•Quick Start•Examples•Documentation•Community•Blogℹ️ AboutAim is an open-source, self-hosted ML experiment tracking tool designed to handle 10,000s of training runs.Aim provides a performant and beautiful UI for exploring and comparing training runs. Additionally, its SDK enables programmatic access to tracked metadata — perfect for automations and Jupyter Notebook analysis.Aim's mission is to democratize AI dev tools 🎯Log Metadata Across Your ML Pipeline 💾Visualize & Compare Metadata via UI 📊ML experiments and any metadata trackingIntegration with popular ML frameworksEasy migration from other experiment trackersMetadata visualization via Aim ExplorersGrouping and aggregationQuerying using Python expressionsRun ML Trainings Effectively ⚡Organize Your Experiments 🗂️System info and resource usage trackingReal-time alerting on training progressLogging and configurable notificationsDetailed run information for easy debuggingCentralized dashboard for holistic viewRuns grouping with tags and experiments🎬 DemosCheck out live Aim demos NOW to see it in action.Machine translation experimentslightweight-GAN experimentsTraining logs of a neural translation model(from WMT'19 competition).Training logs of 'lightweight' GAN, proposed in ICLR 2021.FastSpeech 2 experimentsSimple MNISTTraining logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".Simple MNIST training logs.🌍 EcosystemAim is not just an experiment tracker. It's a groundwork for an ecosystem. Check out the two most famous Aim-based tools.aimlflowAim-spaCyExploring MLflow experiments with a powerful UIan Aim-based spaCy experiment tracker🏁 Quick startFollow the steps below to get started with Aim.1. Install Aim on your training environmentpip3installaim2. Integrate Aim with your codefromaimimportRun# Initialize a new runrun=Run()# Log run parametersrun["hparams"]={"learning_rate":0.001,"batch_size":32,}# Log metricsforiinrange(10):run.track(i,name='loss',step=i,context={"subset":"train"})run.track(i,name='acc',step=i,context={"subset":"train"})See the full list of supported trackable objects(e.g. images, text, etc)here.3. Run the training as usual and start Aim UIaimupLearn moreMigrate from other toolsAim has built-in converters to easily migrate logs from other tools. These migrations cover the most common usage scenarios. 
In case of custom and complex scenarios you can use the Aim SDK to implement your own conversion script (a minimal sketch appears at the end of this entry).

- TensorBoard logs converter
- MLFlow logs converter
- Weights & Biases logs converter

Integrate Aim into an existing project

Aim easily integrates with a wide range of ML frameworks, providing built-in callbacks for most of them.

- Integration with Pytorch Ignite
- Integration with Pytorch Lightning
- Integration with Hugging Face
- Integration with Keras & tf.Keras
- Integration with Keras Tuner
- Integration with XGboost
- Integration with CatBoost
- Integration with LightGBM
- Integration with fastai
- Integration with MXNet
- Integration with Optuna
- Integration with PaddlePaddle
- Integration with Stable-Baselines3
- Integration with Acme
- Integration with Prophet

Query runs programmatically via SDK

The Aim Python SDK empowers you to query and access any piece of tracked metadata with ease.

from aim import Repo

my_repo = Repo('/path/to/aim/repo')
query = "metric.name == 'loss'"  # Example query

# Get collection of metrics
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        # Get run params
        params = metric.run[...]
        # Get metric values
        steps, metric_values = metric.values.sparse_numpy()

Set up a centralized tracking server

The Aim remote tracking server allows running experiments in a multi-host environment and collecting tracked data in a centralized location. See the docs on how to set up the remote server.

Deploy Aim on kubernetes

The official Aim docker image: https://hub.docker.com/r/aimstack/aim
A guide on how to deploy Aim on kubernetes: https://aimstack.readthedocs.io/en/latest/using/k8s_deployment.html

Read the full documentation on aimstack.readthedocs.io 📖

🆚 Comparisons to familiar tools

TensorBoard vs Aim

Training run comparison
- Order of magnitude faster training run comparison with Aim.
- Tracked params are first-class citizens in Aim. You can search, group and aggregate via params - deeply explore all the tracked data (metrics, params, images) in the UI.
- With TensorBoard, users are forced to record those parameters in the training run name to be able to search and compare. This causes a super-tedious comparison experience and usability issues in the UI when there are many experiments and params. TensorBoard doesn't have features to group or aggregate the metrics.

Scalability
- Aim is built to handle 1000s of training runs - both on the backend and on the UI.
- TensorBoard becomes really slow and hard to use when a few hundred training runs are queried / compared.

Beloved TB visualizations to be added on Aim
- Embedding projector.
- Neural network visualization.

MLflow vs Aim

MLflow is an end-to-end ML lifecycle tool. Aim is focused on training tracking. The main differences between Aim and MLflow are around UI scalability and run comparison features. Aim and MLflow are a perfect match - check out aimlflow, the tool that enables Aim superpowers on MLflow.

Run comparison
- Aim treats tracked parameters as first-class citizens. Users can query runs, metrics and images, and filter using the params.
- MLflow does have a search by tracked config, but there are no grouping, aggregation, subplotting-by-hyperparams or other comparison features available.

UI Scalability
- Aim UI can handle several thousand metrics at the same time smoothly, with 1000s of steps. It may get shaky when you explore 1000s of metrics with 10,000s of steps each.
But we are constantly optimizing!
- MLflow UI becomes slow to use when there are a few hundred runs.

Weights and Biases vs Aim

Hosted vs self-hosted
- Weights and Biases is a hosted, closed-source MLOps platform.
- Aim is a self-hosted, free and open-source experiment tracking tool.

🛣️ Roadmap

Detailed milestones

The Aim product roadmap :sparkle:
- The Backlog contains the issues we are going to choose from and prioritize weekly.
- The issues are mainly prioritized by the highly-requested features.

High-level roadmap

The high-level features we are going to work on over the next few months:

In progress
- Aim SDK low-level interface
- Dashboards – customizable layouts with embedded explorers
- Ergonomic UI kit
- Text Explorer

Next-up

Aim UI
- Runs management
  - Runs explorer – query and visualize runs data (images, audio, distributions, ...) in a central dashboard
- Explorers
  - Distributions Explorer

SDK and Storage
- Scalability
  - Smooth UI and SDK experience with over 10,000 runs
- Runs management
  - CLI commands
    - Reporting - runs summary and run details in a CLI compatible format
    - Manipulations – copy, move, delete runs, params and sequences
- Cloud storage support – store runs blob (e.g. images) data on the cloud
- Artifact storage – store files, model checkpoints, and beyond

Integrations
- ML Frameworks: Shortlist: scikit-learn
- Resource management tools: Shortlist: Kubeflow, Slurm
- Workflow orchestration tools

Done
- Live updates (Shipped: Oct 18 2021)
- Images tracking and visualization (Start: Oct 18 2021, Shipped: Nov 19 2021)
- Distributions tracking and visualization (Start: Nov 10 2021, Shipped: Dec 3 2021)
- Jupyter integration (Start: Nov 18 2021, Shipped: Dec 3 2021)
- Audio tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Transcripts tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Plotly integration (Start: Dec 1 2021, Shipped: Dec 17 2021)
- Colab integration (Start: Nov 18 2021, Shipped: Dec 17 2021)
- Centralized tracking server (Start: Oct 18 2021, Shipped: Jan 22 2022)
- Tensorboard adaptor - visualize TensorBoard logs with Aim (Start: Dec 17 2021, Shipped: Feb 3 2022)
- Track git info, env vars, CLI arguments, dependencies (Start: Jan 17 2022, Shipped: Feb 3 2022)
- MLFlow adaptor (visualize MLflow logs with Aim) (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Activeloop Hub integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- PyTorch-Ignite integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Run summary and overview info (system params, CLI args, git info, ...)
  (Start: Feb 14 2022, Shipped: Mar 9 2022)
- Add DVC related metadata into aim run (Start: Mar 7 2022, Shipped: Mar 26 2022)
- Ability to attach notes to Run from UI (Start: Mar 7 2022, Shipped: Apr 29 2022)
- Fairseq integration (Start: Mar 27 2022, Shipped: Mar 29 2022)
- LightGBM integration (Start: Apr 14 2022, Shipped: May 17 2022)
- CatBoost integration (Start: Apr 20 2022, Shipped: May 17 2022)
- Run execution details (display stdout/stderr logs) (Start: Apr 25 2022, Shipped: May 17 2022)
- Long sequences (up to 5M of steps) support (Start: Apr 25 2022, Shipped: Jun 22 2022)
- Figures Explorer (Start: Mar 1 2022, Shipped: Aug 21 2022)
- Notify on stuck runs (Start: Jul 22 2022, Shipped: Aug 21 2022)
- Integration with KerasTuner (Start: Aug 10 2022, Shipped: Aug 21 2022)
- Integration with WandB (Start: Aug 15 2022, Shipped: Aug 21 2022)
- Stable remote tracking server (Start: Jun 15 2022, Shipped: Aug 21 2022)
- Integration with fast.ai (Start: Aug 22 2022, Shipped: Oct 6 2022)
- Integration with MXNet (Start: Sep 20 2022, Shipped: Oct 6 2022)
- Project overview page (Start: Sep 1 2022, Shipped: Oct 6 2022)
- Remote tracking server scaling (Start: Sep 11 2022, Shipped: Nov 26 2022)
- Integration with PaddlePaddle (Start: Oct 2 2022, Shipped: Nov 26 2022)
- Integration with Optuna (Start: Oct 2 2022, Shipped: Nov 26 2022)
- Audios Explorer (Start: Oct 30 2022, Shipped: Nov 26 2022)
- Experiment page (Start: Nov 9 2022, Shipped: Nov 26 2022)
- HuggingFace datasets (Start: Dec 29 2022, Shipped: Feb 3 2023)

👥 Community

Aim README badge

Add the Aim badge to your README if you've enjoyed using Aim in your work:

[![Aim](https://img.shields.io/badge/powered%20by-Aim-%231473E6)](https://github.com/aimhubio/aim)

Cite Aim in your papers

In case you've found Aim helpful in your research journey, we'd be thrilled if you could acknowledge Aim's contribution:

@software{Arakelyan_Aim_2020,
  author = {Arakelyan, Gor and Soghomonyan, Gevorg and {The Aim team}},
  doi = {10.5281/zenodo.6536395},
  license = {Apache-2.0},
  month = {6},
  title = {{Aim}},
  url = {https://github.com/aimhubio/aim},
  version = {3.9.3},
  year = {2020}
}

Contributing to Aim

Considering contributing to Aim? To get started, please take a moment to read the CONTRIBUTING.md guide. Join Aim contributors by submitting your first pull request. Happy coding! 😊

Made with contrib.rocks.

More questions?
- Read the docs
- Open a feature request or report a bug
- Join Discord community server
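A closing example, tying back to the "Migrate from other tools" section above: a custom conversion script can be little more than a loop over the Run API shown in the quick start. This is a sketch only; the CSV file name and its "step"/"loss"/"acc" columns are made-up placeholders for whatever your previous tool exported:

import csv

from aim import Run

# Replay metrics exported from some other tool into a new Aim run.
run = Run()
run["hparams"] = {"source": "legacy_csv"}  # carry over any params you like

with open("legacy_metrics.csv") as f:      # hypothetical export file
    for row in csv.DictReader(f):
        step = int(row["step"])
        # run.track() is the same call used in the quick start above.
        run.track(float(row["loss"]), name="loss", step=step, context={"subset": "train"})
        run.track(float(row["acc"]), name="acc", step=step, context={"subset": "train"})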
aim3
AppImages-Manager

A little project I made for myself to sort my AppImages collection. I made it really just for myself, to be able to sort and categorize my AppImages in an automated way, but I thought I'd share it in case anyone else finds it useful.

Run with either aim for the command line, or aimgui for the GUI.

Open the Config button in the app to choose where your Downloads folder is (where your browser will download AppImages to), and the Storage path (where this app stores the AppImages). Press the Install button to move all apps from Downloads into Storage. Refresh reloads the list box with the files that are in Storage. Run launches the selected app; if the app doesn't have permission to execute, it'll handle that automatically. Delete removes the selected AppImage file from the system. The Group button allows you to sort the selected image into categories or groups: either enter a name in the box for a new group, or select a button for an existing group. The Edit button lets you edit the groups (for now, just used for removing images from a group).

CLI Version

There is also a command-line version available. Run the aim-cli.py script for a non-GUI way of quickly managing AppImages. If run on its own, it enters an interactive mode where it'll ask for commands as you go. If you run it with command arguments, it'll run the command then exit. aim-cli.py help shows all the commands. Currently supported operations: finding apps in a directory, running apps, installing apps from Downloads, automatically handling execution permissions on AppImages.

Requires

Just tkinter. Everything else should be stock python3 libraries. Only tested in python3.
aima
Introduction

Code for Artificial Intelligence: A Modern Approach (AIMA) 4th edition by Peter Norvig and Stuart Russell.

Shameless reuse of Norvig's official repository at https://github.com/aimacode/aima-python/

The code should work in Python 3.7+.

How to Browse the Code

You can get some use out of the code here just by browsing, starting at the root of the source tree or by clicking on the links in the index on the project home page. The source code is in the .py files; the .txt files give examples of how to use the code.

How to Install the Code

If you like what you see, install the code using either one of these methods:

- From a command shell on your computer, execute the svn checkout command given on the source tab of the project. This assumes you have previously installed the version control system Subversion (svn).
- Download and unzip the zip file listed as a "Featured download" on the right hand side of the project home page. This is currently (Oct 2011) long out of date; we mean to make a new .zip when the svn checkout settles down.

You'll also need to install the data files from the aima-data project. These are text files that are used by the tests in the aima-python project, and may be useful for your own work.

You can put the code anywhere you want on your computer, but it should be in one directory (you might call it aima, but you are free to use whatever name you want) with aima-python as a subdirectory that contains all the files from this project, and data as a parallel subdirectory that contains all the files from the aima-data project.

How to Test the Code

First, you need to install Python (version 2.5 through 2.7; parts of the code may work in other versions, but don't expect it to). Python comes preinstalled on most versions of Linux and Mac OS. Versions are also available for Windows, Solaris, and other operating systems. If your system does not have Python installed, you can download and install it for free.

In the aima-python directory, execute the command

python doctests.py -v *.py

The "-v" is optional; it means "verbose". Various output is printed, but if all goes well there should be no instances of the word "Failure". If you do use the "-v" option, the last line printed should be "Test passed."

How to Run the Code

You're on your own -- experiment! Create a new python file, import the modules you need, and call the functions you want.

Acknowledgements

Many thanks for the bug reports, corrected code, and other support from Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others.
aima3
.. raw:: html

   <div align="center"><a href="http://aima.cs.berkeley.edu/"><img src="https://raw.githubusercontent.com/aimacode/aima-python/master/images/aima_logo.png"></a><br><br></div>

``aima-python`` |Build Status| |Binder|
=======================================

Python code for the book *`Artificial Intelligence: A Modern Approach <http://aima.cs.berkeley.edu>`__.* You can use this in conjunction with a course on AI, or for study on your own. We're looking for `solid contributors <https://github.com/aimacode/aima-python/blob/master/CONTRIBUTING.md>`__ to help.

Structure of the Project
------------------------

When complete, this project will have Python implementations for all the pseudocode algorithms in the book, as well as tests and examples of use. For each major topic, such as ``nlp`` (natural language processing), we provide the following files:

- ``nlp.py``: Implementations of all the pseudocode algorithms, and necessary support functions/classes/data.
- ``tests/test_nlp.py``: A lightweight test suite, using ``assert`` statements, designed for use with `py.test <http://pytest.org/latest/>`__, but also usable on its own.
- ``nlp.ipynb``: A Jupyter (IPython) notebook that explains and gives examples of how to use the code.
- ``nlp_apps.ipynb``: A Jupyter notebook that gives example applications of the code.

Python 3.4 and up
-----------------

This code requires Python 3.4 or later, and does not run in Python 2. You can `install Python <https://www.python.org/downloads>`__ or use a browser-based Python interpreter such as `repl.it <https://repl.it/languages/python3>`__.

You can run the code in an IDE, or from the command line with ``python -i filename.py``, where the ``-i`` option puts you in an interactive loop where you can run Python functions. See `jupyter.org <http://jupyter.org/>`__ for instructions on setting up your own Jupyter notebook environment, or run the notebooks online with `try.jupyter.org <https://try.jupyter.org/>`__.

Index of Algorithms
===================

Here is a table of algorithms: the figure, the name of the algorithm in the book and in the repository, and the file where it is implemented in the repository. This chart was made for the third edition of the book and is being updated for the upcoming fourth edition. Empty implementations are a good place for contributors to look for an issue. The `aima-pseudocode <https://github.com/aimacode/aima-pseudocode>`__ project describes all the algorithms from the book. An asterisk next to the file name denotes the algorithm is not fully implemented. Another great place for contributors to start is by adding tests and writing on the notebooks. You can see which algorithms have tests and notebook sections below. If the algorithm you want to work on is covered, don't worry! You can still add more tests and provide some examples of use in the notebook!

.. csv-table::
   :header: "Figure", "Name (in 3rd edition)", "Name (in repository)", "File", "Tests", "Notebook"

   2, Random-Vacuum-Agent, RandomVacuumAgent, agents.py, Done,
   2, Model-Based-Vacuum-Agent, ModelBasedVacuumAgent, agents.py, Done,
   2.1, Environment, Environment, agents.py, Done, Included
   2.1, Agent, Agent, agents.py, Done, Included
   2.3, Table-Driven-Vacuum-Agent, TableDrivenVacuumAgent, agents.py, ,
   2.7, Table-Driven-Agent, TableDrivenAgent, agents.py, ,
   2.8, Reflex-Vacuum-Agent, ReflexVacuumAgent, agents.py, Done,
   2.10, Simple-Reflex-Agent, SimpleReflexAgent, agents.py, ,
   2.12, Model-Based-Reflex-Agent, ReflexAgentWithState, agents.py, ,
   3, Problem, Problem, search.py, Done,
   3, Node, Node, search.py, Done,
   3, Queue, Queue, utils.py, Done,
   3.1, Simple-Problem-Solving-Agent, SimpleProblemSolvingAgent, search.py, ,
   3.2, Romania, romania, search.py, Done, Included
   3.7, Tree-Search, tree_search, search.py, Done,
   3.7, Graph-Search, graph_search, search.py, Done,
   3.11, Breadth-First-Search, breadth_first_search, search.py, Done, Included
   3.14, Uniform-Cost-Search, uniform_cost_search, search.py, Done, Included
   3.17, Depth-Limited-Search, depth_limited_search, search.py, Done,
   3.18, Iterative-Deepening-Search, iterative_deepening_search, search.py, Done,
   3.22, Best-First-Search, best_first_graph_search, search.py, Done,
   3.24, A\*-Search, astar_search, search.py, Done, Included
   3.26, Recursive-Best-First-Search, recursive_best_first_search, search.py, Done,
   4.2, Hill-Climbing, hill_climbing, search.py, Done,
   4.5, Simulated-Annealing, simulated_annealing, search.py, Done,
   4.8, Genetic-Algorithm, genetic_algorithm, search.py, Done, Included
   4.11, And-Or-Graph-Search, and_or_graph_search, search.py, Done,
   4.21, Online-DFS-Agent, online_dfs_agent, search.py, ,
   4.24, LRTA\*-Agent, LRTAStarAgent, search.py, Done,
   5.3, Minimax-Decision, minimax_decision, games.py, Done, Included
   5.7, Alpha-Beta-Search, alphabeta_search, games.py, Done, Included
   6, CSP, CSP, csp.py, Done, Included
   6.3, AC-3, AC3, csp.py, Done,
   6.5, Backtracking-Search, backtracking_search, csp.py, Done, Included
   6.8, Min-Conflicts, min_conflicts, csp.py, Done,
   6.11, Tree-CSP-Solver, tree_csp_solver, csp.py, Done, Included
   7, KB, KB, logic.py, Done, Included
   7.1, KB-Agent, KB_Agent, logic.py, Done,
   7.7, Propositional Logic Sentence, Expr, logic.py, Done,
   7.10, TT-Entails, tt_entails, logic.py, Done,
   7.12, PL-Resolution, pl_resolution, logic.py, Done, Included
   7.14, Convert to CNF, to_cnf, logic.py, Done,
   7.15, PL-FC-Entails?, pl_fc_resolution, logic.py, Done,
   7.17, DPLL-Satisfiable?, dpll_satisfiable, logic.py, Done,
   7.18, WalkSAT, WalkSAT, logic.py, Done,
   7.20, Hybrid-Wumpus-Agent, HybridWumpusAgent, , ,
   7.22, SATPlan, SAT_plan, logic.py, Done,
   9, Subst, subst, logic.py, Done,
   9.1, Unify, unify, logic.py, Done, Included
   9.3, FOL-FC-Ask, fol_fc_ask, logic.py, Done,
   9.6, FOL-BC-Ask, fol_bc_ask, logic.py, Done,
   9.8, Append, , , ,
   10.1, Air-Cargo-problem, air_cargo, planning.py, Done,
   10.2, Spare-Tire-Problem, spare_tire, planning.py, Done,
   10.3, Three-Block-Tower, three_block_tower, planning.py, Done,
   10.7, Cake-Problem, have_cake_and_eat_cake_too, planning.py, Done,
   10.9, Graphplan, GraphPlan, planning.py, ,
   10.13, Partial-Order-Planner, , , ,
   11.1, Job-Shop-Problem-With-Resources, job_shop_problem, planning.py, Done,
   11.5, Hierarchical-Search, hierarchical_search, planning.py, ,
   11.8, Angelic-Search, , , ,
   11.10, Doubles-tennis, double_tennis_problem, planning.py, ,
   13, Discrete Probability Distribution, ProbDist, probability.py, Done, Included
   13.1, DT-Agent, DTAgent, probability.py, ,
   14.9, Enumeration-Ask, enumeration_ask, probability.py, Done, Included
   14.11, Elimination-Ask, elimination_ask, probability.py, Done, Included
   14.13, Prior-Sample, prior_sample, probability.py, , Included
   14.14, Rejection-Sampling, rejection_sampling, probability.py, Done, Included
   14.15, Likelihood-Weighting, likelihood_weighting, probability.py, Done, Included
   14.16, Gibbs-Ask, gibbs_ask, probability.py, Done, Included
   15.4, Forward-Backward, forward_backward, probability.py, Done,
   15.6, Fixed-Lag-Smoothing, fixed_lag_smoothing, probability.py, Done,
   15.17, Particle-Filtering, particle_filtering, probability.py, Done,
   16.9, Information-Gathering-Agent, , , ,
   17.4, Value-Iteration, value_iteration, mdp.py, Done, Included
   17.7, Policy-Iteration, policy_iteration, mdp.py, Done,
   17.9, POMDP-Value-Iteration, , , ,
   18.5, Decision-Tree-Learning, DecisionTreeLearner, learning.py, Done, Included
   18.8, Cross-Validation, cross_validation, learning.py, ,
   18.11, Decision-List-Learning, DecisionListLearner, learning.py \*, ,
   18.24, Back-Prop-Learning, BackPropagationLearner, learning.py, Done, Included
   18.34, AdaBoost, AdaBoost, learning.py, ,
   19.2, Current-Best-Learning, current_best_learning, knowledge.py, Done, Included
   19.3, Version-Space-Learning, version_space_learning, knowledge.py, Done, Included
   19.8, Minimal-Consistent-Det, minimal_consistent_det, knowledge.py, Done,
   19.12, FOIL, FOIL_container, knowledge.py, Done,
   21.2, Passive-ADP-Agent, PassiveADPAgent, rl.py, Done,
   21.4, Passive-TD-Agent, PassiveTDAgent, rl.py, Done, Included
   21.8, Q-Learning-Agent, QLearningAgent, rl.py, Done, Included
   22.1, HITS, HITS, nlp.py, Done, Included
   23, Chart-Parse, Chart, nlp.py, Done, Included
   23.5, CYK-Parse, CYK_parse, nlp.py, Done, Included
   25.9, Monte-Carlo-Localization, monte_carlo_localization, probability.py, Done,

Index of data structures
========================

Here is a table of the implemented data structures: the figure, the name of the implementation in the repository, and the file where it is implemented.

.. csv-table::
   :header: "Figure", "Name (in repository)", "File"

   3.2, romania_map, search.py
   4.9, vacumm_world, search.py
   4.23, one_dim_state_space, search.py
   6.1, australia_map, search.py
   7.13, wumpus_world_inference, logic.py
   7.16, horn_clauses_KB, logic.py
   17.1, sequential_decision_environment, mdp.py
   18.2, waiting_decision_tree, learning.py

Acknowledgements
================

Many thanks for contributions over the years. I got bug reports, corrected code, and other support from Darius Bacon, Phil Ruggera, Peng Shao, Amit Patil, Ted Nienstedt, Jim Martin, Ben Catanzariti, and others. Now that the project is on GitHub, you can see the `contributors <https://github.com/aimacode/aima-python/graphs/contributors>`__ who are doing a great job of actively improving the project. Many thanks to all contributors, especially @darius, @SnShine, @reachtarunhere, @MrDupin, and @Chipe1.

.. raw:: html

   <!---Reference Links-->

.. |Build Status| image:: https://travis-ci.org/aimacode/aima-python.svg?branch=master
   :target: https://travis-ci.org/aimacode/aima-python
.. |Binder| image:: http://mybinder.org/badge.svg
   :target: http://mybinder.org/repo/aimacode/aima-python
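Quick usage sketch
==================

A minimal, non-authoritative sketch of how the indexed pieces fit together: ``astar_search`` and ``romania_map`` appear in the indexes above, while the ``aima3.`` import prefix and the ``GraphProblem`` wrapper are assumptions about how the installed package is laid out, not documented API.

.. code:: python

    # Route-finding on the Romania map (Fig. 3.2) with A*-Search (Fig. 3.24).
    # NOTE: the "aima3.search" path and "GraphProblem" are assumed, not stated here.
    from aima3.search import GraphProblem, astar_search, romania_map

    problem = GraphProblem('Arad', 'Bucharest', romania_map)
    goal = astar_search(problem)
    print(goal.solution())  # expected: a list of cities ending in 'Bucharest'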
aimage
https://github.com/aieater/python_async_image_library

aimage (Native aimage library wrapper for internal use)
ai-maintainer-git-util
AI Maintainer Git UtilA git util for working with the AI Maintainer go git repo in google cloudFree software: MIT licenseDocumentation:https://ai-maintainer-git-util.readthedocs.io.FeaturesTODOCreditsThis package was created withCookiecutterand theaudreyr/cookiecutter-pypackageproject template.History0.1.0 (2023-08-31)First release on PyPI.
aimai-search
No description available on PyPI.
aiman
UNKNOWN
ai-management
Artificial Intelligence Management

This is a toolbox that helps AI & ML teams better manage their metrics and processes. Our aim is to provide the company with data related to AI solutions in a way that is easy to read and use. Some new goals will be added later.

Confluence Documentation Link
Tangram Link

Table of Contents
- Project Structure
- Features
- Installation/Usage
- Contact

Project Structure

The structure of the project folder, including the organization of modules, directories, and any important files:

ai_management/
├── __init__.py
├── model_evaluation.py
├── config.yaml

Purpose of each module or significant file:

ModelEvaluation: historizes the technical model evaluation results in a Google BigQuery table in a Google Cloud Platform project.

Installation

pip install ai-management

Usage

Binary classification

import numpy as np  # needed for np.array below

y_true = [1, 0, 0, 1, 1]
y_pred = [1, 0, 0, 0, 1]
y_test_a_lst = y_true
y_pred_a_lst = y_pred
y_test_a_arr = np.array(y_true)
y_pred_a_arr = np.array(y_pred)

Multi-class classification

y_true = [0, 1, 2, 1, 2]
y_pred = [[0.9, 0.1, 0.0], [0.3, 0.2, 0.5], [0.2, 0.3, 0.5], [0.1, 0.8, 0.1], [0.1, 0.2, 0.7]]
y_test_b_lst = y_true
y_pred_b_lst = y_pred
y_test_b_arr = np.array(y_true)
y_pred_b_arr = np.array(y_pred)

Multi-label classification

y_test = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
y_pred = [[0, 1, 2], [3, 4, 5], [6, 7, 9]]
y_test_c_lst = y_test
y_pred_c_lst = y_pred
y_test_c_arr = np.array(y_test)  # fixed: was np.array(y_true), which referenced the stale multi-class list
y_pred_c_arr = np.array(y_pred)

Regression

y_true = [2.5, 3.0, 4.0, 5.5, 6.0]
y_pred = [2.0, 3.5, 3.8, 5.0, 6.5]
y_test_d_lst = y_true
y_pred_d_lst = y_pred
y_test_d_arr = np.array(y_true)
y_pred_d_arr = np.array(y_pred)

Association rules (spelled 'assossiation' in the API)

import pandas as pd
import numpy as np

# Create a dataframe with random values
df_assossiation = pd.DataFrame({
    'ID_PRNCPAL': np.random.randint(1, 50000, size=103846),
    'CONFIDENCE': np.random.uniform(0.01, 0.03, size=103846),
})
df_assossiation.sort_values('ID_PRNCPAL')

Solution Evaluation

import ai_management as aim
from google.cloud import bigquery  # needed for bigquery.Client below

client_bq = bigquery.Client(project='project')
me = aim.ModelEvaluation(client_bq=client_bq, destination='project.dataset.table')

# Historizing standard metrics
me.historize_model_evaluation(
    soltn_nm='Solution X',
    lst_mdls=[
        {'mdl_nm': 'Model A', 'algrthm_typ': 'binary_classification', 'data': [y_test_a_lst, y_pred_a_lst]},
        {'mdl_nm': 'Model B', 'algrthm_typ': 'multi_class_classification', 'data': [y_test_b_lst, y_pred_b_lst]},
        {'mdl_nm': 'Model C', 'algrthm_typ': 'multi_label_classification', 'data': [y_test_c_lst, y_pred_c_lst]},
        {'mdl_nm': 'Model D', 'algrthm_typ': 'assossiation', 'data': ['confidence', df_assossiation]},
    ],
)

# Historizing custom metrics
me.historize_custom_metric(
    soltn_nm="Solution Y",
    lst_mdls=[
        {'mdl_nm': 'Model E', 'algrthm_typ': 'regression', 'data': [
            ["Lin's Concordance Correlation Coefficient", 0.85, None],
            ["Huber's error", 123, {"delta": 0.75}],
        ]},
    ],
)

Contact

Leroy Merlin Brazil AI scientists and developers: [email protected]
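A note on the regression arrays defined above: they could presumably be historized through the same standard-metrics entry point, in the same shape as the other entries. Whether 'regression' is a supported algrthm_typ for historize_model_evaluation (as opposed to historize_custom_metric, which this README does show it with) is an assumption, not something the README states:

# ASSUMPTION: 'regression' may only be accepted by historize_custom_metric;
# this sketch mirrors the call shape shown above and is not documented API.
me.historize_model_evaluation(
    soltn_nm='Solution Z',
    lst_mdls=[
        # Reuses y_test_d_lst / y_pred_d_lst from the regression snippet above.
        {'mdl_nm': 'Model D', 'algrthm_typ': 'regression', 'data': [y_test_d_lst, y_pred_d_lst]},
    ],
)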
aimanager
scailable-ai-manager-cli
aimap
No description available on PyPI.
aimapper
No description available on PyPI.
aimaps
aimapsPython Boilerplate contains all the boilerplate you need to create a Python package.Free software: MIT licenseDocumentation:https://gisam.github.io/aimapsFeaturesTODO
aimaster
Artificial Neural Network learning and experimentation tools with live visualization. (NOTE: Do not use pipe visualization with very large networks 😅.) This module needs numpy and scipy installed, and matplotlib for visualization. Feel free to experiment with aimaster.

## Installation

Use pip install aimaster or pip3 install aimaster to install the package from PyPI.
aimaxcli
Setup

First, you need to create a virtual environment and activate it.

$ pip install virtualenv
$ virtualenv .venv
$ . .venv/bin/activate
(.venv)$

Next, install cliff in the environment.

(.venv)$ python setup.py install

Now, install the demo application into the virtual environment.

(.venv)$ cd demoapp
(.venv)$ python setup.py install

Usage

With cliff and the demo set up, you can now play with it.

To see a list of the available commands, run:

(.venv)$ cliffdemo --help

One of the available commands is "simple", and running it

(.venv)$ cliffdemo simple

produces the following:

sending greeting
hi!

To see help for an individual command, include the command name on the command line:

(.venv)$ cliffdemo files --help

Cleaning Up

Finally, when done, deactivate your virtual environment:

(.venv)$ deactivate
$
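For reference, the "simple" command shown above is implemented as a subclass of cliff's Command class. A minimal version looks roughly like this (a sketch based on cliff's documented Command interface; treat the exact demo internals as an approximation):

import logging

from cliff.command import Command


class Simple(Command):
    """A simple command that prints a greeting."""

    log = logging.getLogger(__name__)

    def take_action(self, parsed_args):
        # Produces the output shown above when running "cliffdemo simple".
        self.log.info('sending greeting')
        self.app.stdout.write('hi!\n')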
ai-maze
AI-MazeThis program is for a school project in AI. The purpose is to have the agent traverse the left side of the maze until it finds the entrance. Once found, it will make a node and start its path inside the maze until it finds the exit. Once the exit is found, the path traveled is printed.Code written by Jody Bailey
aimbase
Getting Started

To get started with aimbase for your application, visit the docs at https://aimbase.erob.io/. To contribute to aimbase, see the contributing section below.

Contributing

Getting Started Locally
- Launch postgres and pgadmin via docker-compose: docker-compose up --build.
- Keeping your containers running, open a new terminal with the root of this repo as the working directory. Install poetry: pip install poetry (or use pipx if you prefer isolated envs, or consider using conda).
- Create and enter the virtual environment: poetry shell
- Install the dependencies: poetry install
- Start the app: uvicorn examples.example_app:auto_app --reload.
- Open localhost:8000/v1/docs and start interacting with swagger!
- You can shut down, and your db / minio data will persist via docker volumes.

MinIO

The MinIO console is available at localhost:9001. Log in with user miniouser and password minioadmin if you launched the containers with docker-compose up --build.

Hooks and Tests
- Set up the pre-commit hook with pre-commit install.
- Run tests and get coverage with pytest --cov, and get HTML reports for VS Code Live Server (or any server) with pytest --cov --cov-report=html:coverage_re
- Open a pull request against the repo! Please write tests; your coverage will automatically be added as a comment to any PR via GH actions.

Viewing Docs Locally
- Install docs dependencies with pip install -r requirements-docs.txt.
- Install mkdocs with pip install mkdocs-material.
- Move into the docs/en directory via cd docs/en.
- Run mkdocs serve to start a local server.
aimbat
AIMBATAIMBAT (Automated and Interactive Measurement of Body wave Arrival Times) is an open-source software package for efficiently measuring teleseismic body wave arrival times for large seismic arrays[1]. It is based on a widely used method called MCCC (Multi-Channel Cross-Correlation)[2]. The package is automated in the sense of initially aligning seismograms for MCCC, which is achieved by an ICCS (Iterative Cross Correlation and Stack) algorithm. Meanwhile, a GUI (graphical user interface) is built to perform seismogram quality control interactively. Therefore, user processing time is reduced while valuable input from a user's expertise is retained. As a byproduct, SAC[3]plotting and phase picking functionalities are replicated and enhanced.Modules and scripts included in the AIMBAT package were developed usingPythonand its open-source modules on the Mac OS X platform since 2009. The original MCCC[2]code was transcribed into Python. The GUI of AIMBAT was inspired and initiated at the2009 EarthScope USArray Data Processing and Analysis Short Course. AIMBAT runs on Mac OS X, Linux/Unix and Windows thanks to the platform-independent feature of Python.For more information visit theproject websiteor thepysmo repositories.Authors' ContactsXiaoting LouEmail: xlou at u.northwestern.eduSuzan van der LeeEmail: suzan at northwestern.eduSimon LloydEmail: simon at slloyd.netContributorsLay Kuan LohReferences[1]Xiaoting Lou, Suzan van der Lee, and Simon Lloyd (2013), AIMBAT: A Python/Matplotlib Tool for Measuring Teleseismic Arrival Times.Seismol. Res. Lett., 84(1), 85-93, doi:10.1785/0220120033.[2]VanDecar, J. C., and R. S. Crosson (1990), Determination of teleseismic relative phase arrival times using multi-channel cross-correlation and least squares.Bulletin of the Seismological Society of America, 80(1), 150–169.[3]Goldstein, P., D. Dodge, M. Firpo, and L. Minner (2003), SAC2000: Signal processing and analysis tools for seismologists and engineers,International Geophysics, 81, 1613–1614.
aim-build
Aim

A command line tool for building C++ projects.

Introduction

Aim is an attempt to make building C++ projects from source as simple as possible while encouraging a modular approach to software development.

Aim only requires a target.py file, which is used to specify the builds of your project. Each build specifies a component of your project, like a static library, dynamic library, or an executable.

Each target you aim to support requires its own target.py file. This is easier to explain with an example:

+ Project
  + builds
    + linux-debug
      - target.py
    + linux-release
      - target.py
    + windows-debug
      - target.py
    + windows-release
      - target.py
  + src
    + ...

Where windows/linux and debug/release compose to make different targets.

When running commands, you often need to specify the path to the directory of the target.py file. For example: builds/windows-debug or builds/linux-release. Do not add target.py to the path.

Aim supports:
- Windows with the msvc frontend.
- Linux with the gcc frontend.

It should also be possible to use the gcc frontend on Windows when using GCC-like compilers, but this hasn't been tested.

Updates

(23/01/2022) CLI has changed again. Removed previous change. There is now an exec command that executes several commands in one. For example: aim exec <path> <build> clobber build run.

(23/12/2021) CLI has changed. list, build, run and clobber are now target commands and are executed like so: aim target <path> build <name> instead of aim build --target=<path> <name>. This is to make switching between commands easier.

Aim no longer uses toml for the target file format. target files are now written in Python. The motivation for this change is that it can be useful to access environment variables and to store properties, such as compiler flags, as variables. To support this change, there is the util/convert_toml.py script. To convert a toml file, execute from the aim root directory: poetry run python util\convert_toml.py <relative/path/to/target.toml>. The Python file will be written to the same directory as the target.toml file.

Getting Started

Prerequisites

Aim requires the following dependencies:
- python - version 3.7 or above.
- ninja
- poetry - for development only

Example target files: linux-debug target.py, linux-release target.py

Installation

Aim is a python project and is installed using pip.

pip install --user aim-build

Using

Basic usage:

aim --help                                   # displays the help.
aim init --demo-files                        # creates src, include, lib directory and adds demo files.
aim list builds/linux-clang++-debug          # lists the builds in target.py
aim build builds/linux-clang++-debug <build> # executes <build>.
aim clobber builds/linux-clang++-debug       # deletes all build artifacts.

You can run executables directly or using the run command:

./builds/clang++-linux-debug/<build-name>/<output-name>
aim run builds/clang++-linux-debug run <build-name>
A target.py that consists of a static library, a dynamic (shared) library, a test executable and an application looks like:

builds = [
    {
        "name": "calculatorstatic",
        "buildRule": "staticLibrary",
        "outputName": "CalculatorStatic",
        "sourceFiles": ["lib/*.cpp"],
        "includePaths": [
            "include"
        ]
    },
    {
        "name": "calculatordynamic",
        "buildRule": "dynamicLibrary",
        "outputName": "CalculatorShared",
        "sourceFiles": ["lib/*.cpp"],
        "includePaths": [
            "include"
        ]
    },
    {
        "name": "calculatortests",
        "buildRule": "executable",
        "requires": ["calculatorstatic"],
        "outputName": "CalculatorTests",
        "sourceFiles": ["tests/*.cpp"],
        "includePaths": ["include"]
    },
    {
        "name": "calculatorapp",
        "buildRule": "executable",
        "requires": ["calculatordynamic"],
        "outputName": "CalculatorApp",
        "sourceFiles": ["src/*.cpp"],
        "includePaths": ["include"]
    }
]

Other notes:

- The requires field is important, as it is how you specify the dependencies for a build. For example, if you create a static library named "myAwesomeLibrary", this can be used in other builds simply by specifying requires=["myAwesomeLibrary"].
- A headerOnly build does not have an outputName or sourceFiles, as it is not built. The headerOnly rule is not essential and is mostly for convenience. If you have a header-only library, repeating the include paths across several builds can become repetitive. Instead, create a headerOnly build to capture the include paths and use it in other builds by adding the rule to the build's requires field.
- A libraryReference does not have sourceFiles, as it is not built. Like the headerOnly rule, it is mostly for convenience, to reduce duplication. The primary use case is capturing the includePaths, libraryPaths and libraries of a third-party library that you need to use in a build. A libraryReference can then be used by other builds by adding it to a build's requires field. (A hypothetical fragment showing headerOnly and libraryReference builds appears at the end of this entry.)
- The fields compiler, flags and defines are normally written at the top of the target file, before the builds section. By default, all builds will use these fields, i.e. they are global, but they can also be overridden by specifying them again in a build. Note that when these fields are specified specifically for a build, they completely replace the global definition; any flags or defines that you specify must be written out in full, as they will not share any values with the global definition.
- Since target files are just Python, you can have variables. However, since target files are validated with a schema, variables must be escaped with a leading underscore. For example, _custom_defines = [...] is okay, but custom_defines = [...] will cause a schema error.

Supporting Multiple Targets

Aim treats any build variation as its own unique build target with its own unique target.py.

A build target is some combination of things that affects the output binary, such as:
- operating system (Windows, OSX, Gnu Linux)
- compiler (MSVC, GCC, Clang)
- build type (Release, Debug, Sanitized)
- etc.

Each build target and corresponding target.py file must have its own directory, ideally named using a unique identifier that comprises the 'parts' that make up the build.
For example, builds/linux-clang++-release/target.py indicates that the target file describes a project that is a release build, uses the clang++ compiler and is for the linux operating system.

As an example, if you were developing an application for both Windows and Linux, you may end up with a build directory structure like the following:

- builds/linux-clang++-release/target.py
- builds/linux-clang++-debug/target.py
- builds/windows-clangcl-release/target.py
- builds/windows-clangcl-debug/target.py

Note: each target.py file must be written out in full for each target that you need to support. There is no way for target files to share information or to depend on one another. While this leads to duplication between target files, it makes them very explicit and makes debugging builds much easier.

Advice on Structuring Projects

If you structure your project/libraries as individual repositories, then it may seem logical to nest dependencies inside one another. For example, if library B depends on library A, then B needs a copy of A in order for it to be built. So you may choose to nest the source of A inside B, perhaps using a git submodule.

The problem comes when your dependency hierarchy becomes more complex. If library C also depends on A, and an application D depends on B and C, you'll end up with multiple copies of library A, which can become difficult to manage.

You may need to use this approach, as it can be useful to build a library in isolation, but you should do so in such a way that pulling the source for the dependencies is optional.

The approach the author uses is a non-project-specific directory that includes all your projects directly below it, i.e. a "flat" structure. So rather than nesting dependencies you have:

+ MyProjects
  + - LibA
  + - LibB
  + - LibC
  + - Application_1
  + - Application_2
  + - builds
  + - - App1
  + - - - linux-clang++-debug
  + - - - - target.py

The flat structure has a single build directory and a single target file for each build target you need to support. This eliminates any duplication and is easy to manage.

Aim is flexible enough that you can add additional levels to the project structure should you need to. For example, you may want to group all libraries under a libraries sub-directory. But the take-away message is that you should not enforce nested dependencies, as this leads to duplication.

Developing Aim

Aim is a Python project and uses the poetry dependency manager. See poetry installation for instructions.

Once you have cloned the project, the virtual environment and dependencies can be installed by executing:

poetry install

Dev Install

Unfortunately, unlike setuptools, there is no means to do a 'dev install' using poetry. A dev install effectively generates an application that internally references the active source files under development. This allows developers to test the application without having to re-install it after each change.

In order to use a development version of Aim on the command line, it is recommended to create an alias. The alias needs to:
- add the Aim directory to PYTHONPATH to resolve import/module paths
- execute the main Aim script using the virtualenv created by poetry

There are dev-env.bash and dev-env.fish scripts that configure this for you in the root of the Aim project directory. Note: these files must be sourced in order for them to work.
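As promised in the notes above, here is a hypothetical target.py fragment showing how headerOnly and libraryReference builds can feed an executable via requires. The field names follow the rules described in this README; the build names, paths and library names are made up for illustration:

builds = [
    {
        # headerOnly: no outputName/sourceFiles; it only carries include paths.
        "name": "fmtheaders",                        # hypothetical name
        "buildRule": "headerOnly",
        "includePaths": ["third_party/fmt/include"]  # hypothetical path
    },
    {
        # libraryReference: captures a prebuilt third-party library.
        "name": "zlibref",                           # hypothetical name
        "buildRule": "libraryReference",
        "includePaths": ["third_party/zlib/include"],
        "libraryPaths": ["third_party/zlib/lib"],
        "libraries": ["z"]
    },
    {
        "name": "myapp",
        "buildRule": "executable",
        "outputName": "MyApp",
        "requires": ["fmtheaders", "zlibref"],       # pulls in both references
        "sourceFiles": ["src/*.cpp"],
        "includePaths": ["include"]
    }
]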
aim-cli
A super-easy way to record, search and compare AI experiments.
aimd
Aim: Aim is an AI deployment and version control system. It can handle both small and large projects through their whole life cycle with efficiency and speed. It is built to seamlessly blend in with the existing ML stack and become an integral part of the development lifecycle. Aim CLI: Aim CLI is a command line tool for building end-to-end AI. Aim is built to be compatible with the existing ecosystem of tools, to be familiar, to just work, and to make building AI productive. Aim has three main features: tracking of training, export, and deploy. Tracking - ML Training: command: aim train. Aim train runs training for the given Aim repository. It tracks the gradients and updates in the model at a given interval and saves them for visualization and analysis. Aim Train is paired with a UI that visualizes the tracked artifacts. Aim Tracking is used to debug and gain a detailed understanding of the training process. Export - ML Model: command: aim export. Aim export creates the saved model checkpoint file and exports an .aim model, which can be committed and pushed to Aimhub and/or deployed to different platforms. An exported .aim model can also be converted to .onnx, .tf and other checkpoints for other frameworks. Aim CLI Export is based on the Aim Intermediate Representation, which allows for automatic deployment of the model. Aim Export can also export pre-processing steps similarly to the model, which can be included in the model deployment process. Deploy - Aim Model: command: aim deploy. Aim Deploy produces a deployable artifact from .aim (model and preprocessing) files. The produced artifacts can run in the cloud, on different hardware, and as a hybrid. Deployments are also reflected on Aimhub to track and version the deployed artifacts. Other commands: aim fork, aim branchoff, aim pause/continue, aim convert.
aimdfragmentation
Automated Fragmentation AIMD Calculation: an automated fragmentation method for Ab Initio Molecular Dynamics (AIMD). Author: Jinzhe Zeng. Email: jzzeng@stu.ecnu.edu.cn. Requirements: OpenBabel, numpy, ASE, GaussianRunner. Installation: using pip: $ pip install aimdfragmentation. Build from source: you should install Gaussian 16 and OpenBabel first, then: git clone https://github.com/njzjz/aimdfragmentation; cd aimdfragmentation/; pip install . Example: Run a Python program: see examples/example.py as an example, and run it with: python example.py. Run MD with LAMMPS: see the njzjz/Pyforce repository and install the Pyforce module. Then rename examples/example.py to force.py and put it where you run LAMMPS. Add a line in the LAMMPS input file: fix 1 all pyforce C H O
aimdvcli
This is for SNS AIMDV (AI based Mood Detection thru Voice) clients who want to install and test the mood AIMDV engine. Table of Contents: Installation (System Requirements, Installation, Upgrade), Basic Usage, Audio Format Converter, Change Logs. Installation: System Requirements: OS: Windows, Linux (and maybe Mac); Python 3.4+. Installation: pip install aimdvcli (or pip3 install aimdvcli). Upgrade: pip install -U aimdvcli. Basic Usage: type like this at your console: aimdvcli <command> [options]. Audio Format Converter: this is an audio format converter to standardize the wave format to fit the AIMDV embedded version in embedded environments, especially Android. It can convert all files of the directory you specify, and save them into a target directory. Target format: sampling rate: 22,050; quantization bit: 16 bit; channels: 1 (mono); byte order: little endian. Usage: aimdvcli audio [options] <source directory> [output directory]. Options: --help: display help; -r: convert all audio files and their subdirectories recursively. Arguments: source directory: directory containing audio files; output directory: default is <source directory>/AIMDV. Example: convert all audios in the testset directory recursively: aimdvcli convert -r ./testset ./output. Change Logs: 0.1 (June 28, 2018): project initialized.
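As an independent illustration of the documented target format (22,050 Hz, 16-bit, mono, little-endian), here is a small check written with Python's standard wave module; this is not part of aimdvcli's API:

import wave

def matches_aimdv_target(path):
    # Target format from the aimdvcli docs: 22,050 Hz, 16-bit samples, mono.
    # WAV sample data is little-endian by definition of the format.
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 22050
                and w.getsampwidth() == 2  # 16 bits = 2 bytes
                and w.getnchannels() == 1)

print(matches_aimdv_target("example.wav"))  # illustrative file name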
aime
test
aime-detector-sdk
This module is for the aime reception software's base detector module only.
aimedic-utils-package
No description available on PyPI.
aimed-xnat
Failed to fetch description. HTTP Status Code: 404
aimee
aimee! Install with: pip install aimee. Usage: import aimee, then aimee.prompt('which donors should I call today?')
aimem
AIMemoryA very much not maintained SDK.
aimemory
No description available on PyPI.
aimepdf
This is the Converter Package.
aimepdftool
This is the Converter Package.
aimergirls
No description available on PyPI.
aimes.bundle
Bundle: bundle_manager is the major module provided by bundle. bundle depends on the following third-party libraries to run: paramiko (a Python SSHv2 protocol library) and Pyro4 (Python Remote Objects, needed only when using Bundle as a remote object). Usage example: before using bundle_manager, the user needs to create a configuration file; src/bundle/example/bundle_credentials.txt is a template configuration file. There are two ways to use bundle_manager. The first way is to directly import bundle_manager as a library; src/bundle/example/bundle_cml.py shows an example of using bundle_manager in this way. The second way is to launch bundle_manager as a daemon and register it with Pyro4 as a remote object; the user program should then query Pyro4 with bundle_manager's URI to get a reference to the remote object and call the remote object's functions. src/bundle/example/bundle_cmlPyro.py shows an example of using bundle_manager in this way.

A screen copy of results on india.futuregrid.org (using the above-mentioned first way):

fengl@exa (/home/grad03/fengl/DOEProj) % ls bundle_cml.py bundle
bundle_cml.py*
bundle: api/ bundle_credentials.txt db/ impl/ __init__.py __init__.pyc tools/
fengl@exa (/home/grad03/fengl/DOEProj) % cat bundle/bundle_credentials.txt
#bundle cluster credential file
#each line contains credential of a cluster, used for launching a remote connection to the cluster
#accepted credential fields include: hostname, port, username, password, key_filename
finished_job_trace=/home/grad03/fengl/DOEProj/bundle/db
cluster_type=moab hostname=india.futuregrid.org username=liux2102 key_filename=/home/grad03/fengl/.ssh/id_rsa h_flag=True
#cluster_type=moab hostname=xray.futuregrid.org username=liux2102 key_filename=/home/grad03/fengl/.ssh/id_rsa
#cluster_type=moab hostname=hotel.futuregrid.org username=liux2102 key_filename=/home/grad03/fengl/.ssh/id_rsa
#cluster_type=moab hostname=sierra.futuregrid.org username=liux2102 key_filename=/home/grad03/fengl/.ssh/id_rsa
#cluster_type=moab hostname=alamo.futuregrid.org username=liux2102 key_filename=/home/grad03/fengl/.ssh/id_rsa
fengl@exa (/home/grad03/fengl/DOEProj) % ~/virtualenv/bin/python bundle_cml.py
Enter command: loadc bundle/bundle_credentials.txt
2013-10-29 17:11:39,251 india.futuregrid.org india.futuregrid.org INFO __init__:112 Connected to india.futuregrid.org
Enter command: list
['india.futuregrid.org']
Enter command: showc india.futuregrid.org
{'state': 'Up', 'num_procs': 248, 'pool': {'compute': {'np': 8, 'num_procs': 168, 'num_nodes': 21}, 'b534': {'np': 8, 'num_procs': 8, 'num_nodes': 1}, 'delta': {'np': 12, 'num_procs': 72, 'num_nodes': 6}}, 'queue_info': {'bravo': {'started': 'True', 'queue_name': 'bravo', 'enabled': 'True', 'pool': 'bravo', 'max_walltime': 86400}, 'batch': {'started': 'True', 'queue_name': 'batch', 'enabled': 'True', 'pool': 'compute', 'max_walltime': 86400}, 'long': {'started': 'True', 'queue_name': 'long', 'enabled': 'True', 'pool': 'compute', 'max_walltime': 604800}, 'delta-long': {'started': 'True', 'queue_name': 'delta-long', 'enabled': 'True', 'pool': 'delta', 'max_walltime': 604800}, 'delta': {'started': 'True', 'queue_name': 'delta', 'enabled': 'True', 'pool': 'delta', 'max_walltime': 86400}, 'b534': {'started': 'True', 'queue_name': 'b534', 'enabled': 'True', 'pool': 'b534', 'max_walltime': 604800}, 'ib': {'started': 'True', 'queue_name': 'ib', 'enabled': 'True', 'pool': 'compute', 'max_walltime': 86400}, 'interactive': {'started': 'True', 'queue_name': 'interactive', 'enabled': 'True', 'pool': 'compute', 'max_walltime': 86400}}, 'num_nodes': 28}
Enter command: showw india.futuregrid.org
{'free_procs': 208, 'per_pool_workload': {'compute': {'free_procs': 128, 'free_nodes': 16, 'alive_nodes': 21, 'busy_nodes': 5, 'np': 8, 'busy_procs': 40, 'alive_procs': 168}, 'b534': {'free_procs': 8, 'free_nodes': 1, 'alive_nodes': 1, 'busy_nodes': 0, 'np': 8, 'busy_procs': 0, 'alive_procs': 8}, 'delta': {'free_procs': 72, 'free_nodes': 6, 'alive_nodes': 6, 'busy_nodes': 0, 'np': 12, 'busy_procs': 0, 'alive_procs': 72}}, 'free_nodes': 23, 'alive_nodes': 28, 'busy_nodes': 5, 'busy_procs': 40, 'alive_procs': 248}
Enter command: quit
2013-10-29 17:12:17,222 india.futuregrid.org india.futuregrid.org DEBUG close:1003 close
2013-10-29 17:12:17,222 india.futuregrid.org india.futuregrid.org DEBUG run:607 received "close" command
cmd_line_loop finish

A screen copy of results on india.futuregrid.org (using the above-mentioned second way):

fengl@exa (/home/grad03/fengl) % ~/virtualenv/bin/python -m Pyro4.naming
/home/grad03/fengl/virtualenv/local/lib/python2.7/site-packages/Pyro4/core.py:167: UserWarning: HMAC_KEY not set, protocol data may not be secure
warnings.warn("HMAC_KEY not set, protocol data may not be secure")
Not starting broadcast server for localhost.
NS running on localhost:9090 (127.0.0.1)
URI = PYRO:Pyro.NameServer@localhost:9090
#Open another terminal
fengl@exa (/home/grad03/fengl/DOEProj) % ~/virtualenv/bin/python bundle/impl/bundle_manager.py -D -c bundle/bundle_credentials.txt
daemon mode
2013-10-29 16:05:12,393 india.futuregrid.org india.futuregrid.org INFO __init__:112 Connected to india.futuregrid.org
2013-10-29 16:05:12,393 INFO:india.futuregrid.org:bundle_agent.py:112:Connected to india.futuregrid.org
/home/grad03/fengl/virtualenv/local/lib/python2.7/site-packages/Pyro4/core.py:167: UserWarning: HMAC_KEY not set, protocol data may not be secure
warnings.warn("HMAC_KEY not set, protocol data may not be secure")
Object <__main__.BundleManager object at 0x1c59f90>:
uri = PYRO:obj_9262a45a566a46f39c4fad5288fbf9ae@localhost:41540
name = BundleManager
Pyro daemon running.
#Check bundle_manager has successfully registered itself as a remote object to Pyro4
fengl@exa (/home/grad03/fengl) % ~/virtualenv/bin/python -m Pyro4.nsc list
/home/grad03/fengl/virtualenv/local/lib/python2.7/site-packages/Pyro4/core.py:167: UserWarning: HMAC_KEY not set, protocol data may not be secure
warnings.warn("HMAC_KEY not set, protocol data may not be secure")
--------START LIST
BundleManager --> PYRO:obj_9262a45a566a46f39c4fad5288fbf9ae@localhost:41540
Pyro.NameServer --> PYRO:Pyro.NameServer@localhost:9090
--------END LIST

AIMES: AIMES is a DOE ASCR funded collaborative project between the RADICAL group at Rutgers, the University of Minnesota, and the Computation Institute at the University of Chicago that explores the role of abstractions and integrated middleware to support science at extreme scales. AIMES will co-design middleware from an application and infrastructure perspective. AIMES will provide abstractions for compute, data and network, integrated across multiple levels, to provide an interoperable, extensible and scalable middleware stack to support extreme-scale science. AIMES is funded by DOE ASCR under grant numbers DE-FG02-12ER26115, DE-SC0008617, and DE-SC0008651. Changelog for aimes: 0.1.0 (2013-06-12): created the Python module for the bundle project.
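For the second (remote-object) pattern, a minimal client-side sketch assuming Pyro4's standard name-server lookup, as in the transcript above; the method called on the proxy is hypothetical, since BundleManager's public methods are not documented here:

import Pyro4

# Requires a running Pyro4 name server (python -m Pyro4.naming, as above)
# and the bundle_manager daemon registered under the name "BundleManager".
ns = Pyro4.locateNS()
uri = ns.lookup("BundleManager")
bundle_manager = Pyro4.Proxy(uri)

# Hypothetical call: replace with the actual BundleManager method names.
print(bundle_manager.list_clusters())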
aimes-math
aimes_math: this package contains methods used in AIMES tables and numbers to do simple math. You might need to install this package if you have done a Python project export in AIMES.
aimes.skeleton
DOI: 10.5281/zenodo.13750 (http://dx.doi.org/10.5281/zenodo.13750). Skeleton: Application Skeleton is a tool to generate skeleton applications (easy-to-program, easy-to-run applications) that mimic a real application's parallel or distributed performance at a task (but not process) level. Application classes that can be represented include: bag of tasks, map reduce, multi-stage workflow, and variations of these with a fixed number of iterations. Applications are described as one or more stages. Stages are described as one or more tasks. Stages can also be iterative. Tasks can be serial or parallel, and have compute and I/O (read and write) elements. Documentation about Skeletons can be found in the report directory. Contributors are welcome! A paper about the first version of Application Skeletons is: Z. Zhang and D. S. Katz, "Application Skeletons: Encapsulating MTC Application Task Computation and I/O," Proceedings of the 6th Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS), in conjunction with SC13, 2013. http://dx.doi.org/10.1145/2503210.2503222 A paper about the current version is: Z. Zhang and D. S. Katz, "Using Application Skeletons to Improve eScience Infrastructure," Proceedings of the 10th IEEE International Conference on eScience, 2014. http://dx.doi.org/10.1109/eScience.2014.9 (paper). http://www.slideshare.net/danielskatz/using-application-skeletons-to-improve-escience-infrastructure (slides).
aimet
AIMet (Artificial Intelligence for Meteorology)Python Artificial Intelligence Package for Meteorology (under development)
ai-metadata
AI-MetadataAI-Metadatais a helper library to detect and extract metadata about AI/ML models for deployment and visualization.FeaturesIt's critical that an inference system needs to know their metadata information of each deployed model when it serves many AI/ML models. For a single model, its model type, runtime, serialization method, inputs and outputs schema, and other informative fields for visualization, like model metrics, training optimization params, and so on.AI-metadata provides a unified API to detect and extract metadata automatically, it supports the following models by default, and more types will be added to the list.Scikit-learnXGBoostLightGBMKeras and Tensorflow(tf.keras)PytorchPySparkPMMLONNXCustomPrerequisitesPython 2.7 or >= 3.5Dependenciesnumpypandasscikit-learnpypmmlonnxruntimeInstallationpipinstallpypmmlOr install the latest version from github:pipinstall--upgradegit+https://github.com/autodeployai/ai-metadata.gitUsageWrap the built model by the static methodwrapofMetadataModelwith several optional arguments.fromai_metadataimportMetadataModelMetadataModel.wrap(model,mining_function:'MiningFunction'=None,x_test=None,y_test=None,data_test=None,source_object=None,**kwargs)Data preparation for the following examples except of Spark:fromsklearnimportdatasetsfromsklearn.model_selectionimporttrain_test_splitX,y=datasets.load_iris(return_X_y=True,as_frame=True)X_train,X_test,y_train,y_test=train_test_split(X,y)1. Example: scikit learn modelfromsklearn.svmimportSVC# Train a SVC modelsvc=SVC(probability=True)svc.fit(X_train,y_train)# Wrap the model with test datasetsmodel=MetadataModel.wrap(svc,x_test=X_test,y_test=y_test)model_metadata=model.model_metadata(as_json=True,indent=2)Model metadata example of the SVC model in json:{"runtime":"Python3.10","type":"scikit-learn","framework":"Scikit-learn","framework_version":"1.1","function_name":"classification","serialization":"joblib","algorithm":"SVC","metrics":{"accuracy":0.9736842105263158},"inputs":[{"name":"sepal length (cm)","sample":5.0,"type":"float64"},{"name":"sepal width (cm)","sample":3.2,"type":"float64"},{"name":"petal length (cm)","sample":1.2,"type":"float64"},{"name":"petal width (cm)","sample":0.2,"type":"float64"}],"targets":[{"name":"target","sample":0,"type":"int64"}],"outputs":[],"object_source":null,"object_name":null,"params":{"C":"1.0","break_ties":"False","cache_size":"200","class_weight":"None","coef0":"0.0","decision_function_shape":"ovr","degree":"3","gamma":"scale","kernel":"rbf","max_iter":"-1","probability":"True","random_state":"None","shrinking":"True","tol":"0.001","verbose":"False"}}2. 
Example: PMML modelfromsklearn.pipelineimportPipelinefromsklearn.treeimportDecisionTreeClassifierfromsklearn.preprocessingimportStandardScalerfromnyokaimportskl_to_pmml# Export the pipeline of scikit-learn to PMML# Train a pipelinepipeline=Pipeline([("scaler",StandardScaler()),("model",DecisionTreeClassifier())])pipeline.fit(X_train,y_train)# Export to PMMLpmml_model='./pmml-cls.xml'skl_to_pmml(pipeline,X_train.columns,y_train.name,pmml_model)# Wrap the model with test datasetsmodel=MetadataModel.wrap(pmml_model,x_test=X_test,y_test=y_test)model_metadata=model.model_metadata(as_json=True,indent=2)Model metadata example of the PMML model in json:{"runtime":"PyPMML","type":"pmml","framework":"PMML","framework_version":"4.4.1","function_name":"classification","serialization":"pmml","algorithm":"TreeModel","metrics":{"accuracy":0.9736842105263158},"inputs":[{"name":"sepal length (cm)","sample":5.0,"type":"double"},{"name":"sepal width (cm)","sample":3.2,"type":"double"},{"name":"petal length (cm)","sample":1.2,"type":"double"},{"name":"petal width (cm)","sample":0.2,"type":"double"}],"targets":[{"name":"target","sample":0,"type":"integer"}],"outputs":[{"name":"probability_0","type":"double"},{"name":"probability_1","type":"double"},{"name":"probability_2","type":"double"},{"name":"predicted_target","type":"integer"}],"object_source":null,"object_name":null,"params":{}}3. Example: ONNX modelfromsklearn.linear_modelimportLogisticRegressionimportonnxmltools# Export to ONNXfromonnxmltools.convert.common.data_typesimportFloatTensorType# Train a Logistic Regression modelclf=LogisticRegression()clf.fit(X_train,y_train)# Export to ONNXinitial_types=[('X',FloatTensorType([None,X_test.shape[1]]))]onnx_model=onnxmltools.convert_sklearn(clf,initial_types=initial_types)# Wrap the model with test datasetsmodel=MetadataModel.wrap(onnx_model,x_test=X_test,y_test=y_test)model_metadata=model.model_metadata(as_json=True,indent=2)Model metadata example of the ONNX model in json:{"runtime":"ONNXRuntime","type":"onnx","framework":"ONNX","framework_version":"8","function_name":"classification","serialization":"onnx","algorithm":"LinearClassifier","metrics":{"accuracy":1.0},"inputs":[{"name":"X","type":"tensor(float)","shape":[null,4],"sample":[[5.0,3.2,1.2,0.2]]}],"targets":[],"outputs":[{"name":"output_label","type":"tensor(int64)","shape":[null]},{"name":"output_probability","type":"seq(map(int64,tensor(float)))","shape":[]}],"object_source":null,"object_name":null,"params":{}}4. 
Example: Spark MLlib modelfrompyspark.sqlimportSparkSessionfrompyspark.ml.classificationimportLogisticRegressionfrompyspark.ml.featureimportVectorAssemblerfrompyspark.mlimportPipeline# Convert pandas dataframe to the dataframe of Sparkspark=SparkSession.builder.getOrCreate()iris=datasets.load_iris(as_frame=True)df=spark.createDataFrame(iris.frame)df_train,df_test=df.randomSplit([0.75,0.25])# Train a pipeline of Sparkassembler=VectorAssembler(inputCols=iris.feature_names,outputCol='features')lr=LogisticRegression().setLabelCol(iris.target.name)pipeline=Pipeline(stages=[assembler,lr])pipeline_model=pipeline.fit(df_train)# Wrap the model with test datasetmodel=MetadataModel.wrap(pipeline_model,data_test=df_test)model_metadata=model.model_metadata(as_json=True,indent=2)Model metadata example of the Spark model in json:{"runtime":"Python3.10","type":"mllib","framework":"Spark","framework_version":"3.3","function_name":"classification","serialization":"spark","algorithm":"PipelineModel","metrics":{"accuracy":0.8780487804878049},"inputs":[{"name":"sepal length (cm)","sample":4.8,"type":"float"},{"name":"sepal width (cm)","sample":3.4,"type":"float"},{"name":"petal length (cm)","sample":1.6,"type":"float"},{"name":"petal width (cm)","sample":0.2,"type":"float"}],"targets":[{"name":"target","sample":0.0,"type":"float"}],"outputs":[],"object_source":null,"object_name":null,"params":{"VectorAssembler_43c37a968944":{"outputCol":"features","handleInvalid":"error","inputCols":["sepal length (cm)","sepal width (cm)","petal length (cm)","petal width (cm)"]},"LogisticRegression_98944bb4d096":{"aggregationDepth":2,"elasticNetParam":0.0,"family":"auto","featuresCol":"features","fitIntercept":true,"labelCol":"target","maxBlockSizeInMB":0.0,"maxIter":100,"predictionCol":"prediction","probabilityCol":"probability","rawPredictionCol":"rawPrediction","regParam":0.0,"standardization":true,"threshold":0.5,"tol":1e-06}}}You can refer to the tests of different model types for more details.SupportIf you have any questions about theAI-Metadatalibrary, please open issues on this repository.LicenseAI-metadatais licensed underAPL 2.0.
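To persist the metadata produced by the model_metadata(as_json=True, ...) calls shown in the examples above, you can write the JSON string straight to a file; the file name here is illustrative:

# Assumes `model` was created with MetadataModel.wrap(...) as in the examples above.
metadata_json = model.model_metadata(as_json=True, indent=2)

with open("model-metadata.json", "w") as f:  # illustrative file name
    f.write(metadata_json)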
aime-text-postprocessing
This module is used for text conversion based on a dict.
aimet-ml
aimet-mlPython package of frequently used modules for ML developments in AIMET.Documentation:https://aimet-tech.github.io/aimet-mlGitHub:https://github.com/aimet-tech/aimet-mlPyPI:https://pypi.org/project/aimet-ml/Free software: MITFeaturesTODOCreditsThis package was created withCookiecutterand thewaynerv/cookiecutter-pypackageproject template.
aimet-onnx
AIMET for ONNXComing soon!
aimet-tensorflow
AIMET for TensorFlowComing soon!
aimet-torch
Copyright (c) 2021-2023, Qualcomm Innovation Center, Inc. All rights reserved. SPDX-License-Identifier: BSD-3-Clause. Overview: AI Model Efficiency Toolkit (AIMET) is a library that provides advanced model quantization and model compression techniques for trained neural network models. It provides features that have been proven to improve the run-time performance of deep learning neural network models, with lower compute and memory requirements and minimal impact to task accuracy. Features: AIMET supports the following features. Model Quantization: quantization simulation (simulates on-target quantized inference, specifically Qualcomm Snapdragon DSP accelerators); quantization-aware training (fine-tune models to improve on-target quantized accuracy); data-free quantization (a post-training technique to improve quantized accuracy by equalizing model weights (Cross-Layer Equalization) and correcting shifts in layer outputs due to quantization (Bias Correction)). Model Compression: Spatial SVD (a tensor decomposition technique to split a large layer into two smaller ones); Channel Pruning (removes redundant input channels of convolutional layers and modifies the model graph accordingly); compression-ratio selection (automatically selects per-layer compression ratios). Dependencies: see https://quic.github.io/aimet-pages/releases/latest/install/index.html for details. Documentation: please refer to the documentation at https://quic.github.io/aimet-pages/index.html for the user guide and API documentation. Using the Package: please see https://github.com/quic/aimet#getting-started for package requirements and usage.
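A minimal quantization-simulation sketch in PyTorch; it follows the QuantizationSimModel workflow from AIMET's documentation, but treat the exact signatures as assumptions and check the docs linked above:

import torch
import torch.nn as nn
from aimet_torch.quantsim import QuantizationSimModel  # per the AIMET docs; verify for your version

# A tiny illustrative model; any trained torch.nn.Module works.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 30 * 30, 10)).eval()
dummy_input = torch.randn(1, 3, 32, 32)

# Simulate on-target quantized inference for the model.
sim = QuantizationSimModel(model, dummy_input=dummy_input)

# Compute quantization encodings by running representative (calibration) data.
def forward_pass(model, _args):
    with torch.no_grad():
        model(dummy_input)  # replace with a loop over real calibration data

sim.compute_encodings(forward_pass_callback=forward_pass, forward_pass_callback_args=None)

# sim.model is the quantization-simulated model: evaluate it, or fine-tune it
# (quantization-aware training) before exporting.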
aime-xai
**AIME:** Approximate Inverse Model Explanations. The AIME methodology is detailed in the paper available at https://ieeexplore.ieee.org/document/10247033. AIME is proposed to address the challenges faced by existing methods in providing intuitive explanations for black-box models. AIME offers unified global and local feature importance by deriving approximate inverse operators for black-box models. It introduces a representative instance similarity distribution plot, aiding comprehension of the predictive behavior of the model and target dataset. This software only supports the global feature importance of AIME. Features: Unified Global and Local Feature Importance: AIME derives approximate inverse operators for black-box models, offering insights into both global and local feature importance. Representative Instance Similarity Distribution Plot: this feature aids in understanding the predictive behavior of the model and the target dataset, illustrating the relationship between different predictions. Effective Across Diverse Data Types: AIME has been tested and proven effective across various data types, including tabular data, handwritten digit images, and text data. License: AIME is dual-licensed under the 2-Clause BSD License and a Commercial License. Apply the 2-Clause BSD License only for academic or research purposes, and apply the Commercial License for commercial and other purposes. You can choose which one to use. Commercial License: for those interested in the Commercial License, a licensing fee may be required. Please contact us for more details at Email: [email protected]. Installation: pip install aime-xai. Citation: if you use this software for research or other purposes, please cite the following paper: @ARTICLE{10247033, author={Nakanishi, Takafumi}, journal={IEEE Access}, title={Approximate Inverse Model Explanations (AIME): Unveiling Local and Global Insights in Machine Learning Models}, year={2023}, volume={11}, number={}, pages={101020-101044}, doi={10.1109/ACCESS.2023.3314336}}
aimfast
aimfastAn Astronomical Image Fidelity Assessment ToolMain website:aimfast.rtfd.ioIntroductionImage fidelity is a measure of the accuracy of the reconstructed sky brightness distribution. A related metric, dynamic range, is a measure of the degree to which imaging artifacts around strong sources are suppressed, which in turn implies a higher fidelity of the on-source reconstruction. Moreover, the choice of image reconstruction algorithm also affects the correctness of the on-source brightness distribution.InstallationInstallation fromsource, working directory where source is checked out$pipinstall.This package is available onPYPI, allowing$pipinstallaimfastLicenseThis project is licensed under the GNU General Public License v3.0 - seelicensefor details.ContributeContributions are always welcome! Please ensure that you adhere to our coding standardspep8.
aim-git-util
AI Maintainer Git UtilDocumentation:https://docs.ai-maintainer.comA git util for working with the AI Maintainer go git repo in google cloud. Used in several of our other libraries as well as by us locally.Free software: MIT licenseFeaturesAllows for easy forking, cloning and pushing to our remote repositories, via wrapped sub process commands.History0.1.0 (2023-08-31)First release on PyPI.0.1.1 (2023-09-11)OSS Prep0.1.2 (2023-09-15)Add branch arg to clone and push
aimgtrs
No description available on PyPI.
aimgui
No description available on PyPI.
aimhardest
No description available on PyPI.
aimhii
This package provides software for identifying the genome insertion points of random mutants generated by insertional mutagenesis, which have been sequenced as a pool. The package installs two executables: aimhii and extract_chimeras. aimhii runs a full analysis starting from sequence data (a FASTQ file), a genome sequence, and an insert sequence. extract_chimeras runs just the last step of this analysis, assuming you already have a BAM file and a concatenated genome-insert sequence. For details see the project website https://github.com/granek/aimhii/ Changes in v0.5.5: added filter_bam.filter_chimeric_reads to the pipeline to extract only putative junction reads from BWA output; shifted to hosting on GitHub. Changes in v0.5.4: added --plot option to aimhii and shifted a few remaining "prints" to logging. Changes in v0.5.3: shifted debugging prints to logging and added scripts for generating synthetic sequences. Changes in v0.5.2: lowered pysam version requirement to allow installation using apt-get in Debian 8. Changes in v0.5.1: fixed problem with missing DESCRIPTION.rst.
aimhub
Aim: Aim is an AI deployment and version control system. It can handle both small and large projects through their whole life cycle with efficiency and speed. It is built to seamlessly blend in with the existing ML stack and become an integral part of the development lifecycle. Aim CLI: Aim CLI is a command line tool for building end-to-end AI. Aim is built to be compatible with the existing ecosystem of tools, to be familiar, to just work, and to make building AI productive. Aim has three main features: tracking of training, export, and deploy. Tracking - ML Training: command: aim train. Aim train runs training for the given Aim repository. It tracks the gradients and updates in the model at a given interval and saves them for visualization and analysis. Aim Train is paired with a UI that visualizes the tracked artifacts. Aim Tracking is used to debug and gain a detailed understanding of the training process. Export - ML Model: command: aim export. Aim export creates the saved model checkpoint file and exports an .aim model, which can be committed and pushed to Aimhub and/or deployed to different platforms. An exported .aim model can also be converted to .onnx, .tf and other checkpoints for other frameworks. Aim CLI Export is based on the Aim Intermediate Representation, which allows for automatic deployment of the model. Aim Export can also export pre-processing steps similarly to the model, which can be included in the model deployment process. Deploy - Aim Model: command: aim deploy. Aim Deploy produces a deployable artifact from .aim (model and preprocessing) files. The produced artifacts can run in the cloud, on different hardware, and as a hybrid. Deployments are also reflected on Aimhub to track and version the deployed artifacts. Other commands: aim fork, aim branchoff, aim pause/continue, aim convert.
aimhub-client
AimHub - The collaboration-first AI metadata platform
ai-microcore
AI MicroCore: A Minimalistic Foundation for AI Applicationsmicrocoreis a collection of python adapters for Large Language Models and Semantic Search APIs allowing to communicate with these services in a convenient way, make it easily switchable and separate business logic from implementation details.It defines interfaces for features typically used in AI applications, that allows you to keep your application as simple as possible and try various models & services without need to change your application code.You even can switch between text completion and chat completion models only using configuration.The basic example of usage is as follows:frommicrocoreimportllmwhileuser_msg:=input('Enter message: '):print('AI: '+llm(user_msg))🔗 LinksAPI ReferencePyPi PackageGitHub Repository💻 InstallationInstall as PyPi package:pip install ai-microcoreAlternatively, you may just copymicrocorefolder to your project sources [email protected]:Nayjest/ai-microcore.git&&mvai-microcore/microcore./&&rm-rfai-microcore📋 RequirementsPython 3.10+ / 3.11+Both v0.28.X and v1.x.x OpenAI package versions are supported.⚙️ ConfiguringMinimal ConfigurationHavingOPENAI_API_KEYin OS environment variables is enough for basic usage.Similarity search features will work out of the box if you have thechromadbpip package installed.Configuration MethodsThere are a few options available for configuring microcore:Usemicrocore.configure()💡 All configuration options should be available in IDE autocompletion tooltipsCreate a.envfile in your project root (example)Use a custom configuration file:mc.configure(DOT_ENV_FILE='dev-config.ini')Define OS environment variablesFor the full list of available configuration options, you may also checkmicrocore/config.py.Priority of Configuration SourcesConfiguration options passed as arguments tomicrocore.configure()have the highest priority.The priority of configuration file options (.envby default or the value ofDOT_ENV_FILE) is higher than OS environment variables.💡 SettingUSE_DOT_ENVtofalsedisables reading configuration files.OS environment variables have the lowest priority.🌟 Core Functionsllm(prompt: str, **kwargs) → strPerforms a request to a large language model (LLM)frommicrocoreimport*# Will print all requests and responses to consoleuse_logging()# Basic usageai_response=llm('What is your model name?')# You also may pass a list of strings as prompt# - For chat completion models elements are treated as separate messages# - For completion LLMs elements are treated as text linesllm(['1+2','='])llm('1+2=',model='gpt-4')# To specify a message role, you can use dictionary or classesllm(dict(role='system',content='1+2='))# equivalentllm(SysMsg('1+2='))# The returned value is a stringassert'7'==llm([SysMsg('You are a calculator'),UserMsg('1+2='),AssistantMsg('3'),UserMsg('3+4=')]).strip()# But it contains all fields of the LLM response in additional attributesforiinllm('1+2=?',n=3,temperature=2).choices:print('RESPONSE:',i.message.content)# To use response streaming you may specify the callback function:llm('Hi there',callback=lambdax:print(x,end=''))# Or multiple callbacks:output=[]llm('Hi there',callbacks=[lambdax:print(x,end=''),lambdax:output.append(x),])tpl(file_path, **params) → strRenders prompt template with params.Full-featured Jinja2 templates are used by default.Related configuration options:frommicrocoreimportconfigureconfigure(# 'tpl' folder in current working directory by defaultPROMPT_TEMPLATES_PATH='my_templates_folder')texts.search(collection: str, query: str | list, n_results: 
int = 5, where: dict = None, **kwargs) → list[str]Similarity searchtexts.find_one(self, collection: str, query: str | list) → str | NoneFind most similar texttexts.get_all(self, collection: str) -> list[str]Return collection of textstexts.save(collection: str, text: str, metadata: dict = None))Store text and related metadata in embeddings databasetexts.save_many(collection: str, items: list[tuple[str, dict] | str])Store multiple texts and related metadata in the embeddings databasetexts.clear(collection: str):Clear collectionAPI providers and models supportLLM Microcore supports all models & API providers having OpenAI API.List of API providers and models tested with LLM Microcore:API ProviderModelsOpenAIAll GPT-4 and GTP-3.5-Turbo modelsall text completion models (davinci, gpt-3.5-turbo-instruct, etc)Microsoft AzureAll OpenAI modelsdeepinfra.comdeepinfra/airoboros-70bjondurbin/airoboros-l2-70b-gpt4-1.4.1meta-llama/Llama-2-70b-chat-hfand other models having OpenAI APIAnyscalemeta-llama/Llama-2-70b-chat-hfmeta-llama/Llama-2-13b-chat-hfmeta-llama/Llama-7b-chat-hf🖼️ Examplescode-review-tool examplePerforms code review by LLM for changes in git .patch files in any programming languages.Other examplesPython functions as AI tools@TODO🤖 AI ModulesThis is experimental feature.Tweaks the Python import system to provide automatic setup of MicroCore environment based on metadata in module docstrings.Usage:importmicrocore.ai_modulesFeatures:Automatically registers template folders of AI modules in Jinja2 environment🛠️ ContributingPlease seeCONTRIBUTINGfor details.📝 LicenseLicensed under theMIT License© 2023Vitalii Stepanenko
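Based on the signatures listed above, a short similarity-search sketch; the "from microcore import texts" import path and a configured embeddings backend (e.g. the chromadb package) are assumptions:

from microcore import texts  # assumed import path for the texts API documented above

# Store a few documents with optional metadata.
texts.save("notes", "The quarterly report is due in March.", metadata={"tag": "finance"})
texts.save("notes", "Our deployment runs on Kubernetes.", metadata={"tag": "infra"})

# Retrieve the most similar documents for a query.
results = texts.search("notes", "when is the report due?", n_results=2)
print(results)

# Or just the single closest match.
print(texts.find_one("notes", "kubernetes deployment"))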
aiminify
No description available on PyPI.
ai-minimization-toolkit
The EU General Data Protection Regulation (GDPR) mandates the principle of data minimization, which requires that only data necessary to fulfill a certain purpose be collected. However, it can often be difficult to determine the minimal amount of data required, especially in complex machine learning models such as neural networks. This toolkit is a first-of-a-kind implementation to help reduce the amount of personal data needed to perform predictions with a machine learning model, by removing or generalizing some of the input features. The type of data minimization this toolkit focuses on is the reduction of the number and/or granularity of features collected for analysis. The generalization process basically searches for several similar records and groups them together. Then, for each feature, the individual values for that feature within each group are replaced with a representative value that is common across the whole group. This process is done while using knowledge encoded within the model, to produce a generalization that has little to no impact on its accuracy. The minimization-toolkit is compatible with Python 3.7. Official ai-minimization-toolkit documentation. Using the minimization-toolkit: the main class, GeneralizeToRepresentative, is a scikit-learn compatible Transformer that receives an existing estimator and labeled training data, and learns the generalizations that can be applied to any newly collected data for analysis by the original model. The fit() method learns the generalizations and the transform() method applies them to new data. It is also possible to export the generalizations as feature ranges. The current implementation supports only numeric features, so any categorical features must be transformed to a numeric representation before using this class. Start by training your machine learning model. In this example, we will use a DecisionTreeClassifier, but any scikit-learn model can be used. We will use the iris dataset in our example.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

dataset = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.2)
base_est = DecisionTreeClassifier()
base_est.fit(X_train, y_train)

Now create the GeneralizeToRepresentative transformer and train it. Supply it with the original model and the desired target accuracy. The training process may receive the original labeled training data or the model's predictions on the data.

predictions = base_est.predict(X_train)
gen = GeneralizeToRepresentative(base_est, target_accuracy=0.9)
gen.fit(X_train, predictions)

Now use the transformer to transform new data, for example the test data.

transformed = gen.transform(X_test)

The transformed data has the same columns and formats as the original data, so it can be used directly to derive predictions from the original model.

new_predictions = base_est.predict(transformed)

To export the resulting generalizations, retrieve the Transformer's _generalize parameter.

generalizations = gen._generalize

The returned object has the following structure:

{ ranges: { list of (<feature name>: [<list of values>]) }, untouched: [<list of feature names>] }

For example:

{ ranges: { age: [21.5, 39.0, 51.0, 70.5], education-years: [8.0, 12.0, 14.5] }, untouched: ["occupation", "marital-status"] }

Where each value inside the range list represents a cutoff point. For example, for the age feature, the ranges in this example are: <21.5, 21.5-39.0, 39.0-51.0, 51.0-70.5, >70.5. The untouched list represents features that were not generalized, i.e., their values should remain unchanged.
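A small plain-Python sketch of consuming the exported generalizations structure documented above (the dict literal mirrors the example):

# Structure documented above: cutoff points per generalized feature.
generalizations = {
    "ranges": {
        "age": [21.5, 39.0, 51.0, 70.5],
        "education-years": [8.0, 12.0, 14.5],
    },
    "untouched": ["occupation", "marital-status"],
}

# Print the human-readable bins implied by the cutoff points.
for feature, cuts in generalizations["ranges"].items():
    bins = [f"<{cuts[0]}"]
    bins += [f"{lo}-{hi}" for lo, hi in zip(cuts, cuts[1:])]
    bins.append(f">{cuts[-1]}")
    print(f"{feature}: {', '.join(bins)}")

print("untouched:", generalizations["untouched"])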
aiml
aiml implements an interpreter for AIML, the Artificial Intelligence Markup Language developed by Dr. Richard Wallace of the A.L.I.C.E. Foundation. It can be used to implement a conversational AI program.Forked from:0.9.1https://github.com/paulovn/python-aiml0.8.6https://github.com/cdwfs/pyaimlPyAIML (c) Cort Stratton
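A minimal conversational-loop sketch using the PyAIML Kernel API (aiml.Kernel, learn, respond); the AIML file name is an assumption, so point it at whatever rule files you have:

import aiml

kernel = aiml.Kernel()
kernel.learn("std-startup.xml")  # assumed rule file; load your own AIML files here

# Simple read-eval-print chat loop.
while True:
    message = input("> ")
    if message in ("quit", "exit"):
        break
    print(kernel.respond(message))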
aiml7
Failed to fetch description. HTTP Status Code: 404
aiml7lab
AIML. Usage: import aiml7lab as a
aimlbotkernel
This is a Jupyter kernel that deploys a chatbot, implemented using thepython-aimlpackage. The idea was taken from theCalysto chatbotkernel.It has been tested with Jupyter 4.x. The code works with either Python 2.7 or Python 3 (tested with Python 3.4)InstallationThe installation process requires two steps:Install the Python package:pip install aimlbotkernelInstall the kernel into Jupyter:jupyter aimlbotkernel install [--user] [--logdir <dir>]The--useroption will install the kernel in the current user’s personal config, while the generic command will install it as a global kernel (but needs write permissions in the system directories).The--logdirspecifies the default place into which the logfile will be written (unless overriden at runtime by theLOGDIRenvironment variable). If no directory is specified, the (platform-specific) default temporal directory will be used.Note that the Jupyter kernel installation also installs some custom CSS; its purpose is to improve the layout of the kernel results as they are presented in the notebook (but it also means that the rendered notebook will look slightly different in a Jupyter deployment in which the kernel has not been installed, or within an online viewer).To uninstall, perform the inverse operations (in reverse order), to uninstall the kernel from Jupyter and to remove the Python package:jupyter aimlbotkernel remove pip uninstall aimlbotkernelOperationOnce installed, anAIML Chatbotkernel will be available in the NotebookNewmenu. Starting one such kernel will create a chatbot. The chatbot is initially empty but can be loaded with a couple of predefined DBs (use the%helpmagic for initial instructions).Notebook input is of two kinds:Regular text cells are considered human input and are sent to the chatbot, which produces its corresponding outputCells starting with%contain “magic” commands that affect the operation of the kernel (load AIML databases, inspecting/modifying bot state, saving/loading state to/from disk, etc). Use the%helpmagic for some instructions, and%lsmagicsto show the current list of defined magics (magics have autocompletion and contextual help).Theexamplesdirectory contains a few notebooks showing some of the provided functionality. They can also be seen withonline Notebook viewer(note that, as said above, they will look slightly different than in a running kernel).AIMLAIMLis an XML-based specification to design conversational agents. Its most famous application is ALICE, a chatbot (the DB for the free version of ALICE is included in this kernel, as it is included in python-aiml)The chatbot can load an AIML database (which is basically a bunch of XML files). It can also define AIML rules on the fly, by using the%aimlmagic in a cell.
aimlflow
aiflow: Navigator ClientThis is the navigator client package to connect and interact with Navigator
aiml-lab
AIML LAB: the AIML LAB module contains various examples that apply AI/ML concepts to solve problems. Some of the available problem solvers are: Sudoku Solver, Shortest Path Solver, Crossword Solver, Eight Puzzle Problem Solver.
aimlLab23
No description available on PyPI.
aiml-l-prjt
No description available on PyPI.
aimlogpy
No description available on PyPI.
aimlops-sentiment-model
A classification model package for assessing sentiment.
aimlprog
No description available on PyPI.
aiml-py-common-utils
AIML Python Common Utilities. Visit the documentation: https://pypi.org/project/aiml-py-common-utils/. This repository provides a collection of utilities that are frequently used in various AIML applications. The current version includes the following utilities.

1. YAML File Reader: this utility reads a YAML file and returns a ConfigBox-type object. For instance, given a YAML file with the following content:

# Scalars
string: "Hello, World"
integer: 25
floating_point: 3.14
boolean: true
null_value: null
# Sequences
sequence:
  - item1
  - item2
  - item3

You can access the content of the YAML file as follows:

from aiml_py_common_utils import read_yaml
content = read_yaml(path_to_yaml)
print(content.string)   # Outputs: "Hello, World"
print(content.integer)  # Outputs: 25

2. Directory Creator: this utility allows you to create multiple directories. For example, to create directories named dir_one, dir_two, and dir_three:

from pathlib import Path
from aiml_py_common_utils import create_directories
list_of_directories_paths = [Path("./dir_one"), Path("./dir_two"), Path("./dir_three")]
create_directories(path_to_directories=list_of_directories_paths)

3. JSON File Writer: this utility saves a dictionary as a JSON file:

from pathlib import Path
from aiml_py_common_utils import save_dict2json
example_dict = {
    "string": "Hello, World",
    "integer": 25,
    "floating_point": 3.14,
    "boolean": True,
    "null_value": None,
}
path_to_json = Path("path/to/example.json")
save_dict2json(data=example_dict, path=path_to_json)

4. JSON File Reader: this utility loads a JSON file. For example, given a JSON file at a certain path containing:

{"string": "Hello, World", "integer": 25, "floating_point": 3.14, "boolean": true, "null_value": null}

You can load the content of the JSON file as follows:

from pathlib import Path
from aiml_py_common_utils import load_json
path_to_json = Path("path/to/example.json")
content = load_json(path=path_to_json)
print(content.string)   # Outputs: "Hello, World"
print(content.integer)  # Outputs: 25

5. Binary File Writer: this utility saves a snapshot of data as a binary file:

from pathlib import Path
from aiml_py_common_utils import save_bin
example_dict = {
    "string": "Hello, World",
    "integer": 25,
    "floating_point": 3.14,
    "boolean": True,
    "null_value": None,
}
path_to_bin = Path("path/to/example.bin")
save_bin(data=example_dict, path=path_to_bin)

6. Binary File Reader: this utility loads a snapshot of data from a binary file:

from pathlib import Path
from aiml_py_common_utils import load_bin
path_to_bin = Path("path/to/example.bin")
loaded_bin_content = load_bin(path=path_to_bin)

7. File Size Calculator: this utility calculates the size of a file in kilobytes:

from pathlib import Path
from aiml_py_common_utils import get_size
filepath = Path("path/to/example.file")
size_in_kb = get_size(path=filepath)
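Putting the documented utilities together, a short end-to-end sketch; the file and directory names are illustrative, and the keyword arguments mirror the examples above:

from pathlib import Path
from aiml_py_common_utils import read_yaml, create_directories, save_dict2json, load_json

# Read an experiment config; read_yaml returns a ConfigBox, per the docs.
config = read_yaml(Path("config.yaml"))  # illustrative file name

# Create the working directories for a run.
create_directories(path_to_directories=[Path("artifacts"), Path("logs")])

# Persist a small run summary and read it back.
summary = {"run": 1, "status": "ok"}
save_dict2json(data=summary, path=Path("artifacts/summary.json"))
print(load_json(path=Path("artifacts/summary.json")).status)  # -> "ok"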
aimluae
Aiml UAE Python library. Change Log: 1.0.6 (06/07/2022): data from GitHub. 1.0.8 (06/07/2022): Excel file support added. 1.0.22 (07/07/2022): new datasets added.
aimlufotech
AIML MacondoLab: a modified AIML package to create conversations easily, with more capabilities. Made in fablab MacondoLab, located in Barranquilla, Colombia. This aiml package adds some new tags and modifies others to help developers build conversations more easily.
aiml-utils
Test
aim-mlflow
aimlflow: Aim-powered supercharged UI for MLflow logs. Run a beautiful UI on top of your MLflow logs and get powerful run comparison features. About: aimlflow helps to explore various types of metadata tracked during training with MLflow, including: hyper-parameters, metrics, images, audio, text. More about Aim: https://github.com/aimhubio/aim. More about MLflow: https://github.com/mlflow/mlflow. Getting Started: follow the steps below to set up aimlflow. Install aimlflow on your training environment: pip install aim-mlflow. Run the live-time converter to sync MLflow logs with Aim: aimlflow sync --mlflow-tracking-uri={mlflow_uri} --aim-repo={aim_repo_path}. Run the Aim UI: aim up --repo={aim_repo_path}. Why use aimlflow? Powerful pythonic search to select the runs you want to analyze. Group metrics by hyperparameters to analyze hyperparameters' influence on run performance. Select multiple metrics and analyze them side by side. Aggregate metrics by std.dev, std.err, conf.interval. Align the x axis by any other metric. Scatter plots to learn correlations and trends. High dimensional data visualization via parallel coordinate plot.
aimmo
No description available on PyPI.
aimmo-avatar-api
No description available on PyPI.
aim.models
aim.models: an object model for semantic cloud infrastructure. aim.models parses a directory of YAML files that compose an AIM Project and loads them into a complete object model. What's in the model? The model defines common logical cloud infrastructure concepts, such as networks, accounts, applications and environments. The model uses networks and applications as hierarchical trees of configuration whose values can be overridden when they are placed into environments. Environments live in a network and contain applications, and typically represent the stages of the software development lifecycle (SDLC), such as 'development', 'staging' and 'production'. The model has a declarative schema that explicitly defines the fields for each object type in the model. This schema declares not only type (e.g. string, integer) but can also declare defaults, min and max values, constrain fields to specific values, and define invariants that ensure that if one field has a specific value, another field's value is compatible with it. The model validates these fields when it loads an AIM Project. Developing: install this package with your Python tool of choice. Typically, set up a virtualenv and pip install the dependencies in there: python -m venv env; ./env/bin/pip install -e . There are unit tests using PyTest. If you are using VS Code you can turn on the "Py Test Enabled" setting and run the "Discover Unit Tests" command. Changelog for aim.models: 6.1.0 (2019-11-06). Added: Applications can be provisioned in the same environment more than once with a new "app{suffix}" syntax for an environment's application keys. INotificationGroups has a regions field; if it is the default of ['ALL'] it will apply to all of a project's active regions, otherwise it will just provision in the selected region(s). ICloudFormationInit for modelling AWS::CloudFormation::Init, which can be applied to the IASG.cfn_init field. ICloudWatchLogAlarm schema. ICloudWatchAlarm now has "type: Alarm", and if it is "type: LogAlarm" an ICloudWatchLogAlarm will be created, which can be used to connect an alarm to a MetricFilter of a LogGroup. IDBParameterGroups resource. IElastiCache has description and cache_clusters fields, while IElastiCacheRedis has snapshot_retention_limit_days and snapshot_window fields. IRDS has new license_model, cloudwatch_logs_export and deletion_protection fields. The global_role_name field for an IAM Role can be set to True and the RoleName will not be hashed; it can only be used for global Roles, otherwise, if these Roles overlap per environment, things will break! monitoring.health_checks, which can contain HealthCheck Resources.
IRoute53HealthCheck resource for Route53 health checks.region_nameproperty can be overrode if aoverrode_region_nameattribute is set.Added a CodeBuild IAM Permission for IAM UsersAddedresolve_refmethod to DeploymentPipelineConfigurationAdded the EIP Application Resource and a support 'eip' field to the ASG resource for associating an EIP with a single instance ASG.Added AWS Cli install commands to vocabulary.Addeddnsto EIP Application ResourceAddedcftemplate_iam_user_delegates_2019_10_02legacy flag to make user delegate role stack names consistent with others.Addedroute53_hosted_zone_2019_10_12legacy flag for Route53 CFTemplate refactor.Addedroute53_record_set_2019_10_16legacy flag for the Route53 RecordSet refactor.Addedavailability_zonefor locking in an ASG to a single Availability Zone.Addedparameter_groupto IElastiCache Application ResourceAddedvpc_associationsto IPrivateHosted.Addedvpc_configto the ILambda Application ResourcesAddedsecrets_managerto IIEnvironmentDefault.Addedttlto IDNSAdded caching to instance AMI ID function.ref lookups.Added the EBS Application Resources. Addedebs_volume_mountsto IASG to mount volumes to single instance groups.Addedlaunch_optionsto IASG as an IEC2LaunchOptions object. The initial option is update_packages which will update the linux distributions packages on launch.Added resolve_ref() to Resource in base.py as a catch all.ChangedISecurityGroupRulesource_security_groupwas moved to IIngressRule and IEgressRule (finally!) has adestination_security_groupfield.load_resourceswas removed and you can now simply apply_attributes to an Application and it will recurse through app.groups..resources. without any external fiddling.Moved deepdiff CLI functions intoaimproject.IApplication is now IMonitorable. Alarms at the Application level must specify their Namespace and Dimensions.Changed RDSprimary_domain_nameandprimary_hosted_zoneto an IDNS objectFixedAlarm overrides are now cast to the schema of the field. Fixes "threshold: 10" loading as in int() when the schema expects a float().6.0.0 (2019-09-27)AddedICloudWatchAlarms haveenable_ok_actionsandenable_insufficient_data_actionsbooleans that will send to the notification groups when the alarm enters the OK or INSUFFICIENT_DATA states.references.get_model_obj_refwill resolve an aim.ref to a model object and won't attempt to do Stack output lookups.Service plug-ins are loaded according to aninitilization_orderinteger that each plug-in can supply. If no integer is supplied, loading for unordered plug-ins count up from 1000.Minimal API Gateway models for Methods, Resources, Models and Stages.S3Bucket NotificationConfiguration for Lambdas.S3Bucket hasget_bucket_name()to return the full computed bucket name.IGlobalResources for project['resource'] to contain config from the ./Resources/ directory. Resources such as S3 and EC2 now implement INamed and are loaded into project['resource'].ISNSTopic hascross_account_accesswhich grantssns:Publishto all accounts in the AIM Project.IAccountContainer and IRegionContainer are lightweight containers for account and region information. They can be used by Services that want to set-up Resources in a multi-account, multi-region manner.ChangedCloudTrail defines CloudWatchLogGroup as a sub-object rather than an aim.ref.Alarms haveget_alarm_actions_aim_refsrenamed fromget_alarm_actionsas alarms can only provide aim.refs and need to get the ARNs from the stacks.NotificationGroups are now Resources. 
Now they have regular working aim.ref's.5.0.0 (2019-08-26)AddedNew fieldaim.models.reference.FileReferencewhich resolves the path and replaces the original value with the value of the file indicated by the path. IApiGatewayRestApi.body_file_location uses this new field.ApiGatewayRestApi and CloudWatchAlarm have acfn_export_dictproperty that returns a new dict that can be used to created Troposphere resources.Added external_resource support to the ACMAdded ReadOnly support to the Administrator IAMUserPermissionChangedMulti-Dimension Alarms now need to specify anaim.refas the Value.Added IAMUser schemas and loading for IAM users.Added a CommaList() schema type for loading comma separated lists into schema.List()Moved aim reference generation into the Model. Model objects now have .aim_ref and .aim_ref_parts properties which contain their aim.ref reference.Renamed project['ne'] to project['netenv']Modified NatGateway segments to aim referencesFixedInvariants were not being check for resources. Invariants need to be checked by the loader if they are not contained in azope.schema.Objectfield, which will run the check behind the scenes.4.0.0 (2019-08-21)AddedIVPCPeering and IVPCPeeringRoute have been added to the model for VPC Peering support.Added a CloudTrail schema configured inResources/CloudTrail.yaml.IS3BucketPolicy now hasprincipalandconditionfields.principalcan be either a Key-Value dictionary, where the key is either 'AWS', 'Service', etc. and the value can be either a String or a List. It is an alternate to theawsfield, which will remain for setting simpler AWS-only principals. Theconditionfield is a Key-Value dictionary of Key-Value filters.Alarm now has 'get_alarm_actions' and 'get_alarm_description' to help construct alarms.CloudTrail has a 'get_accounts' which will resolve the CloudTrail.accounts field to a list of Account objects in the model.IAlarm hasdescriptionandrunbook_urlfields.CodePipeBuildDeploy.resolve_ref() function covers wider scope of ref lookupsAdded VPCPeering to the model.Added IElastiCache and IElastiCacheRedis to the model.ChangedMonitorConfig/LogSets.yamlhas been renamed toMonitorConfig/Logging.yaml. CloudWatch logging is under the top levelcw_loggingkey. The schema has been completely reworked so that LogGroups and LogSets are properly modelled.IAccount.region, IEC2KeyPair.region and ICredentials.aws_default_region no longer haveus-west-2as a default. The region needs to be explicity set.FixedIAlarm.classification is now a required field.3.1.0 (2019-08-08)Addedaim-project-version.txt file in the root directory can now contain the AIM Project YAML version. IProject now has an aim_project_version field to store this value.ICloudWatchAlarm gets a namespace field. Can be used to override the default Resource namespace, for example, use 'CWAgent' for the CloudWatch agent metrics.IResource now has a resource_fullname field. The fullname is the name needed to specify for a metric in a CloudWatch Alarm.ICloudWatchAlarm now has a dimensions field, which is a List of Dimension objects.ITargetGroup now inherits from IResource. 
It loads resource_name from outputs.3.0.0 (2019-08-06)AddedNewMonitorConfig/NotificationGroups.yamlthat contains subscription groups for notifications.sdb_cache field for Lambda.Lambda can have alarms.ISNSTopic and ISNSTopicSubscription to model SNS.ChangedAll references have been renamed to start withaim.reffor consistency.AlarmSets, AlarmSet and Alarm all now implement INamed and are locatable in the modelService plugins can load their outputs2.0.0 (2019-07-23)AddedSchema for Notifications for subscribing to AlarmsAdded S3Resource for Resources/S3.yml configurationAdded Lambda resolve_ref supportChangedServices are loaded as entry_point plugins namedaim.servicesRefactored the models applications, resources, and services.Renamed IRoute53 to IRoute53Resource.FixedCloudWatchAlarms now validate a classification field value of 'performance', 'health' or 'security' is supplied.1.1.0 (2019-07-06)AddedAdded function.ref to be able to look-up latest AMI IDsAdded more constraints to the schemas.Added default to IS3Bucket.policyAdded Route53 to schema and modelAdded redirect to Listner rules in the ALBChangedDescription attribute for Fields is now used to describe constraints.Ported CodeCommit to schema and modelRefactored S3 to use Application StackGroupCPBD artifacts s3 bucket now uses S3 Resource in NetEnv yaml insteadConverted the ALB's listener and listener rules to dicts from listsRemovedRemoved unused yaml config from aimdemo under fixtures.1.0.1 (2019-06-19)Improvements to Python packaging metadata.1.0.0 (2019-06-19)First open source release
aimmo-game-worker-simulation-test
No description available on PyPI.
aimmo-models
# AIMMO Models
aimms-pygments-style
No description available on PyPI.
aimmvp_nester
UNKNOWN
aimodel
No description available on PyPI.
ai-model-base
No description available on PyPI.
ai-models
# ai-models

The `ai-models` command is used to run AI-based weather forecasting models. These models need to be installed independently.

## Usage

Although the source code of `ai-models` and its plugins is available under open-source licences, some model weights may be available under a different licence. For example, some models make their weights available under the CC-BY-NC-SA 4.0 licence, which does not allow commercial use. For more information, please check the licence associated with each model on its main home page, which we link from each of the corresponding plugins.

### Prerequisites

Before using the `ai-models` command, ensure you have the following prerequisites:

- Python 3.10 (it may work with different versions, but it has been tested with 3.10 on Linux/macOS).
- An ECMWF and/or CDS account for accessing input data (see below for more details).
- A computer with a GPU for optimal performance (strongly recommended).

### Installation

To install the `ai-models` command, run:

```
pip install ai-models
```

## Available Models

Currently, four models can be installed:

```
pip install ai-models-panguweather
pip install ai-models-fourcastnet
pip install ai-models-graphcast  # Install details at https://github.com/ecmwf-lab/ai-models-graphcast
pip install ai-models-fourcastnetv2
```

See ai-models-panguweather, ai-models-fourcastnet, ai-models-fourcastnetv2 and ai-models-graphcast for more details about these models.

## Running the models

To run a model, make sure it has been installed, then simply run:

```
ai-models <model-name>
```

Replace `<model-name>` with the name of the specific AI model you want to run.

By default, the model will be run for a 10-day lead time (240 hours), using yesterday's 12Z analysis from ECMWF's MARS archive.

To produce a 15-day forecast, use the `--lead-time HOURS` option:

```
ai-models --lead-time 360 <model-name>
```

You can change the other defaults using the available command line options, as described below.

## Performance Considerations

The AI models can run on a CPU; however, they perform significantly better on a GPU. A 10-day forecast can take several hours on a CPU but only around one minute on a modern GPU.

:warning: We strongly recommend running these models on a computer equipped with a GPU for optimal performance.

If you see the following message when running a model, it means that the ONNX runtime was not able to find the CUDA libraries on your system:

```
[W:onnxruntime:Default, onnxruntime_pybind_state.cc:541 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
```

To fix this issue, we suggest that you install `ai-models` in a conda environment and install the CUDA libraries in that environment. For example:

```
conda create -n ai-models python=3.10
conda activate ai-models
conda install cudatoolkit
pip install ai-models
...
```
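Before launching a long run, it can also help to confirm that onnxruntime actually sees CUDA. The following is a minimal sketch using only standard onnxruntime calls; it is an illustration, not part of the `ai-models` CLI:

```python
# Minimal check that onnxruntime can use the GPU.
# Uses only standard onnxruntime APIs; not part of the ai-models CLI.
import onnxruntime

providers = onnxruntime.get_available_providers()
print(providers)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

# Without CUDAExecutionProvider, ONNX-based models fall back to the
# much slower CPU provider (see the warning message quoted above).
if "CUDAExecutionProvider" not in providers:
    print("CUDA is not available to onnxruntime; see the conda steps above.")
```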
## Assets

The AI models rely on weights and other assets created during training. The first time you run a model, you will need to download the trained weights and any additional required assets.

To download the assets before running a model, use the following command:

```
ai-models --download-assets <model-name>
```

The assets will be downloaded if needed and stored in the current directory. You can provide a different directory to store the assets:

```
ai-models --download-assets --assets <some-directory> <model-name>
```

Then, later on, simply use:

```
ai-models --assets <some-directory> <model-name>
```

or:

```
export AI_MODELS_ASSETS=<some-directory>
ai-models <model-name>
```

For better organisation of the assets directory, you can use the `--assets-sub-directory` option. This option will store the assets of each model in its own subdirectory within the specified assets directory.

## Input data

The models require input data (initial conditions) to run. You can provide the input data from different sources, as described below.

### From MARS

By default, `ai-models` uses yesterday's 12Z analysis from ECMWF, fetched from the Centre's MARS archive using the ECMWF WebAPI. You will need an ECMWF account to access that service.

To change the date or time, use the `--date` and `--time` options, respectively:

```
ai-models --date YYYYMMDD --time HHMM <model-name>
```

### From the CDS

You can start the models using ERA5 (ECMWF Reanalysis version 5) data from the Copernicus Climate Data Store (CDS). You will need to create an account on the CDS. The data will be downloaded using the CDS API.

To access the CDS, simply add `--input cds` on the command line. Please note that ERA5 data is added to the CDS with a delay, so you will also have to provide a date with `--date YYYYMMDD`:

```
ai-models --input cds --date 20230110 --time 0000 <model-name>
```

### From a GRIB file

If you have input data in GRIB format, you can provide the file using the `--file` option:

```
ai-models --file <some-grib-file> <model-name>
```

The GRIB file can contain more fields than the ones required by the model. The `ai-models` command will automatically select the necessary fields from the file.

To find out the list of fields needed by a specific model as initial conditions, use the following command:

```
ai-models --fields <model-name>
```

## Output

By default, the model output will be written in GRIB format to a file called `<model-name>.grib`. You can change the file name with the `--path <file-name>` option. If the path you specify contains placeholders between `{` and `}`, multiple files will be created based on the eccodes keys. For example:

```
ai-models --path 'out-{step}.grib' <model-name>
```

This command will create a file for each forecasted time step.

If you want to disable writing the output to a file, use the `--output none` option.

## Command line options

The `ai-models` command has the following options:

- `--help`: Displays this help message.
- `--models`: Lists all installed models.
- `--debug`: Turns on debug mode, which prints additional information to the console.

### Input

- `--input INPUT`: The input source for the model. This can be `mars`, `cds` or `file`.
- `--file FILE`: The specific file to use as input. This option will set `--source` to `file`.
- `--date DATE`: The analysis date for the model. This defaults to yesterday.
- `--time TIME`: The analysis time for the model. This defaults to 1200.

### Output

- `--output OUTPUT`: The output destination for the model. Values are `file` or `none`.
- `--path PATH`: The path to write the output of the model.

### Run

- `--lead-time HOURS`: The number of hours to forecast. The default is 240 (10 days).

### Assets management

- `--assets ASSETS`: Specifies the path to the directory containing the model assets. The default is the current directory, but you can override it by setting the `$AI_MODELS_ASSETS` environment variable.
- `--assets-sub-directory`: Enables organising assets in `<assets-directory>/<model-name>` subdirectories.
- `--download-assets`: Downloads the assets if they do not exist.
### Misc. options

- `--fields`: Prints the list of fields needed by a model as initial conditions.
- `--expver EXPVER`: The experiment version of the model output.
- `--class CLASS`: The 'class' metadata of the model output.
- `--metadata KEY=VALUE`: Additional metadata in the model output.
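Once a run has completed, the GRIB output can be inspected from Python. Below is a minimal sketch, assuming the optional `xarray` and `cfgrib` packages are installed; the file name `panguweather.grib` is just an illustrative example of the default `<model-name>.grib` output:

```python
# Minimal sketch: inspect an ai-models GRIB output file.
# Assumes `pip install xarray cfgrib`; "panguweather.grib" is an
# illustrative file name for the default <model-name>.grib output.
import xarray as xr

ds = xr.open_dataset("panguweather.grib", engine="cfgrib")
print(ds)                 # variables, coordinates and attributes
print(ds.coords["step"])  # forecast lead times written by the model

# Depending on the model, cfgrib may need to be told to load a single
# level type, e.g.:
# ds = xr.open_dataset(
#     "panguweather.grib", engine="cfgrib",
#     backend_kwargs={"filter_by_keys": {"typeOfLevel": "isobaricInhPa"}},
# )
```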
ai-models-fourcastnet
# ai-models-fourcastnet

:warning: This plugin is now deprecated. Please use the newer version that can be found at https://github.com/ecmwf-lab/ai-models-fourcastnetv2

`ai-models-fourcastnet` is an `ai-models` plugin to run NVIDIA's FourCastNet.

FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators. https://arxiv.org/abs/2202.11214

The FourCastNet code was developed by the authors of the preprint: Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, Animashree Anandkumar.

Version 0.1 of FourCastNet is used as default in ai-models: https://portal.nersc.gov/project/m4134/FCN_weights_v0.1/

FourCastNet is released under the BSD 3-Clause License; see LICENSE_fourcastnet for more details.
ai-models-fourcastnetv2
# ai-models-fourcastnetv2

`ai-models-fourcastnetv2` is an `ai-models` plugin to run NVIDIA's spherical harmonics transformer.

## Installation

Once the model is public, to install the package, run:

```
pip install ai-models-fourcastnetv2
```

This will install the package and its dependencies.
ai-models-graphcast
# ai-models-graphcast

`ai-models-graphcast` is an `ai-models` plugin to run Google DeepMind's GraphCast.

GraphCast: Learning skillful medium-range global weather forecasting, arXiv preprint: 2212.12794, 2022. https://arxiv.org/abs/2212.12794

GraphCast was created by Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri, Timo Ewalds, Zach Eaton-Rosen, Weihua Hu, Alexander Merose, Stephan Hoyer, George Holland, Oriol Vinyals, Jacklynn Stott, Alexander Pritzel, Shakir Mohamed and Peter Battaglia.

The model weights are made available for use under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International licence (CC BY-NC-SA 4.0). You may obtain a copy of the licence at https://creativecommons.org/licenses/by-nc-sa/4.0/.

## Installation

To install the package, run:

```
pip install ai-models-graphcast
```

This will install the package and most of its dependencies. The GraphCast dependencies (and Jax on GPU) need to be installed separately, as described below.

## GraphCast and Jax

GraphCast depends on Jax, which needs special installation instructions for your specific hardware. Please see the installation guide to follow the correct instructions.

We have prepared two requirements.txt files you can use: a CPU version and a GPU version.

For the preferred GPU usage:

```
pip install -r requirements-gpu.txt -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```

For the slower CPU usage:

```
pip install -r requirements.txt
```
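After installing the GPU requirements, it is worth confirming that Jax actually sees the GPU before starting a long forecast. Here is a minimal sketch using only standard Jax calls; nothing in it is GraphCast-specific:

```python
# Minimal check that Jax was installed with GPU support.
# Standard Jax APIs only; nothing here is specific to GraphCast.
import jax

# Expect something like [cuda(id=0)] on a working GPU install,
# or [CpuDevice(id=0)] if Jax fell back to the CPU.
print(jax.devices())

if jax.default_backend() != "gpu":  # returns "gpu", "tpu" or "cpu"
    print("Warning: Jax is running on the CPU; forecasts will be much slower.")
```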
aimodelshare
# aimodelshare

The mission of the AI Model Share Platform (a website with an integrated Python library) is to provide a trusted non-profit repository for machine learning model prediction APIs (Python library plus integrated website at modelshare.org). A beta version of the platform is currently being used by Columbia University students, faculty, and staff to test and improve platform functionality.

In a matter of seconds, data scientists can launch a model into this infrastructure, and end-users the world over will be able to engage their machine learning models.

- Launch machine learning models into scalable, production-ready prediction REST APIs using a single Python function (see the hypothetical sketch after the installation instructions).
- Details about each model, how to use the model's API, and the model's author(s) are deployed simultaneously into a searchable website at modelshare.org.
- Deployed models receive an individual Model Playground listing information about all deployed models. Each of these pages includes a fully functional prediction dashboard that allows end-users to input text, tabular, or image data and receive live predictions.
- Moreover, users can build on model playgrounds by (1) creating ML model competitions, (2) uploading Jupyter notebooks to share code, (3) sharing model architectures, and (4) sharing data... with all shared artifacts automatically creating a data science user portfolio.

Use the aimodelshare Python library to deploy your model, create a new ML competition, and more.

- Tutorials for deploying models.
- Find model playground web-dashboards to generate predictions now.
- View deployed models and generate predictions at modelshare.org.

## Installation

You can install aimodelshare from PyPI:

```
pip install aimodelshare
```
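The description above does not show the deployment call itself, so the following is only a hypothetical sketch of what a "single Python function" deployment flow could look like. `deploy_to_playground` and its parameters are illustrative assumptions, not aimodelshare's confirmed API; follow the official tutorials at modelshare.org for the real calls.

```python
# HYPOTHETICAL sketch of a "single Python function" deployment, based on
# the description above. deploy_to_playground is an illustrative stand-in,
# NOT aimodelshare's confirmed API; see the tutorials at modelshare.org.

def deploy_to_playground(model_path: str, preprocessor_path: str) -> str:
    """Stand-in for the library's one-call deployment: upload the model
    artifacts, create the Model Playground page, and return the live
    prediction REST API URL."""
    raise NotImplementedError("illustrative placeholder only")

# Intended usage pattern as the README describes it:
# api_url = deploy_to_playground("model.onnx", "preprocessor.zip")
# print(api_url)  # shareable endpoint, also listed at modelshare.org
```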